ERIC Educational Resources Information Center
Chen, Haiwen; Holland, Paul
2009-01-01
In this paper, we develop a new chained equipercentile equating procedure for the nonequivalent groups with anchor test (NEAT) design under the assumptions of the classical test theory model. This new equating is named chained true score equipercentile equating. We also apply the kernel equating framework to this equating design, resulting in a…
ERIC Educational Resources Information Center
He, Yong
2013-01-01
Common test items play an important role in equating multiple test forms under the common-item nonequivalent groups design. Inconsistent item parameter estimates among common items can lead to large bias in equated scores for IRT true score equating. Current methods extensively focus on detection and elimination of outlying common items, which…
Observed Score and True Score Equating Procedures for Multidimensional Item Response Theory
ERIC Educational Resources Information Center
Brossman, Bradley Grant
2010-01-01
The purpose of this research was to develop observed score and true score equating procedures to be used in conjunction with the Multidimensional Item Response Theory (MIRT) framework. Currently, MIRT scale linking procedures exist to place item parameter estimates and ability estimates on the same scale after separate calibrations are conducted.…
Local Linear Observed-Score Equating
ERIC Educational Resources Information Center
Wiberg, Marie; van der Linden, Wim J.
2011-01-01
Two methods of local linear observed-score equating for use with anchor-test and single-group designs are introduced. In an empirical study, the two methods were compared with the current traditional linear methods for observed-score equating. As a criterion, the bias in the equated scores relative to true equating based on Lord's (1980)…
Asymptotic Standard Errors for Item Response Theory True Score Equating of Polytomous Items
ERIC Educational Resources Information Center
Cher Wong, Cheow
2015-01-01
Building on previous works by Lord and Ogasawara for dichotomous items, this article proposes an approach to derive the asymptotic standard errors of item response theory true score equating involving polytomous items, for equivalent and nonequivalent groups of examinees. This analytical approach could be used in place of empirical methods like…
Standard Error Estimation of 3PL IRT True Score Equating with an MCMC Method
ERIC Educational Resources Information Center
Liu, Yuming; Schulz, E. Matthew; Yu, Lei
2008-01-01
A Markov chain Monte Carlo (MCMC) method and a bootstrap method were compared in the estimation of standard errors of item response theory (IRT) true score equating. Three test form relationships were examined: parallel, tau-equivalent, and congeneric. Data were simulated based on Reading Comprehension and Vocabulary tests of the Iowa Tests of…
ERIC Educational Resources Information Center
Keller, Lisa A.; Keller, Robert R.; Parker, Pauline A.
2011-01-01
This study investigates the comparability of two item response theory based equating methods: true score equating (TSE), and estimated true equating (ETE). Additionally, six scaling methods were implemented within each equating method: mean-sigma, mean-mean, two versions of fixed common item parameter, Stocking and Lord, and Haebara. Empirical…
Evaluating Equating Accuracy and Assumptions for Groups that Differ in Performance
ERIC Educational Resources Information Center
Powers, Sonya; Kolen, Michael J.
2014-01-01
Accurate equating results are essential when comparing examinee scores across exam forms. Previous research indicates that equating results may not be accurate when group differences are large. This study compared the equating results of frequency estimation, chained equipercentile, item response theory (IRT) true-score, and IRT observed-score…
Assessing Equating Results on Different Equating Criteria
ERIC Educational Resources Information Center
Tong, Ye; Kolen, Michael
2005-01-01
The performance of three equating methods--the presmoothed equipercentile method, the item response theory (IRT) true score method, and the IRT observed score method--was examined based on three equating criteria: the same distributions property, the first-order equity property, and the second-order equity property. The magnitude of the…
IRT Equating of the MCAT. MCAT Monograph.
ERIC Educational Resources Information Center
Hendrickson, Amy B.; Kolen, Michael J.
This study compared various equating models and procedures for a sample of data from the Medical College Admission Test (MCAT), considering how item response theory (IRT) equating results compare with classical equipercentile results and how the results based on use of various IRT models, observed score versus true score, direct versus linked…
ERIC Educational Resources Information Center
Öztürk-Gübes, Nese; Kelecioglu, Hülya
2016-01-01
The purpose of this study was to examine the impact of dimensionality, common-item set format, and different scale linking methods on preserving equity property with mixed-format test equating. Item response theory (IRT) true-score equating (TSE) and IRT observed-score equating (OSE) methods were used under common-item nonequivalent groups design.…
The Effect of Repeaters on Equating
ERIC Educational Resources Information Center
Kim, HeeKyoung; Kolen, Michael J.
2010-01-01
Test equating might be affected by including in the equating analyses examinees who have taken the test previously. This study evaluated the effect of including such repeaters on Medical College Admission Test (MCAT) equating using a population invariance approach. Three-parameter logistic (3-PL) item response theory (IRT) true score and…
ERIC Educational Resources Information Center
Gao, Rui; He, Wei; Ruan, Chunyi
2012-01-01
In this study, we investigated whether preequating results agree with equating results that are based on observed operational data (postequating) for a college placement program. Specifically, we examined the degree to which item response theory (IRT) true score preequating results agreed with those from IRT true score postequating and from…
Preequating with Empirical Item Characteristic Curves: An Observed-Score Preequating Method
ERIC Educational Resources Information Center
Zu, Jiyun; Puhan, Gautam
2014-01-01
Preequating is in demand because it reduces score reporting time. In this article, we evaluated an observed-score preequating method: the empirical item characteristic curve (EICC) method, which makes preequating without item response theory (IRT) possible. EICC preequating results were compared with a criterion equating and with IRT true-score…
Further Study of the Choice of Anchor Tests in Equating
ERIC Educational Resources Information Center
Trierweiler, Tammy J.; Lewis, Charles; Smith, Robert L.
2016-01-01
In this study, we describe what factors influence the observed score correlation between an (external) anchor test and a total test. We show that the anchor to full-test observed score correlation is based on two components: the true score correlation between the anchor and total test, and the reliability of the anchor test. Findings using an…
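The decomposition described above can be sketched in attenuation style. This is an illustrative assumption about the functional form (the observed correlation shrinking with the square root of the anchor's reliability), not necessarily the exact expression derived in the paper:

```python
import math

def observed_anchor_total_corr(true_corr, anchor_reliability):
    # Attenuation-style decomposition (a sketch of the relation the
    # abstract describes; the exact form in the paper may differ):
    # the observed correlation shrinks as the anchor gets less reliable.
    return true_corr * math.sqrt(anchor_reliability)
```

Under this form, a true score correlation of 0.9 with an anchor reliability of 0.81 yields an observed correlation of 0.81.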
ERIC Educational Resources Information Center
von Davier, Matthias; González B., Jorge; von Davier, Alina A.
2013-01-01
Local equating (LE) is based on Lord's criterion of equity. It defines a family of true transformations that aim at the ideal of equitable equating. van der Linden (this issue) offers a detailed discussion of common issues in observed-score equating relative to this local approach. By assuming an underlying item response theory model, one of…
ERIC Educational Resources Information Center
McDonald, Roderick P.
2011-01-01
A distinction is proposed between measures and predictors of latent variables. The discussion addresses the consequences of the distinction for the true-score model, the linear factor model, Structural Equation Models, longitudinal and multilevel models, and item-response models. A distribution-free treatment of calibration and…
ERIC Educational Resources Information Center
Deng, Weiling; Monfils, Lora
2017-01-01
Using simulated data, this study examined the impact of different levels of stringency of the valid case inclusion criterion on item response theory (IRT)-based true score equating over 5 years in the context of K-12 assessment when growth in student achievement is expected. Findings indicate that the use of the most stringent inclusion criterion…
ERIC Educational Resources Information Center
Li, Yanmei
2012-01-01
In a common-item (anchor) equating design, the common items should be evaluated for item parameter drift. Drifted items are often removed. For a test that contains mostly dichotomous items and only a small number of polytomous items, removing some drifted polytomous anchor items may result in anchor sets that no longer resemble mini-versions of…
Using Propensity Scores in Quasi-Experimental Designs to Equate Groups
ERIC Educational Resources Information Center
Lane, Forrest C.; Henson, Robin K.
2010-01-01
Education research rarely lends itself to large-scale experimental research and true randomization, leaving the researcher to rely on quasi-experimental designs. The problem with quasi-experimental research is that underlying factors may impact group selection and lead to potentially biased results. One way to minimize the impact of non-randomization is…
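One common way propensity scores are used to equate groups is 1:1 nearest-neighbor matching. A minimal greedy sketch, assuming the propensity scores have already been estimated (e.g., by logistic regression on the covariates):

```python
def nearest_neighbor_match(treated_ps, control_ps):
    # Greedy 1:1 nearest-neighbor matching on propensity scores:
    # each treated unit takes the closest still-available control.
    matches = []
    available = list(enumerate(control_ps))
    for i, ps in enumerate(treated_ps):
        j, _ = min(available, key=lambda c: abs(c[1] - ps))
        matches.append((i, j))
        available = [c for c in available if c[0] != j]
    return matches
```

Greedy matching is order-dependent; optimal (full) matching and caliper restrictions are common refinements.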
1987-12-01
The Hollomon power equation (σ = Kε^n) and the Voce equation (σ = σs - [σs - σ0]exp[-ε/A]) are used to describe the true stress/true strain behavior to failure of individual tests.
Zhao, Yue; Chan, Wai; Lo, Barbara Chuen Yee
2017-04-04
Item response theory (IRT) has been increasingly applied to patient-reported outcome (PRO) measures. The purpose of this study is to apply IRT to examine item properties (discrimination and severity of depressive symptoms), measurement precision and score comparability across five depression measures, which is the first study of its kind in the Chinese context. A clinical sample of 207 Hong Kong Chinese outpatients was recruited. Data analyses were performed, including classical item analysis, IRT concurrent calibration and IRT true score equating. The IRT assumptions of unidimensionality and local independence were tested respectively using confirmatory factor analysis and chi-square statistics. The IRT linking assumptions of construct similarity, equity and subgroup invariance were also tested. The graded response model was applied to concurrently calibrate all five depression measures in a single IRT run, resulting in the item parameter estimates of these measures being placed onto a single common metric. IRT true score equating was implemented to perform the outcome score linking and construct score concordances so as to link scores from one measure to corresponding scores on another measure for direct comparability. Findings suggested that (a) symptoms of depressed mood, suicidality and feeling of worthlessness served as the strongest discriminating indicators, and symptoms concerning suicidality, changes in appetite, depressed mood, feeling of worthlessness and psychomotor agitation or retardation reflected high levels of severity in the clinical sample. (b) The five depression measures contributed to various degrees of measurement precision at varied levels of depression. (c) After outcome score linking was performed across the five measures, the cut-off scores led to either consistent or discrepant diagnoses for depression.
The study provides additional evidence regarding the psychometric properties and clinical utility of the five depression measures, offers methodological contributions to the appropriate use of IRT in PRO measures, and helps elucidate cultural variation in depressive symptomatology. The approach of concurrently calibrating and linking multiple PRO measures can be applied to the assessment of PROs other than the depression context.
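IRT true score equating, as used in studies like this one, maps a raw score on one measure through the inverse of its test characteristic curve (TCC) to an ability value, then through the other measure's TCC. A minimal sketch using the 2PL model (the study above used the graded response model; the 2PL here is a simplification, and the item parameters are made up):

```python
import math

def tcc(theta, items):
    # 2PL test characteristic curve: expected raw score at ability theta,
    # where items is a list of (discrimination a, difficulty b) pairs.
    return sum(1.0 / (1.0 + math.exp(-a * (theta - b))) for a, b in items)

def irt_true_score_equate(x, items_x, items_y, lo=-6.0, hi=6.0, tol=1e-8):
    # Invert the TCC of form X by bisection (the TCC is monotone in theta),
    # then map the recovered ability through the TCC of form Y.
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if tcc(mid, items_x) < x:
            lo = mid
        else:
            hi = mid
    return tcc(0.5 * (lo + hi), items_y)
```

A sanity check: equating a form to itself should return the input score (within numerical tolerance).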
Refraction corrections for surveying
NASA Technical Reports Server (NTRS)
Lear, W. M.
1979-01-01
Optical measurements of range and elevation angle are distorted by the earth's atmosphere. High precision refraction correction equations are presented which are ideally suited for surveying because their inputs are optically measured range and optically measured elevation angle. The outputs are true straight line range and true geometric elevation angle. The 'short distances' used in surveying allow the calculations of true range and true elevation angle to be quickly made using a programmable pocket calculator. Topics covered include the spherical form of Snell's Law; ray path equations; and integrating the equations. Short-, medium-, and long-range refraction corrections are presented in tables.
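The spherical form of Snell's law mentioned above is Bouguer's formula: along a ray in a spherically layered atmosphere, n·r·sin(z) is invariant, where n is the refractive index, r the distance from Earth's center, and z the local zenith angle. A sketch of the invariant (not the report's full correction procedure, and the numeric values below are illustrative):

```python
import math

def bouguer_invariant(n, r, zenith_rad):
    # Spherical form of Snell's law (Bouguer's formula): along a ray in a
    # spherically layered atmosphere, n * r * sin(z) is constant.
    return n * r * math.sin(zenith_rad)

def zenith_at_shell(invariant, n, r):
    # Recover the local zenith angle of the same ray at another shell
    # of radius r with refractive index n.
    return math.asin(invariant / (n * r))
```

Integrating the ray-path equations, as the report does, amounts to applying this invariant across a continuum of shells.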
The Effectiveness of a Rater Training Booklet in Increasing Accuracy of Performance Ratings
1988-04-01
Subjects' ratings were compared for accuracy. The dependent measure was the absolute deviation score of each individual's rating from the "true score". Findings: The absolute deviation scores of each individual's ratings from the "true score" provided by subject matter experts were analyzed.
Jie, Yong-Z; Zhang, Jian-Y; Zhao, Li-H; Ma, Qiu-G; Ji, Cheng
2013-09-25
This study was conducted to evaluate the apparent metabolizable energy (AME) and true metabolizable energy (TME) contents of 30 sources of corn distillers dried grains with solubles (DDGS) in adult roosters, and to establish prediction equations for the AME and TME values based on chemical composition and color score. Twenty-eight of the corn DDGS sources were made by processing plants in 11 provinces of China, and the others were imported from the United States. The DDGS were analyzed for metabolizable energy (ME) content, color score, and chemical composition (crude protein, crude fat, ash, neutral detergent fiber, acid detergent fiber) to develop prediction equations for the ME of DDGS. A precision-fed rooster assay was used; each DDGS sample was tube fed (50 g) to adult roosters. The experiment was conducted as a randomized incomplete block design with 3 periods. Ninety-five adult roosters were used in each period, with 90 being fed the DDGS samples and 5 being fasted to estimate basal endogenous energy losses. Results showed that AME ranged from 5.93 to 12.19 MJ/kg and TME ranged from 7.28 to 13.54 MJ/kg. Correlations were found between ME and ash content (-0.64, P < 0.01) and between ME and yellowness score (0.39, P < 0.05) of the DDGS samples. Furthermore, the best-fit regression equation for the AME content of DDGS based on chemical composition and color score was AME = 6.57111 + 0.51475 GE - 0.10003 NDF + 0.13380 ADF + 0.07057 fat - 0.57029 ash - 0.02437 L (R2 = 0.70). The best-fit regression equation for the TME content of DDGS was TME = 7.92283 + 0.51475 GE - 0.10003 NDF + 0.13380 ADF + 0.07057 fat - 0.57029 ash - 0.02437 L (R2 = 0.70). This experiment suggests that measuring the chemical composition and color score of a corn DDGS sample may provide a quality parameter for estimating the energy digestibility and metabolizable energy content of corn DDGS sources.
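The two best-fit equations share all slope coefficients and differ only in the intercept, so predicted TME exceeds predicted AME by a constant 7.92283 − 6.57111 ≈ 1.35 MJ/kg. A direct transcription of the equations as printed (units as in the abstract; the assumption here is that L denotes the lightness color score):

```python
def predict_ame(ge, ndf, adf, fat, ash, lightness):
    # AME prediction equation transcribed from the abstract (MJ/kg; R^2 = 0.70)
    return (6.57111 + 0.51475 * ge - 0.10003 * ndf + 0.13380 * adf
            + 0.07057 * fat - 0.57029 * ash - 0.02437 * lightness)

def predict_tme(ge, ndf, adf, fat, ash, lightness):
    # TME equation: identical slopes, larger intercept
    return (7.92283 + 0.51475 * ge - 0.10003 * ndf + 0.13380 * adf
            + 0.07057 * fat - 0.57029 * ash - 0.02437 * lightness)
```

Because the slopes are identical, the predicted TME − AME gap is independent of the composition inputs.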
The power to detect linkage in complex disease by means of simple LOD-score analyses.
Greenberg, D A; Abreu, P; Hodge, S E
1998-01-01
Maximum-likelihood analysis (via LOD score) provides the most powerful method for finding linkage when the mode of inheritance (MOI) is known. However, because one must assume an MOI, the application of LOD-score analysis to complex disease has been questioned. Although it is known that one can legitimately maximize the maximum LOD score with respect to genetic parameters, this approach raises three concerns: (1) multiple testing, (2) effect on power to detect linkage, and (3) adequacy of the approximate MOI for the true MOI. We evaluated the power of LOD scores to detect linkage when the true MOI was complex but a LOD score analysis assumed simple models. We simulated data from 14 different genetic models, including dominant and recessive at high (80%) and low (20%) penetrances, intermediate models, and several additive two-locus models. We calculated LOD scores by assuming two simple models, dominant and recessive, each with 50% penetrance, then took the higher of the two LOD scores as the raw test statistic and corrected for multiple tests. We call this test statistic "MMLS-C." We found that the ELODs for MMLS-C are >=80% of the ELOD under the true model when the ELOD for the true model is >=3. Similarly, the power to reach a given LOD score was usually >=80% that of the true model, when the power under the true model was >=60%. These results underscore that a critical factor in LOD-score analysis is the MOI at the linked locus, not that of the disease or trait per se. Thus, a limited set of simple genetic models in LOD-score analysis can work well in testing for linkage. PMID:9718328
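The MMLS-C statistic takes the higher of the two fixed-model LOD scores and then corrects for having performed two tests. The abstract does not state the correction used, so subtracting log10(2) (a Bonferroni-style adjustment for two tests) is an illustrative assumption in this sketch:

```python
import math

def mmls_c(lod_dominant, lod_recessive):
    # Higher of the two fixed-model (50%-penetrance dominant and recessive)
    # LOD scores, minus an assumed log10(2) two-test correction.
    return max(lod_dominant, lod_recessive) - math.log10(2)
```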
The use of cognitive ability measures as explanatory variables in regression analysis.
Junker, Brian; Schofield, Lynne Steuerle; Taylor, Lowell J
2012-12-01
Cognitive ability measures are often taken as explanatory variables in regression analysis, e.g., as a factor affecting a market outcome such as an individual's wage, or a decision such as an individual's education acquisition. Cognitive ability is a latent construct; its true value is unobserved. Nonetheless, researchers often assume that a test score, constructed via standard psychometric practice from individuals' responses to test items, can be safely used in regression analysis. We examine problems that can arise, and suggest that an alternative approach, a "mixed effects structural equations" (MESE) model, may be more appropriate in many circumstances.
Peripheries of epicycles in the Grahalāghava
NASA Astrophysics Data System (ADS)
Rao, S. Balachandra; Vanaja, V.; Shailaja, M.
2017-12-01
For finding the true positions of the Sun, the Moon, and the five planets, the Indian classical astronomical texts use the concept of the manda epicycle, which accounts for the equation of the centre. In addition, in the case of the five planets (Mercury, Venus, Mars, Jupiter and Saturn), another equation called śīghraphala and the corresponding śīghra epicycle are adopted. This correction corresponds to the transformation of the true heliocentric longitude to the true geocentric longitude in modern astronomy. In some of the popularly used handbooks (karaṇa), instead of giving mathematical expressions for the above equations, their discrete numerical values, at intervals of 15 degrees, are given. In the present paper, using these discrete numerical values, we build up continuous functions of periodic terms for the manda and śīghra equations. Further, we obtain the critical points and the maximum values of these two equations.
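Building a continuous periodic function from values tabulated every 15 degrees can be done with a truncated Fourier series: with 24 equally spaced samples per revolution, the coefficients follow from simple discrete sums. A sketch of that general technique (not necessarily the authors' actual fitting procedure):

```python
import math

def fourier_coeffs(samples, n_harmonics):
    # samples: function values at equally spaced angles over one revolution
    # (e.g., every 15 degrees -> 24 samples). Returns (a0, [a_k], [b_k]).
    n = len(samples)
    a0 = sum(samples) / n
    a, b = [], []
    for k in range(1, n_harmonics + 1):
        a.append(2.0 / n * sum(s * math.cos(2 * math.pi * k * i / n)
                               for i, s in enumerate(samples)))
        b.append(2.0 / n * sum(s * math.sin(2 * math.pi * k * i / n)
                               for i, s in enumerate(samples)))
    return a0, a, b

def evaluate(theta_deg, a0, a, b):
    # Evaluate the truncated series at any angle, giving a continuous
    # periodic function through the tabulated values.
    t = math.radians(theta_deg)
    return a0 + sum(ak * math.cos((k + 1) * t) + bk * math.sin((k + 1) * t)
                    for k, (ak, bk) in enumerate(zip(a, b)))
```

Critical points and maxima of the fitted equation can then be located on the continuous function, as the paper does for the manda and śīghra equations.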
ERIC Educational Resources Information Center
Doppelt, Jerome E.
1956-01-01
The standard error of measurement as a means for estimating the margin of error that should be allowed for in test scores is discussed. The true score measures the performance that is characteristic of the person tested; the variations, plus and minus, around the true score describe a characteristic of the test. When the standard deviation is used…
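In classical test theory the standard error of measurement is SEM = SD·√(1 − reliability), and roughly two-thirds of observed scores fall within one SEM of the true score. A short sketch of the margin-of-error band the abstract describes:

```python
import math

def standard_error_of_measurement(sd, reliability):
    # Classical test theory: SEM = SD * sqrt(1 - reliability)
    return sd * math.sqrt(1.0 - reliability)

def score_band(observed, sd, reliability, k=1.0):
    # A +/- k*SEM band around an observed score
    sem = standard_error_of_measurement(sd, reliability)
    return observed - k * sem, observed + k * sem
```

For example, a test with SD 15 and reliability 0.91 has SEM 4.5, so an observed score of 100 carries a one-SEM band of about 95.5 to 104.5.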
ERIC Educational Resources Information Center
Chen, Haiwen
2012-01-01
In this article, linear item response theory (IRT) observed-score equating is compared under a generalized kernel equating framework with Levine observed-score equating for nonequivalent groups with anchor test design. Interestingly, these two equating methods are closely related despite being based on different methodologies. Specifically, when…
Formation Flying Control Implementation in Highly Elliptical Orbits
NASA Technical Reports Server (NTRS)
Capo-Lugo, Pedro A.; Bainum, Peter M.
2009-01-01
The Tschauner-Hempel equations are widely used to correct the separation distance drifts between a pair of satellites within a constellation in highly elliptical orbits [1]. This set of equations was discretized in the true anomaly angle [1] to be used in a digital steady-state hierarchical controller [2]. This controller [2] performed the drift correction between a pair of satellites within the constellation. The objective of a discretized system is to develop a simple algorithm to be implemented in the computer onboard the satellite. The main advantage of the discrete systems is that the computational time can be reduced by selecting a suitable sampling interval. For this digital system, the amount of data will depend on the sampling interval in the true anomaly angle [3]. The purpose of this paper is to implement the discrete Tschauner-Hempel equations and the steady-state hierarchical controller in the computer onboard the satellite. This set of equations is expressed in the true anomaly angle in which a relation will be formulated between the time and the true anomaly angle domains.
[Equating scores using bridging stations on the clinical performance examination].
Yoo, Dong-Mi; Han, Jae-Jin
2013-06-01
This study examined the use of the Tucker linear equating method to produce individual student scores in 3 groups with bridging stations over 3 consecutive days of the clinical performance examination (CPX) and compared differences in scoring patterns by bridging number. Data were drawn from 88 examinees in 3 different CPX groups-DAY1, DAY2, and DAY3-each of which comprised 6 stations. Each group had 3 common stations, and each group had 2 or 3 stations that differed from the other groups. DAY1 and DAY3 were equated to DAY2. Equated mean scores and standard deviations were compared with the originals. DAY1 and DAY3 were equated again, and the differences in scores (equated score - raw score) were compared among the 3 sets of equated scores. By equating to DAY2, DAY1 decreased in mean score from 58.188 to 56.549 while its standard deviation changed from 4.991 to 5.046, and DAY3 fell in mean score from 58.351 to 58.057 while its standard deviation changed from 5.546 to 5.856, which demonstrates that the scores of examinees in DAY1 and DAY3 shifted after equating. The patterns of score differences among the sets equated to DAY1, DAY2, and DAY3 yielded information on the soundness of the equating results from individual and overall comparisons. To generate equated scores among 3 groups on 3 consecutive days of the CPX, we applied the Tucker linear equating method. We also present a method of equating the other days to the anchoring day by means of the bridging stations.
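At its core, Tucker (and any linear) equating applies the transformation e(x) = μY + (σY/σX)(x − μX); the Tucker method's distinctive step is estimating the synthetic-population moments through the common (bridging) stations. The transformation step alone can be sketched as:

```python
def linear_equate(x, mean_x, sd_x, mean_y, sd_y):
    # Map a score x from form X onto the scale of form Y so that the
    # equated scores reproduce Y's mean and standard deviation.
    return mean_y + (sd_y / sd_x) * (x - mean_x)
```

For instance, a score one SD above the mean on form X is mapped to one SD above the mean on form Y.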
Qureshi, Waqas T; Michos, Erin D; Flueckiger, Peter; Blaha, Michael; Sandfort, Veit; Herrington, David M; Burke, Gregory; Yeboah, Joseph
2016-09-01
The increase in statin eligibility under the new cholesterol guidelines is mostly driven by the Pooled Cohort Equation (PCE) criterion (≥7.5% 10-year PCE risk). The impact of replacing the PCE with either the modified Framingham Risk Score (FRS) or the Systematic Coronary Risk Evaluation (SCORE) on atherosclerotic cardiovascular disease (ASCVD) risk assessment and statin eligibility remains unknown. We assessed the comparative benefits of using the PCE, FRS, and SCORE for ASCVD risk assessment in the Multi-Ethnic Study of Atherosclerosis. Of 6,815 participants, 654 (mean age 61.4 ± 10.3; 47.1% men; 37.1% whites; 27.2% blacks; 22.3% Hispanics; 12.0% Chinese-Americans) were included in the analysis. Area under the curve (AUC) and decision curve analysis were used to compare the risk scores. Decision curve analysis plots net benefit against probability thresholds, where net benefit = true positive rate - (false positive rate × weighting factor) and weighting factor = threshold probability/(1 - threshold probability). After a median of 8.6 years, 342 (6.0%) ASCVD events (myocardial infarction, coronary heart disease death, fatal or nonfatal stroke) occurred. All 4 risk scores had acceptable discriminative ability for incident ASCVD events (AUC [95% CI]: PCE, 0.737 [0.713 to 0.762]; FRS, 0.717 [0.691 to 0.743]; SCORE (high risk), 0.722 [0.696 to 0.747]; and SCORE (low risk), 0.721 [0.696 to 0.746]). At the ASCVD risk threshold recommended for statin eligibility for primary prevention (≥7.5%), the PCE provides the best net benefit. Replacing the PCE with the SCORE (high), SCORE (low), and FRS results in a further 2.9%, 8.9%, and 17.1% increase in statin eligibility, respectively. The PCE has the best discrimination and net benefit for primary ASCVD risk assessment in a US-based multiethnic cohort compared with the SCORE or the FRS.
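The decision-curve quantity defined in the abstract reduces to a few lines: at threshold pt, net benefit = TPR − FPR·pt/(1 − pt), with both rates taken over the whole cohort. A sketch (the counts in the example are made up):

```python
def net_benefit(true_pos, false_pos, n, threshold):
    # Net benefit at a given risk threshold (decision curve analysis):
    # benefit of true positives minus harm-weighted false positives,
    # both expressed as rates over the full cohort of size n.
    w = threshold / (1.0 - threshold)
    return true_pos / n - (false_pos / n) * w
```

Plotting this quantity over a range of thresholds for each risk score yields the decision curves compared in the study.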
Confidence Intervals for True Scores Using the Skew-Normal Distribution
ERIC Educational Resources Information Center
Garcia-Perez, Miguel A.
2010-01-01
A recent comparative analysis of alternative interval estimation approaches and procedures has shown that confidence intervals (CIs) for true raw scores determined with the Score method--which uses the normal approximation to the binomial distribution--have actual coverage probabilities that are closest to their nominal level. It has also recently…
Grima, R
2010-07-21
Chemical master equations provide a mathematical description of stochastic reaction kinetics in well-mixed conditions. They are a valid description over length scales that are larger than the reactive mean free path and thus describe kinetics in compartments of mesoscopic and macroscopic dimensions. The trajectories of the stochastic chemical processes described by the master equation can be ensemble-averaged to obtain the average number density of chemical species, i.e., the true concentration, at any spatial scale of interest. For macroscopic volumes, the true concentration is very well approximated by the solution of the corresponding deterministic and macroscopic rate equations, i.e., the macroscopic concentration. However, this equivalence breaks down for mesoscopic volumes. These deviations are particularly significant for open systems and cannot be calculated via the Fokker-Planck or linear-noise approximations of the master equation. We utilize the system-size expansion including terms of the order of Ω^(-1/2) to derive a set of differential equations whose solution approximates the true concentration as given by the master equation. These equations are valid in any open or closed chemical reaction network and at both the mesoscopic and macroscopic scales. In the limit of large volumes, the effective mesoscopic rate equations become precisely equal to the conventional macroscopic rate equations. We compare the three formalisms of effective mesoscopic rate equations, conventional rate equations, and chemical master equations by applying them to several biochemical reaction systems (homodimeric and heterodimeric protein-protein interactions, series of sequential enzyme reactions, and positive feedback loops) in nonequilibrium steady-state conditions. In all cases, we find that the effective mesoscopic rate equations can predict very well the true concentration of a chemical species.
This provides a useful method by which one can quickly determine the regions of parameter space in which there are maximum differences between the solutions of the master equation and the corresponding rate equations. We show that these differences depend sensitively on the Fano factors and on the inherent structure and topology of the chemical network. The theory of effective mesoscopic rate equations generalizes the conventional rate equations of physical chemistry to describe kinetics in systems of mesoscopic size such as biological cells.
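For a concrete sense of "true concentration" as a trajectory average of the master equation, consider a linear birth-death process (production at rate k0·Ω, degradation at rate k1 per molecule), whose stationary mean matches the macroscopic rate equation exactly; discrepancies of the kind discussed above arise once the network is nonlinear. A minimal Gillespie-simulation sketch (illustrative only, not the paper's system-size expansion):

```python
import math
import random

def gillespie_birth_death(k0, k1, omega, t_end, seed=1):
    # Exact stochastic simulation of 0 -> X (propensity k0*omega) and
    # X -> 0 (propensity k1*n); returns the time-averaged copy number,
    # which estimates the mean of the master equation's trajectory.
    random.seed(seed)
    t, n, area = 0.0, 0, 0.0
    while t < t_end:
        a1 = k0 * omega       # birth propensity
        a2 = k1 * n           # death propensity
        a_total = a1 + a2
        dt = -math.log(random.random()) / a_total
        area += n * min(dt, t_end - t)
        t += dt
        if t < t_end:
            n += 1 if random.random() * a_total < a1 else -1
    return area / t_end
```

For this linear system the deterministic steady state is k0·Ω/k1 molecules, and the long-run time average converges to it; effective mesoscopic corrections matter when propensities are nonlinear in n.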
Small-Sample Equating with Prior Information. Research Report. ETS RR-09-25
ERIC Educational Resources Information Center
Livingston, Samuel A.; Lewis, Charles
2009-01-01
This report proposes an empirical Bayes approach to the problem of equating scores on test forms taken by very small numbers of test takers. The equated score is estimated separately at each score point, making it unnecessary to model either the score distribution or the equating transformation. Prior information comes from equatings of other…
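The empirical Bayes idea at each score point is precision-weighted shrinkage: pull the noisy small-sample equated score toward the prior mean built from previous equatings. A normal-normal sketch of that shrinkage step (an illustration of the general idea, not ETS's exact estimator):

```python
def eb_shrink(sample_est, sample_var, prior_mean, prior_var):
    # Posterior mean under a normal-normal model: weight the small-sample
    # estimate by its precision relative to the prior's precision.
    w = prior_var / (prior_var + sample_var)
    return w * sample_est + (1.0 - w) * prior_mean
```

With equal prior and sampling variances the estimate moves halfway toward the prior; as the sample grows (sampling variance shrinks), the data dominate.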
Refraction corrections for surveying
NASA Technical Reports Server (NTRS)
Lear, W. M.
1980-01-01
Optical measurements of range and elevation angles are distorted by refraction of Earth's atmosphere. Theoretical discussion of effect, along with equations for determining exact range and elevation corrections, is presented in report. Potentially useful in optical site surveying and related applications, analysis is easily programmed on pocket calculator. Input to equation is measured range and measured elevation; output is true range and true elevation.
Belmares, Jaime; Gerding, Dale N; Parada, Jorge P; Miskevics, Scott; Weaver, Frances; Johnson, Stuart
2007-12-01
To determine the response rate of Clostridium difficile disease (CDD) to treatment with metronidazole and to assess a scoring system for predicting response to metronidazole when applied at the time of CDD diagnosis. Retrospective review of patients with CDD who received primary treatment with metronidazole. We defined success as diarrhea resolution within 6 days of therapy. A CDD score was defined prospectively using variables suggested to correlate with disease severity. Among 102 evaluable patients, 72 had a successful response (70.6%). Twenty-one of the remaining 30 patients eventually responded to metronidazole but required longer treatment, leaving 9 'true failures'. The mean CDD score was higher among true failures (2.89 ± 1.4) than among all metronidazole responders (0.77 ± 1.0) (p < .0001). The score was greater than 2 in 67% of true failures and 2 or less in 94% of metronidazole responders. Leukocytosis and abnormal CT scan findings were individual factors associated with a higher risk of metronidazole failure. Only 71% of CDD patients responded to metronidazole within 6 days, but the overall response rate was 91%. A CDD score greater than 2 was associated with metronidazole failure in 6 of 9 true failures. The CDD score will require prospective validation.
Online gaming in the context of social anxiety.
Lee, Bianca W; Leeson, Peter R C
2015-06-01
In 2014, over 23 million individuals were playing massive multiplayer online role-playing games (MMORPGs). In light of the framework provided by Davis's (2001) cognitive-behavioral model of pathological Internet use, social anxiety, expressions of true self, and perceived in-game and face-to-face social support were examined as predictors of Generalized Problematic Internet Use Scale (GPIUS) scores and hours spent playing MMORPGs per week. Data were collected from adult MMORPG players via an online survey (N = 626). Using structural equation modeling, the hypothesized model was tested on 1 half of the sample (N = 313) and then retested on the other half of the sample. The results indicated that the hypothesized model fit the data well in both samples. Specifically, expressing true self in game, higher levels of social anxiety, larger numbers of in-game social supports, and fewer supportive face-to-face relationships were significant predictors of higher GPIUS scores, and the number of in-game supports was significantly associated with time spent playing. The current study provides clinicians and researchers with a deeper understanding of MMORPG use by being the first to apply, test, and replicate a theory-driven model across 2 samples of MMORPG players. In addition, the present findings suggest that a psychometric measure of MMORPG usage is more indicative of players' psychological and social well-being than is time spent playing these games. (PsycINFO Database Record (c) 2015 APA, all rights reserved).
Statistical Assessment of Estimated Transformations in Observed-Score Equating
ERIC Educational Resources Information Center
Wiberg, Marie; González, Jorge
2016-01-01
Equating methods make use of an appropriate transformation function to map the scores of one test form into the scale of another so that scores are comparable and can be used interchangeably. The equating literature shows that the ways of judging the success of an equating (i.e., the score transformation) might differ depending on the adopted…
ERIC Educational Resources Information Center
Tan, Xuan; Ricker, Kathryn L.; Puhan, Gautam
2010-01-01
This study examines the differences in equating outcomes between two trend score equating designs resulting from two different scoring strategies for trend scoring when operational constructed-response (CR) items are double-scored--the single group (SG) design, where each trend CR item is double-scored, and the nonequivalent groups with anchor…
Ahn, Byeong-Cheol; Lee, Won Kee; Jeong, Shin Young; Lee, Sang-Woo; Lee, Jaetae
2013-01-01
We investigated the analytical interference of antithyroglobulin antibody (TgAb) with thyroglobulin (Tg) measurement and attempted to convert the measured Tg concentration to the true Tg concentration using a mathematical equation that includes the TgAb concentration. Methods. Tg was measured by immunoradiometric assay and TgAb by radioimmunoassay. Experimental samples were produced by mixing Tg and TgAb standard solutions or by mixing patients' sera with high Tg or high TgAb. Mathematical equations for predicting the expected Tg concentration from the measured Tg and TgAb concentrations were deduced. The Tg concentration calculated using the equations was compared with the expected Tg concentration. Results. Measured Tg concentrations of samples with high TgAb were significantly lower than their expected Tg concentrations. The magnitude of TgAb interference with the Tg assay showed a positive correlation with the TgAb concentration. Mathematical equations for estimating the expected Tg concentration from the measured Tg and TgAb concentrations were successfully deduced, and the calculated Tg concentration showed excellent correlation with the expected Tg concentration. Conclusions. A mathematical equation for estimating the true Tg concentration from the measured Tg and TgAb concentrations was deduced. The Tg concentration calculated with this equation may be more valuable than the measured Tg concentration in patients with differentiated thyroid cancer.
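The deduced equations themselves are not given in the abstract. As a purely hypothetical illustration of the idea, assume a linear suppression model in which measured Tg falls in proportion to TgAb; inverting it recovers an estimate of true Tg (both the model and the constant k are assumptions, not the paper's equation):

```python
def true_tg(measured_tg, tgab, k=0.001):
    """Invert an assumed linear interference model.

    Model (an assumption, not the paper's deduced equation):
        measured = true * (1 - k * TgAb)
    so interference grows with TgAb concentration, as the study
    observed. k is a hypothetical assay-specific constant.
    """
    suppression = 1.0 - k * tgab
    if suppression <= 0:
        raise ValueError("TgAb too high for this simple model")
    return measured_tg / suppression

# Higher TgAb -> larger upward correction of the measured Tg.
print(round(true_tg(10.0, tgab=200.0), 2))  # 12.5
```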
Estimation of true height: a study in population-specific methods among young South African adults.
Lahner, Christen Renée; Kassier, Susanna Maria; Veldman, Frederick Johannes
2017-02-01
To investigate the accuracy of arm-associated height estimation methods in the calculation of true height, compared with stretch stature, in a sample of young South African adults. A cross-sectional descriptive design was employed. Pietermaritzburg, Westville and Durban, KwaZulu-Natal, South Africa, 2015. Convenience sample (N 900) aged 18-24 years, which included an equal number of participants of each gender (150 per gender within each race group), stratified across race (Caucasian, Black African and Indian). Continuous variables investigated included: (i) stretch stature; (ii) total armspan; (iii) half-armspan; (iv) half-armspan ×2; (v) demi-span; (vi) demi-span gender-specific equation; (vii) WHO equation; and (viii) WHO-adjusted equations; as well as categorization according to gender and race. Statistical analysis was conducted using IBM SPSS Statistics Version 21.0. Significant correlations were identified between gender and height estimation measurements, with males being anatomically larger than females (P<0·001). Significant differences were documented when study participants were stratified according to race and gender (P<0·001). Anatomical similarities were noted between Indians and Black Africans, whereas Caucasians were anatomically different from the other race groups. Arm-associated height estimation methods were able to estimate true height; however, each method was specific to a given gender and race group. Height can be calculated using arm-associated measurements. Although universal equations for estimating true height exist, the use of race-, gender- and population-specific equations should be considered to enhance accuracy.
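For concreteness, the widely cited UK-derived gender-specific demi-span equations (commonly attributed to Bassey) can be sketched as follows. The study's conclusion is precisely that such generic coefficients should give way to race-, gender- and population-specific ones, so treat these values as illustrative rather than recommended:

```python
def height_from_demispan_cm(demispan_cm, sex):
    """Commonly cited demi-span equations (UK-derived, illustrative).

    The study argues that population-specific coefficients are
    needed for South African adults; these generic values are
    shown only to make the method concrete.
    """
    if sex == "male":
        return 1.40 * demispan_cm + 57.8
    elif sex == "female":
        return 1.35 * demispan_cm + 60.1
    raise ValueError("sex must be 'male' or 'female'")

def height_from_half_armspan_cm(half_armspan_cm):
    # Simplest arm-associated estimate from the list above:
    # half-armspan x 2.
    return 2.0 * half_armspan_cm

print(round(height_from_demispan_cm(80.0, "male"), 1))  # 169.8
print(height_from_half_armspan_cm(85.0))                # 170.0
```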
The Forced Hard Spring Equation
ERIC Educational Resources Information Center
Fay, Temple H.
2006-01-01
Through numerical investigations, various examples of the Duffing type forced spring equation with epsilon positive, are studied. Since [epsilon] is positive, all solutions to the associated homogeneous equation are periodic and the same is true with the forcing applied. The damped equation exhibits steady state trajectories with the interesting…
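The equation under study is the damped, periodically forced hard spring x'' + cx' + x + εx³ = F cos(ωt) with ε > 0. A dependency-free fixed-step RK4 sketch of the kind of numerical investigation described (all parameter values are arbitrary illustrations, not taken from the article):

```python
# Numerical integration of the forced hard spring (Duffing-type)
# equation x'' + c*x' + x + eps*x**3 = F*cos(w*t), with eps > 0.
import math

def duffing_rhs(t, x, v, c=0.1, eps=1.0, F=0.5, w=1.2):
    """Right-hand side of the first-order system (x', v')."""
    return v, -c * v - x - eps * x**3 + F * math.cos(w * t)

def rk4(t_end=50.0, dt=0.01, x=0.0, v=0.0):
    """Classical fixed-step fourth-order Runge-Kutta integrator."""
    t = 0.0
    while t < t_end:
        k1x, k1v = duffing_rhs(t, x, v)
        k2x, k2v = duffing_rhs(t + dt/2, x + dt/2*k1x, v + dt/2*k1v)
        k3x, k3v = duffing_rhs(t + dt/2, x + dt/2*k2x, v + dt/2*k2v)
        k4x, k4v = duffing_rhs(t + dt, x + dt*k3x, v + dt*k3v)
        x += dt/6 * (k1x + 2*k2x + 2*k3x + k4x)
        v += dt/6 * (k1v + 2*k2v + 2*k3v + k4v)
        t += dt
    return x, v

# With damping, trajectories settle onto a bounded steady state.
x_end, v_end = rk4()
print(abs(x_end) < 2.0, abs(v_end) < 2.0)  # True True
```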
NASA Astrophysics Data System (ADS)
Christopher, J.; Choudhary, B. K.; Isaac Samuel, E.; Mathew, M. D.; Jayakumar, T.
2012-01-01
The tensile flow behaviour of P9 steel with different silicon contents has been examined in the framework of the Hollomon, Ludwik, Swift, Ludwigson and Voce relationships over a wide temperature range (300-873 K) at a strain rate of 1.3 × 10⁻³ s⁻¹. The Ludwigson equation described the true stress (σ)-true plastic strain (ε) data most accurately in the range 300-723 K. At high temperatures (773-873 K), the Ludwigson equation reduces to the Hollomon equation. The variations of the instantaneous work hardening rate (θ = dσ/dε) and of θσ with stress indicated two-stage work hardening behaviour. True stress-true plastic strain data, flow parameters, and θ vs. σ and θσ vs. σ plots exhibited three distinct temperature regimes and displayed anomalous behaviour due to dynamic strain ageing at intermediate temperatures. The rapid decrease in flow stress and flow parameters, and the rapid shift of θ-σ and θσ-σ towards lower stresses with increasing temperature, indicated the dominance of dynamic recovery at high temperatures.
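The two relationships most discussed above have simple closed forms: Hollomon, σ = Kεⁿ, and Ludwigson, σ = K₁ε^(n₁) + exp(K₂ + n₂ε), which collapses to the Hollomon form when the exponential term becomes negligible, consistent with the high-temperature behaviour reported. A sketch with illustrative (not fitted) parameters:

```python
import math

def hollomon(eps, K, n):
    """Hollomon relationship: sigma = K * eps**n."""
    return K * eps**n

def ludwigson(eps, K1, n1, K2, n2):
    """Ludwigson relationship: sigma = K1*eps**n1 + exp(K2 + n2*eps).

    When exp(K2 + n2*eps) is negligible (strongly negative K2),
    this reduces to the Hollomon form, mirroring the reduction
    reported at high temperatures. Parameter values below are
    illustrative, not the fitted P9 values.
    """
    return K1 * eps**n1 + math.exp(K2 + n2 * eps)

eps = 0.05  # true plastic strain
print(round(hollomon(eps, K=800.0, n=0.2), 1))            # 439.4
# With a strongly negative K2 the two forms coincide:
print(round(ludwigson(eps, 800.0, 0.2, -50.0, 1.0), 1))   # 439.4
```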
ERIC Educational Resources Information Center
Ozdemir, Burhanettin
2017-01-01
The purpose of this study is to equate Trends in International Mathematics and Science Study (TIMSS) mathematics subtest scores obtained from TIMSS 2011 to scores obtained from TIMSS 2007 form with different nonlinear observed score equating methods under Non-Equivalent Anchor Test (NEAT) design where common items are used to link two or more test…
1981-02-01
monotonic increasing function of true ability or performance score. A cumulative probability function is then very convenient for describing one's...possible outcomes such as test scores, grade-point averages or other common outcome variables. Utility is usually a monotonic increasing function of true ...r(θ) is negative for θ < μ and positive for θ > μ, U(θ) is risk-prone for low θ values and risk-averse for high θ values. This property is true for
True amplitude wave equation migration arising from true amplitude one-way wave equations
NASA Astrophysics Data System (ADS)
Zhang, Yu; Zhang, Guanquan; Bleistein, Norman
2003-10-01
One-way wave operators are powerful tools for use in forward modelling and inversion. Their implementation, however, involves introduction of the square root of an operator as a pseudo-differential operator. Furthermore, a simple factoring of the wave operator produces one-way wave equations that yield the same travel times as the full wave equation, but do not yield accurate amplitudes except for homogeneous media and for almost all points in heterogeneous media. Here, we present augmented one-way wave equations. We show that these equations yield solutions for which the leading order asymptotic amplitude as well as the travel time satisfy the same differential equations as the corresponding functions for the full wave equation. Exact representations of the square-root operator appearing in these differential equations are elusive, except in cases in which the heterogeneity of the medium is independent of the transverse spatial variables. Here, we address the fully heterogeneous case. Singling out depth as the preferred direction of propagation, we introduce a representation of the square-root operator as an integral in which a rational function of the transverse Laplacian appears in the integrand. This allows us to carry out explicit asymptotic analysis of the resulting one-way wave equations. To do this, we introduce an auxiliary function that satisfies a lower dimensional wave equation in transverse spatial variables only. We prove that ray theory for these one-way wave equations leads to one-way eikonal equations and the correct leading order transport equation for the full wave equation. We then introduce appropriate boundary conditions at z = 0 to generate waves at depth whose quotient leads to a reflector map and an estimate of the ray theoretical reflection coefficient on the reflector. Thus, these true amplitude one-way wave equations lead to a 'true amplitude wave equation migration' (WEM) method. 
In fact, we prove that applying the WEM imaging condition to these newly defined wavefields in heterogeneous media leads to the Kirchhoff inversion formula for common-shot data when the one-way wavefields are replaced by their ray theoretic approximations. This extension enhances the original WEM method. The objective of that technique was a reflector map, only. The underlying theory did not address amplitude issues. Computer output obtained using numerically generated data confirms the accuracy of this inversion method. However, there are practical limitations. The observed data must be a solution of the wave equation. Therefore, the data over the entire survey area must be collected from a single common-shot experiment. Multi-experiment data, such as common-offset data, cannot be used with this method as currently formulated. Research on extending the method is ongoing at this time.
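The factoring discussed above can be sketched in standard notation (this is the generic factorization, not the authors' augmented equations):

```latex
% Helmholtz form of the full wave equation, depth z singled out:
\hat{u}_{zz} + \Delta_{\perp}\hat{u} + \frac{\omega^{2}}{c^{2}(\mathbf{x})}\,\hat{u} = 0 .
% Formal factorization into up- and down-going one-way equations:
\left(\partial_{z} - i\Lambda\right)\left(\partial_{z} + i\Lambda\right)\hat{u} \approx 0 ,
\qquad
\Lambda = \sqrt{\frac{\omega^{2}}{c^{2}(\mathbf{x})} + \Delta_{\perp}} ,
% so each one-way field satisfies
\partial_{z}\hat{u}^{\pm} = \pm\, i\Lambda\,\hat{u}^{\pm} .
```

The factorization is exact only when c is independent of the transverse variables; otherwise Λ is a pseudo-differential square-root operator and the simple one-way equations reproduce travel times but not amplitudes, which is exactly what the augmented (true amplitude) one-way equations of the paper correct.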
Kernel Equating Under the Non-Equivalent Groups With Covariates Design.
Wiberg, Marie; Bränberg, Kenny
2015-07-01
When equating two tests, the traditional approach is to use common test takers and/or common items. Here, the idea is to use variables correlated with the test scores (e.g., school grades and other test scores) as a substitute for common items in a non-equivalent groups with covariates (NEC) design. This is performed in the framework of kernel equating and with an extension of the method developed for post-stratification equating in the non-equivalent groups with anchor test design. Real data from a college admissions test were used to illustrate the use of the design. The equated scores from the NEC design were compared with equated scores from the equivalent group (EG) design, that is, equating with no covariates as well as with equated scores when a constructed anchor test was used. The results indicate that the NEC design can produce lower standard errors compared with an EG design. When covariates were used together with an anchor test, the smallest standard errors were obtained over a large range of test scores. The results obtained, that an EG design equating can be improved by adjusting for differences in test score distributions caused by differences in the distribution of covariates, are useful in practice because not all standardized tests have anchor tests.
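The kernel-equating machinery shared by these designs continuizes each discrete score distribution with a Gaussian kernel and then maps scores through e(x) = F_Y⁻¹(F_X(x)). A minimal sketch for the simpler equivalent-groups case (the NEC design adds post-stratification on covariates, omitted here; all names and the toy data are mine):

```python
# Minimal kernel-equating sketch for an equivalent-groups design:
# continuize each discrete score distribution with a Gaussian
# kernel, then map scores through e(x) = F_Y^{-1}(F_X(x)).
import math

def kernel_cdf(x, scores, probs, h=0.6):
    """Gaussian-kernel continuized CDF of a discrete score distribution."""
    def phi(z):
        return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    return sum(p * phi((x - s) / h) for s, p in zip(scores, probs))

def equate(x, sx, px, sy, py, lo=-10.0, hi=60.0):
    """Solve F_Y(e) = F_X(x) for e by bisection (F_Y is increasing)."""
    target = kernel_cdf(x, sx, px)
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if kernel_cdf(mid, sy, py) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Toy forms: Y is uniformly one point harder than X.
sx = [0, 1, 2, 3, 4]; px = [0.1, 0.2, 0.4, 0.2, 0.1]
sy = [1, 2, 3, 4, 5]; py = [0.1, 0.2, 0.4, 0.2, 0.1]
print(round(equate(2.0, sx, px, sy, py), 2))  # ~3.0
```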
More Issues in Observed-Score Equating
ERIC Educational Resources Information Center
van der Linden, Wim J.
2013-01-01
This article is a response to the commentaries on the position paper on observed-score equating by van der Linden (this issue). The response focuses on the more general issues in these commentaries, such as the nature of the observed scores that are equated, the importance of test-theory assumptions in equating, the necessity to use multiple…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kauweloa, Kevin I., E-mail: Kauweloa@livemail.uthscsa.edu; Gutierrez, Alonso N.; Bergamo, Angelo
2014-07-15
Purpose: There is a growing interest in the radiation oncology community to use the biological effective dose (BED) rather than the physical dose (PD) in treatment plan evaluation and optimization due to its stronger correlation with radiobiological effects. Radiotherapy patients may receive treatments involving only a single phase or multiple phases (e.g., primary and boost). Since most treatment planning systems cannot calculate the analytical BED distribution in multiphase treatments, an approximate multiphase BED expression, which is based on the total physical dose distribution, has been used. The purpose of this paper is to reveal the mathematical properties of the approximate BED formulation relative to the true BED. Methods: The mathematical properties of the approximate multiphase BED equation are analyzed and evaluated. In order to better understand the accuracy of the approximate multiphase BED equation, the true multiphase BED equation was derived and the mathematical differences between the true and approximate multiphase BED equations were determined. The magnitude of its inaccuracies under common clinical circumstances was also studied. All calculations were performed on a voxel-by-voxel basis using the three-dimensional dose matrices. Results: Results showed that the approximate multiphase BED equation is accurate only when the dose-per-fractions (DPFs) in the first and second phases are equal, which occurs when the dose distribution does not change significantly between the phases. In the case of heterogeneous dose distributions, which vary significantly between the phases, there are fewer occurrences of equal DPFs and hence the inaccuracy of the approximate multiphase BED is greater. These characteristics are usually seen in the dose distributions delivered to organs at risk rather than to targets.
Conclusions: The findings of this study indicate that the true multiphase BED equation should be implemented in treatment planning systems due to the inconsistent accuracy of the approximate multiphase BED equation in most clinical situations.
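The standard linear-quadratic BED for n fractions of dose d is BED = nd(1 + d/(α/β)). The sketch below contrasts the true multiphase BED (a sum of per-phase BEDs) with an approximate form applied to the total dose; my "approximate" version uses the fraction-weighted mean dose per fraction, which is one common formulation and may differ from the paper's exact expression. It reproduces the equal-DPF agreement described above:

```python
def bed(total_dose, dose_per_fraction, ab=3.0):
    """Linear-quadratic BED = D * (1 + d / (alpha/beta)); ab in Gy."""
    return total_dose * (1.0 + dose_per_fraction / ab)

def true_multiphase_bed(phases, ab=3.0):
    """Sum of per-phase BEDs; phases = [(n_fractions, dpf_gy), ...]."""
    return sum(bed(n * d, d, ab) for n, d in phases)

def approx_multiphase_bed(phases, ab=3.0):
    """Single-phase formula applied to the total physical dose with
    the fraction-weighted mean dose per fraction (one common
    approximation; exact only when all phases share the same DPF)."""
    total = sum(n * d for n, d in phases)
    n_total = sum(n for n, _ in phases)
    return bed(total, total / n_total, ab)

equal = [(25, 2.0), (5, 2.0)]    # same DPF in both phases
unequal = [(25, 2.0), (5, 3.0)]  # boost phase with larger DPF
print(abs(true_multiphase_bed(equal) - approx_multiphase_bed(equal)) < 1e-9)  # True
print(round(true_multiphase_bed(unequal) - approx_multiphase_bed(unequal), 2))  # 1.39
```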
The Kernel Levine Equipercentile Observed-Score Equating Function. Research Report. ETS RR-13-38
ERIC Educational Resources Information Center
von Davier, Alina A.; Chen, Haiwen
2013-01-01
In the framework of the observed-score equating methods for the nonequivalent groups with anchor test design, there are 3 fundamentally different ways of using the information provided by the anchor scores to equate the scores of a new form to those of an old form. One method uses the anchor scores as a conditioning variable, such as the Tucker…
Stochastic Processes as True-Score Models for Highly Speeded Mental Tests.
ERIC Educational Resources Information Center
Moore, William E.
The previous theoretical development of the Poisson process as a strong model for the true-score theory of mental tests is discussed, and additional theoretical properties of the model from the standpoint of individual examinees are developed. The paper introduces the Erlang process as a family of test theory models and shows in the context of…
Characterization of Tensile Deformation in AZ91D Mg Alloy Castings
NASA Astrophysics Data System (ADS)
Ünal, Ogün; Tiryakioğlu, Murat
AZ91 cast Mg alloy specimens in T4 and T6 tempers have been tested in tension. The true stress-true plastic strain relationship has been characterized by evaluating fits to four constitutive equations. Moreover, work hardening behavior in both tempers has been investigated, and how well the four constitutive equations can model this behavior has been tested. The effects of temper and structural quality on tensile properties and work hardening are discussed in the paper.
[Considerations when using creatinine as a measure of kidney function].
Drion, I Iefke; Fokkert, M J Marion; Bilo, H J G Henk
2013-01-01
Reported serum creatinine concentrations can sometimes vary considerably, even when renal function varies far less or not at all. This variation is partly due to true changes in the actual serum concentration and partly due to interferences in the measurement technique that do not reflect a true change in concentration. Increased or decreased endogenous creatinine production, ingested creatinine sources such as meat or certain creatine formulations, and interference, whether by chromogenic substances in Jaffe measurement techniques or by promoters and inhibitors of enzymatic reaction methods, all play a role. Reliable serum creatinine measurements are needed for renal function estimating equations. In screening settings and daily practice, chronic kidney disease staging is based on these estimated glomerular filtration rate values. Given the possible influences on reported serum creatinine concentrations, health care workers should remain critical when interpreting the outcomes of renal function estimating equations and should not regard every equation-based result as a true reflection of renal function.
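As an illustration of how a creatinine-based estimating equation propagates measurement artifacts into the estimated GFR, here is the 2009 CKD-EPI creatinine equation as commonly published (a sketch for illustration only; verify the coefficients against the original publication before any clinical use):

```python
def egfr_ckd_epi_2009(scr_mg_dl, age, female, black=False):
    """2009 CKD-EPI creatinine equation (mL/min/1.73 m^2).

    Coefficients as commonly published; shown only to illustrate
    that any creatinine measurement artifact propagates directly
    into the estimate, which is the article's central caution.
    """
    kappa = 0.7 if female else 0.9
    alpha = -0.329 if female else -0.411
    ratio = scr_mg_dl / kappa
    egfr = (141.0
            * min(ratio, 1.0) ** alpha
            * max(ratio, 1.0) ** -1.209
            * 0.993 ** age)
    if female:
        egfr *= 1.018
    if black:
        egfr *= 1.159
    return egfr

# A 0.2 mg/dL assay artifact visibly shifts the estimated GFR.
print(round(egfr_ckd_epi_2009(1.0, 60, female=False), 1))
print(round(egfr_ckd_epi_2009(1.2, 60, female=False), 1))
```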
Analysis of Covariance: Is It the Appropriate Model to Study Change?
ERIC Educational Resources Information Center
Marston, Paul T.; Borich, Gary D.
The four main approaches to measuring treatment effects in schools (raw gain, residual gain, covariance, and true scores) were compared. A simulation study showed that true score analysis produced a large number of Type I errors. When corrected for this error, the method showed the least power of the four. This outcome was clearly the result of the…
A Bayesian Nonparametric Approach to Test Equating
ERIC Educational Resources Information Center
Karabatsos, George; Walker, Stephen G.
2009-01-01
A Bayesian nonparametric model is introduced for score equating. It is applicable to all major equating designs, and has advantages over previous equating models. Unlike the previous models, the Bayesian model accounts for positive dependence between distributions of scores from two tests. The Bayesian model and the previous equating models are…
Caselli, Michele; Zuliani, Giovanni; Cassol, Francesca; Fusetti, Nadia; Zeni, Elena; Lo Cascio, Natalina; Soavi, Cecilia; Gullini, Sergio
2014-12-07
To investigate the clinical response of gastro-esophageal reflux disease (GERD) symptoms to exclusion diets based on food intolerance tests. A double-blind, randomized, controlled pilot trial was performed in 38 GERD patients who were partial or complete non-responders to proton pump inhibitor (PPI) treatment. Fasting blood samples were obtained from each patient; a leukocytotoxic test was performed by incubating the blood with a panel of 60 food items. The reaction of leukocytes (rounding, vacuolization, lack of movement, flattening, fragmentation or disintegration of the cell wall) was then evaluated by optical microscopy and rated as follows: level 0 = negative, level 1 = slightly positive, level 2 = moderately positive, and level 3 = highly positive. A "true" diet excluding food items inducing moderate-severe reactions, and a "control" diet including them, was developed for each patient. Twenty patients then received the "true" diet and 18 the "control" diet; after one month (T1), symptom severity was scored by the GERD impact scale (GIS). Patients in the "control" group were then switched to the "true" diet, and symptom severity was re-assessed after three months (T2). At baseline (T0) the mean GIS global score was 6.68 (range: 5-12), with no difference between the "true" and "control" groups (6.6 ± 1.19 vs 6.7 ± 1.7). All patients reacted moderately/severely to at least 1 food (range: 5-19), with a significantly greater number of food substances inducing reaction in controls compared with the "true" diet group (11.6 vs 7.0, P < 0.001). Food items most frequently involved were milk, lettuce, brewer's yeast, pork, coffee, rice, sole, asparagus, and tuna, followed by eggs, tomato, grain, shrimps, and chemical yeast. At T1 both groups displayed a reduction in GIS score ("true" group 3.3 ± 1.7, -50%, P = 0.001; control group 4.9 ± 2.8, -26.9%, P = 0.02), although the GIS score was significantly lower in the "true" than in the "control" group (P = 0.04). 
At T2, after the diet switch, the "control" group showed a further reduction in GIS score (2.7 ± 1.9, -44.9%, P = 0.01), while the "true" group did not (2.6 ± 1.8, -21.3%, P = 0.19), so that the GIS scores did not differ between the two groups. Our results suggest that food intolerance may play a role in the development of GERD symptoms, and leukocytotoxic test-based exclusion diets may be a possible therapeutic approach when PPIs are not effective or indicated.
Conditional Standard Errors of Measurement for Scale Scores.
ERIC Educational Resources Information Center
Kolen, Michael J.; And Others
1992-01-01
A procedure is described for estimating the reliability and conditional standard errors of measurement of scale scores incorporating the discrete transformation of raw scores to scale scores. The method is illustrated using a strong true score model, and practical applications are described. (SLD)
Intellectual factors in false memories of patients with schizophrenia.
Zhu, Bi; Chen, Chuansheng; Loftus, Elizabeth F; Dong, Qi; Lin, Chongde; Li, Jun
2018-07-01
The current study explored the intellectual factors in false memories of 139 patients with schizophrenia, using a recognition task and an IQ test. The full-scale IQ score of the participants ranged from 57 to 144 (M = 100, SD = 14). The full IQ score had a negative correlation with false recognition in patients with schizophrenia, and positive correlations with high-confidence true recognition and discrimination rates. Further analyses with the subtests' scores revealed that false recognition was negatively correlated with scores of performance IQ (and one of its subtests: picture arrangement), whereas true recognition was positively correlated with scores of verbal IQ (and two of its subtests: information and digit span). High-IQ patients had less false recognition (overall or high-confidence false recognition), more high-confidence true recognition, and higher discrimination abilities than those with low IQ. These findings contribute to a better understanding of the cognitive mechanism in false memory of patients with schizophrenia, and are of practical relevance to the evaluation of memory reliability in patients with different intellectual levels. Copyright © 2018 Elsevier B.V. All rights reserved.
ERIC Educational Resources Information Center
Chen, Haiwen; Holland, Paul
2010-01-01
In this paper, we develop a new curvilinear equating for the nonequivalent groups with anchor test (NEAT) design under the assumption of the classical test theory model, that we name curvilinear Levine observed score equating. In fact, by applying both the kernel equating framework and the mean preserving linear transformation of…
Banik, Suman Kumar; Bag, Bidhan Chandra; Ray, Deb Shankar
2002-05-01
Traditionally, quantum Brownian motion is described by Fokker-Planck or diffusion equations in terms of quasiprobability distribution functions, e.g., Wigner functions. These often become singular or negative in the full quantum regime. In this paper a simple approach to non-Markovian theory of quantum Brownian motion using true probability distribution functions is presented. Based on an initial coherent state representation of the bath oscillators and an equilibrium canonical distribution of the quantum mechanical mean values of their coordinates and momenta, we derive a generalized quantum Langevin equation in c numbers and show that the latter is amenable to a theoretical analysis in terms of the classical theory of non-Markovian dynamics. The corresponding Fokker-Planck, diffusion, and Smoluchowski equations are the exact quantum analogs of their classical counterparts. The present work is independent of path integral techniques. The theory as developed here is a natural extension of its classical version and is valid for arbitrary temperature and friction (the Smoluchowski equation being considered in the overdamped limit).
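The generalized Langevin equation referred to has the standard c-number form sketched below (notation assumed; the quantum version derived in the paper carries bath-dependent noise statistics rather than the purely classical relation shown):

```latex
% Generalized Langevin equation for a particle of mass m in potential V(x):
m\,\ddot{x}(t) + m\int_{0}^{t}\gamma(t-t')\,\dot{x}(t')\,dt' + V'(x) = f(t),
% with stationary noise related to the memory kernel by a
% fluctuation--dissipation relation; in the classical limit,
\langle f(t)\,f(t')\rangle = m\,k_{B}T\,\gamma(t-t') .
```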
The Missing Data Assumptions of the NEAT Design and Their Implications for Test Equating
ERIC Educational Resources Information Center
Sinharay, Sandip; Holland, Paul W.
2010-01-01
The Non-Equivalent groups with Anchor Test (NEAT) design involves "missing data" that are "missing by design." Three nonlinear observed score equating methods used with a NEAT design are the "frequency estimation equipercentile equating" (FEEE), the "chain equipercentile equating" (CEE), and the "item-response-theory observed-score-equating" (IRT…
Do Examinees Understand Score Reports for Alternate Methods of Scoring Computer Based Tests?
ERIC Educational Resources Information Center
Whittaker, Tiffany A.; Williams, Natasha J.; Dodd, Barbara G.
2011-01-01
This study assessed the interpretability of scaled scores based on either number correct (NC) scoring for a paper-and-pencil test or one of two methods of scoring computer-based tests: an item pattern (IP) scoring method and a method based on equated NC scoring. The equated NC scoring method for computer-based tests was proposed as an alternative…
Lowe, C; Aiken, A; Day, A G; Depew, W; Vanner, S J
2017-07-01
Irritable bowel syndrome (IBS) patients increasingly seek out acupuncture therapy to alleviate symptoms, but it is unclear whether the benefit is due to a treatment-specific effect or a placebo response. This study examined whether true acupuncture is superior to sham acupuncture in relieving IBS symptoms and whether benefits were linked to purported acupuncture mechanisms. A double-blind, sham-controlled acupuncture study was conducted with Rome I IBS patients receiving twice-weekly true acupuncture for 4 weeks (n=43) or sham acupuncture (n=36). Patients returned at 12 weeks for a follow-up review. The primary endpoint was success, determined by whether patients met or exceeded their established goal for percentage symptom improvement. Questionnaires were completed for symptom severity scores, SF-36 and IBS-36 QOL tools, McGill pain score, and the Pittsburgh Sleep Quality Index. A subset of patients underwent barostat measurements of rectal sensation at baseline and 4 weeks. A total of 53% in the true acupuncture group met their criteria for a successful treatment intervention, but this did not differ significantly from the sham group (42%). IBS symptom scores similarly improved in both groups. Scores also improved in the IBS-36, SF-36, and the Pittsburgh Sleep Quality Index, but did not differ between groups. Rectal sensory thresholds were increased in both groups following treatment and pain scores decreased; however, these changes were similar between groups. The lack of differences in symptom outcomes between sham and true treatment acupuncture suggests that acupuncture does not have a specific treatment effect in IBS. © 2017 John Wiley & Sons Ltd.
Liu, Chengyu; Zhao, Lina; Tang, Hong; Li, Qiao; Wei, Shoushui; Li, Jianqing
2016-08-01
False alarm (FA) rates as high as 86% have been reported in intensive care unit monitors. High FA rates decrease quality of care by slowing staff response times while increasing patient burdens and stresses. In this study, we proposed a rule-based and multi-channel information fusion method for accurately classifying the true or false alarms for five life-threatening arrhythmias: asystole (ASY), extreme bradycardia (EBR), extreme tachycardia (ETC), ventricular tachycardia (VTA) and ventricular flutter/fibrillation (VFB). The proposed method consisted of five steps: (1) signal pre-processing, (2) feature detection and validation, (3) true/false alarm determination for each channel, (4) 'real-time' true/false alarm determination and (5) 'retrospective' true/false alarm determination (if needed). Up to four signal channels, that is, two electrocardiogram signals, one arterial blood pressure and/or one photoplethysmogram signal were included in the analysis. Two events were set for the method validation: event 1 for 'real-time' and event 2 for 'retrospective' alarm classification. The results showed that 100% true positive ratio (i.e. sensitivity) on the training set were obtained for ASY, EBR, ETC and VFB types, and 94% for VTA type, accompanied by the corresponding true negative ratio (i.e. specificity) results of 93%, 81%, 78%, 85% and 50% respectively, resulting in the score values of 96.50, 90.70, 88.89, 92.31 and 64.90, as well as with a final score of 80.57 for event 1 and 79.12 for event 2. For the test set, the proposed method obtained the score of 88.73 for ASY, 77.78 for EBR, 89.92 for ETC, 67.74 for VFB and 61.04 for VTA types, with the final score of 71.68 for event 1 and 75.91 for event 2.
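The paper's decision rules are arrhythmia-specific and not detailed in the abstract. The sketch below shows only the general shape of step (3)-(4): per-channel verdicts fused by a conservative rule in which an alarm is suppressed only when every usable channel refutes it (the channel names and the fusion policy are assumptions, not the published rules):

```python
# Generic sketch of rule-based multi-channel alarm fusion.
# Each channel (ECG1, ECG2, ABP, PPG) yields a per-channel verdict:
# True (alarm supported), False (refuted), or None (signal unusable).

def fuse_alarm(channel_verdicts):
    """Conservative fusion: keep the alarm unless every usable
    channel refutes it. Suppressing a true alarm is far more
    costly than passing a false one, hence the asymmetry."""
    usable = [v for v in channel_verdicts.values() if v is not None]
    if not usable:
        return True   # never suppress on missing evidence
    if any(usable):   # at least one channel confirms the arrhythmia
        return True
    return False      # all usable channels refute it

print(fuse_alarm({"ECG1": False, "ECG2": None, "ABP": False, "PPG": False}))  # False
print(fuse_alarm({"ECG1": False, "ECG2": True, "ABP": None, "PPG": None}))    # True
print(fuse_alarm({"ECG1": None, "ECG2": None, "ABP": None, "PPG": None}))     # True
```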
ERIC Educational Resources Information Center
Grant, Mary C.; Zhang, Lilly; Damiano, Michele
2009-01-01
This study investigated kernel equating methods by comparing these methods to operational equatings for two tests in the SAT Subject Tests[TM] program. GENASYS (ETS, 2007) was used for all equating methods and scaled score kernel equating results were compared to Tucker, Levine observed score, chained linear, and chained equipercentile equating…
ERIC Educational Resources Information Center
Moses, Tim; Liu, Jinghua
2011-01-01
In equating research and practice, equating functions that are smooth are typically assumed to be more accurate than equating functions with irregularities. This assumption presumes that population test score distributions are relatively smooth. In this study, two examples were used to reconsider common beliefs about smoothing and equating. The…
Measurement of stress-strain behaviour of human hair fibres using optical techniques.
Lee, J; Kwon, H J
2013-06-01
Many studies have presented stress-strain relationship of human hair, but most of them have been based on an engineering stress-strain curve, which is not a true representation of stress-strain behaviour. In this study, a more accurate 'true' stress-strain curve of human hair was determined by applying optical techniques to the images of the hair deformed under tension. This was achieved by applying digital image cross-correlation (DIC) to 10× magnified images of hair fibres taken under increasing tension to estimate the strain increments. True strain was calculated by summation of the strain increments according to the theoretical definition of 'true' strain. The variation in diameter with the increase in longitudinal elongation was also measured from the 40× magnified images to estimate the Poisson's ratio and true stress. By combining the true strain and the true stress, a true stress-strain curve could be determined, which demonstrated much higher stress values than the conventional engineering stress-strain curve at the same degree of deformation. Four regions were identified in the true stress-strain relationship and empirical constitutive equations were proposed for each region. Theoretical analysis on the necking condition using the constitutive equations provided the insight into the failure mechanism of human hair. This analysis indicated that local thinning caused by necking does not occur in the hair fibres, but, rather, relatively uniform deformation takes place until final failure (fracture) eventually occurs. © 2012 Society of Cosmetic Scientists and the Société Française de Cosmétologie.
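The two quantities described, true strain as a sum of logarithmic increments and true stress over the current cross-section, can be sketched as follows (input values are hypothetical; the study obtained them optically via DIC):

```python
import math

def true_strain(lengths):
    # summing logarithmic strain increments telescopes to ln(L_final / L_0)
    return sum(math.log(b / a) for a, b in zip(lengths, lengths[1:]))

def true_stress(force, diameter):
    # load divided by the *current* cross-sectional area, not the initial one
    area = math.pi * diameter ** 2 / 4
    return force / area
```

Because true stress uses the shrinking current area, it exceeds the engineering stress at the same elongation, as the abstract notes.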
Erratum: Nonlinear Dirac equation solitary waves in external fields [Phys. Rev. E 86, 046602 (2012)]
Mertens, Franz G.; Quintero, Niurka R.; Cooper, Fred; ...
2016-05-10
In Sec. IV of our original paper, we assumed a particular conservation law, Eq. (4.6), which was true in the absence of external potentials, to derive some particular potentials for which we obtained solutions to the nonlinear Dirac equation (NLDE). Because the conservation law of Eq. (4.6) for the component T^{11} of the energy-momentum tensor is not true in the presence of these external potentials, the solutions we found do not satisfy the NLDE in the presence of these potentials. Thus all the equations from Eq. (4.6) through Eq. (4.44) are not correct, since the exact solutions that followed in that section presumed Eq. (4.6) was true. Also Eqs. (A3)–(A5) are a restatement of Eq. (4.6) and are likewise not correct. These latter equations are not used in Sec. V and beyond. The rest of our original paper (starting with Sec. V) was not concerned with exact solutions; rather, it was concerned with how the exact solitary-wave solutions to the NLDE in the absence of an external potential responded to being placed in various external potentials. This Erratum corrects this mistake.
A Two-Step Bayesian Approach for Propensity Score Analysis: Simulations and Case Study
ERIC Educational Resources Information Center
Kaplan, David; Chen, Jianshen
2012-01-01
A two-step Bayesian propensity score approach is introduced that incorporates prior information in the propensity score equation and outcome equation without the problems associated with simultaneous Bayesian propensity score approaches. The corresponding variance estimators are also provided. The two-step Bayesian propensity score is provided for…
A Brief Report on How Impossible Scores Affect Smoothing and Equating
ERIC Educational Resources Information Center
Puhan, Gautam; von Davier, Alina A.; Gupta, Shaloo
2010-01-01
Equating under the external anchor design is frequently conducted using scaled scores on the anchor test. However, scaled scores often lead to the unique problem of creating zero frequencies in the score distribution because there may not always be a one-to-one correspondence between raw and scaled scores. For example, raw scores of 17 and 18 may…
Group and individual stability of three parenting dimensions
2011-01-01
Background: The Parental Bonding Instrument, present self-report version (PBI-PCh), includes three scales, Warmth, Protectiveness and Authoritarianism, which describe three dimensions of current parenting. The purposes of this study were to (1) evaluate the true and observed stability of these parenting dimensions in parents of older children, (2) explore the distribution of individual-level change across nine months and (3) test potential parental predictors of parenting instability. Methods: Questionnaires were distributed twice, nine months apart, to school-based samples of community parents of both genders (n = 150). These questionnaires measured parenting, parental personality and emotional symptoms. Results: Based on (1) stability correlations, (2) true stability estimates from structural equation modeling (SEM) and (3) the distribution of individual-level change, Warmth appeared rather stable, although not as stable as personality traits. Protectiveness was moderately stable, whereas Authoritarianism was the least stable parenting dimension among community parents. The differences in stability between the three dimensions were consistent in both estimated true stability and observed stability. Most of the instability in Warmth originated from a minority of parents with particular personality and childhood-care characteristics and lower current parenting warmth. For the Protectiveness dimension, instability was associated with higher Protectiveness scores. Conclusions: True instability in all three self-reported parenting dimensions can occur across nine months in a community sample of parents of older children (aged 7-15), but it may occur to varying degrees among dimensions and subpopulations. The highest stability was found for the Warmth parenting dimension, but a subgroup of "unstably cold" parents could be identified.
Stability needs to be taken into account when interpreting longitudinal research on parenting and when planning and evaluating parenting interventions in research and clinical practice. PMID:21609442
ERIC Educational Resources Information Center
Chen, Hanwei; Cui, Zhongmin; Zhu, Rongchun; Gao, Xiaohong
2010-01-01
The most critical feature of a common-item nonequivalent groups equating design is that the average score difference between the new and old groups can be accurately decomposed into a group ability difference and a form difficulty difference. Two widely used observed-score linear equating methods, the Tucker and the Levine observed-score methods,…
Local Observed-Score Kernel Equating
ERIC Educational Resources Information Center
Wiberg, Marie; van der Linden, Wim J.; von Davier, Alina A.
2014-01-01
Three local observed-score kernel equating methods that integrate methods from the local equating and kernel equating frameworks are proposed. The new methods were compared with their earlier counterparts with respect to such measures as bias--as defined by Lord's criterion of equity--and percent relative error. The local kernel item response…
ERIC Educational Resources Information Center
Puhan, Gautam
2013-01-01
The purpose of this study was to demonstrate that the choice of sample weights when defining the target population under poststratification equating can be a critical factor in determining the accuracy of the equating results under a unique equating scenario, known as "rater comparability scoring and equating." The nature of data…
Principles and Practices of Test Score Equating. Research Report. ETS RR-10-29
ERIC Educational Resources Information Center
Dorans, Neil J.; Moses, Tim P.; Eignor, Daniel R.
2010-01-01
Score equating is essential for any testing program that continually produces new editions of a test and for which the expectation is that scores from these editions have the same meaning over time. Particularly in testing programs that help make high-stakes decisions, it is extremely important that test equating be done carefully and accurately.…
Equating Scores from Adaptive to Linear Tests
ERIC Educational Resources Information Center
van der Linden, Wim J.
2006-01-01
Two local methods for observed-score equating are applied to the problem of equating an adaptive test to a linear test. In an empirical study, the methods were evaluated against a method based on the test characteristic function (TCF) of the linear test and traditional equipercentile equating applied to the ability estimates on the adaptive test…
ESEA Title I Linking Project. Final Report.
ERIC Educational Resources Information Center
Holmes, Susan E.
The Rasch model for test score equating was compared with three other equating procedures as methods for implementing the norm referenced method (RMC Model A) of evaluating ESEA Title I projects. The Rasch model and its theoretical limitations were described. The three other equating methods used were: linear observed score equating, linear true…
Item Response Modeling with Sum Scores
ERIC Educational Resources Information Center
Johnson, Timothy R.
2013-01-01
One of the distinctions between classical test theory and item response theory is that the former focuses on sum scores and their relationship to true scores, whereas the latter concerns item responses and their relationship to latent scores. Although item response theory is often viewed as the richer of the two theories, sum scores are still…
Buller, Jerome L; Tetteh, Hassan A
2012-07-01
Evaluation of medical officer performance is a critical leadership role. This study offers a comprehensive evaluation system for military physicians. The Comprehensive Assessment equation (COMPASS equation), a modified Cobb-Douglas equation, was developed to evaluate academic physicians. The COMPASS equation assesses military physicians on excellence within five comprehensive dimensions: (1) Clinical, (2) Leadership, (3) Educational, (4) Administrative, and (5) Research productivity, to yield a composite "C.L.E.A.R. score." The COMPASS equation's fidelity was tested with a cohort of military physicians within the Department of Obstetrics and Gynecology in the Capital District Region, and a C.L.E.A.R. score was calculated for each physician. The mean C.L.E.A.R. score was 53.6 ± 28.8 (range 10.1-98.5). The responsiveness of the model was tested using two hypothetical physician models, "low-performing faculty" and "super-faculty"; the calculated C.L.E.A.R. scores were 6.3 and 153.4, respectively. The C.L.E.A.R. score appears to recognize and assess the performance excellence of military physicians. Weighting the measured characteristics of the COMPASS equation can be used to promote organizational priorities. Thus, leaders of military medicine can communicate institutional priorities and inculcate them through use of the COMPASS equation to reward and recognize the activities of military medical officers that are commensurate with institutional goals.
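The abstract identifies the COMPASS equation as a modified Cobb-Douglas form. As a rough illustration only, a generic Cobb-Douglas composite over the five dimensions might look like the following; the weights and scaling are purely hypothetical and are not the published equation's coefficients:

```python
def clear_score(dimension_scores, weights):
    """Cobb-Douglas style composite: product of dimension scores raised to
    their weights (hypothetical form, not the published COMPASS equation)."""
    score = 1.0
    for d, w in zip(dimension_scores, weights):
        score *= d ** w
    return score

# five equally weighted dimensions: Clinical, Leadership, Educational,
# Administrative, Research (illustrative values)
composite = clear_score([2.0, 2.0, 2.0, 2.0, 2.0], [0.2] * 5)
```

Because the form is multiplicative, raising a dimension's weight amplifies its influence on the composite, which is how such an equation can encode institutional priorities.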
ERIC Educational Resources Information Center
Puhan, Gautam; vonDavier, Alina; Gupta, Shaloo
2008-01-01
Equating under the external anchor design is frequently conducted using scaled scores on the anchor test. However, scaled scores often lead to the unique problem of creating zero frequencies in the score distribution because there may not always be a one-to-one correspondence between raw and scaled scores. For example, raw scores of 17 and 18 may…
NASA Technical Reports Server (NTRS)
Gabrielsen, R. E.; Uenal, A.
1981-01-01
Two-dimensional Fredholm integral equations with logarithmic potential kernels are numerically solved. The explicit convergence of these solutions to their true solutions is demonstrated. The results are based on a previous work in which numerical solutions were obtained for Fredholm integral equations of the second kind with continuous kernels.
Estimating True Short-Term Consistency in Vocational Interests: A Longitudinal SEM Approach
ERIC Educational Resources Information Center
Gaudron, Jean-Philippe; Vautier, Stephane
2007-01-01
This study aimed at estimating the correlation between true scores (true consistency) of vocational interest over a short time span in a sample of 1089 adults. Participants were administered 54 items assessing vocational, family, and leisure interests twice over a 1-month period. Responses were analyzed with a multitrait (MT) model, which supposes…
Some Conceptual Issues in Observed-Score Equating
ERIC Educational Resources Information Center
van der Linden, Wim J.
2013-01-01
In spite of all of the technical progress in observed-score equating, several of the more conceptual aspects of the process still are not well understood. As a result, the equating literature struggles with rather complex criteria of equating, lack of a test-theoretic foundation, confusing terminology, and ad hoc analyses. A return to Lord's…
Optimal Bandwidth Selection in Observed-Score Kernel Equating
ERIC Educational Resources Information Center
Häggström, Jenny; Wiberg, Marie
2014-01-01
The selection of bandwidth in kernel equating is important because it has a direct impact on the equated test scores. The aim of this article is to examine the use of double smoothing when selecting bandwidths in kernel equating and to compare double smoothing with the commonly used penalty method. This comparison was made using both an equivalent…
Simmons, Rebecca K.; Coleman, Ruth L.; Price, Hermione C.; Holman, Rury R.; Khaw, Kay-Tee; Wareham, Nicholas J.; Griffin, Simon J.
2009-01-01
OBJECTIVE The purpose of this study was to examine the performance of the UK Prospective Diabetes Study (UKPDS) Risk Engine (version 3) and the Framingham risk equations (2008) in estimating cardiovascular disease (CVD) incidence in three populations: 1) individuals with known diabetes; 2) individuals with nondiabetic hyperglycemia, defined as A1C ≥6.0%; and 3) individuals with normoglycemia defined as A1C <6.0%. RESEARCH DESIGN AND METHODS This was a population-based prospective cohort (European Prospective Investigation of Cancer-Norfolk). Participants aged 40–79 years recruited from U.K. general practices attended a health examination (1993–1998) and were followed for CVD events/death until April 2007. CVD risk estimates were calculated for 10,137 individuals. RESULTS Over 10.1 years, there were 69 CVD events in the diabetes group (25.4%), 160 in the hyperglycemia group (17.7%), and 732 in the normoglycemia group (8.2%). Estimated CVD 10-year risk in the diabetes group was 33 and 37% using the UKPDS and Framingham equations, respectively. In the hyperglycemia group, estimated CVD risks were 31 and 22%, respectively, and for the normoglycemia group risks were 20 and 14%, respectively. There were no significant differences in the ability of the risk equations to discriminate between individuals at different risk of CVD events in each subgroup; both equations overestimated CVD risk. The Framingham equations performed better in the hyperglycemia and normoglycemia groups as they did not overestimate risk as much as the UKPDS Risk Engine, and they classified more participants correctly. CONCLUSIONS Both the UKPDS Risk Engine and Framingham risk equations were moderately effective at ranking individuals and are therefore suitable for resource prioritization. However, both overestimated true risk, which is important when one is using scores to communicate prognostic information to individuals. PMID:19114615
A Two-Step Bayesian Approach for Propensity Score Analysis: Simulations and Case Study.
Kaplan, David; Chen, Jianshen
2012-07-01
A two-step Bayesian propensity score approach is introduced that incorporates prior information in the propensity score equation and outcome equation without the problems associated with simultaneous Bayesian propensity score approaches. The corresponding variance estimators are also provided. The two-step Bayesian propensity score is provided for three methods of implementation: propensity score stratification, weighting, and optimal full matching. Three simulation studies and one case study are presented to elaborate the proposed two-step Bayesian propensity score approach. Results of the simulation studies reveal that greater precision in the propensity score equation yields better recovery of the frequentist-based treatment effect. A slight advantage is shown for the Bayesian approach in small samples. Results also reveal that greater precision around the wrong treatment effect can lead to seriously distorted results. However, greater precision around the correct treatment effect parameter yields quite good results, with slight improvement seen with greater precision in the propensity score equation. A comparison of coverage rates for the conventional frequentist approach and proposed Bayesian approach is also provided. The case study reveals that credible intervals are wider than frequentist confidence intervals when priors are non-informative.
The Truth about Scores Children Achieve on Tests.
ERIC Educational Resources Information Center
Brown, Jonathan R.
1989-01-01
The importance of using the standard error of measurement (SEm) in determining reliability in test scores is emphasized. The SEm is compared to the hypothetical true score for standardized tests, and procedures for calculation of the SEm are explained. (JDD)
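For reference, the classical-test-theory formula behind the SEm is SD·sqrt(1 − reliability); a small sketch with illustrative values:

```python
import math

def sem(sd, reliability):
    # classical test theory: standard error of measurement
    return sd * math.sqrt(1 - reliability)

def true_score_band(observed, sd, reliability, z=1.0):
    # band around an observed score expected to contain the true score
    margin = z * sem(sd, reliability)
    return observed - margin, observed + margin
```

With SD = 15 and reliability 0.91, the SEm is 4.5, so an observed score of 100 carries a one-SEm band of roughly 95.5 to 104.5.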
Asymptotic Standard Errors of Observed-Score Equating with Polytomous IRT Models
ERIC Educational Resources Information Center
Andersson, Björn
2016-01-01
In observed-score equipercentile equating, the goal is to make scores on two scales or tests measuring the same construct comparable by matching the percentiles of the respective score distributions. If the tests consist of different items with multiple categories for each item, a suitable model for the responses is a polytomous item response…
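The percentile-matching idea described here, mapping a score through form X's distribution function and back through the inverse of form Y's, can be sketched with raw empirical distributions (illustrative only; operational equipercentile equating presmooths the distributions and treats discreteness more carefully):

```python
def equipercentile_equate(x_scores, y_scores, x):
    """Map score x on form X to the Y score with the same percentile rank."""
    p = sum(1 for v in x_scores if v <= x) / len(x_scores)  # empirical CDF of X
    ys = sorted(y_scores)
    idx = max(int(p * len(ys)) - 1, 0)  # empirical inverse CDF of Y
    return ys[idx]

# Y is X shifted up by 5 points, so equating should recover the shift
form_x = list(range(1, 101))
form_y = [v + 5 for v in form_x]
```

Here `equipercentile_equate(form_x, form_y, 50)` returns 55, recovering the 5-point shift.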
The fundamental equation of eddy covariance and its application in flux measurements
Lianhong Gu; William J. Massman; Ray Leuning; Stephen G. Pallardy; Tilden Meyers; Paul J. Hanson; Jeffery S. Riggs; Kevin P. Hosman; Bai Yang
2012-01-01
A fundamental equation of eddy covariance (FQEC) is derived that allows the net ecosystem exchange (NEE) Ns of a specified atmospheric constituent s to be measured with the constraint of conservation of any other atmospheric constituent (e.g. N2, argon, or dry air). It is shown that if the condition [equation, see PDF] is true, the conservation of mass can be applied...
Diagnosis and constitutional and laboratory features of Korean girls referred for precocious puberty
Kim, Doosoo; Cho, Sung-Yoon; Maeng, Se-Hyun; Yi, Eun Sang; Jung, Yu Jin; Park, Sung Won; Sohn, Young Bae
2012-01-01
Purpose Precocious puberty is defined as breast development before the age of 8 years in girls. The present study aimed to reveal the diagnosis of Korean girls referred for precocious puberty and to compare the constitutional and endocrinological features among diagnosis groups. Methods The present study used a retrospective chart review of 988 Korean girls who had visited a pediatric endocrinology clinic from 2006 to 2010 for the evaluation of precocious puberty. Study groups comprised fast puberty, true precocious puberty (PP), pseudo PP, premature thelarche, and control. We determined the height standard deviation score (HSDS), weight standard deviation score (WSDS), and body mass index standard deviation score (BMISDS) of each group using the published 2007 Korean growth charts. Hormone tests were performed at our outpatient clinic. Results The PP groups comprised fast puberty (67%), premature thelarche (17%), true PP (15%), and pseudo PP (1%). Advanced bone age and levels of estradiol, basal luteinizing hormone (LH), and peak LH after gonadotropin-releasing hormone stimulation testing were significantly high in the fast puberty and true PP groups compared with the control group. HSDS, WSDS, and BMISDS were significantly higher in the true PP group than in the control group (P<0.05). Conclusion The frequent causes of PP were found to be fast puberty, true PP, and premature thelarche. Furthermore, BMISDS were significantly elevated in the true PP group. Therefore, we emphasize the need for regular follow-up of girls who are heavier or taller than others in the same age group. PMID:23300504
1998-02-01
zero, and has therefore been ignored. The inverse transform of Equation (11) (but ignoring the 5.8x term) yields Equation (12), which is the… done for TC #1, this is ignored in the results. The inverse transform of Equation (14) (but ignoring the 10x term) yields Equation (15), which is… Equation (19) gives A as a partial-fraction expansion in s with coefficients 2.568r, 2.568, and 0.36, plus a constant term 0.36r. The inverse transform of Equation (19) (but ignoring the 0.36x term) yields…
The Addition of Enhanced Capabilities to NATO GMTIF STANAG 4607 to Support RADARSAT-2 GMTI Data
2007-12-01
However, the cost is a loss in the accuracy of the position specification and its dependence on the particular ellipsoid and/or geoid models used in… platform provides these parameters. Table B-3. Reference Coordinate Systems: 0 = Unidentified; 1 = GEI (Geocentric Equatorial Inertial, also known as True Equator and True Equinox of Date, True of Date (TOD), ECI, or GCI); J2000 = Geocentric Equatorial Inertial for epoch J2000.0.
Applying new Magee equations for predicting the Oncotype Dx recurrence score.
Sughayer, Maher; Alaaraj, Rolla; Alsughayer, Ahmad
2018-04-24
Breast cancer is one of the most prevalent cancers in women. Oncotype Dx is a multi-gene assay frequently used to predict the recurrence risk for estrogen receptor-positive early breast cancer, with values < 18 considered low risk; ≥ 18 and ≤ 30, intermediate risk; and > 30, high risk. Patients at a high risk for recurrence are more likely to benefit from chemotherapy treatment. In this study, clinicopathological parameters for 37 cases of early breast cancer with available Oncotype Dx results were used to estimate the recurrence score using the three new Magee equations. Correlation studies with Oncotype Dx results were performed. Applying the same cutoff points as Oncotype Dx, patients were categorized into low-, intermediate- and high-risk groups according to their estimated recurrence scores. Pearson correlation coefficient (R) values between estimated and actual recurrence score were 0.73, 0.66, and 0.70 for Magee equations 1, 2 and 3, respectively. The concordance values between actual and estimated recurrence scores were 57.6%, 52.9%, and 57.6% for Magee equations 1, 2 and 3, respectively. Using standard pathologic measures and immunohistochemistry scores in these three linear Magee equations, most low and high recurrence risk cases can be predicted with a strong positive correlation coefficient, high concordance and negligible two-step discordance. Magee equations are user-friendly and can be used to predict the recurrence score in early breast cancer cases.
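Under the cutoffs stated above (< 18 low risk, 18-30 intermediate, > 30 high), categorization and band-level concordance between estimated and actual recurrence scores can be sketched as follows (the scores in the example are hypothetical, not the study's cases):

```python
def risk_category(recurrence_score):
    # recurrence-risk bands as used for Oncotype Dx in the study
    if recurrence_score < 18:
        return "low"
    if recurrence_score <= 30:
        return "intermediate"
    return "high"

def concordance(estimated, actual):
    # fraction of cases the estimated score places in the same risk band
    same = sum(risk_category(e) == risk_category(a)
               for e, a in zip(estimated, actual))
    return same / len(actual)
```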
Are overreferrals on developmental screening tests really a problem?
Glascoe, F P
2001-01-01
Developmental screening tests, even those meeting standards for screening test accuracy, produce numerous false-positive results for 15% to 30% of children. This is thought to produce unnecessary referrals for diagnostic testing or special services and to increase the cost of screening programs. The objectives were to explore whether children who pass screening tests differ in important ways from those who do not, and to determine whether children overreferred for testing benefit from the scrutiny of diagnostic testing and treatment planning. Subjects were a national sample of 512 parents and their children (age range of the children, 7 months to 8 years) who participated in validation studies of various screening tests. Psychological examiners adhering to standardized directions obtained informed consent and administered at least two developmental screening measures (the Brigance Screens, the Battelle Developmental Inventory Screening Test, the Denver-II, and the Parents' Evaluations of Developmental Status) and a concurrent battery of diagnostic measures, including tests of intelligence, language, and academic achievement (for children aged 2½ years and older). The performance on diagnostic measures of children who failed screening but were not found to have a disability (false positives) was compared with that of children who passed screening and did not have a disability on diagnostic testing (true negatives). Children with false-positive scores performed significantly (P<.001) lower on diagnostic measures than did children with true-negative scores. The false-positive group had scores in adaptive behavior, language, intelligence, and academic achievement that were 9 to 14 points lower than the scores of those in the true-negative group.
When viewing the likelihood of scoring below the 25th percentile on diagnostic measures, children with false-positive scores had a relative risk of 2.6 in adaptive behavior (95% confidence interval [CI], 1.67-4.21), 3.1 in language skills (95% CI, 1.90-5.20), 6.7 on intelligence tests (95% CI, 3.28-13.50), and 4.9 on academic measures (95% CI, 2.61-9.28). Overall, 151 (70%) of the children with false-positive results scored below the 25th percentile on 1 or more diagnostic measures (the point at which most children have difficulty benefiting from typical classroom instruction) in contrast with 64 (29%) of the children with true-negative scores (odds ratio, 5.6; 95% CI, 3.73-8.49). Children with false-positive scores were also more likely to be nonwhite and to have parents who had not graduated from high school. Performance differences between children with true-negative scores and children with false-positive scores continued to be significant (P<.001) even after adjusting for sociodemographic differences between groups. Children overreferred for diagnostic testing by developmental screens perform substantially lower than children with true-negative scores on measures of intelligence, language, and academic achievement-the 3 best predictors of school success. These children also carry more psychosocial risk factors, such as limited parental education and minority status. Thus, children with false-positive screening results are an at-risk group for whom diagnostic testing may not be an unnecessary expense but rather a beneficial and needed service that can help focus intervention efforts. Although such testing will not indicate a need for special education placement, it can be useful in identifying children's needs for other programs known to improve language, cognitive, and academic skills, such as Head Start, Title I services, tutoring, private speech-language therapy, and quality day care.
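The overall odds ratio reported above can be reproduced approximately from the stated counts (151 of roughly 216 false-positive children and 64 of roughly 221 true-negative children scoring below the 25th percentile); group totals here are reconstructed from the abstract's percentages, so the result only approximates the published figure:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio for a 2x2 table (a/b = event/no-event in group 1,
    c/d = event/no-event in group 2) with a Woolf 95% confidence interval."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# counts reconstructed (approximately) from the abstract's percentages
or_, lo, hi = odds_ratio_ci(151, 216 - 151, 64, 221 - 64)
```

This yields an odds ratio near the reported 5.6 and an interval close to the reported (3.73, 8.49); small differences reflect rounding in the published percentages.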
Harasym, Peter H; Woloschuk, Wayne; Cunning, Leslie
2008-12-01
Physician-patient communication is a clinical skill that can be learned and has a positive impact on patient satisfaction and health outcomes. A concerted effort at all medical schools is now directed at teaching and evaluating this core skill. Student communication skills are often assessed by an Objective Structured Clinical Examination (OSCE). However, it is unknown what sources of error variance are introduced into examinee communication scores by various OSCE components. This study primarily examined the effect different examiners had on the evaluation of students' communication skills assessed at the end of a family medicine clerkship rotation. The communication performance of clinical clerks from the Classes of 2005 and 2006 was assessed using six OSCE stations. Performance was rated at each station using the 28-item Calgary-Cambridge guide. Item Response Theory analysis using a multifaceted Rasch model was used to partition the various sources of error variance and generate a "true" communication score from which the effects of examiner, case, and items are removed. Variance and reliability of scores were as follows: communication scores (.20 and .87), examiner stringency/leniency (.86 and .91), case (.03 and .96), and item (.86 and .99), respectively. All facet scores were reliable (.87-.99). Examiner variance (.86) was more than four times the examinee variance (.20). About 11% of the clerks' outcome status shifted using "true" rather than observed/raw scores. There was large variability in examinee scores due to variation in examiner stringency/leniency behaviors that may impact pass/fail decisions. Exploring the benefits of examiner training, and employing "true" scores generated using Item Response Theory analyses prior to making pass/fail decisions, are recommended.
Interpreting Linked Psychomotor Performance Scores
ERIC Educational Resources Information Center
Looney, Marilyn A.
2013-01-01
Given that equating/linking applications are now appearing in kinesiology literature, this article provides an overview of the different types of linked test scores: equated, concordant, and predicted. It also addresses the different types of evidence required to determine whether the scores from two different field tests (measuring the same…
Multiplicity Control in Structural Equation Modeling
ERIC Educational Resources Information Center
Cribbie, Robert A.
2007-01-01
Researchers conducting structural equation modeling analyses rarely, if ever, control for the inflated probability of Type I errors when evaluating the statistical significance of multiple parameters in a model. In this study, the Type I error control, power, and true model rates of familywise and false discovery rate controlling procedures were…
Method for computing a roughness factor for veneer surfaces
Chung-Yun Hse
1972-01-01
Equations for determining the roughness factor (the ratio of true surface area to apparent area) of rotary-cut veneer were derived from an assumed tracheid model. With data measured on southern pine veneers, the equations indicated that the roughness factor of latewood was near unity, whereas that of earlywood was about 2.
Predicting lumber volume and value of young-growth true firs: user's guide.
Susan Ernst; W.Y. Pong
1982-01-01
Equations are presented for predicting the volume and value of young-growth red, white, and grand firs. Examples of how to use them are also given. These equations were developed on trees less than 140 years old from areas in southern Oregon, northern California, and Idaho.
Sun, Leping
2016-01-01
This paper is concerned with backward differentiation formula (BDF) methods for a class of nonlinear 2-delay differential algebraic equations. We obtain two sufficient conditions under which the methods are stable and asymptotically stable. Finally, numerical examples confirm the theoretical results.
A Practical Method for Identifying Significant Change Scores
ERIC Educational Resources Information Center
Cascio, Wayne F.; Kurtines, William M.
1977-01-01
A test of significance for identifying individuals who are most influenced by an experimental treatment as measured by pre-post test change score is presented. The technique requires true difference scores, the reliability of obtained differences, and their standard error of measurement. (Author/JKS)
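The ingredients listed, difference-score reliability and its standard error of measurement, correspond to standard classical-test-theory formulas; a sketch assuming equal pretest and posttest standard deviations (the paper's exact procedure may differ):

```python
import math

def diff_reliability(r_xx, r_yy, r_xy):
    # reliability of difference scores, assuming equal pre/post variances
    return (r_xx + r_yy - 2 * r_xy) / (2 - 2 * r_xy)

def sem_of_difference(sd, r_xx, r_yy):
    # standard error of measurement of the difference score
    return sd * math.sqrt(2 - r_xx - r_yy)

def significant_change(diff, sd, r_xx, r_yy, z=1.96):
    # flag a pre-post change larger than expected from measurement error alone
    return abs(diff) > z * sem_of_difference(sd, r_xx, r_yy)
```

Note the familiar caveat visible in the first formula: when the pre-post correlation approaches the test reliabilities, difference-score reliability collapses toward zero.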
2017-04-03
setup in terms of temporal and spatial discretization. The second component was an extension of existing depth-integrated wave models to describe… equations (Abbott, 1976). Discretization schemes involve numerical dispersion and dissipation that distort the true character of the governing equations… represent a leading-order approximation of the Boussinesq-type equations. Tam and Webb (1993) proposed a wavenumber-based discretization scheme to preserve…
Observed Score Linear Equating with Covariates
ERIC Educational Resources Information Center
Branberg, Kenny; Wiberg, Marie
2011-01-01
This paper examined observed score linear equating in two different data collection designs, the equivalent groups design and the nonequivalent groups design, when information from covariates (i.e., background variables correlated with the test scores) was included. The main purpose of the study was to examine the effect (i.e., bias, variance, and…
Effect of misspecification of gene frequency on the two-point LOD score.
Pal, D K; Durner, M; Greenberg, D A
2001-11-01
In this study, we used computer simulation of simple and complex models to ask: (1) What is the penalty in evidence for linkage when the assumed gene frequency is far from the true gene frequency? (2) If the assumed model for gene frequency and inheritance are misspecified in the analysis, can this lead to a higher maximum LOD score than that obtained under the true parameters? Linkage data simulated under simple dominant, recessive, dominant and recessive with reduced penetrance, and additive models, were analysed assuming a single locus with both the correct and incorrect dominance model and assuming a range of different gene frequencies. We found that misspecifying the analysis gene frequency led to little penalty in maximum LOD score in all models examined, especially if the assumed gene frequency was lower than the generating one. Analysing linkage data assuming a gene frequency of the order of 0.01 for a dominant gene, and 0.1 for a recessive gene, appears to be a reasonable tactic in the majority of realistic situations because underestimating the gene frequency, even when the true gene frequency is high, leads to little penalty in the LOD score.
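For readers unfamiliar with the statistic being maximized here, a minimal phase-known two-point LOD computation looks like the following. This is a textbook simplification of what full linkage analysis does; it deliberately omits gene frequency and penetrance, which are exactly the parameters the study varies:

```python
import math

def lod_score(recombinants, nonrecombinants, theta):
    """Two-point LOD for phase-known meioses: log10 of L(theta) / L(0.5)."""
    if not 0 < theta <= 0.5:
        raise ValueError("theta must be in (0, 0.5]")
    n = recombinants + nonrecombinants
    log_l_theta = (recombinants * math.log10(theta)
                   + nonrecombinants * math.log10(1 - theta))
    log_l_null = n * math.log10(0.5)
    return log_l_theta - log_l_null

def max_lod(recombinants, nonrecombinants, grid=1000):
    """Maximize the LOD over a grid of theta; returns (max LOD, theta-hat)."""
    thetas = [0.5 * (i + 1) / grid for i in range(grid)]
    return max((lod_score(recombinants, nonrecombinants, t), t) for t in thetas)
```

For example, 10 non-recombinant meioses and no recombinants give a LOD of about 2.55 at theta = 0.1, with the maximum near theta = 0.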
A regularization corrected score method for nonlinear regression models with covariate error.
Zucker, David M; Gorfine, Malka; Li, Yi; Tadesse, Mahlet G; Spiegelman, Donna
2013-03-01
Many regression analyses involve explanatory variables that are measured with error, and failing to account for this error is well known to lead to biased point and interval estimates of the regression coefficients. We present here a new general method for adjusting for covariate error. Our method consists of an approximate version of the Stefanski-Nakamura corrected score approach, using the method of regularization to obtain an approximate solution of the relevant integral equation. We develop the theory in the setting of classical likelihood models; this setting covers, for example, linear regression, nonlinear regression, logistic regression, and Poisson regression. The method is extremely general in terms of the types of measurement error models covered, and is a functional method in the sense of not involving assumptions on the distribution of the true covariate. We discuss the theoretical properties of the method and present simulation results in the logistic regression setting (univariate and multivariate). For illustration, we apply the method to data from the Harvard Nurses' Health Study concerning the relationship between physical activity and breast cancer mortality in the period following a diagnosis of breast cancer. Copyright © 2013, The International Biometric Society.
Scale Drift in Equating on a Test That Employs Cut Scores. Research Report. ETS RR-07-34
ERIC Educational Resources Information Center
Puhan, Gautam
2007-01-01
The purpose of this study is to determine the extent of scale drift on a test that employs cut scores. It is essential to examine scale drift in a testing program using new forms that are often put on scale through a series of intermediate equatings (known as equating chains). This may cause equating error to accumulate to a point where scale…
Reliability of Total Test Scores When Considered as Ordinal Measurements
ERIC Educational Resources Information Center
Biswas, Ajoy Kumar
2006-01-01
This article studies the ordinal reliability of (total) test scores. This study is based on a classical-type linear model of observed score (X), true score (T), and random error (E). Based on the idea of Kendall's tau-a coefficient, a measure of ordinal reliability for small-examinee populations is developed. This measure is extended to large…
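Kendall's tau-a, the building block of the ordinal reliability measure described above, is straightforward to compute directly. A sketch (the article's small-population estimator built on top of it is not reproduced here):

```python
def _sign(v):
    """Sign of v: -1, 0, or 1."""
    return (v > 0) - (v < 0)

def kendall_tau_a(x, y):
    """Kendall's tau-a: (concordant - discordant pairs) / all n(n-1)/2 pairs."""
    n = len(x)
    s = sum(_sign(x[i] - x[j]) * _sign(y[i] - y[j])
            for i in range(n) for j in range(i + 1, n))
    return 2 * s / (n * (n - 1))
```

Identically ordered score vectors give tau-a = 1, reversed orderings give -1, and partial agreement falls in between.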
Score Equating and Nominally Parallel Language Tests.
ERIC Educational Resources Information Center
Moy, Raymond
Score equating requires that the forms to be equated are functionally parallel. That is, the two test forms should rank order examinees in a similar fashion. In language proficiency testing situations, this assumption is often put into doubt because of the numerous tests that have been proposed as measures of language proficiency and the…
Cid, Jaime A; von Davier, Alina A
2015-05-01
Test equating is a method of making the test scores from different test forms of the same assessment comparable. In the equating process, an important step involves continuizing the discrete score distributions. In traditional observed-score equating, this step is achieved using linear interpolation (or an unscaled uniform kernel). In the kernel equating (KE) process, this continuization process involves Gaussian kernel smoothing. It has been suggested that the choice of bandwidth in kernel smoothing controls the trade-off between variance and bias. In the literature on estimating density functions using kernels, it has also been suggested that the weight of the kernel depends on the sample size, and therefore, the resulting continuous distribution exhibits bias at the endpoints, where the samples are usually smaller. The purpose of this article is (a) to explore the potential effects of atypical scores (spikes) at the extreme ends (high and low) on the KE method in distributions with different degrees of asymmetry using the randomly equivalent groups equating design (Study I), and (b) to introduce the Epanechnikov and adaptive kernels as potential alternative approaches to reducing boundary bias in smoothing (Study II). The beta-binomial model is used to simulate observed scores reflecting a range of different skewed shapes.
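The continuization step being compared can be sketched as kernel mixtures over the discrete score distribution. Note that operational kernel equating also applies a linear rescaling so the continuized distribution preserves the discrete mean and variance; that rescaling is omitted here for brevity:

```python
import math

def gaussian_kernel_pdf(x, scores, probs, h):
    """Continuized density: mixture of Gaussians centered at each discrete score."""
    return sum(p * math.exp(-0.5 * ((x - s) / h) ** 2) / (h * math.sqrt(2 * math.pi))
               for s, p in zip(scores, probs))

def epanechnikov_kernel_pdf(x, scores, probs, h):
    """Epanechnikov alternative: compact support can reduce boundary bias."""
    total = 0.0
    for s, p in zip(scores, probs):
        u = (x - s) / h
        if abs(u) <= 1:
            total += p * 0.75 * (1 - u * u) / h
    return total
```

Both continuizations integrate to 1; the bandwidth h controls the variance-bias trade-off discussed in the abstract.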
Is Coefficient Alpha Robust to Non-Normal Data?
Sheng, Yanyan; Sheng, Zhaohui
2011-01-01
Coefficient alpha has been a widely used measure by which internal consistency reliability is assessed. In addition to essential tau-equivalence and uncorrelated errors, normality has been noted as another important assumption for alpha. Earlier work on evaluating this assumption considered either exclusively non-normal error score distributions, or limited conditions. In view of this and the availability of advanced methods for generating univariate non-normal data, Monte Carlo simulations were conducted to show that non-normal distributions for true or error scores do create problems for using alpha to estimate the internal consistency reliability. The sample coefficient alpha is affected by leptokurtic true score distributions, or skewed and/or kurtotic error score distributions. Increased sample sizes, not test lengths, help improve the accuracy, bias, or precision of using it with non-normal data. PMID:22363306
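The sample coefficient alpha under study is computed from the item variances and the total-score variance. A minimal implementation (rows are examinees, columns are items):

```python
def coefficient_alpha(item_scores):
    """Cronbach's alpha from a list of per-examinee item-score lists."""
    n_items = len(item_scores[0])

    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = [variance([row[j] for row in item_scores]) for j in range(n_items)]
    total_var = variance([sum(row) for row in item_scores])
    return n_items / (n_items - 1) * (1 - sum(item_vars) / total_var)
```

The simulations in the abstract examine how the sampling behavior of this statistic degrades when true or error scores are non-normal.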
Johnson, Susan L; Tabaei, Bahman P; Herman, William H
2005-02-01
To simulate the outcomes of alternative strategies for screening the U.S. population 45-74 years of age for type 2 diabetes. We simulated screening with random plasma glucose (RPG) and cut points of 100, 130, and 160 mg/dl and a multivariate equation including RPG and other variables. Over 15 years, we simulated screening at intervals of 1, 3, and 5 years. All positive screening tests were followed by a diagnostic fasting plasma glucose or an oral glucose tolerance test. Outcomes include the numbers of false-negative, true-positive, and false-positive screening tests and the direct and indirect costs. At year 15, screening every 3 years with an RPG cut point of 100 mg/dl left 0.2 million false negatives, an RPG of 130 mg/dl or the equation left 1.3 million false negatives, and an RPG of 160 mg/dl left 2.8 million false negatives. Over 15 years, the absolute difference between the most sensitive and most specific screening strategy was 4.5 million true positives and 476 million false-positives. Strategies using RPG cut points of 130 mg/dl or the multivariate equation every 3 years identified 17.3 million true positives; however, the equation identified fewer false-positives. The total cost of the most sensitive screening strategy was $42.7 billion and that of the most specific strategy was $6.9 billion. Screening for type 2 diabetes every 3 years with an RPG cut point of 130 mg/dl or the multivariate equation provides good yield and minimizes false-positive screening tests and costs.
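The yield figures above all derive from the basic 2x2 accounting of a screening test. A generic sketch of that arithmetic (the sensitivity/specificity trade-off only, not the study's cost or disease-progression model):

```python
def screening_outcomes(population, prevalence, sensitivity, specificity):
    """Expected counts from one round of screening with a given test."""
    diseased = population * prevalence
    healthy = population - diseased
    return {
        "true_positives": diseased * sensitivity,
        "false_negatives": diseased * (1 - sensitivity),
        "false_positives": healthy * (1 - specificity),
        "true_negatives": healthy * specificity,
    }
```

Raising the test cut point trades true positives for fewer false positives, which is exactly the pattern across the RPG cut points of 100, 130, and 160 mg/dl reported above.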
Sunderland, Matthew; Batterham, Philip; Calear, Alison; Carragher, Natacha; Baillie, Andrew; Slade, Tim
2018-04-10
There is no standardized approach to the measurement of social anxiety. Researchers and clinicians are faced with numerous self-report scales with varying strengths, weaknesses, and psychometric properties. The lack of standardization makes it difficult to compare scores across populations that utilise different scales. Item response theory offers one solution to this problem via equating different scales using an anchor scale to set a standardized metric. This study is the first to equate several scales for social anxiety disorder. Data from two samples (n=3,175 and n=1,052), recruited from the Australian community using online advertisements, were utilised to equate a network of 11 self-report social anxiety scales via a fixed parameter item calibration method. Comparisons between actual and equated scores for most of the scales indicated a high level of agreement with mean differences <0.10 (equivalent to a mean difference of less than one point on the standardized metric). This study demonstrates that scores from multiple scales that measure social anxiety can be converted to a common scale. Re-scoring observed scores to a common scale provides opportunities to combine research from multiple studies and ultimately better assess social anxiety in treatment and research settings. Copyright © 2018. Published by Elsevier Inc.
NASA Astrophysics Data System (ADS)
Zayed, Elsayed M. E.; Al-Nowehy, Abdul-Ghani; El-Ganaini, Shoukry; Shohib, Reham M. A.
2018-06-01
This note concerns the doubtful Khater method used in the above two papers. We show by a simple calculation that the Khater method is not valid, and that the solutions of the proposed nonlinear equations in the above two papers are therefore also invalid.
Duffing's Equation and Nonlinear Resonance
ERIC Educational Resources Information Center
Fay, Temple H.
2003-01-01
The phenomenon of nonlinear resonance (sometimes called the "jump phenomenon") is examined and second-order van der Pol plane analysis is employed to indicate that this phenomenon is not a feature of the equation, but rather the result of accumulated round-off error, truncation error and algorithm error that distorts the true bounded solution onto…
A Comparison of Regional and Site-Specific Volume Estimation Equations
Joe P. McClure; Jana Anderson; Hans T. Schreuder
1987-01-01
Regression equations for volume by region and site class were examined for loblolly pine. The regressions for the Coastal Plain and Piedmont regions had significantly different slopes. The results showed important practical differences in the percentage of confidence intervals containing the true total volume and in the percentage of estimates within a specific proportion of...
Kulkarni, H R; Kamal, M M; Arjune, D G
1999-12-01
The scoring system developed by Mair et al. (Acta Cytol 1989;33:809-813) is frequently used to grade the quality of cytology smears. Using a one-factor analytic structural equations model, we demonstrate that the errors in measurement of the parameters used in the Mair scoring system are highly and significantly correlated. We recommend the use of either a multiplicative scoring system, using linear scores, or an additive scoring system, using exponential scores, to correct for the correlated errors. We suggest that the 0, 1, and 2 points used in the Mair scoring system be replaced by 1, 2, and 4, respectively. Using data on fine-needle biopsies of 200 thyroid lesions by both fine-needle aspiration (FNA) and fine-needle capillary sampling (FNC), we demonstrate that our modification of the Mair scoring system is more sensitive and more consistent with the structural equations model. Therefore, we recommend that the modified Mair scoring system be used for classifying the diagnostic adequacy of cytology smears. Diagn. Cytopathol. 1999;21:387-393. Copyright 1999 Wiley-Liss, Inc.
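A sketch of the two corrected scoring systems the authors recommend. The 1/2/4 remapping for the additive variant is stated in the abstract; the exact form of the multiplicative variant with linear scores is our assumption here:

```python
def mair_additive_exponential(param_points):
    """Additive system with exponential scores: remap 0,1,2 -> 1,2,4, then sum.

    param_points are the per-parameter Mair points (each 0, 1, or 2)."""
    remap = {0: 1, 1: 2, 2: 4}
    return sum(remap[p] for p in param_points)

def mair_multiplicative_linear(param_points):
    """Multiplicative system with linear scores (assumed form): shift
    0,1,2 -> 1,2,3 so the product is never annihilated by a zero, then multiply."""
    total = 1
    for p in param_points:
        total *= p + 1
    return total
```

For the four-parameter example [2, 2, 1, 0], the additive-exponential score is 11 and the multiplicative-linear score is 18.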
Barth, Amy E.; Stuebing, Karla K.; Fletcher, Jack M.; Cirino, Paul T.; Romain, Melissa; Francis, David; Vaughn, Sharon
2012-01-01
We evaluated the reliability and validity of two oral reading fluency scores for one-minute equated passages: median score and mean score. These scores were calculated from measures of reading fluency administered up to five times over the school year to students in grades 6–8 (n = 1,317). Both scores were highly reliable with strong convergent validity for adequately developing and struggling middle grade readers. These results support the use of either the median or mean score for oral reading fluency assessments for middle grade readers. PMID:23087532
A modified exponential behavioral economic demand model to better describe consumption data.
Koffarnus, Mikhail N; Franck, Christopher T; Stein, Jeffrey S; Bickel, Warren K
2015-12-01
Behavioral economic demand analyses that quantify the relationship between the consumption of a commodity and its price have proven useful in studying the reinforcing efficacy of many commodities, including drugs of abuse. An exponential equation proposed by Hursh and Silberberg (2008) has proven useful in quantifying the dissociable components of demand intensity and demand elasticity, but is limited as an analysis technique by the inability to correctly analyze consumption values of zero. We examined an exponentiated version of this equation that retains all the beneficial features of the original Hursh and Silberberg equation, but can accommodate consumption values of zero and improves its fit to the data. In Experiment 1, we compared the modified equation with the unmodified equation under different treatments of zero values in cigarette consumption data collected online from 272 participants. We found that the unmodified equation produces different results depending on how zeros are treated, while the exponentiated version incorporates zeros into the analysis, accounts for more variance, and is better able to estimate actual unconstrained consumption as reported by participants. In Experiment 2, we simulated 1,000 datasets with demand parameters known a priori and compared the equation fits. Results indicated that the exponentiated equation was better able to replicate the true values from which the test data were simulated. We conclude that an exponentiated version of the Hursh and Silberberg equation provides better fits to the data, is able to fit all consumption values including zero, and more accurately produces true parameter values. (PsycINFO Database Record (c) 2015 APA, all rights reserved).
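The exponentiated equation referred to here models consumption Q on its raw scale, Q = Q0 * 10^(k * (exp(-alpha * Q0 * C) - 1)), so the model is defined at every price C, and observed consumption values of zero pose no problem for fitting (unlike the original log-scale form):

```python
import math

def exponentiated_demand(price, q0, alpha, k):
    """Exponentiated demand curve of Koffarnus et al. (2015).

    q0 is demand intensity (consumption at zero price), alpha governs
    elasticity, and k sets the range of the consumption data in log units.
    As price grows, consumption decays toward q0 * 10**(-k)."""
    return q0 * 10 ** (k * (math.exp(-alpha * q0 * price) - 1))
```

At price 0 the curve returns q0 exactly, and at very high prices it approaches the floor q0 * 10**(-k); fitting alpha and q0 to observed consumption (for instance by least squares) proceeds on the raw consumption scale.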
An Analysis of Test Equating Models for the Alabama High School Graduation Examination.
ERIC Educational Resources Information Center
Glowacki, Margaret L.
The purpose of this study was to determine which equating models are appropriate for the Alabama High School Graduation Examination (AHSGE) by equating two previously administered fall forms for each subject area of the AHSGE and determining whether differences exist in the test score distributions or passing scores resulting from the equating…
Impact of Accumulated Error on Item Response Theory Pre-Equating with Mixed Format Tests
ERIC Educational Resources Information Center
Keller, Lisa A.; Keller, Robert; Cook, Robert J.; Colvin, Kimberly F.
2016-01-01
The equating of tests is an essential process in high-stakes, large-scale testing conducted over multiple forms or administrations. By adjusting for differences in difficulty and placing scores from different administrations of a test on a common scale, equating allows scores from these different forms and administrations to be directly compared…
Using Automated Essay Scores as an Anchor When Equating Constructed Response Writing Tests
ERIC Educational Resources Information Center
Almond, Russell G.
2014-01-01
Assessments consisting of only a few extended constructed response items (essays) are not typically equated using anchor test designs as there are typically too few essay prompts in each form to allow for meaningful equating. This article explores the idea that output from an automated scoring program designed to measure writing fluency (a common…
Sensitivity of Equated Aggregate Scores to the Treatment of Misbehaving Common Items
ERIC Educational Resources Information Center
Michaelides, Michalis P.
2010-01-01
The delta-plot method (Angoff, 1972) is a graphical technique used in the context of test equating for identifying common items with aberrant changes in their item difficulties across administrations or alternate forms. This brief research report explores the effects on equated aggregate scores when delta-plot outliers are either retained in or…
A Numerical Experiment on the Role of Surface Shear Stress in the Generation of Sound
NASA Technical Reports Server (NTRS)
Shariff, Karim; Wang, Meng; Merriam, Marshal (Technical Monitor)
1996-01-01
The sound generated due to a localized flow over an infinite flat surface is considered. It is known that the unsteady surface pressure, while appearing in a formal solution to the Lighthill equation, does not constitute a source of sound but rather represents the effect of image quadrupoles. The question of whether a similar surface shear stress term constitutes a true source of dipole sound is less settled. Some have boldly assumed it is a true source while others have argued that, like the surface pressure, it depends on the sound field (via an acoustic boundary layer) and is therefore not a true source. A numerical experiment based on the viscous, compressible Navier-Stokes equations was undertaken to investigate the issue. A small region of a wall was oscillated tangentially. The directly computed sound field was found to agree with an acoustic-analogy-based calculation which regards the surface shear as an acoustically compact dipole source of sound.
ERIC Educational Resources Information Center
Wang, Tianyou; And Others
M. J. Kolen, B. A. Hanson, and R. L. Brennan (1992) presented a procedure for assessing the conditional standard error of measurement (CSEM) of scale scores using a strong true-score model. They also investigated the ways of using nonlinear transformation from number-correct raw score to scale score to equalize the conditional standard error along…
Yu, Jingkai; Finley, Russell L
2009-01-01
High-throughput experimental and computational methods are generating a wealth of protein-protein interaction data for a variety of organisms. However, data produced by current state-of-the-art methods include many false positives, which can hinder the analyses needed to derive biological insights. One way to address this problem is to assign confidence scores that reflect the reliability and biological significance of each interaction. Most previously described scoring methods use a set of likely true positives to train a model to score all interactions in a dataset. A single positive training set, however, may be biased and not representative of true interaction space. We demonstrate a method to score protein interactions by utilizing multiple independent sets of training positives to reduce the potential bias inherent in using a single training set. We used a set of benchmark yeast protein interactions to show that our approach outperforms other scoring methods. Our approach can also score interactions across data types, which makes it more widely applicable than many previously proposed methods. We applied the method to protein interaction data from both Drosophila melanogaster and Homo sapiens. Independent evaluations show that the resulting confidence scores accurately reflect the biological significance of the interactions.
Timmermann, Lars; Oehlwein, Christian; Ransmayr, Gerhard; Fröhlich, Holger; Will, Edgar; Schroeder, Hanna; Lauterbach, Thomas; Bauer, Lars; Kassubek, Jan
2017-01-01
To evaluate Parkinson's disease (PD)-associated pain as perceived by the patients (subjective characterization), and how this may change following initiation of rotigotine transdermal patch. SP1058 was a non-interventional study conducted in routine clinical practice in Germany and Austria in patients experiencing PD-associated pain (per the physician's assessment). Data were collected at baseline (ie, before rotigotine initiation) and at a routine visit after ≥25 days (-3 days allowed) of treatment on a maintenance dose of rotigotine (end of study [EoS]). Pain perception was assessed using the 12-item Pain Description List of the validated German Pain Questionnaire (each item ranked 0 = 'not true' to 3 = 'very true'). Primary effectiveness variable: change from baseline to EoS in the sum score of the 4 'affective dimension' items of the Pain Description List. Secondary effectiveness variables: change from baseline to EoS in Unified Parkinson's Disease Rating Scale (UPDRS) II, III, and II+III scores, and Parkinson's Disease Questionnaire (PDQ-8) total score (PD-related quality-of-life). Other variables included scores of the eight 'sensory dimension' items of the Pain Description List. Of 93 enrolled patients (mean [SD] age: 71.1 [9.0] years; male: 48 [52%]), 77 (83%) completed the study, and 70 comprised the full analysis set. The mean (SD) change from baseline in the sum score of the four 'affective dimension' items was -1.3 (2.8) indicating a numerical improvement (baseline: 3.9 [3.4]). In the 'sensory dimension', pain was mostly perceived as 'pulling' at baseline (49/70 [70%]); 'largely true'/'very true'). Numerical improvements were observed in all UPDRS scores (mean [SD] change in UPDRS II+III: -5.3 [10.5]; baseline: 36.0 [15.9]), and in PDQ-8 total score (-2.0 [4.8]; baseline: 10.7 [5.9]). Adverse drug reactions were consistent with dopaminergic stimulation and transdermal administration. 
The perception of the 'affective dimension' of PD-associated pain numerically improved in patients treated with rotigotine. ClinicalTrials.gov identifier: NCT01606670; https://clinicaltrials.gov/ct2/show/NCT01606670?term=NCT01606670&rank=1.
ERIC Educational Resources Information Center
Green, Samuel B.; Yang, Yanyun
2009-01-01
A method is presented for estimating reliability using structural equation modeling (SEM) that allows for nonlinearity between factors and item scores. Assuming the focus is on consistency of summed item scores, this method for estimating reliability is preferred to those based on linear SEM models and to the most commonly reported estimate of…
Bibok, Maximilian B; Votova, Kristine; Balshaw, Robert F; Lesperance, Mary L; Croteau, Nicole S; Trivedi, Anurag; Morrison, Jaclyn; Sedgwick, Colin; Penn, Andrew M
2018-02-27
To evaluate the performance of a novel triage system for Transient Ischemic Attack (TIA) units built upon an existent clinical prediction rule (CPR) to reduce time to unit arrival, relative to the time of symptom onset, for true TIA and minor stroke patients. Differentiating between true and false TIA/minor stroke cases (mimics) is necessary for effective triage as medical intervention for true TIA/minor stroke is time-sensitive and TIA unit spots are a finite resource. Prospective cohort study design utilizing patient referral data and TIA unit arrival times from a regional fast-track TIA unit on Vancouver Island, Canada, accepting referrals from emergency departments (ED) and general practice (GP). Historical referral cohort (N = 2942) from May 2013-Oct 2014 was triaged using the ABCD2 score; prospective referral cohort (N = 2929) from Nov 2014-Apr 2016 was triaged using the novel system. A retrospective survival curve analysis, censored at 28 days to unit arrival, was used to compare days to unit arrival from event date between cohort patients matched by low (0-3), moderate (4-5) and high (6-7) ABCD2 scores. Survival curve analysis indicated that using the novel triage system, prospectively referred TIA/minor stroke patients with low and moderate ABCD2 scores arrived at the unit 2 and 1 day earlier than matched historical patients, respectively. The novel triage process is associated with a reduction in time to unit arrival from symptom onset for referred true TIA/minor stroke patients with low and moderate ABCD2 scores.
Impact of Measurement Error on Statistical Power: Review of an Old Paradox.
ERIC Educational Resources Information Center
Williams, Richard H.; And Others
1995-01-01
The paradox that a Student t-test based on pretest-posttest differences can attain its greatest power when the difference score reliability is zero was explained by demonstrating that power is not a mathematical function of reliability unless either true score variance or error score variance is constant. (SLD)
Differentiation of Illusory and True Halo in Writing Scores
ERIC Educational Resources Information Center
Lai, Emily R.; Wolfe, Edward W.; Vickers, Daisy
2015-01-01
This report summarizes an empirical study that addresses two related topics within the context of writing assessment--illusory halo and how much unique information is provided by multiple analytic scores. Specifically, we address the issue of whether unique information is provided by analytic scores assigned to student writing, beyond what is…
Radiographic versus clinical extension of Class II carious lesions using an F-speed film.
Kooistra, Scott; Dennison, Joseph B; Yaman, Peter; Burt, Brian A; Taylor, George W
2005-01-01
This study investigated the difference in the apparent radiographic and true clinical extension of Class II carious lesions. Sixty-two lesions in both maxillary and mandibular premolars and molars were radiographed using Insight bitewing film. Class II lesions were scored independently by two masked examiners using an 8-point lesion severity scale. During the restoration process the lesions were dissected in a stepwise fashion from the occlusal aspect. Intraoperative photographs (2x) of the lesions were made, utilizing a novel measurement device in the field as a point of reference. Subsequently, the lesions were all given clinical scores using the same 8-point scale. Statistical analysis showed a significant difference between the true clinical extension of the lesions compared to the radiographic score. "Aggressive" and "Conservative" radiographic diagnoses underestimated the true clinical extent by 0.66 mm and 0.91 mm, respectively. No statistical difference was found between premolars and molars or maxillary and mandibular arches. The results of this study help to define the parameters for making restorative treatment decisions involving Class II carious lesions.
Estimating volume, biomass, and potential emissions of hand-piled fuels
Clinton S. Wright; Cameron S. Balog; Jeffrey W. Kelly
2009-01-01
Dimensions, volume, and biomass were measured for 121 hand-constructed piles composed primarily of coniferous (n = 63) and shrub/hardwood (n = 58) material at sites in Washington and California. Equations using pile dimensions, shape, and type allow users to accurately estimate the biomass of hand piles. Equations for estimating true pile volume from simple geometric...
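As one example of the geometric solids such pile-volume equations are built from, a half-ellipsoid pile (an assumed illustrative shape; the report covers several pile shapes) has volume pi/6 times length times width times height:

```python
import math

def half_ellipsoid_volume(length, width, height):
    """Volume of an idealized half-ellipsoid pile.

    length and width are the full ground-level axes; height is the
    vertical semi-axis. Derived from V_ellipsoid = (4/3)*pi*a*b*c with
    a = length/2, b = width/2, c = height, halved."""
    return math.pi / 6.0 * length * width * height
```

Sanity check: a pile with length = width = 2r and height = r is a hemisphere, and the formula reduces to (2/3) * pi * r^3 as expected.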
Predicting True Reading Gains After Remedial Tutoring.
ERIC Educational Resources Information Center
Dahlke, Anita B.
Using selected student variables, an attempt was made to predict retarded readers' true reading gains after remedial tutoring. The independent variables consisted of IQ and subtest scores obtained on the Wechsler Intelligence Scale for Children (WISC), pretutoring reading levels on the individually administered Diagnostic Reading Scales test, age,…
Chen, Ying-Jen; Ho, Meng-Yang; Chen, Kwan-Ju; Hsu, Chia-Fen; Ryu, Shan-Jin
2009-08-01
The aims of the present study were to (i) investigate if traditional Chinese word reading ability can be used for estimating premorbid general intelligence; and (ii) to provide multiple regression equations for estimating premorbid performance on Raven's Standard Progressive Matrices (RSPM), using age, years of education and Chinese Graded Word Reading Test (CGWRT) scores as predictor variables. Four hundred and twenty-six healthy volunteers (201 male, 225 female), aged 16-93 years (mean +/- SD, 41.92 +/- 18.19 years) undertook the tests individually under supervised conditions. Seventy percent of subjects were randomly allocated to the derivation group (n = 296), and the rest to the validation group (n = 130). RSPM score was positively correlated with CGWRT score and years of education. RSPM and CGWRT scores and years of education were also inversely correlated with age, but the declining trend for RSPM performance against age was steeper than that for CGWRT performance. Separate multiple regression equations were derived for estimating RSPM scores using different combinations of age, years of education, and CGWRT score for both groups. The multiple regression coefficient of each equation ranged from 0.71 to 0.80 with the standard error of estimate between 7 and 8 RSPM points. When fitting the data of one group to the equations derived from its counterpart group, the cross-validation multiple regression coefficients ranged from 0.71 to 0.79. There were no significant differences in the 'predicted-obtained' RSPM discrepancies between any equations. The regression equations derived in the present study may provide a basis for estimating premorbid RSPM performance.
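The derived equations have the linear form sketched below. The coefficients in this sketch are illustrative placeholders only, NOT the published values; the abstract reports multiple regression coefficients between 0.71 and 0.80 for the actual equations:

```python
def estimate_premorbid_rspm(age, years_education, cgwrt_score,
                            b0=10.0, b_age=-0.20, b_edu=0.50, b_cgwrt=0.55):
    """Linear prediction of premorbid RSPM score from age, education,
    and CGWRT reading score. Default coefficients are hypothetical
    placeholders for illustration, not the study's fitted values."""
    return b0 + b_age * age + b_edu * years_education + b_cgwrt * cgwrt_score
```

With the placeholder coefficients, a 40-year-old with 12 years of education and a CGWRT score of 50 gets a predicted RSPM score of 35.5; in practice the fitted coefficients and the reported standard error of estimate (7-8 RSPM points) would accompany any such prediction.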
Transformation of apparent ocean wave spectra observed from an aircraft sensor platform
NASA Technical Reports Server (NTRS)
Poole, L. R.
1976-01-01
The problem considered was transformation of a unidirectional apparent ocean wave spectrum observed from an aircraft sensor platform into the true spectrum that would be observed from a stationary platform. Spectral transformation equations were developed in terms of the linear wave dispersion relationship and the wave group speed. An iterative solution to the equations was outlined and used to transform reference theoretical apparent spectra for several assumed values of average water depth. Results show that changing the average water depth leads to a redistribution of energy density among the various frequency bands of the transformed spectrum. This redistribution is most severe when much of the energy density is expected, a priori, to reside at relatively low true frequencies.
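The iterative step at the heart of such a transformation is solving the linear dispersion relation omega^2 = g*k*tanh(k*h) for the wavenumber k at each frequency and depth, from which the group speed follows. A fixed-point sketch (the paper's full transformation, including the moving-platform frequency shift, is not reproduced):

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def wavenumber(omega, depth, tol=1e-10, max_iter=200):
    """Solve omega**2 = G*k*tanh(k*depth) for k by fixed-point iteration,
    starting from the deep-water guess k = omega**2 / G."""
    k = omega ** 2 / G
    for _ in range(max_iter):
        k_new = omega ** 2 / (G * math.tanh(k * depth))
        if abs(k_new - k) < tol:
            return k_new
        k = k_new
    return k

def group_speed(omega, depth):
    """Linear-theory group speed c_g = (1/2)(1 + 2kh/sinh(2kh)) * omega/k."""
    k = wavenumber(omega, depth)
    kh = k * depth
    if kh > 20:  # deep water: sinh(2kh) would overflow and the term -> 0
        factor = 0.5
    else:
        factor = 0.5 * (1 + 2 * kh / math.sinh(2 * kh))
    return factor * omega / k
```

In deep water the solver returns k = omega^2/g and the group speed is half the phase speed, which matches the depth-dependent energy redistribution the abstract describes: changing the assumed depth changes k and c_g and therefore the transformed spectrum.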
Non-LTE line formation in a magnetic field. I. Noncoherent scattering and true absorption
DOE Office of Scientific and Technical Information (OSTI.GOV)
Domke, H.; Staude, J.
1973-08-01
The formation of a Zeeman multiplet by noncoherent scattering and true absorption in a Milne-Eddington atmosphere is considered, assuming a homogeneous magnetic field and complete depolarization of the atomic line levels. The transfer equation for the Stokes parameters is transformed into a scalar integral equation of the Wiener-Hopf type, which is solved by Sobolev's method in closed form. The influence of the magnetic field on the mean scattering number in an infinite medium is discussed. The solution of the line formation problem is obtained for a Planckian source function. This solution may be simplified by making the "finite field approximation", which should be sufficiently accurate for practical purposes.
25 CFR Appendix A to Subpart C - IRR High Priority Project Scoring Matrix
Code of Federal Regulations, 2012 CFR
2012-04-01
Appendix A to Subpart C - IRR High Priority Project Scoring Matrix. Criteria are scored 10, 5, 3, 1, or 0 points each; they include the accident and fatality rate for the candidate..., project cost brackets (...,000 or less; 250,001-500,000; 500,001-750,000; over 750,000), and geographic isolation (no external access to...).
Sels, Dries; Brosens, Fons
2013-10-01
The equation of motion for the reduced Wigner function of a system coupled to an external quantum system is presented for the specific case when the external quantum system can be modeled as a set of harmonic oscillators. The result is derived from the Wigner function formulation of the Feynman-Vernon influence functional theory. It is shown how the true self-energy for the equation of motion is connected with the influence functional for the path integral. Explicit expressions are derived in terms of the bare Wigner propagator. Finally, we show under which approximations the resulting equation of motion reduces to the Wigner-Boltzmann equation.
ERIC Educational Resources Information Center
Puhan, Gautam
2013-01-01
When a constructed-response test form is reused, raw scores from the two administrations of the form may not be comparable. The solution to this problem requires a rescoring, at the current administration, of examinee responses from the previous administration. The scores from this "rescoring" can be used as an anchor for equating. In…
ERIC Educational Resources Information Center
Wingersky, Marilyn S.; and others
1969-01-01
One in a series of nine articles in a section entitled "Electronic Computer Program and Accounting Machine Procedures." Research supported in part by contract Nonr-2752(00) from the Office of Naval Research.
Cook, Heather; Brennan, Kathleen; Azziz, Ricardo
2011-01-01
Objective: To determine whether assessing the extent of terminal hair growth in a subset of the traditional 9 areas included in the modified Ferriman-Gallwey (mFG) score can serve as a simpler predictor of total body hirsutism when compared to the full scoring system, and to determine if this new model can accurately distinguish hirsute from non-hirsute women. Design: Cross-sectional analysis. Setting: Two tertiary care academic referral centers. Patients: 1951 patients presenting for symptoms of androgen excess. Interventions: History and physical examination, including mFG score. Main Outcome Measures: Total body hirsutism. Results: A regression model using all nine body areas indicated that the combination of upper abdomen, lower abdomen and chin was the best predictor of the total full mFG score. Using this subset of three body areas is accurate in distinguishing true hirsute from non-hirsute women when defining true hirsutism as mFG>7. Conclusion: Scoring terminal hair growth only on the chin and abdomen can serve as a simple, yet reliable predictor of total body hirsutism when compared to full body scoring using the traditional mFG system. PMID:21924716
Test Score Equating Using a Mini-Version Anchor and a Midi Anchor: A Case Study Using SAT[R] Data
ERIC Educational Resources Information Center
Liu, Jinghua; Sinharay, Sandip; Holland, Paul W.; Curley, Edward; Feigenbaum, Miriam
2011-01-01
This study explores an anchor that is different from the traditional miniature anchor in test score equating. In contrast to a traditional "mini" anchor that has the same spread of item difficulties as the tests to be equated, the studied anchor, referred to as a "midi" anchor (Sinharay & Holland), has a smaller spread of…
ERIC Educational Resources Information Center
Liu, Jinghua; Sinharay, Sandip; Holland, Paul W.; Feigenbaum, Miriam; Curley, Edward
2009-01-01
This study explores the use of a different type of anchor, a "midi anchor", that has a smaller spread of item difficulties than the tests to be equated, and then contrasts its use with the use of a "mini anchor". The impact of different anchors on observed score equating were evaluated and compared with respect to systematic…
Effects of Differential Item Functioning on Examinees' Test Performance and Reliability of Test
ERIC Educational Resources Information Center
Lee, Yi-Hsuan; Zhang, Jinming
2017-01-01
Simulations were conducted to examine the effect of differential item functioning (DIF) on measurement consequences such as total scores, item response theory (IRT) ability estimates, and test reliability in terms of the ratio of true-score variance to observed-score variance and the standard error of estimation for the IRT ability parameter. The…
Standard Errors of Estimated Latent Variable Scores with Estimated Structural Parameters
ERIC Educational Resources Information Center
Hoshino, Takahiro; Shigemasu, Kazuo
2008-01-01
The authors propose a concise formula to evaluate the standard error of the estimated latent variable score when the true values of the structural parameters are not known and must be estimated. The formula can be applied to factor scores in factor analysis or ability parameters in item response theory, without bootstrap or Markov chain Monte…
Relationships of Measurement Error and Prediction Error in Observed-Score Regression
ERIC Educational Resources Information Center
Moses, Tim
2012-01-01
The focus of this paper is assessing the impact of measurement errors on the prediction error of an observed-score regression. Measures are presented and described for decomposing the linear regression's prediction error variance into parts attributable to the true score variance and the error variances of the dependent variable and the predictor…
On a modified form of navier-stokes equations for three-dimensional flows.
Venetis, J
2015-01-01
A rephrased form of Navier-Stokes equations is performed for incompressible, three-dimensional, unsteady flows according to Eulerian formalism for the fluid motion. In particular, we propose a geometrical method for the elimination of the nonlinear terms of these fundamental equations, which are expressed in true vector form, and finally arrive at an equivalent system of three semilinear first order PDEs, which hold for a three-dimensional rectangular Cartesian coordinate system. Next, we present the related variational formulation of these modified equations as well as a general type of weak solutions which mainly concern Sobolev spaces.
On a Modified Form of Navier-Stokes Equations for Three-Dimensional Flows
Venetis, J.
2015-01-01
A rephrased form of Navier-Stokes equations is performed for incompressible, three-dimensional, unsteady flows according to Eulerian formalism for the fluid motion. In particular, we propose a geometrical method for the elimination of the nonlinear terms of these fundamental equations, which are expressed in true vector form, and finally arrive at an equivalent system of three semilinear first order PDEs, which hold for a three-dimensional rectangular Cartesian coordinate system. Next, we present the related variational formulation of these modified equations as well as a general type of weak solutions which mainly concern Sobolev spaces. PMID:25918743
ERIC Educational Resources Information Center
Allalouf, Avi
2007-01-01
There is significant potential for error in long production processes that consist of sequential stages, each of which is heavily dependent on the previous stage, such as the SER (Scoring, Equating, and Reporting) process. Quality control procedures are required in order to monitor this process and to reduce the number of mistakes to a minimum. In…
Cross Validated Temperament Scale Validities Computed Using Profile Similarity Metrics
2017-04-27
true at both the item and the scale level. Moreover, the correlation between conventional scores and distance scores for these types of scales...have a perfect negative correlation, r = -1.00. From this perspective, conventional and distance scores are completely redundant. Therefore, we argue... correlation between each respondent's rating profile and the scale key: shape-scores = r_{x,k}. 2. Rating elevation difference, which is computed as the
ERIC Educational Resources Information Center
Lockwood, J. R.; Castellano, Katherine E.
2017-01-01
Student Growth Percentiles (SGPs) increasingly are being used in the United States for inferences about student achievement growth and educator effectiveness. Emerging research has indicated that SGPs estimated from observed test scores have large measurement errors. As such, little is known about "true" SGPs, which are defined in terms…
Neuromuscular Strain Increases Symptom Intensity in Chronic Fatigue Syndrome
Rowe, Peter C.; Fontaine, Kevin R.; Lauver, Megan; Jasion, Samantha E.; Marden, Colleen L.; Moni, Malini; Thompson, Carol B.; Violand, Richard L.
2016-01-01
Chronic fatigue syndrome (CFS) is a complex, multisystem disorder that can be disabling. CFS symptoms can be provoked by increased physical or cognitive activity, and by orthostatic stress. In preliminary work, we noted that CFS symptoms also could be provoked by application of longitudinal neural and soft tissue strain to the limbs and spine of affected individuals. In this study we measured the responses to a straight leg raise neuromuscular strain maneuver in individuals with CFS and healthy controls. We randomly assigned 60 individuals with CFS and 20 healthy controls to either a 15 minute period of passive supine straight leg raise (true neuromuscular strain) or a sham straight leg raise. The primary outcome measure was the symptom intensity difference between the scores during and 24 hours after the study maneuver compared to baseline. Fatigue, body pain, lightheadedness, concentration difficulties, and headache scores were measured individually on a 0–10 scale, and summed to create a composite symptom score. Compared to individuals with CFS in the sham strain group, those with CFS in the true strain group reported significantly increased body pain (P = 0.04) and concentration difficulties (P = 0.02) as well as increased composite symptom scores (all P = 0.03) during the maneuver. After 24 hours, the symptom intensity differences were significantly greater for the CFS true strain group for the individual symptom of lightheadedness (P = 0.001) and for the composite symptom score (P = 0.005). During and 24 hours after the exposure to the true strain maneuver, those with CFS had significantly higher individual and composite symptom intensity changes compared to the healthy controls. We conclude that a longitudinal strain applied to the nerves and soft tissues of the lower limb is capable of increasing symptom intensity in individuals with CFS for up to 24 hours. 
These findings support our preliminary observations that increased mechanical sensitivity may be a contributor to the provocation of symptoms in this disorder. PMID:27428358
Reconstruction of Twist Torque in Main Parachute Risers
NASA Technical Reports Server (NTRS)
Day, Joshua D.
2015-01-01
The reconstruction of twist torque in the Main Parachute Risers of the Capsule Parachute Assembly System (CPAS) has been successfully used to validate CPAS Model Memo conservative twist torque equations. Reconstruction of basic, one degree of freedom drop tests was used to create a functional process for the evaluation of more complex, rigid body simulation. The roll, pitch, and yaw of the body, the fly-out angles of the parachutes, and the relative location of the parachutes to the body are inputs to the torque simulation. The data collected by the Inertial Measurement Unit (IMU) was used to calculate the true torque. The simulation then used photogrammetric and IMU data as inputs into the Model Memo equations. The results were then compared to the true torque results to validate the Model Memo equations. The Model Memo parameters were based on steel risers and will need to be re-evaluated for different materials. Photogrammetric data was found to be more accurate than the inertial data in accounting for the relative rotation between payload and cluster. The Model Memo equations generally matched well and, when they did not match, were generally conservative.
Program helps quickly calculate deviated well path
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gardner, M.P.
1993-11-22
A BASIC computer program quickly calculates the angle and measured depth of a simple directional well given only the true vertical depth and total displacement of the target. Many petroleum engineers and geologists need a quick, easy method to calculate the angle and measured depth necessary to reach a target in a proposed deviated well bore. Too many of the existing programs are large and require much input data. The drilling literature is full of equations and methods to calculate the course of well paths from surveys taken after a well is drilled. Very little information, however, covers how to calculate well bore trajectories for proposed wells from limited data. Furthermore, many of the equations are quite complex and difficult to use. A figure lists a computer program with the equations to calculate the well bore trajectory necessary to reach a given displacement and true vertical depth (TVD) for a simple build plan. It can be run on an IBM compatible computer with MS-DOS version 5 or higher, QBasic, or any BASIC that does not require line numbers. The QBasic 4.5 compiler will also run the program. The equations are based on conventional geometry and trigonometry.
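The geometry of such a calculation can be sketched in a few lines. The snippet below is a hypothetical reconstruction, not the program from the figure: it assumes a build-and-hold plan with a constant build radius R followed by a straight tangent to the target at horizontal displacement D and vertical depth V (both measured from kickoff), which gives the condition V*sin(theta) + (R - D)*cos(theta) = R for the hold angle theta, solved here by bisection.

```python
import math

def build_and_hold(tvd, displacement, build_radius):
    """Return (hold angle in radians, along-hole length from kickoff)
    for a build-and-hold path: a circular build of radius R from
    vertical, then a straight tangent to the target (D, V).
    Solves V*sin(t) + (R - D)*cos(t) = R by bisection on (0, pi/2);
    assumes V > R so a root exists in that interval."""
    V, D, R = tvd, displacement, build_radius
    f = lambda t: V * math.sin(t) + (R - D) * math.cos(t) - R
    lo, hi = 0.0, math.pi / 2  # f(0) = -D < 0, f(pi/2) = V - R > 0
    for _ in range(80):
        mid = 0.5 * (lo + hi)
        if f(mid) < 0.0:
            lo = mid
        else:
            hi = mid
    theta = 0.5 * (lo + hi)
    tangent = (V - R * math.sin(theta)) / math.cos(theta)  # hold length
    measured_depth = R * theta + tangent  # arc length + tangent length
    return theta, measured_depth
```

For example, a target 2000 m below and 800 m away from a kickoff with a 500 m build radius yields a hold angle of roughly 23 degrees; the returned path can be checked by recomputing D and V from theta and the tangent length.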
ERIC Educational Resources Information Center
Liu, Jinghua; Zu, Jiyun; Curley, Edward; Carey, Jill
2014-01-01
The purpose of this study is to investigate the impact of discrete anchor items versus passage-based anchor items on observed score equating using empirical data.This study compares an "SAT"® critical reading anchor that contains more discrete items proportionally, compared to the total tests to be equated, to another anchor that…
An Argument Against Augmenting the Lagrangean for Nonholonomic Systems
NASA Technical Reports Server (NTRS)
Roithmayr, Carlos M.; Hodges, Dewey H.
2009-01-01
Although it is known that correct dynamical equations of motion for a nonholonomic system cannot be obtained from a Lagrangean that has been augmented with a sum of the nonholonomic constraint equations weighted with multipliers, previous publications suggest otherwise. An example has been proposed in support of augmentation and purportedly demonstrates that an accepted method fails to produce correct equations of motion whereas augmentation leads to correct equations; this paper shows that in fact the opposite is true. The correct equations, previously discounted on the basis of a flawed application of the Newton-Euler method, are verified by using Kane's method and a new approach to determining the directions of constraint forces. A correct application of the Newton-Euler method reproduces valid equations.
Endovascular treatment of ruptured true posterior communicating artery aneurysms.
Yang, Yonglin; Su, Wandong; Meng, Qinghai
2015-01-01
Although true posterior communicating artery (PCoA) aneurysms are rare, they are of vital importance. We reviewed 9 patients with this fatal disease, who were treated with endovascular embolization, and discussed the meaning of endovascular embolization for the treatment of true PCoA aneurysms. From September 2006 to May 2012, 9 patients with digital subtraction angiography (DSA) confirmed true PCoA aneurysms were treated with endovascular embolization. Patients were followed-up with a minimal duration of 17 months and assessed by Glasgow Outcome Scale (GOS) score. All the patients presented with spontaneous subarachnoid hemorrhage from the ruptured aneurysms. The ratio of males to females was 1:2, and the average age of onset was 59.9 (ranging from 52 to 72) years. The preoperative Hunt-Hess grade scores were I to III. All patients had recovered satisfactorily. No permanent neurological deficits were left. Currently, endovascular embolization can be recommended as the top choice for the treatment of most true PCoA aneurysms, due to its advanced technique, especially the application of the stent-assisted coiling technique, combined with its advantage of minimal invasiveness and quick recovery. However, the choice of treatment methods should be based on the clinical and anatomical characteristics of the aneurysm and the skillfulness of the surgeon.
McFarland, Daniel C; Holland, Jimmie; Holcombe, Randall F
2015-07-01
The demand for hematologists and oncologists is not being met. We hypothesized that an inpatient hematology-oncology ward rotation would increase residents' interest. Potential reasons mitigating interest were explored and included differences in physician distress, empathy, resilience, and patient death experiences. Agreement with the statement "I am interested in pursuing a career/fellowship in hematology and oncology" was rated by residents before and after a hematology-oncology rotation, with 0 = not true at all, 1 = rarely true, 2 = sometimes true, 3 = often true, and 4 = true nearly all the time. House staff rotating on a hematology-oncology service from November 2013 to October 2014 also received questionnaires before and after their rotations containing the Connors-Davidson Resilience Scale, the Impact of Events Scale-Revised, the Interpersonal Reactivity Index, demographic information, and number of dying patients cared for and if a sense of meaning was derived from that experience. Fifty-six residents completed both before- and after-rotation questionnaires (response rate, 58%). The mean interest score was 1.43 initially and decreased to 1.24 after the rotation (P = .301). Female residents' mean score was 1.13 initially and dropped to 0.81 after the rotation (P = .04). Male residents' mean score was 1.71 initially and 1.81 after the rotation (P = .65). Decreased hematology-oncology interest correlated with decreased empathy; male interest decrease correlated with decreased resilience. An inpatient hematology-oncology ward rotation does not lead to increased interest and, for some residents, may lead to decreased interest in the field. Encouraging outpatient hematology-oncology rotations and the cultivation of resilience, empathy, and meaning regarding death experiences may increase resident interest. Copyright © 2015 by American Society of Clinical Oncology.
The equations of motion of a secularly precessing elliptical orbit
NASA Astrophysics Data System (ADS)
Casotto, S.; Bardella, M.
2013-01-01
The equations of motion of a secularly precessing ellipse are developed using time as the independent variable. The equations are useful when integrating numerically the perturbations about a reference trajectory which is subject to secular perturbations in the node, the argument of pericentre and the mean motion. Usually this is done in connection with Encke's method to ensure minimal rectification frequency. Similar equations are already available in the literature, but they are either given based on the true anomaly as the independent variable or in mixed mode with respect to time through the use of a supporting equation to track the anomaly. The equations developed here form a complete and independent set of six equations in time. Reformulations both of Escobal's and Kyner and Bennett's equations are also provided which lead to a more concise form.
A general equation to obtain multiple cut-off scores on a test from multinomial logistic regression.
Bersabé, Rosa; Rivas, Teresa
2010-05-01
The authors derive a general equation to compute multiple cut-offs on a total test score in order to classify individuals into more than two ordinal categories. The equation is derived from the multinomial logistic regression (MLR) model, which is an extension of the binary logistic regression (BLR) model to accommodate polytomous outcome variables. From this analytical procedure, cut-off scores are established at the test score (the predictor variable) at which an individual is as likely to be in category j as in category j+1 of an ordinal outcome variable. The application of the complete procedure is illustrated by an example with data from an actual study on eating disorders. In this example, two cut-off scores on the Eating Attitudes Test (EAT-26) scores are obtained in order to classify individuals into three ordinal categories: asymptomatic, symptomatic and eating disorder. Diagnoses were made from the responses to a self-report (Q-EDD) that operationalises DSM-IV criteria for eating disorders. Alternatives to the MLR model to set multiple cut-off scores are discussed.
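Under the MLR parameterization the abstract describes, the probabilities of adjacent categories share a common denominator, so the condition P(category j) = P(category j+1) at the cut-off reduces to equality of the linear predictors eta_c = b0_c + b1_c * x, which has a closed-form solution. A minimal sketch with hypothetical coefficient names, not the authors' exact derivation:

```python
def mlr_cutoff(b0_j, b1_j, b0_k, b1_k):
    """Test score at which adjacent categories j and k = j+1 are equally
    likely under a multinomial logistic model with linear predictors
    eta_c = b0_c + b1_c * x. Since all category probabilities share one
    softmax denominator, P(j) = P(k) iff eta_j = eta_k, giving
    x* = (b0_k - b0_j) / (b1_j - b1_k)."""
    if b1_j == b1_k:
        raise ValueError("parallel predictors: no finite cut-off")
    return (b0_k - b0_j) / (b1_j - b1_k)
```

With illustrative coefficients b0_j = -2, b1_j = 0.1 and b0_k = -6, b1_k = 0.3, the cut-off falls at x* = 20, where both linear predictors equal zero and the two categories are equally probable.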
Wakabayashi, Hisao; Sano, Takanori; Yachida, Shinichi; Okano, Keiichi; Izuishi, Kunihiko; Suzuki, Yasuyuki
2007-10-01
The goal of this study was to validate the usefulness of risk assessment scoring systems for a surgical audit in elective digestive surgery for elderly patients. The validated scoring systems used were the Physiological and Operative Severity Score for enUmeration of Mortality and morbidity (POSSUM) and the Portsmouth predictor equation for mortality (P-POSSUM). This study involved 153 consecutive patients aged 75 years and older who underwent elective gastric or colorectal surgery between July 2004 and June 2006. A retrospective analysis was performed on data collected prior to each surgery. The predicted mortality and morbidity risks were calculated using each of the scoring systems and were used to obtain the observed/predicted (O/E) mortality and morbidity ratios. New logistic regression equations for morbidity and mortality were then calculated using the scores from the POSSUM system and applied retrospectively. The O/E ratio for morbidity obtained from POSSUM score was 0.23. The O/E ratios for mortality from the POSSUM score and the P-POSSUM were 0.15 and 0.38, respectively. Utilizing the new equations using scores from the POSSUM, the O/E ratio increased to 0.88. Both the POSSUM and P-POSSUM over-predicted the morbidity and mortality in elective gastrointestinal surgery for malignant tumors in elderly patients. However, if a surgical unit makes appropriate calculations using its own patient series and updates these equations, the POSSUM system can be useful in the risk assessment for surgery in elderly patients.
Extension of the lod score: the mod score.
Clerget-Darpoux, F
2001-01-01
In 1955 Morton proposed the lod score method both for testing linkage between loci and for estimating the recombination fraction between them. If a disease is controlled by a gene at one of these loci, the lod score computation requires the prior specification of an underlying model that assigns the probabilities of genotypes from the observed phenotypes. To address the case of linkage studies for diseases with unknown mode of inheritance, we suggested (Clerget-Darpoux et al., 1986) extending the lod score function to a so-called mod score function. In this function, the variables are both the recombination fraction and the disease model parameters. Maximizing the mod score function over all these parameters amounts to maximizing the probability of marker data conditional on the disease status. Under the absence of linkage, the mod score conforms to a chi-square distribution, with extra degrees of freedom in comparison to the lod score function (MacLean et al., 1993). The mod score is asymptotically maximum for the true disease model (Clerget-Darpoux and Bonaïti-Pellié, 1992; Hodge and Elston, 1994). Consequently, the power to detect linkage through mod score will be highest when the space of models where the maximization is performed includes the true model. On the other hand, one must avoid overparametrization of the model space. For example, when the approach is applied to affected sibpairs, only two constrained disease model parameters should be used (Knapp et al., 1994) for the mod score maximization. It is also important to emphasize the existence of a strong correlation between the disease gene location and the disease model. Consequently, there is poor resolution of the location of the susceptibility locus when the disease model at this locus is unknown. Of course, this is true regardless of the statistics used. The mod score may also be applied in a candidate gene strategy to model the potential effect of this gene in the disease. 
Since, however, it ignores the information provided both by disease segregation and by linkage disequilibrium between the marker alleles and the functional disease alleles, its power of discrimination between genetic models is weak. The MASC method (Clerget-Darpoux et al., 1988) has been designed to address more efficiently the objectives of a candidate gene approach.
ERIC Educational Resources Information Center
Ebuoh, Casmir N.; Ezeudu, S. A.
2015-01-01
The study investigated the effects of scoring by section, use of independent scorers and conventional patterns on scorer reliability in Biology essay tests. It was revealed from literature review that conventional pattern of scoring all items at a time in essay tests had been criticized for not being reliable. The study was true experimental study…
ERIC Educational Resources Information Center
DeMars, Christine E.
2009-01-01
The Mantel-Haenszel (MH) and logistic regression (LR) differential item functioning (DIF) procedures have inflated Type I error rates when there are large mean group differences, short tests, and large sample sizes.When there are large group differences in mean score, groups matched on the observed number-correct score differ on true score,…
Dynamical systems theory for nonlinear evolution equations.
Choudhuri, Amitava; Talukdar, B; Das, Umapada
2010-09-01
We observe that the fully nonlinear evolution equations of Rosenau and Hyman, often abbreviated as K(n,m) equations, can be reduced to Hamiltonian form only on a zero-energy hypersurface belonging to some potential function associated with the equations. We treat the resulting Hamiltonian equations by the dynamical systems theory and present a phase-space analysis of their stable points. The results of our study demonstrate that the equations can, in general, support both compacton and soliton solutions. For the K(2,2) and K(3,3) cases one type of solutions can be obtained from the other by continuously varying a parameter of the equations. This is not true for the K(3,2) equation for which the parameter can take only negative values. The K(2,3) equation does not have any stable point and, in the language of mechanics, represents a particle moving with constant acceleration.
ERIC Educational Resources Information Center
Liu, Jinghua; Sinharay, Sandip; Holland, Paul; Feigenbaum, Miriam; Curley, Edward
2011-01-01
Two different types of anchors are investigated in this study: a mini-version anchor and an anchor that has a less spread of difficulty than the tests to be equated. The latter is referred to as a midi anchor. The impact of these two different types of anchors on observed score equating are evaluated and compared with respect to systematic error…
Investigating Supervisory Relationships and Therapeutic Alliances Using Structural Equation Modeling
ERIC Educational Resources Information Center
DePue, Mary Kristina; Lambie, Glenn W.; Liu, Ren; Gonzalez, Jessica
2016-01-01
The authors used structural equation modeling to examine the contribution of supervisees' supervisory relationship levels to therapeutic alliance (TA) scores with their clients in practicum. Results showed that supervisory relationship scores positively contributed to the TA. Client and counselor ratings of the TA also differed.
True and false memories, parietal cortex, and confidence judgments
Urgolites, Zhisen J.; Smith, Christine N.
2015-01-01
Recent studies have asked whether activity in the medial temporal lobe (MTL) and the neocortex can distinguish true memory from false memory. A frequent complication has been that the confidence associated with correct memory judgments (true memory) is typically higher than the confidence associated with incorrect memory judgments (false memory). Accordingly, it has often been difficult to know whether a finding is related to memory confidence or memory accuracy. In the current study, participants made recognition memory judgments with confidence ratings in response to previously studied scenes and novel scenes. The left hippocampus and 16 other brain regions distinguished true and false memories when confidence ratings were different for the two conditions. Only three regions (all in the parietal cortex) distinguished true and false memories when confidence ratings were equated. These findings illustrate the utility of taking confidence ratings into account when identifying brain regions associated with true and false memories. Neural correlates of true and false memories are most easily interpreted when confidence ratings are similar for the two kinds of memories. PMID:26472645
[Computer Program PEDAGE -- MARKTF-M5-F4].
ERIC Educational Resources Information Center
Toronto Univ. (Ontario). Dept. of Geology.
The computer program MARKTF-M5, written in FORTRAN IV, scores tests (consisting of true-or-false statements about concepts or facts) by comparing the list of true or false values prepared by the instructor with those from the students. The output consists of information to the supervisor about the performance of the students, primarily for his…
Height-diameter equations for young-growth red fir in California and southern Oregon
K. Leroy Dolph
1989-01-01
Total tree height of young-growth red fir can be estimated from the relation of total tree height to diameter outside bark at breast height (DOB). Total tree heights and corresponding diameters were obtained from stem analyses of 562 trees distributed across 56 sampling locations in the true fir forest type of California and Oregon. The resulting equations can predict...
Jeon, Ju-Hyun; Yoon, Jeungwon; Cho, Chong-Kwan; Jung, In-Chul; Kim, Sungchul; Lee, Suk-Hoon; Yoo, Hwa-Seung
2015-05-01
The aim of this study is to evaluate the efficacy and safety of acupuncture for radioactive iodine (RAI)-induced anorexia in thyroid cancer patients. Fourteen thyroid cancer patients with RAI-induced anorexia were randomized to a true acupuncture or sham acupuncture group. Both groups were given 6 true or sham acupuncture treatments in 2 weeks. Outcome measures included the change of the Functional Assessment of Anorexia and Cachexia Treatment (FAACT; Anorexia/Cachexia Subscale [ACS], Functional Assessment of Cancer Therapy-General [FACT-G]), Visual Analogue Scale (VAS), weight, body mass index (BMI), ACTH, and cortisol levels. The mean FAACT ACS scores of the true and sham acupuncture groups increased from baseline to exit in intention-to-treat (ITT) and per protocol (PP) analyses; the true acupuncture group showed a higher increase but with no statistical significance. Between groups, from baseline to the last treatment, statistically significant differences were found in ITT analysis of the Trial Outcome Index (TOI) score (P = .034) and in PP analysis of the TOI (P = .016), FACT-G (P = .045), and FAACT (P = .037) scores. There was no significant difference in VAS, weight, BMI, ACTH, and cortisol level changes between groups. Although the current study is based on a small sample of participants, our findings support the safety and potential use of acupuncture for RAI-induced anorexia and quality of life in thyroid cancer patients. © The Author(s) 2015.
ERIC Educational Resources Information Center
Ojerinde, Dibu; Popoola, Omokunmi; Onyeneho, Patrick; Egberongbe, Aminat
2016-01-01
Statistical procedure used in adjusting test score difficulties on test forms is known as "equating". Equating makes it possible for various test forms to be used interchangeably. In terms of where the equating method fits in the assessment cycle, there are pre-equating and post-equating methods. The major benefits of pre-equating, when…
ERIC Educational Resources Information Center
Store, Davie
2013-01-01
The impact of particular types of context effects on actual scores is less understood although there has been some research carried out regarding certain types of context effects under the nonequivalent anchor test (NEAT) design. In addition, the issue of the impact of item context effects on scores has not been investigated extensively when item…
Díaz-Morales, Juan F; Randler, Christoph; Arrona-Palacios, Arturo; Adan, Ana
2017-01-01
The aim of this study was to provide validity evidence for the Spanish version of the Morningness-Eveningness-Stability Scale improved (MESSi), a novel assessment of circadian typology that considers subjective phase and amplitude through its morning affect (MA), eveningness (EV) and distinctness (DI; subjective amplitude) sub-scales. Convergent validity of the MESSi with the reduced Morningness-Eveningness Questionnaire (rMEQ) and relationships with the General Health Questionnaire (GHQ-12) and sensitivity to reward and punishment (SR and SP) were analyzed. Two different Spanish samples, young undergraduate students (n = 891, 18-30 years) and adult workers (n = 577, 31-65 years), participated in this study. Exploratory structural equation modeling (ESEM) of the MESSi displayed acceptable fit for a three-factor measurement model. Percentiles of the MA, EV and DI sub-scales were obtained for students and adults. The MESSi showed good convergent validity with the rMEQ scores, with higher correlation coefficients for the MA and EV sub-scales and a lower one for DI. In both young students and adult workers, MA was negatively related to the GHQ-12 and SP, but the percentage of explained variance (6% and 3%) was lower than for the positive correlations of DI with the GHQ-12 and SP (20% and 13%). Morning types presented higher MA and lower EV scores than the other two typologies among both students and adult workers, whereas differences in DI were found only among students (lowest in the evening type). Candidates for psychological symptoms and mental disorders ("true cases"), identified with the clinical cut-off criteria of the GHQ-12, showed lower MA and higher DI among students, whereas only DI was higher for "true cases" among adults. These results support the view that subjective amplitude is a factor related to, but also differentiated from, morningness-eveningness (preferred time for a certain activity). The measure of amplitude might be more important than circadian phase for health consequences.
Central role of the observable electric potential in transport equations.
Garrido, J; Compañ, V; López, M L
2001-07-01
Nonequilibrium systems are usually studied in the framework of transport equations that involve the true electric potential (TEP), a nonobservable variable. Nevertheless, another electric potential, the observable electric potential (OEP), may be defined to construct a useful set of transport equations. In this paper several basic characteristics of the OEP are deduced and emphasized: (i) the OEP distribution depends on the thermodynamic state of the solution, (ii) the observable equations provide a reference for all other transport equations, (iii) the bridge connecting the OEP with a given TEP is usually defined by the ion activity coefficient, (iv) the electric charge density is a nonobservable variable, and (v) the OEP formulation constitutes a natural model for studying fluxes in membrane systems.
Learning partial differential equations via data discovery and sparse optimization
NASA Astrophysics Data System (ADS)
Schaeffer, Hayden
2017-01-01
We investigate the problem of learning an evolution equation directly from some given data. This work develops a learning algorithm to identify the terms in the underlying partial differential equations and to approximate the coefficients of the terms only using data. The algorithm uses sparse optimization in order to perform feature selection and parameter estimation. The features are data driven in the sense that they are constructed using nonlinear algebraic equations on the spatial derivatives of the data. Several numerical experiments show the proposed method's robustness to data noise and size, its ability to capture the true features of the data, and its capability of performing additional analytics. Examples include shock equations, pattern formation, fluid flow and turbulence, and oscillatory convection.
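The sparse-optimization idea in the abstract above can be illustrated with a sequentially thresholded least-squares sketch (in the spirit of SINDy-style identification — an assumption on our part, as the paper's own algorithm may differ): fit the target against a library of candidate terms, zero out small coefficients, and refit on the survivors.

```python
import random

def solve(A, b):
    # Gaussian elimination with partial pivoting for a small dense system.
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def stlsq(features, target, threshold=0.1, iters=5):
    # Sequentially thresholded least squares:
    # repeatedly fit, drop small coefficients, refit on the survivors.
    n = len(features)
    active = list(range(n))
    coeffs = [0.0] * n
    for _ in range(iters):
        A = [[sum(features[i][k] * features[j][k] for k in range(len(target)))
              for j in active] for i in active]
        b = [sum(features[i][k] * target[k] for k in range(len(target)))
             for i in active]
        sol = solve(A, b)
        coeffs = [0.0] * n
        for idx, c in zip(active, sol):
            coeffs[idx] = c
        active = [i for i in range(n) if abs(coeffs[i]) > threshold]
    return coeffs

# Synthetic data: the target is 0.5*u - 0.8*u_x, with u_x^2 as a decoy term.
random.seed(0)
u = [random.uniform(-1, 1) for _ in range(200)]
ux = [random.uniform(-1, 1) for _ in range(200)]
features = [u, ux, [v * v for v in ux]]  # candidate library
target = [0.5 * a - 0.8 * b for a, b in zip(u, ux)]
coeffs = stlsq(features, target)
```

Run on this synthetic data, the decoy u_x^2 term is eliminated and the two true coefficients (0.5 and -0.8) are recovered, which is the feature-selection behavior the abstract describes.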
K. Leroy Dolph
1989-01-01
Inside bark diameters of young-growth red fir can be estimated from the relationship of inside bark diameter to outside bark diameter at breast height. Inside and outside bark diameters were obtained from stem analyses of 562 trees distributed across 56 sampling locations in the true fir forest type of California and southern Oregon. The resulting equation can predict...
NASA Astrophysics Data System (ADS)
Bonacci, Ognjen; Željković, Ivana; Trogrlić, Robert Šakić; Milković, Janja
2013-10-01
Differences between true mean daily, monthly and annual air temperatures T0 [Eq. (1)] and temperatures calculated with three different equations [(2), (3) and (4)] commonly used in climatological practice were investigated at three main Croatian meteorological stations from 1 January 1999 to 31 December 2011. The stations are situated in three climatically distinct areas: (1) Zagreb-Grič (mild continental climate), (2) Zavižan (cold mountain climate), and (3) Dubrovnik (hot Mediterranean climate). The T1 [Eq. (2)] and T3 [Eq. (4)] mean temperatures are defined by algorithms based on weighted means of temperatures measured at irregularly spaced, yet fixed, hours. T2 [Eq. (3)] is the mean temperature defined as the average of the daily maximum and minimum temperatures. Both the equation used and the times of observation introduce a bias into the mean temperatures. The largest differences occur for mean daily temperatures: across all three equations and all analysed stations, the daily differences range from -3.73 °C to +3.56 °C, the monthly differences from -1.39 °C to +0.79 °C, and the annual differences from -0.76 °C to +0.30 °C.
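To make the comparison concrete, here is a minimal sketch of the kinds of estimate the study compares, on one synthetic day of hourly readings. The fixed observation hours (07, 14 and 21 local time, with the 21 h reading counted twice) are an assumed convention for illustration; the study's actual Eqs. (2)-(4) may weight different hours.

```python
import math

def true_daily_mean(hourly):
    # T0-style "true" mean: average of all 24 hourly readings.
    return sum(hourly) / len(hourly)

def weighted_fixed_hours_mean(hourly):
    # T1-style estimate from fixed hours (assumed convention):
    # (T07 + T14 + 2 * T21) / 4
    return (hourly[7] + hourly[14] + 2 * hourly[21]) / 4

def minmax_mean(hourly):
    # T2-style estimate: average of the daily extremes.
    return (max(hourly) + min(hourly)) / 2

# Synthetic day: minimum in the early morning, maximum at 14 h.
hourly = [15 + 8 * math.sin(math.pi * (h - 5) / 18) for h in range(24)]
t0 = true_daily_mean(hourly)
t1 = weighted_fixed_hours_mean(hourly)
t2 = minmax_mean(hourly)
```

On this synthetic day the min/max estimate runs cool and the fixed-hours estimate runs warm relative to the true mean; the study's point is that the size and sign of such biases depend on the formula and the local climate.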
An Evaluation of Three Approximate Item Response Theory Models for Equating Test Scores.
ERIC Educational Resources Information Center
Marco, Gary L.; And Others
Three item response models were evaluated for estimating item parameters and equating test scores. The models, which approximated the traditional three-parameter model, included: (1) the Rasch one-parameter model, operationalized in the BICAL computer program; (2) an approximate three-parameter logistic model based on coarse group data divided…
Bi-Factor MIRT Observed-Score Equating for Mixed-Format Tests
ERIC Educational Resources Information Center
Lee, Guemin; Lee, Won-Chan
2016-01-01
The main purposes of this study were to develop bi-factor multidimensional item response theory (BF-MIRT) observed-score equating procedures for mixed-format tests and to investigate relative appropriateness of the proposed procedures. Using data from a large-scale testing program, three types of pseudo data sets were formulated: matched samples,…
Computational technique for stepwise quantitative assessment of equation correctness
NASA Astrophysics Data System (ADS)
Othman, Nuru'l Izzah; Bakar, Zainab Abu
2017-04-01
Many of the computer-aided mathematics assessment systems available today can implement stepwise correctness checking of a working scheme for solving equations. The computational technique for assessing the correctness of each response in the scheme mainly involves checking mathematical equivalence and providing qualitative feedback. This paper presents a technique, known as the Stepwise Correctness Checking and Scoring (SCCS) technique, that checks the correctness of each equation in terms of structural equivalence and provides quantitative feedback. The technique, which is based on the Multiset framework, adapts techniques from textual information retrieval involving tokenization, document modelling and similarity evaluation. The performance of the SCCS technique was tested using worked solutions for linear algebraic equations in one variable. 350 working schemes comprising 1,385 responses were collected using a marking engine prototype developed from the technique. The results show that both the automated analytical scores and the automated overall scores generated by the marking engine exhibit high percent agreement, high correlation and a high degree of agreement with manual scores, with small average absolute and mixed errors.
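The pipeline the abstract names (tokenization, multiset modelling, similarity evaluation) can be sketched with a bag-of-tokens overlap score. The tokenizer and the Dice-style similarity below are illustrative assumptions, not the SCCS technique's actual definitions; note that a pure multiset overlap ignores token order, so it rewards having the right symbols rather than enforcing strict structural equivalence.

```python
from collections import Counter
import re

def tokenize(equation: str) -> Counter:
    # Split an equation string into a multiset of tokens:
    # numbers, variable names, and operator/parenthesis characters.
    return Counter(re.findall(r"\d+|[a-zA-Z]+|[^\s\w]", equation))

def similarity(step: str, reference: str) -> float:
    # Multiset (bag) overlap: 2 * |A & B| / (|A| + |B|)
    a, b = tokenize(step), tokenize(reference)
    overlap = sum((a & b).values())
    return 2 * overlap / (sum(a.values()) + sum(b.values()))

# A commutative rearrangement scores 1.0; a sign slip scores lower.
s_ok = similarity("2x + 3 = 7", "3 + 2x = 7")
s_err = similarity("2x = 7 - 3", "2x = 7 + 3")
```

A marking engine of the kind described could score each line of a working scheme against the expected step this way and accumulate the per-step scores into an analytical total.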
DOE Office of Scientific and Technical Information (OSTI.GOV)
Benedetti, R. L.; Lords, L. V.; Kiser, D. M.
1978-02-01
The SCORE-EVET code was developed to study multidimensional transient fluid flow in nuclear reactor fuel rod arrays. The conservation equations used were derived by volume averaging the transient compressible three-dimensional local continuum equations in Cartesian coordinates. No assumptions associated with subchannel flow have been incorporated into the derivation of the conservation equations. In addition to the three-dimensional fluid flow equations, the SCORE-EVET code contains: (a) a one-dimensional steady state solution scheme to initialize the flow field, (b) steady state and transient fuel rod conduction models, and (c) comprehensive correlation packages to describe fluid-to-fuel-rod interfacial energy and momentum exchange. Velocity and pressure boundary conditions can be specified as a function of time and space to model reactor transient conditions such as a hypothesized loss-of-coolant accident (LOCA) or flow blockage.
Sherlock Holmes and child psychopathology assessment approaches: the case of the false-positive.
Jensen, P S; Watanabe, H
1999-02-01
To explore the relative value of various methods of assessing childhood psychopathology, the authors compared 4 groups of children: those who met criteria for one or more DSM diagnoses and scored high on parent symptom checklists, those who met psychopathology criteria on either one of these two assessment approaches alone, and those who met no psychopathology assessment criterion. Parents of 201 children completed the Child Behavior Checklist (CBCL), after which children and parents were administered the Diagnostic Interview Schedule for Children (version 2.1). Children and parents also completed other survey measures and symptom report inventories. The 4 groups of children were compared against "external validators" to examine the merits of "false-positive" and "false-negative" cases. True-positive cases (those that met DSM criteria and scored high on the CBCL) differed significantly from the true-negative cases on most external validators. "False-positive" and "false-negative" cases had intermediate levels of most risk factors and external validators. "False-positive" cases were not normal per se because they scored significantly above the true-negative group on a number of risk factors and external validators. A similar but less marked pattern was noted for "false-negatives." Findings call into question whether cases with high symptom checklist scores despite no formal diagnoses should be considered "false-positive." Pending the availability of robust markers for mental illness, researchers and clinicians must resist the tendency to reify diagnostic categories or to engage in arcane debates about the superiority of one assessment approach over another.
ERIC Educational Resources Information Center
Bliss, Leonard B.
The aim of this study was to show that the superiority of corrected-for-guessing scores over number right scores as true score estimates depends on the ability of examinees to recognize situations where they can eliminate one or more alternatives as incorrect and to omit items where they would only be guessing randomly. Previous investigations…
Standard Errors of Equating Differences: Prior Developments, Extensions, and Simulations
ERIC Educational Resources Information Center
Moses, Tim; Zhang, Wenmin
2011-01-01
The purpose of this article was to extend the use of standard errors for equated score differences (SEEDs) to traditional equating functions. The SEEDs are described in terms of their original proposal for kernel equating functions and extended so that SEEDs for traditional linear and traditional equipercentile equating functions can be computed.…
Exploring a Source of Uneven Score Equity across the Test Score Range
ERIC Educational Resources Information Center
Huggins-Manley, Anne Corinne; Qiu, Yuxi; Penfield, Randall D.
2018-01-01
Score equity assessment (SEA) refers to an examination of population invariance of equating across two or more subpopulations of test examinees. Previous SEA studies have shown that score equity may be present for examinees scoring at particular test score ranges but absent for examinees scoring at other score ranges. No studies to date have…
An Investigation of the Sampling Distributions of Equating Coefficients.
ERIC Educational Resources Information Center
Baker, Frank B.
1996-01-01
Using the characteristic curve method for dichotomously scored test items, the sampling distributions of equating coefficients were examined. Simulations indicate that for the equating conditions studied, the sampling distributions of the equating coefficients appear to have acceptable characteristics, suggesting confidence in the values obtained…
ERIC Educational Resources Information Center
Yang, Chongming; Nay, Sandra; Hoyle, Rick H.
2010-01-01
Lengthy scales or testlets pose certain challenges for structural equation modeling (SEM) if all the items are included as indicators of a latent construct. Three general approaches to modeling lengthy scales in SEM (parceling, latent scoring, and shortening) have been reviewed and evaluated. A hypothetical population model is simulated containing…
ERIC Educational Resources Information Center
Zu, Jiyun; Yuan, Ke-Hai
2012-01-01
In the nonequivalent groups with anchor test (NEAT) design, the standard error of linear observed-score equating is commonly estimated by an estimator derived assuming multivariate normality. However, real data are seldom normally distributed, causing this normal estimator to be inconsistent. A general estimator, which does not rely on the…
Using tree diversity to compare phylogenetic heuristics.
Sul, Seung-Jin; Matthews, Suzanne; Williams, Tiffani L
2009-04-29
Evolutionary trees are family trees that represent the relationships among a group of organisms. Phylogenetic heuristics are used to search stochastically for the best-scoring trees in tree space. Given that better tree scores are believed to be better approximations of the true phylogeny, traditional evaluation techniques have used tree scores to determine the heuristics that find the best scores in the fastest time. We develop new techniques to evaluate phylogenetic heuristics based on both tree scores and topologies to compare Pauprat and Rec-I-DCM3, two popular Maximum Parsimony search algorithms. Our results show that although Pauprat and Rec-I-DCM3 find trees with the same best scores, topologically these trees are quite different. Furthermore, the Rec-I-DCM3 trees cluster distinctly from the Pauprat trees. In addition to heatmap visualizations that use parsimony scores and the Robinson-Foulds distance to compare the best-scoring trees found by the two heuristics, we develop entropy-based methods to show the diversity of the trees found. Overall, Pauprat identifies more diverse trees than Rec-I-DCM3. Our work thus shows that there is value in comparing heuristics beyond the parsimony scores that they find. Pauprat is a slower heuristic than Rec-I-DCM3, but there is tremendous value in using Pauprat to reconstruct trees, especially since it finds identically scoring but topologically distinct trees. Hence, instead of discounting Pauprat, effort should go into improving its implementation. Ultimately, improved performance measures lead to better phylogenetic heuristics and better approximations of the true evolutionary history of the organisms of interest.
SCORE should be preferred to Framingham to predict cardiovascular death in French population.
Marchant, Ivanny; Boissel, Jean-Pierre; Kassaï, Behrouz; Bejan, Theodora; Massol, Jacques; Vidal, Chrystelle; Amsallem, Emmanuel; Naudin, Florence; Galan, Pilar; Czernichow, Sébastien; Nony, Patrice; Gueyffier, François
2009-10-01
Numerous studies have examined the validity of available scores for predicting absolute cardiovascular risk. We developed a virtual population based on data representative of the French population and compared the performance of the two most popular risk equations for predicting cardiovascular death: Framingham and SCORE. A population was built from official French demographic statistics and summarized data from representative observational studies. The 10-year coronary and cardiovascular death risks and their ratio were computed for each individual with the SCORE and Framingham equations. The resulting rates were compared with those derived from national vital statistics. Framingham overestimated French coronary deaths by a factor of 2.8 in men and 1.9 in women, and cardiovascular deaths by 1.5 in men and 1.3 in women. SCORE overestimated coronary deaths by 1.6 in men and 1.7 in women, and underestimated cardiovascular deaths by factors of 0.94 in men and 0.85 in women. Our results revealed an exaggerated representation of coronary deaths among the cardiovascular deaths predicted by Framingham, with predicted coronary deaths exceeding cardiovascular deaths for some individual profiles. Sensitivity analyses gave some insight into the internal inconsistency of the Framingham equations. The evidence indicates that SCORE should be preferred to Framingham for predicting cardiovascular death risk in the French population. This discrepancy between prediction scores is likely to be observed in other populations. To improve the validation of risk equations, specific guidelines should be issued to harmonize outcome definitions across epidemiologic studies. Prediction models should be calibrated for risk differences across space and time.
ERIC Educational Resources Information Center
Ketterer, Holly L.; Han, Kyunghee; Hur, Jaehong; Moon, Kyungjoo
2010-01-01
In response to the concern that Minnesota Multiphasic Personality Inventory-2 (MMPI-2; J. N. Butcher, W. Dahlstrom, J. R. Graham, A. Tellegen, & B. Kaemmer, 1989; J. N. Butcher et al., 2001) Variable Response Inconsistency (VRIN) and True Response Inconsistency (TRIN) score invalidity criteria recommended for use with American samples results…
Gregory M. Filip
1989-01-01
In 1979, an equation was developed to estimate the percentage of current and future timber volume loss due to stem decay caused by Heterobasidion annosum and other fungi in advance regeneration stands of grand and white fir in eastern Oregon and Washington. Methods for using and testing the equation are presented. Extensive testing in 1988 showed the...
Spurious Numerical Solutions Of Differential Equations
NASA Technical Reports Server (NTRS)
Lafon, A.; Yee, H. C.
1995-01-01
Paper presents detailed study of spurious steady-state numerical solutions of differential equations that contain nonlinear source terms. Main objectives of this study are (1) to investigate how well numerical steady-state solutions of model nonlinear reaction/convection boundary-value problem mimic true steady-state solutions and (2) to relate findings of this investigation to implications for interpretation of numerical results from computational-fluid-dynamics algorithms and computer codes used to simulate reacting flows.
Reference values and equations reference of balance for children of 8 to 12 years.
Libardoni, Thiele de Cássia; Silveira, Carolina Buzzi da; Sinhorim, Larissa Milani Brognoli; Oliveira, Anamaria Siriani de; Santos, Márcio José Dos; Santos, Gilmar Moraes
2018-02-01
There are still no normative data on balance sway for school-age children in Brazil. We aimed to establish reference ranges for balance scores and to develop prediction equations for estimating balance scores in children aged 8 to 12 years. The study included 165 healthy children (83 boys and 82 girls; age, 8-12 years) recruited from a public school in the city of Florianópolis, Santa Catarina, Brazil. We used the Sensory Organization Test to assess the balance scores and a digital scale and a stadiometer to measure the anthropometric variables. We tested a stepwise multiple-regression model with sex, height, weight, and mid-thigh circumference of the dominant leg as predictors of the balance score. For all experimental conditions, girls' age accounted for over 85% of the variability in balance scores, whereas boys' age accounted for only 55%. Balance scores therefore increase with age for both boys and girls. This study described ranges of age- and sex-specific normative values for balance scores in children during the 6 testing conditions established by the Sensory Organization Test. We confirmed that age was the predictor that best explained the variability in balance scores in children between 8 and 12 years old. These findings motivate a new and more comprehensive study to estimate balance scores from prediction equations for the overall Brazilian pediatric population. Copyright © 2017 Elsevier B.V. All rights reserved.
Quantum power functional theory for many-body dynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schmidt, Matthias, E-mail: Matthias.Schmidt@uni-bayreuth.de
2015-11-07
We construct a one-body variational theory for the time evolution of nonrelativistic quantum many-body systems. The position- and time-dependent one-body density, particle current, and time derivative of the current act as three variational fields. The generating (power rate) functional is minimized by the true current time derivative. The corresponding Euler-Lagrange equation, together with the continuity equation for the density, forms a closed set of one-body equations of motion. Space- and time-nonlocal one-body forces are generated by the superadiabatic contribution to the functional. The theory applies to many-electron systems.
ERIC Educational Resources Information Center
Lamborn, Susie D.; And Others
1991-01-01
Of 4,100 adolescents, those who characterized their parents as authoritative scored highest on psychosocial competence and lowest on behavioral dysfunction. The reverse was true for neglected adolescents. Adolescents from authoritarian homes scored high on obedience but low on self-perception. Adolescents from indulgent homes evidenced…
NASA Technical Reports Server (NTRS)
Rybicki, G. B.; Hummer, D. G.
1991-01-01
A method is presented for solving multilevel transfer problems when nonoverlapping lines and background continuum are present and active continuum transfer is absent. An approximate lambda operator is employed to derive linear, 'preconditioned', statistical-equilibrium equations. A method is described for finding the diagonal elements of the 'true' numerical lambda operator, and therefore for obtaining the coefficients of the equations. Iterations of the preconditioned equations, in conjunction with the transfer equation's formal solution, are used to solve linear equations. Some multilevel problems are considered, including an eleven-level neutral helium atom. Diagonal and tridiagonal approximate lambda operators are utilized in the problems to examine the convergence properties of the method, and it is found to be effective for the line transfer problems.
An Approach to Scoring and Equating Tests with Binary Items: Piloting With Large-Scale Assessments
ERIC Educational Resources Information Center
Dimitrov, Dimiter M.
2016-01-01
This article describes an approach to test scoring, referred to as "delta scoring" (D-scoring), for tests with dichotomously scored items. The D-scoring uses information from item response theory (IRT) calibration to facilitate computations and interpretations in the context of large-scale assessments. The D-score is computed from the…
T56. AN EXPLORATORY ANALYSIS CONVERTING SCORES BETWEEN THE PANSS AND BNSS
Kott, Alan; Daniel, David
2018-01-01
Abstract Background The Brief Negative Symptom Scale (BNSS) is a relatively new instrument designed specifically to measure the negative symptoms of schizophrenia. Recently, more clinical trials have included the BNSS as a secondary or exploratory outcome, typically along with the PANSS. In the current analysis we aimed to establish equations that allow conversion between the BNSS total score and the PANSS negative subscale and PANSS negative factor scores, as well as conversion equations between the expressive deficits and avolition/apathy factors of the two scales. (Kirkpatrick, 2011; Strauss, 2012) Methods Data from 518 schizophrenia clinical trial subjects with both PANSS and BNSS data available were used. Regression analyses predicting the BNSS total score from the PANSS negative subscale score, and the BNSS total score from the PANSS negative factor (NFS) score, were performed on data from all subjects. Regression analyses predicting the BNSS avolition/apathy factor (items 1, 2, 3, 5, 6, 7, and 8) from the PANSS avolition/apathy factor (items N2, N4, and G16), and the BNSS expressive deficits factor (items 4, 9, 10, 11, 12, and 13) from the expressive deficits factor (items N1, N3, N6, G5, G7, and G13) of the PANSS, were performed on a sample of 318 subjects with individual BNSS item scores available. In addition to estimating the equations, we also calculated Pearson's correlations between the scales. Results The PANSS and BNSS avolition/apathy factors were highly correlated (r = 0.70), as were the expressive deficit factors (r = 0.83).
The following equations predicting the BNSS total score were obtained from regression analyses performed on 2,560 data points:
BNSS_total = -11.64 + 2.10 * PANSS_negative_subscale
BNSS_total = -9.26 + 2.11 * PANSS_NFS
The following equations predicting the BNSS factor scores from the PANSS factor scores were obtained from regression analyses performed on 1,634 data points:
BNSS_avolition/apathy = -2.40 + 2.38 * PANSS_avolition/apathy
BNSS_expressive_deficit_factor = -4.21 + 1.27 * PANSS_expressive_deficit_factor
Discussion The BNSS differs from the PANSS negative factor because it addresses all five currently recognized domains of negative symptoms, including anhedonia, and attempts to differentiate anticipatory from consummatory states. In our analysis we replicated the strong correlation between the BNSS total score and the PANSS negative subscale, and newly identified strong correlations between the BNSS total score and the NFS, as well as between the avolition/apathy and expressive deficit factors of the BNSS and the PANSS. (Kirkpatrick, 2011) The provided equations offer a useful tool that allows researchers and clinicians to easily convert data between the instruments, for example to pool data from multiple trials that used one of the instruments or to interpret results in the context of previously conducted research; they also offer a framework for risk-based monitoring, identifying data that deviate from the expected relationship and allowing a targeted exploration of the causes of such disagreement. The data used for the analysis included not only subjects with predominantly negative symptoms but also acutely psychotic subjects and subjects in stable condition, so the results generalize across the majority of subjects with schizophrenia. This post-hoc analysis is exploratory.
We plan to further explore the potential utility of equations addressing the relationships among schizophrenia measures of symptom severity in an iterative manner with larger datasets.
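For convenience, the reported regressions can be wrapped in small helpers. This is a minimal sketch: the coefficients are exactly those quoted in the abstract above, the function names are ours, and the equations are linear approximations, not exact score conversions.

```python
def bnss_total_from_panss_negative(panss_negative_subscale: float) -> float:
    # BNSS_total = -11.64 + 2.10 * PANSS_negative_subscale
    return -11.64 + 2.10 * panss_negative_subscale

def bnss_total_from_panss_nfs(panss_nfs: float) -> float:
    # BNSS_total = -9.26 + 2.11 * PANSS_NFS
    return -9.26 + 2.11 * panss_nfs

def bnss_avolition_apathy(panss_avolition_apathy: float) -> float:
    # BNSS_avolition/apathy = -2.40 + 2.38 * PANSS_avolition/apathy
    return -2.40 + 2.38 * panss_avolition_apathy

def bnss_expressive_deficit(panss_expressive_deficit: float) -> float:
    # BNSS_expressive_deficit = -4.21 + 1.27 * PANSS_expressive_deficit
    return -4.21 + 1.27 * panss_expressive_deficit
```

For example, a PANSS negative subscale score of 20 maps to roughly 30.4 BNSS total points under the first equation.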
2008-08-01
version of NCAPS, participants higher in cognitive ability and reading ability were able to produce higher fakability scores. Higher intelligence ... intelligence and reading ability. Therefore, the adaptive paired- comparison NCAPS is very likely to provide scores close to the true trait scores for...regardless of the intelligence or reading levels associated with those taking the adaptive NCAPS; it will be difficult to fake the adaptive paired
Abreu, P C; Greenberg, D A; Hodge, S E
1999-09-01
Several methods have been proposed for linkage analysis of complex traits with unknown mode of inheritance. These methods include the LOD score maximized over disease models (MMLS) and the "nonparametric" linkage (NPL) statistic. In previous work, we evaluated the increase in type I error when maximizing over two or more genetic models, and we compared the power of MMLS to detect linkage, under a number of complex modes of inheritance, with that of analysis assuming the true model. In the present study, we compare MMLS and NPL directly. We simulated 100 data sets with 20 families each, using 26 generating models: (1) 4 intermediate models (penetrance of the heterozygote between that of the two homozygotes); (2) 6 two-locus additive models; and (3) 16 two-locus heterogeneity models (admixture alpha = 1.0, 0.7, 0.5, and 0.3; alpha = 1.0 replicates simple Mendelian models). For the LOD scores, we assumed dominant and recessive inheritance with 50% penetrance. We took the higher of the two maximum LOD scores and subtracted 0.3 to correct for multiple tests (MMLS-C). We compared expected maximum LOD scores and power, using MMLS-C and NPL as well as the true model. Since NPL uses only the affected family members, we also performed an affecteds-only analysis using MMLS-C. The MMLS-C was uniformly more powerful than NPL in most of the cases we examined, except when linkage information was low, and its power was close to that of the true-model analysis under locus heterogeneity. We also found better power for the MMLS-C compared with NPL in the affecteds-only analysis. The results show that use of two simple modes of inheritance at a fixed penetrance can have more power than NPL when the trait mode of inheritance is complex and when there is heterogeneity in the data set.
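The MMLS-C correction described above is a one-liner; the function below uses our naming, with the 0.3 subtraction taken from the abstract.

```python
def mmls_c(lod_dominant: float, lod_recessive: float) -> float:
    # Higher of the two maximized LOD scores, minus 0.3 to correct
    # for testing two inheritance models (per the abstract).
    return max(lod_dominant, lod_recessive) - 0.3
```

For example, a dominant-model LOD of 3.1 and a recessive-model LOD of 2.4 give a corrected score of 2.8.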
Kunikata, Hiroko; Shiraishi, Yuko; Nakajima, Kazuo; Tanioka, Tetsuya; Tomotake, Masahito
2011-02-01
The purpose of this study was to demonstrate a causal model of the sense of having a psychologically comfortable space, called 'ibasho' in Japanese, and self-esteem in people with mental disorders who had difficulty with social activities. The subjects were 248 schizophrenia patients who were living in the community and receiving day care treatment. Data were collected from December 2007 to April 2009 using the Scale for the Sense of Ibasho for persons with mental illness (SSI) and the Rosenberg Self-Esteem Scale (RSES), and analyzed for cross-validation of construct validity by conducting covariance structure analysis. The relationship between the sense of having a comfortable space and self-esteem was investigated. Multiple indicator models of the sense of having a psychologically comfortable space and self-esteem were evaluated using structural equation modeling. Furthermore, the SSI scores were compared between the high- and low-self-esteem groups. The path coefficient from the sense of having a comfortable space to self-esteem was significant (0.80). The high-self-esteem group scored significantly higher than the low-self-esteem group on the SSI subscales 'the sense of recognizing my true self' and 'the sense of recognizing deep person-to-person relationships'. These results suggest that, to help people with mental disorders improve their self-esteem, it may be useful to support them in ways that enhance their sense of having a comfortable space.
Bobo, William V; Angleró, Gabriela C; Jenkins, Gregory; Hall-Flavin, Daniel K; Weinshilboum, Richard; Biernacka, Joanna M
2016-05-01
The study aimed to define thresholds of clinically significant change in 17-item Hamilton Depression Rating Scale (HDRS-17) scores using the Clinical Global Impression-Improvement (CGI-I) Scale as a gold standard. We conducted a secondary analysis of individual patient data from the Pharmacogenomic Research Network Antidepressant Medication Pharmacogenomic Study, an 8-week, single-arm clinical trial of citalopram or escitalopram treatment of adults with major depression. We used equipercentile linking to identify levels of absolute and percent change in HDRS-17 scores that equated with scores on the CGI-I at 4 and 8 weeks. Additional analyses equated changes in the HDRS-7 and Bech-6 scale scores with CGI-I scores. A CGI-I score of 2 (much improved) corresponded to an absolute decrease (improvement) in HDRS-17 total score of 11 points and a percent decrease of 50-57% from baseline values. Similar results were observed for percent change in HDRS-7 and Bech-6 scores. Larger absolute (but not percent) decreases in HDRS-17 scores equated with CGI-I scores of 2 in persons with higher baseline depression severity. Our results support the consensus definition of response based on HDRS-17 scores (>50% decrease from baseline). A similar definition of response may apply to the HDRS-7 and Bech-6. Copyright © 2016 John Wiley & Sons, Ltd.
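Equipercentile linking, the method used in this study, maps a score on one scale to the score with the same percentile rank on the other. A minimal sketch using linear interpolation; the study's operational procedure and any presmoothing steps are not reproduced, and the data passed in below are illustrative, not trial data:

```python
import numpy as np

def equipercentile_link(x_scores, y_scores):
    """Return a function mapping a value on scale X to the value on
    scale Y that has the same percentile rank, using mid-percentile
    ranks and linear interpolation between observed score points."""
    xs = np.sort(np.asarray(x_scores, dtype=float))
    ys = np.sort(np.asarray(y_scores, dtype=float))
    px = (np.arange(xs.size) + 0.5) / xs.size  # percentile ranks on X
    py = (np.arange(ys.size) + 0.5) / ys.size  # percentile ranks on Y
    def link(x):
        p = np.interp(x, xs, px)               # rank of x within X
        return float(np.interp(p, py, ys))     # Y score at that rank
    return link
```

Applied to paired samples of HDRS-17 percent-change scores and CGI-I ratings, the returned function would give the CGI-I level equated with a given HDRS-17 change.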
Multiple Linking in Equating and Random Scale Drift. Research Report. ETS RR-11-46
ERIC Educational Resources Information Center
Guo, Hongwen; Liu, Jinghua; Dorans, Neil; Feigenbaum, Miriam
2011-01-01
Maintaining score stability is crucial for an ongoing testing program that administers several tests per year over many years. One way to stall the drift of the score scale is to use an equating design with multiple links. In this study, we use the operational and experimental SAT® data collected from 44 administrations to investigate the effect…
Misinformation, partial knowledge and guessing in true/false tests.
Burton, Richard F
2002-09-01
Examiners disagree on whether or not multiple choice and true/false tests should be negatively marked. Much of the debate has been clouded by neglect of the role of misinformation and by vagueness regarding both the specification of test types and "partial knowledge" in relation to guessing. Moreover, variations in risk-taking in the face of negative marking have too often been treated in absolute terms rather than in relation to the effect of guessing on test unreliability. This paper aims to clarify these points and to compare the ill-effects on test reliability of guessing and of variable risk-taking. Three published studies on medical students are examined. These compare responses in true/false tests obtained with both negative marking and number-right scoring. The studies yield data on misinformation and on the extent to which students may fail to benefit from distrusted partial knowledge when there is negative marking. A simple statistical model is used to compare variations in risk-taking with test unreliability due to blind guessing under number-right scoring conditions. Partial knowledge should be least problematic with independent true/false items. The effect on test reliability of blind guessing under number-right conditions is generally greater than that due to the over-cautiousness of some students when there is negative marking.
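The blind-guessing effect discussed above follows from a simple binomial model: under number-right scoring, each unknown true/false item is answered correctly with probability 0.5. A sketch under that assumption; the item counts in the example are illustrative:

```python
def number_right_guessing(n_items, n_known):
    """Expected score and guessing variance on a true/false test scored
    number-right, when the n_items - n_known unknown items are answered
    by blind guessing (p = 0.5 per item, binomial model). The variance
    term is the noise guessing adds, which degrades test reliability."""
    unknown = n_items - n_known
    expected = n_known + unknown / 2
    variance = unknown / 4
    return expected, variance

# 100 items with 60 known: expected score 80.0, guessing variance 10.0.
```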
On the dynamics of approximating schemes for dissipative nonlinear equations
NASA Technical Reports Server (NTRS)
Jones, Donald A.
1993-01-01
Since one can rarely write down the analytical solutions to nonlinear dissipative partial differential equations (PDEs), it is important to understand whether, and in what sense, the behavior of approximating schemes for these equations reflects the true dynamics of the original equations. Further, because standard error estimates between approximate solutions (coming from spectral, finite difference, or finite element schemes, for example) and the exact solutions grow exponentially in time, such analysis provides little insight into the infinite-time behavior of a given approximating scheme. The notion of the global attractor has been useful in quantifying the infinite-time behavior of dissipative PDEs, such as the Navier-Stokes equations. Loosely speaking, the global attractor is all that remains of a sufficiently large bounded set in phase space mapped infinitely forward in time under the evolution of the PDE. Though the attractor has been shown to have some nice properties (it is compact, connected, and finite dimensional, for example), it is in general quite complicated. Nevertheless, the global attractor gives a way to understand how the infinite-time behavior of approximating schemes, such as those coming from a finite difference, finite element, or spectral method, relates to that of the original PDE. Indeed, one can often show that such approximations also have a global attractor. We therefore only need to understand how the structure of the attractor for the PDE behaves under approximation. This is by no means a trivial task, though several interesting results have been obtained in this direction; we will not go into the details. We mention here that approximations generally lose information about the system no matter how accurate they are. There are examples showing that certain parts of the attractor may be lost under arbitrarily small perturbations of the original equations.
Zhang, Weisheng; Lin, Jiang; Wang, Shaowu; Lv, Peng; Wang, Lili; Liu, Hao; Chen, Caizhong; Zeng, Mengsu
2014-01-01
This study aimed to evaluate the accuracy of "True Fast Imaging with Steady-State Precession" (TrueFISP) MR angiography (MRA) for diagnosis of renal arterial stenosis (RAS) in hypertensive patients. Twenty-two patients underwent both TrueFISP MRA and contrast-enhanced MRA (CE-MRA) on a 1.5-T MR imager. Volume of the main renal arteries, length of the maximal visible renal arteries, number of visualized branches, stenotic grade, and subjective quality were compared. Paired 2-tailed Student t tests and the Wilcoxon signed rank test were applied to evaluate the significance of these variables. Volume of the main renal arteries, length of the maximal visible renal arteries, and number of branches showed no significant difference between the 2 techniques (P > 0.05). The stenotic degree of 10 RAS lesions was graded higher on CE-MRA than on TrueFISP MRA. Qualitative scores for TrueFISP MRA were higher than those for CE-MRA (P < 0.05). TrueFISP MRA is a reliable and accurate method for evaluating RAS.
Nuclear data correlation between different isotopes via integral information
NASA Astrophysics Data System (ADS)
Rochman, Dimitri A.; Bauge, Eric; Vasiliev, Alexander; Ferroukhi, Hakim; Perret, Gregory
2018-05-01
This paper presents a Bayesian approach based on integral experiments to create correlations between different isotopes which do not appear with differential data. A simple Bayesian set of equations is presented with random nuclear data, similarly to the usual methods applied with differential data. As a consequence, updated nuclear data (cross sections,
Southmoreland Middle School: A Model of True Collaboration
ERIC Educational Resources Information Center
Principal Leadership, 2013
2013-01-01
In 2003, Southmoreland was a seventh- and eighth-grade junior high school in the warning category under NCLB for failing to make adequate yearly progress. Scores on state tests were grim--only 39% of the students were proficient or advanced in math and 55% in reading. Two years later, the combined improvement in reading and math scores resulted in…
On the fakeness of fake supergravity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Celi, Alessio; Proeyen, Antoine van; Ceresole, Anna
2005-02-15
We revisit and complete the study of curved BPS-domain walls in matter-coupled 5D, N=2 supergravity and carefully analyze the relation to gravitational theories known as ''fake supergravities.'' We first show that curved BPS-domain walls require the presence of nontrivial hypermultiplet scalars, whereas walls that are solely supported by vector multiplet scalars are necessarily flat, due to the constraints from very special geometry. We then recover fake supergravity as the effective description of true supergravity where one restricts attention to the flowing scalar field of a given BPS-domain wall. In general, however, true supergravity can be simulated by fake supergravity at most locally, based upon two choices: (i) a suitable adapted coordinate system on the scalar manifold, such that only one scalar field plays a dynamical role, and (ii) a gauge fixing of the SU(2) connection on the quaternionic-Kaehler manifold, as this connection does not fit the simple formalism of fake supergravity. Employing these gauge and coordinate choices, the BPS-equations for both vector and hypermultiplet scalars become identical to the fake supergravity equations, once the line of flow is determined by the full supergravity equations.
He, Jianliang; Zhang, Datong; Zhang, Weiweng; Qiu, Cheng; Zhang, Wen
2017-01-01
The deformation behavior of homogenized Al–7.5Zn–1.5Mg–0.2Cu–0.2Zr alloy has been studied by a set of isothermal hot compression tests, carried out at temperatures ranging from 350 °C to 450 °C and strain rates ranging from 0.001 s−1 to 10 s−1 on a Gleeble-3500 thermal simulation machine. The associated microstructure was studied using electron backscattered diffraction (EBSD) and transmission electron microscopy (TEM). The results showed that the flow stress is sensitive to strain rate and deformation temperature. The shape of the true stress-strain curves obtained at low strain rates (≤0.1 s−1) shows the characteristic of dynamic recrystallization (DRX). Two Arrhenius-type constitutive equations, without and with strain compensation, were established based on the true stress-strain curves; the equation with strain compensation gave more accurate predictions. The main softening mechanism of the studied alloy is dynamic recovery (DRV) accompanied by DRX, particularly at deformation conditions with low Zener-Hollomon parameters. PMID:29057825
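The hyperbolic-sine Arrhenius form commonly used for such constitutive equations is strain rate = A·sinh(ασ)^n·exp(−Q/RT), inverted through the Zener-Hollomon parameter Z = strain rate·exp(Q/RT) to predict flow stress. A sketch of that inversion; the material constants in the example are placeholders, not the fitted values for this alloy:

```python
import math

R = 8.314  # universal gas constant, J/(mol*K)

def flow_stress(strain_rate, T_kelvin, A, alpha, n, Q):
    """Flow stress from the hyperbolic-sine Arrhenius constitutive law
    strain_rate = A * sinh(alpha*sigma)**n * exp(-Q/(R*T)), solved for
    sigma via the Zener-Hollomon parameter Z = strain_rate*exp(Q/(R*T))."""
    Z = strain_rate * math.exp(Q / (R * T_kelvin))
    return math.asinh((Z / A) ** (1.0 / n)) / alpha
```

Strain-compensated versions make A, alpha, n, and Q polynomial functions of strain, which is why they predict the full stress-strain curve more accurately.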
Improving IQ measurement in intellectual disabilities using true deviation from population norms
2014-01-01
Background: Intellectual disability (ID) is characterized by global cognitive deficits, yet the very IQ tests used to assess ID have limited range and precision in this population, especially for more impaired individuals. Methods: We describe the development and validation of a method of raw z-score transformation (based on general population norms) that ameliorates floor effects and improves the precision of IQ measurement in ID using the Stanford Binet 5 (SB5) in fragile X syndrome (FXS; n = 106), the leading inherited cause of ID, and in individuals with idiopathic autism spectrum disorder (ASD; n = 205). We compared the distributional characteristics and Q-Q plots from the standardized scores with the deviation z-scores. Additionally, we examined the relationship between both scoring methods and multiple criterion measures. Results: We found evidence that substantial and meaningful variation in cognitive ability on standardized IQ tests among individuals with ID is lost when converting raw scores to standardized scaled, index, and IQ scores. Use of the deviation z-score method rectifies this problem and accounts for significant additional variance in criterion validation measures, above and beyond the usual IQ scores. Additionally, individual and group-level cognitive strengths and weaknesses are recovered using deviation scores. Conclusion: Traditional methods for generating IQ scores in lower functioning individuals with ID are inaccurate and inadequate, leading to erroneously flat profiles. However, assessment of cognitive abilities is substantially improved by measuring true deviation in performance from standardization sample norms. This work has important implications for standardized test development, clinical assessment, and research for which IQ is an important measure of interest in individuals with neurodevelopmental disorders and other forms of cognitive impairment. PMID:26491488
Improving IQ measurement in intellectual disabilities using true deviation from population norms.
Sansone, Stephanie M; Schneider, Andrea; Bickel, Erika; Berry-Kravis, Elizabeth; Prescott, Christina; Hessl, David
2014-01-01
Intellectual disability (ID) is characterized by global cognitive deficits, yet the very IQ tests used to assess ID have limited range and precision in this population, especially for more impaired individuals. We describe the development and validation of a method of raw z-score transformation (based on general population norms) that ameliorates floor effects and improves the precision of IQ measurement in ID using the Stanford Binet 5 (SB5) in fragile X syndrome (FXS; n = 106), the leading inherited cause of ID, and in individuals with idiopathic autism spectrum disorder (ASD; n = 205). We compared the distributional characteristics and Q-Q plots from the standardized scores with the deviation z-scores. Additionally, we examined the relationship between both scoring methods and multiple criterion measures. We found evidence that substantial and meaningful variation in cognitive ability on standardized IQ tests among individuals with ID is lost when converting raw scores to standardized scaled, index, and IQ scores. Use of the deviation z-score method rectifies this problem and accounts for significant additional variance in criterion validation measures, above and beyond the usual IQ scores. Additionally, individual and group-level cognitive strengths and weaknesses are recovered using deviation scores. Traditional methods for generating IQ scores in lower functioning individuals with ID are inaccurate and inadequate, leading to erroneously flat profiles. However, assessment of cognitive abilities is substantially improved by measuring true deviation in performance from standardization sample norms. This work has important implications for standardized test development, clinical assessment, and research for which IQ is an important measure of interest in individuals with neurodevelopmental disorders and other forms of cognitive impairment.
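The core of the deviation z-score method is elementary: express each raw score relative to the general-population norming mean and SD for the examinee's age band, rather than converting through scaled scores that floor out. A minimal sketch; the norm values in the example are illustrative, not SB5 norms:

```python
def deviation_z(raw_score, norm_mean, norm_sd):
    """Deviation z-score: standard deviations of the raw score from the
    standardization-sample mean, preserving variation below the floor
    of the published scaled scores."""
    return (raw_score - norm_mean) / norm_sd

# A raw score of 10 against a norming mean of 30 (SD 8) gives z = -2.5,
# a difference a floored scaled score of 1 cannot distinguish from z = -4.
```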
Lod scores for gene mapping in the presence of marker map uncertainty.
Stringham, H M; Boehnke, M
2001-07-01
Multipoint lod scores are typically calculated for a grid of locus positions, moving the putative disease locus across a fixed map of genetic markers. Changing the order of a set of markers and/or the distances between the markers can make a substantial difference in the resulting lod score curve and the location and height of its maximum. The typical approach of using the best maximum likelihood marker map is not easily justified if other marker orders are nearly as likely and give substantially different lod score curves. To deal with this problem, we propose three weighted multipoint lod score statistics that make use of information from all plausible marker orders. In each of these statistics, the information conditional on a particular marker order is included in a weighted sum, with weight equal to the posterior probability of that order. We evaluate the type 1 error rate and power of these three statistics on the basis of results from simulated data, and compare these results to those obtained using the best maximum likelihood map and the map with the true marker order. We find that the lod score based on a weighted sum of maximum likelihoods improves on using only the best maximum likelihood map, having a type 1 error rate and power closest to that of using the true marker order in the simulation scenarios we considered. Copyright 2001 Wiley-Liss, Inc.
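The weighted statistic described above combines likelihoods, not LOD scores, across marker orders: since a LOD is log10 of a likelihood ratio, the order-specific ratios are back-transformed, weighted by each order's posterior probability, summed, and re-logged. A sketch under that reading:

```python
import math

def weighted_lod(lods, order_posteriors):
    """LOD score from a weighted sum of maximum likelihoods over
    plausible marker orders; weights are the posterior probabilities
    of the orders and should sum to 1."""
    lr = sum(w * 10.0 ** lod for lod, w in zip(lods, order_posteriors))
    return math.log10(lr)

# A single order with posterior 1 recovers the ordinary LOD; two equally
# plausible orders with LODs 3.0 and 1.0 give log10(505), about 2.70.
```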
A General Linear Method for Equating with Small Samples
ERIC Educational Resources Information Center
Albano, Anthony D.
2015-01-01
Research on equating with small samples has shown that methods with stronger assumptions and fewer statistical estimates can lead to decreased error in the estimated equating function. This article introduces a new approach to linear observed-score equating, one which provides flexible control over how form difficulty is assumed versus estimated…
ERIC Educational Resources Information Center
Wei, Youhua; Morgan, Rick
2016-01-01
As an alternative to common-item equating when common items do not function as expected, the single-group growth model (SGGM) scaling uses common examinees or repeaters to link test scores on different forms. The SGGM scaling assumes that, for repeaters taking adjacent administrations, the conditional distribution of scale scores in later…
Expansion of the gravitational potential with computerized Poisson series
NASA Technical Reports Server (NTRS)
Broucke, R.
1976-01-01
The paper describes a recursive formulation for the expansion of the gravitational potential valid for both the tesseral and zonal harmonics. The expansion is primarily in rectangular coordinates, but the classical orbit elements or equinoctial orbit elements can be easily substituted. The equations of motion for the zonal harmonics in both classical and equinoctial orbital elements are described in a form which yields closed-form expressions for the first-order perturbations. In order to achieve this result, the true longitude or the true anomaly has to be used as the independent variable.
Fat scoring: Sources of variability
Krementz, D.G.; Pendleton, G.W.
1990-01-01
Fat scoring is a widely used nondestructive method of assessing total body fat in birds, but the method has not been rigorously investigated. We investigated inter- and intraobserver variability in scoring, as well as the predictive ability of fat scoring, using five species of passerines. Between-observer variation in scoring was inconsistent and at times large. Observers did not consistently score species higher or lower relative to other observers, nor did they always score birds with more total body fat higher. We found that within-observer variation was acceptable but depended on the species being scored. The precision of fat scoring was species-specific, and for most species fat scores accounted for less than 50% of the variation in true total body fat. Overall, we would describe fat scoring as a fairly precise method of indexing total body fat but with limited reliability among observers.
Prediction of true test scores from observed item scores and ancillary data.
Haberman, Shelby J; Yao, Lili; Sinharay, Sandip
2015-05-01
In many educational tests which involve constructed responses, a traditional test score is obtained by adding together item scores obtained through holistic scoring by trained human raters. For example, this practice was used until 2008 in the case of GRE® General Analytical Writing and until 2009 in the case of TOEFL® iBT Writing. With use of natural language processing, it is possible to obtain additional information concerning item responses from computer programs such as e-rater®. In addition, available information relevant to examinee performance may include scores on related tests. We suggest application of standard results from classical test theory to the available data to obtain best linear predictors of true traditional test scores. In performing such analysis, we require estimation of variances and covariances of measurement errors, a task which can be quite difficult in the case of tests with limited numbers of items and with multiple measurements per item. As a consequence, a new estimation method is suggested based on samples of examinees who have taken an assessment more than once. Such samples are typically not random samples of the general population of examinees, so we apply statistical adjustment methods to obtain the needed estimated variances and covariances of measurement errors. To examine practical implications of the suggested methods of analysis, applications are made to GRE General Analytical Writing and TOEFL iBT Writing. Results obtained indicate that substantial improvements are possible both in terms of reliability of scoring and in terms of assessment reliability. © 2015 The British Psychological Society.
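In the single-score case, the classical-test-theory best linear predictor the authors build on is Kelley's formula, which shrinks the observed score toward the group mean in proportion to the test's reliability. The paper's multi-predictor version generalizes this; a one-variable sketch with illustrative numbers:

```python
def kelley_true_score(observed, reliability, group_mean):
    """Kelley's estimate of the true score: a reliability-weighted
    average of the observed score and the group mean. With several
    predictors (e.g. automated-scoring features or related test
    scores), the same idea becomes a multiple linear regression
    on the true score."""
    return reliability * observed + (1.0 - reliability) * group_mean

# Observed 80, reliability 0.9, group mean 50: estimated true score 77.
```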
ERIC Educational Resources Information Center
von Davier, Alina A.; Holland, Paul W.; Livingston, Samuel A.; Casabianca, Jodi; Grant, Mary C.; Martin, Kathleen
2006-01-01
This study examines how closely the kernel equating (KE) method (von Davier, Holland, & Thayer, 2004a) approximates the results of other observed-score equating methods--equipercentile and linear equatings. The study used pseudotests constructed of item responses from a real test to simulate three equating designs: an equivalent groups (EG)…
Spacetime dynamics of a Higgs vacuum instability during inflation
East, William E.; Kearney, John; Shakya, Bibhushan; ...
2017-01-31
A remarkable prediction of the Standard Model is that, in the absence of corrections lifting the energy density, the Higgs potential becomes negative at large field values. If the Higgs field samples this part of the potential during inflation, the negative energy density may locally destabilize the spacetime. Here, we use numerical simulations of the Einstein equations to study the evolution of inflation-induced Higgs fluctuations as they grow towards the true (negative-energy) minimum. Our simulations show that forming a single patch of true vacuum in our past light cone during inflation is incompatible with the existence of our Universe; the boundary of the true vacuum region grows outward in a causally disconnected manner from the crunching interior, which forms a black hole. We also find that these black hole horizons may be arbitrarily elongated, even forming black strings, in violation of the hoop conjecture. Furthermore, by extending the numerical solution of the Fokker-Planck equation to the exponentially suppressed tails of the field distribution at large field values, we derive a rigorous correlation between a future measurement of the tensor-to-scalar ratio and the scale at which the Higgs potential must receive stabilizing corrections in order for the Universe to have survived inflation until today.
Using Latent Sleepiness to Evaluate an Important Effect of Promethazine
NASA Technical Reports Server (NTRS)
Feiveson, Alan H.; Hayat, Matthew; Vksman, Zalman; Putcha, Laksmi
2007-01-01
Astronauts often use promethazine (PMZ) to counteract space motion sickness; however, PMZ may cause drowsiness, which might impair cognitive function. In a NASA ground study, subjects received PMZ and their cognitive performance was then monitored over time. Subjects also reported sleepiness using the Karolinska Sleepiness Score (KSS), which ranges from 1 to 9. A problem arises when using the KSS to establish an association between true sleepiness and performance, because KSS responses tend to concentrate heavily on the values 3 (fairly awake) and 7 (moderately tired). Therefore, we defined a latent sleepiness measure as a continuous random variable describing a subject's actual, but unobserved, true state of sleepiness through time. The latent sleepiness and observed KSS are associated through a conditional probability model, which, when coupled with demographic factors, predicts performance.
Using a Linear Regression Method to Detect Outliers in IRT Common Item Equating
ERIC Educational Resources Information Center
He, Yong; Cui, Zhongmin; Fang, Yu; Chen, Hanwei
2013-01-01
Common test items play an important role in equating alternate test forms under the common item nonequivalent groups design. When the item response theory (IRT) method is applied in equating, inconsistent item parameter estimates among common items can lead to large bias in equated scores. It is prudent to evaluate inconsistency in parameter…
An NCME Instructional Module on Population Invariance in Linking and Equating
ERIC Educational Resources Information Center
Huggins, Anne C.; Penfield, Randall D.
2012-01-01
A goal for any linking or equating of two or more tests is that the linking function be invariant to the population used in conducting the linking or equating. Violations of population invariance in linking and equating jeopardize the fairness and validity of test scores, and pose particular problems for test-based accountability programs that…
ERIC Educational Resources Information Center
Lin, Peng; Dorans, Neil; Weeks, Jonathan
2016-01-01
The nonequivalent groups with anchor test (NEAT) design is frequently used in test score equating or linking. One important assumption of the NEAT design is that the anchor test is a miniversion of the 2 tests to be equated/linked. When the content of the 2 tests is different, it is not possible for the anchor test to be adequately representative…
ERIC Educational Resources Information Center
Klinger, Don A.; Rogers, W. Todd
2003-01-01
The estimation accuracy of procedures based on classical test score theory and item response theory (generalized partial credit model) were compared for examinations consisting of multiple-choice and extended-response items. Analysis of British Columbia Scholarship Examination results found an error rate of about 10 percent for both methods, with…
The Importance of Relying on the Manual: Scoring Error Variance in the WISC-IV Vocabulary Subtest
ERIC Educational Resources Information Center
Erdodi, Laszlo A.; Richard, David C. S.; Hopwood, Christopher
2009-01-01
Classical test theory assumes that ability level has no effect on measurement error. Newer test theories, however, argue that the precision of a measurement instrument changes as a function of the examinee's true score. Research has shown that administration errors are common in the Wechsler scales and that subtests requiring subjective scoring…
Curtis, David; Knight, Jo; Sham, Pak C
2005-09-01
Although LOD score methods have been applied to diseases with complex modes of inheritance, linkage analysis of quantitative traits has tended to rely on non-parametric methods based on regression or variance components analysis. Here, we describe a new method for LOD score analysis of quantitative traits which does not require specification of a mode of inheritance. The technique is derived from the MFLINK method for dichotomous traits. A range of plausible transmission models is constructed, constrained to yield the correct population mean and variance for the trait but differing with respect to the contribution to the variance due to the locus under consideration. Maximized LOD scores under homogeneity and admixture are calculated, as is a model-free LOD score which compares the maximized likelihoods under admixture assuming linkage and no linkage. These LOD scores have known asymptotic distributions and hence can be used to provide a statistical test for linkage. The method has been implemented in a program called QMFLINK. It was applied to data sets simulated using a variety of transmission models and to a measure of monoamine oxidase activity in 105 pedigrees from the Collaborative Study on the Genetics of Alcoholism. With the simulated data, the results showed that the new method could detect linkage well if the true allele frequency for the trait was close to that specified, but it performed poorly on models in which the true allele frequency was much rarer. For the Collaborative Study on the Genetics of Alcoholism data set, only a modest overlap was observed between the results obtained from the new method and those obtained when the same data were analysed previously using regression and variance components analysis. Of interest, D17S250 produced a maximized LOD score under homogeneity and admixture of 2.6 but did not indicate linkage using the previous methods. However, this region did produce evidence for linkage in a separate data set, suggesting that QMFLINK may have been able to detect a true linkage that was not picked up by the other methods. The application of model-free LOD score analysis to quantitative traits is novel and deserves further evaluation of its merits and disadvantages relative to other methods.
ERIC Educational Resources Information Center
Sinharay, Sandip; Holland, Paul W.
2008-01-01
The nonequivalent groups with anchor test (NEAT) design involves missing data that are missing by design. Three popular equating methods that can be used with a NEAT design are the poststratification equating method, the chain equipercentile equating method, and the item-response-theory observed-score-equating method. These three methods each…
Olives, Juan
2010-03-03
The thermodynamics and mechanics of the surface of a deformable body are studied here, following and refining the general approach of Gibbs. It is first shown that the 'local' thermodynamic variables of the state of the surface are only the temperature, the chemical potentials and the surface strain tensor (true thermodynamic variables, for a viscoelastic solid or a viscous fluid). A new definition of the surface stress is given and the corresponding surface thermodynamics equations are presented. The mechanical equilibrium equation at the surface is then obtained. It involves the surface stress and is similar to the Cauchy equation for the volume. Its normal component is a generalization of the Laplace equation. At a (body-fluid-fluid) triple contact line, two equations are obtained, which represent: (i) the equilibrium of the forces (surface stresses) for a triple line fixed on the body; (ii) the equilibrium relative to the motion of the line with respect to the body. This last equation leads to a strong modification of Young's classical capillary equation.
NASA Astrophysics Data System (ADS)
EL-Kalaawy, O. H.; Moawad, S. M.; Wael, Shrouk
The propagation of nonlinear waves in unmagnetized strongly coupled dusty plasma with Boltzmann-distributed electrons, iso-nonthermal distributed ions, and negatively charged dust grains is considered. The basic set of fluid equations is reduced to the Schamel-Kadomtsev-Petviashvili (S-KP) equation by using the reductive perturbation method. The variational principle and conservation laws of the S-KP equation are obtained. It is shown that the S-KP equation is non-integrable using Painlevé analysis. A set of new exact solutions are obtained by auto-Bäcklund transformations. The stability analysis is discussed for the existence of dust acoustic solitary waves (DASWs), and it is found that the physical parameters have strong effects on the stability criterion. In addition, the electric field and the true Mach number of this solution are investigated. Finally, we study the physical meanings of the solutions.
ERIC Educational Resources Information Center
Lee, Yi-Hsuan; von Davier, Alina A.
2008-01-01
The kernel equating method (von Davier, Holland, & Thayer, 2004) is based on a flexible family of equipercentile-like equating functions that use a Gaussian kernel to continuize the discrete score distributions. While the classical equipercentile, or percentile-rank, equating method carries out the continuization step by linear interpolation,…
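The continuization step described above can be sketched directly: the discrete score distribution becomes a mixture of Gaussians centered at the score points. This sketch omits the mean- and variance-preserving rescaling used in the full kernel equating method, and the score points and probabilities are illustrative:

```python
from math import erf, sqrt

def continuized_cdf(x, scores, probs, bandwidth):
    """Gaussian-kernel continuization of a discrete score distribution:
    the continuized CDF is the probability-weighted mixture of normal
    CDFs centered at each score point. As bandwidth grows the result
    approaches a normal CDF; as it shrinks, a step function."""
    return sum(p * 0.5 * (1.0 + erf((x - s) / (bandwidth * sqrt(2.0))))
               for s, p in zip(scores, probs))
```

Equipercentile-like equating then inverts one continuized CDF and composes it with the other, which is possible precisely because the continuized functions are smooth and strictly increasing.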
Notes on a General Framework for Observed Score Equating. Research Report. ETS RR-08-59
ERIC Educational Resources Information Center
Moses, Tim; Holland, Paul
2008-01-01
The purpose of this paper is to extend von Davier, Holland, and Thayer's (2004b) framework of kernel equating so that it can incorporate raw data and traditional equipercentile equating methods. One result of this more general framework is that previous equating methodology research can be viewed more comprehensively. Another result is that the…
Formal faculty observation and assessment of bedside skills for 3rd-year neurology clerks
Mooney, Christopher; Wexler, Erika; Mink, Jonathan; Post, Jennifer; Jozefowicz, Ralph F.
2016-01-01
Objective: To evaluate the feasibility and utility of instituting a formalized bedside skills evaluation (BSE) for 3rd-year medical students on the neurology clerkship. Methods: A neurologic BSE was developed for 3rd-year neurology clerks at the University of Rochester for the 2012–2014 academic years. Faculty directly observed 189 students completing a full history and neurologic examination on real inpatients. Mock grades were calculated utilizing the BSE in the final grade, and number of students with a grade difference was determined when compared to true grade. Correlation was explored between the BSE and clinical scores, National Board of Medical Examiners (NBME) scores, case complexity, and true final grades. A survey was administered to students to assess their clinical skills exposure and the usefulness of the BSE. Results: Faculty completed and submitted a BSE form for 88.3% of students. There was a mock final grade change for 13.2% of students. Correlation coefficients between BSE score and clinical score/NBME score were 0.36 and 0.35, respectively. A statistically significant effect of BSE was found on final clerkship grade (F2,186 = 31.9, p < 0.0001). There was no statistical difference between BSE score and differing case complexities. Conclusions: Incorporating a formal faculty-observed BSE into the 3rd year neurology clerkship was feasible. Low correlation between BSE score and other evaluations indicated a unique measurement to contribute to student grade. Using real patients with differing case complexity did not alter the grade. PMID:27770072
Equating in Small-Scale Language Testing Programs
ERIC Educational Resources Information Center
LaFlair, Geoffrey T.; Isbell, Daniel; May, L. D. Nicolas; Gutierrez Arvizu, Maria Nelly; Jamieson, Joan
2017-01-01
Language programs need multiple test forms for secure administrations and effective placement decisions, but can they have confidence that scores on alternate test forms have the same meaning? In large-scale testing programs, various equating methods are available to ensure the comparability of forms. The choice of equating method is informed by…
Observed-Score Equating with a Heterogeneous Target Population
ERIC Educational Resources Information Center
Duong, Minh Q.; von Davier, Alina A.
2012-01-01
Test equating is a statistical procedure for adjusting for test form differences in difficulty in a standardized assessment. Equating results are supposed to hold for a specified target population (Kolen & Brennan, 2004; von Davier, Holland, & Thayer, 2004) and to be (relatively) independent of the subpopulations from the target population (see…
Balancing Score Adjusted Targeted Minimum Loss-based Estimation
Lendle, Samuel David; Fireman, Bruce; van der Laan, Mark J.
2015-01-01
Adjusting for a balancing score is sufficient for bias reduction when estimating causal effects including the average treatment effect and effect among the treated. Estimators that adjust for the propensity score in a nonparametric way, such as matching on an estimate of the propensity score, can be consistent when the estimated propensity score is not consistent for the true propensity score but converges to some other balancing score. We call this property the balancing score property, and discuss a class of estimators that have this property. We introduce a targeted minimum loss-based estimator (TMLE) for a treatment-specific mean with the balancing score property that is additionally locally efficient and doubly robust. We investigate the new estimator’s performance relative to other estimators, including another TMLE, a propensity score matching estimator, an inverse probability of treatment weighted estimator, and a regression-based estimator in simulation studies. PMID:26561539
Network Reconstruction From High-Dimensional Ordinary Differential Equations.
Chen, Shizhe; Shojaie, Ali; Witten, Daniela M
2017-01-01
We consider the task of learning a dynamical system from high-dimensional time-course data. For instance, we might wish to estimate a gene regulatory network from gene expression data measured at discrete time points. We model the dynamical system nonparametrically as a system of additive ordinary differential equations. Most existing methods for parameter estimation in ordinary differential equations estimate the derivatives from noisy observations. This is known to be challenging and inefficient. We propose a novel approach that does not involve derivative estimation. We show that the proposed method can consistently recover the true network structure even in high dimensions, and we demonstrate empirical improvement over competing approaches. Supplementary materials for this article are available online.
Constitutive Model for Hot Deformation of the Cu-Zr-Ce Alloy
NASA Astrophysics Data System (ADS)
Zhang, Yi; Sun, Huili; Volinsky, Alex A.; Wang, Bingjie; Tian, Baohong; Liu, Yong; Song, Kexing
2018-02-01
Hot compressive deformation behavior of the Cu-Zr-Ce alloy has been investigated with hot deformation tests in the 550-900 °C temperature range and the 0.001-10 s⁻¹ strain rate range. Based on the true stress-true strain curves, the flow stress behavior of the Cu-Zr-Ce alloy was investigated. Microstructure evolution was observed by optical microscopy. Based on the experimental results, a constitutive equation, which reflects the relationships between stress, strain, strain rate and temperature, has been established. Material constants n, α, Q and ln A were calculated as functions of strain. An equation predicting the flow stress from these material constants has been proposed. The predicted stress is consistent with the experimental stress, indicating that the developed constitutive equation can adequately predict the flow stress of the Cu-Zr-Ce alloy. The dynamic recrystallization critical strain was determined using the work hardening rate method. According to the dynamic material model, processing maps for the Cu-Zr and Cu-Zr-Ce alloys were obtained at 0.4 and 0.5 strain. Based on the processing maps and microstructure observations, the optimal processing parameters for the two alloys were determined, and it was found that the addition of Ce can promote the hot workability of the Cu-Zr alloy.
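A common form for such a constitutive equation is the hyperbolic-sine Arrhenius law relating strain rate, temperature, and stress. The sketch below inverts it through the Zener-Hollomon parameter; the material constants are placeholder values for illustration, not the fitted Cu-Zr-Ce values.

```python
import math

R = 8.314  # J/(mol*K), universal gas constant

def flow_stress(strain_rate, T_kelvin, A, alpha, n, Q):
    """Predict flow stress (MPa) from the hyperbolic-sine Arrhenius law
        strain_rate = A * [sinh(alpha * sigma)]^n * exp(-Q / (R*T)),
    inverted via the Zener-Hollomon parameter Z = strain_rate * exp(Q/(R*T)):
        sigma = (1/alpha) * asinh((Z / A)**(1/n)).
    The constants A, alpha, n, Q passed below are illustrative placeholders,
    not fitted values for the Cu-Zr-Ce alloy.
    """
    Z = strain_rate * math.exp(Q / (R * T_kelvin))
    return math.asinh((Z / A) ** (1.0 / n)) / alpha

# Higher temperature should soften the material at a fixed strain rate
s_hot = flow_stress(0.1, 1173.0, A=1e12, alpha=0.012, n=5.0, Q=300e3)
s_cold = flow_stress(0.1, 873.0, A=1e12, alpha=0.012, n=5.0, Q=300e3)
```

In practice each constant is re-fitted at every strain level, which is what "calculated as functions of strain" refers to in the abstract.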
Measurement Model Specification Error in LISREL Structural Equation Models.
ERIC Educational Resources Information Center
Baldwin, Beatrice; Lomax, Richard
This LISREL study examines the robustness of the maximum likelihood estimates under varying degrees of measurement model misspecification. A true model containing five latent variables (two endogenous and three exogenous) and two indicator variables per latent variable was used. Measurement model misspecification considered included errors of…
ERIC Educational Resources Information Center
Michaelides, Michalis P.; Haertel, Edward H.
2014-01-01
The standard error of equating quantifies the variability in the estimation of an equating function. Because common items for deriving equated scores are treated as fixed, the only source of variability typically considered arises from the estimation of common-item parameters from responses of samples of examinees. Use of alternative, equally…
New equations for predicting postoperative risk in patients with hip fracture.
Hirose, Jun; Ide, Junji; Irie, Hiroki; Kikukawa, Kenshi; Mizuta, Hiroshi
2009-12-01
Predicting the postoperative course of patients with hip fractures would be helpful for surgical planning and risk management. We therefore established equations to predict the morbidity and mortality rates in candidates for hip fracture surgery using the Estimation of Physiologic Ability and Surgical Stress (E-PASS) risk-scoring system. First we evaluated the correlation between the E-PASS scores and postoperative morbidity and mortality rates in all 722 patients surgically treated for hip fractures during the study period (Group A). Next we established equations to predict morbidity and mortality rates. We then applied these equations to all 633 patients with hip fractures treated at seven other hospitals (Group B) and compared the predicted and actual morbidity and mortality rates to assess the predictive ability of the E-PASS and Physiological and Operative Severity Score for the enUmeration of Mortality and Morbidity (POSSUM) systems. The ratio of actual to predicted morbidity and mortality rates was closer to 1.0 with the E-PASS than the POSSUM system. Our data suggest the E-PASS scoring system is useful for defining postoperative risk and its underlying algorithm accurately predicts morbidity and mortality rates in patients with hip fractures before surgery. This information then can be used to manage their condition and potentially improve treatment outcomes. Level II, prognostic study. See the Guidelines for Authors for a complete description of levels of evidence.
Benson, Nicholas; Beaujean, A Alexander; Taub, Gordon E
2015-01-01
The Flynn effect (FE; i.e., increase in mean IQ scores over time) is commonly viewed as reflecting population shifts in intelligence, despite the fact that most FE studies have not investigated the assumption of score comparability. Consequently, the extent to which these mean differences in IQ scores reflect population shifts in cognitive abilities versus changes in the instruments used to measure these abilities is unclear. In this study, we used modern psychometric tools to examine the FE. First, we equated raw scores for each common subtest to be on the same scale across instruments. This enabled the combination of scores from all three instruments into one of 13 age groups before converting raw scores into Z scores. Second, using age-based standardized scores for standardization samples, we examined measurement invariance across the second (revised), third, and fourth editions of the Wechsler Adult Intelligence Scale. Results indicate that while scores were equivalent across the third and fourth editions, they were not equivalent across the second and third editions. Results suggest that there is some evidence for an increase in intelligence, but also call into question many published FE findings as presuming the instruments' scores are invariant when this assumption is not warranted.
A simple finite element method for non-divergence form elliptic equation
Mu, Lin; Ye, Xiu
2017-03-01
Here, we develop a simple finite element method for solving second-order elliptic equations in non-divergence form by combining a least-squares concept with discontinuous approximations. This simple method yields a symmetric positive definite system and can be easily analyzed and implemented. The method also accommodates general meshes with polytopal elements and hanging nodes. We prove that the finite element solution converges to the true solution as the mesh size approaches zero. Numerical examples demonstrate the robustness and flexibility of the method.
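The convergence claim can be illustrated in one dimension. Below is a minimal sketch, not the paper's non-divergence-form method: a standard linear Galerkin finite element solve of −u″ = π² sin(πx) on (0, 1) with homogeneous Dirichlet conditions, checking that the nodal error shrinks as the mesh is refined.

```python
import numpy as np

def fem_poisson(n):
    """Solve -u'' = pi^2 sin(pi x), u(0) = u(1) = 0, with linear finite
    elements on a uniform mesh of n interior nodes; return the maximum
    nodal error against the true solution u(x) = sin(pi x)."""
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1.0 - h, n)
    # Stiffness matrix for linear elements: (1/h) * tridiag(-1, 2, -1)
    K = (np.diag(2.0 * np.ones(n)) - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / h
    b = h * np.pi**2 * np.sin(np.pi * x)  # lumped (nodal) load vector
    u = np.linalg.solve(K, b)
    return np.max(np.abs(u - np.sin(np.pi * x)))

# Error should decrease monotonically under mesh refinement
errors = [fem_poisson(n) for n in (8, 16, 32)]
```

Halving the mesh size roughly quarters the error here, the second-order convergence typical of linear elements on a smooth problem.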
A perturbative solution to metadynamics ordinary differential equation
NASA Astrophysics Data System (ADS)
Tiwary, Pratyush; Dama, James F.; Parrinello, Michele
2015-12-01
Metadynamics is a popular enhanced sampling scheme wherein by periodic application of a repulsive bias, one can surmount high free energy barriers and explore complex landscapes. Recently, metadynamics was shown to be mathematically well founded, in the sense that the biasing procedure is guaranteed to converge to the true free energy surface in the long time limit irrespective of the precise choice of biasing parameters. A differential equation governing the post-transient convergence behavior of metadynamics was also derived. In this short communication, we revisit this differential equation, expressing it in a convenient and elegant Riccati-like form. A perturbative solution scheme is then developed for solving this differential equation, which is valid for any generic biasing kernel. The solution clearly demonstrates the robustness of metadynamics to choice of biasing parameters and gives further confidence in the widely used method.
ERIC Educational Resources Information Center
Baker, Frank B.
1997-01-01
Examined the sampling distributions of equating coefficients produced by the characteristic curve method for tests using graded and nominal response scoring using simulated data. For both models and across all three equating situations, the sampling distributions were generally bell-shaped and peaked, and occasionally had a small degree of…
Perak, Amanda M; Opotowsky, Alexander R; Walsh, Brian K; Esch, Jesse J; DiNardo, James A; Kussman, Barry D; Porras, Diego; Rhodes, Jonathan
2016-10-01
To assess the feasibility and accuracy of inert gas rebreathing (IGR) pulmonary blood flow (Qp) estimation in mechanically ventilated pediatric patients, potentially providing real-time noninvasive estimates of cardiac output. In mechanically ventilated patients in the pediatric catheterization laboratory, we compared IGR Qp with Qp estimates based upon the Fick equation using measured oxygen consumption (VO2) (FickTrue); for context, we compared FickTrue with a standard clinical short-cut, replacing measured with assumed VO2 in the Fick equation (FickLaFarge, FickLundell, FickSeckeler). IGR Qp and breath-by-breath VO2 were measured using the Innocor device. Sampled pulmonary arterial and venous saturations and hemoglobin concentration were used for Fick calculations. Qp estimates were compared using Bland-Altman agreement and Spearman correlation. The final analysis included 18 patients aged 4-23 years with weight >15 kg. Compared with the reference FickTrue, IGR Qp estimates correlated best and had the least systematic bias and narrowest 95% limits of agreement (results presented as mean bias ±95% limits of agreement): IGR -0.2 ± 1.1 L/min, r = 0.90; FickLaFarge +0.7 ± 2.2 L/min, r = 0.80; FickLundell +1.6 ± 2.9 L/min, r = 0.83; FickSeckeler +0.8 ± 2.5 L/min, r = 0.83. IGR estimation of Qp is feasible in mechanically ventilated patients weighing >15 kg, and agreement with FickTrue Qp estimates is better for IGR than for other Fick Qp estimates commonly used in pediatric catheterization. IGR is an attractive option for bedside monitoring of Qp in mechanically ventilated children. Copyright © 2016 Elsevier Inc. All rights reserved.
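For context, the Fick calculation referenced above divides oxygen consumption by the pulmonary arteriovenous oxygen content difference. A minimal sketch, using the common approximation of oxygen content from hemoglobin and saturation (dissolved oxygen ignored); the example inputs are illustrative, not study data.

```python
def fick_qp(vo2_ml_min, hb_g_dl, spv_sat, spa_sat):
    """Fick-principle pulmonary blood flow (L/min):
        Qp = VO2 / (CpvO2 - CpaO2)
    where oxygen content (mL O2 per L of blood) is approximated as
        C = 1.36 * Hb[g/dL] * saturation * 10,
    ignoring dissolved oxygen. Saturations are fractions in [0, 1].
    """
    o2_content = lambda sat: 1.36 * hb_g_dl * sat * 10.0
    return vo2_ml_min / (o2_content(spv_sat) - o2_content(spa_sat))

# Illustrative: VO2 150 mL/min, Hb 13 g/dL, pulmonary venous saturation
# 0.98, pulmonary arterial saturation 0.70 -> Qp around 3 L/min
qp = fick_qp(150.0, 13.0, 0.98, 0.70)
```

The "assumed VO2" shortcuts (LaFarge, Lundell, Seckeler) replace the measured `vo2_ml_min` with a value predicted from age, sex, and heart rate, which is the substitution the study evaluates.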
Verdam, Mathilde G E; Oort, Frans J; van der Linden, Yvette M; Sprangers, Mirjam A G
2015-03-01
Missing data due to attrition present a challenge for the assessment and interpretation of change and response shift in HRQL outcomes. The objective was to handle such missingness and to assess response shift and 'true change' with the use of an attrition-based multigroup structural equation modeling (SEM) approach. Functional limitations and health impairments were measured in 1,157 cancer patients, who were treated with palliative radiotherapy for painful bone metastases, before [time (T) 0], every week after treatment (T1 through T12), and then monthly for up to 2 years (T13 through T24). To handle missing data due to attrition, the SEM procedure was extended to a multigroup approach, in which we distinguished three groups: short survival (3-5 measurements), medium survival (6-12 measurements), and long survival (>12 measurements). Attrition after the third, sixth, and 13th measurement occasions was 11, 24, and 41 %, respectively. Results show that patterns of change in functional limitations and health impairments differ between patients with short, medium, or long survival. Moreover, three response-shift effects were detected: recalibration of 'pain' and 'sickness' and reprioritization of 'physical functioning.' If response-shift effects had not been taken into account, functional limitations and health impairments would generally have been underestimated across measurements. The multigroup SEM approach enables the analysis of data from patients with different patterns of missing data due to attrition. This approach allows not only for the detection of response shift and the assessment of true change across measurements, but also for the detection of differences in response shift and true change across groups of patients with different attrition rates.
ERIC Educational Resources Information Center
Shih, Ching-Lin; Liu, Tien-Hsiang; Wang, Wen-Chung
2014-01-01
The simultaneous item bias test (SIBTEST) method regression procedure and the differential item functioning (DIF)-free-then-DIF strategy are applied to the logistic regression (LR) method simultaneously in this study. These procedures are used to adjust the effects of matching true score on observed score and to better control the Type I error…
Kosulwat, Somkiat; Greenfield, Heather; Buckle, Kenneth A
2003-12-01
The true retention of nutrients (proximate principles and cholesterol) on cooking of three retail cuts from lambs classified by weight, sex and fatness score was investigated. Fat retentions of the total cut and of the lean portion of lamb legs and mid-loin chops were not affected by carcass fatness, weight and sex or their interactions, however, the fat retention of the total cut and of the lean portion of forequarter chops was affected by fat score, with forequarter chops from fat score 1 retaining more fat than did chops of carcasses of higher fat score. Overall, fat was lost by all cuts (total cut) on cooking, with only 70-80% of fat being retained, but fat content of lean only increased on cooking (retention >100%), indicating the passage of fat into the lean portion from the external fat cover during the cooking process. Carcass factors and their interactions had little or no effect on the protein, water and ash retentions of the total cut or the lean portions of the three cuts. Cholesterol retention by the lean portion of three cooked lamb cuts was not affected by any carcass factors or their interactions. Cholesterol retentions were ∼99% for total cuts and tended to be ∼102% for the lean portions.
Literacy and Graphic Communication: Getting the Words out
ERIC Educational Resources Information Center
Fletcher, Tina; Sampson, Mary Beth
2012-01-01
Although it may seem logical to assume that giftedness automatically equates with high academic achievement, research has shown that this assumption is not always true, especially in areas that deal with the communication of understanding and knowledge of a subject. If problems occur in graphic output venues that include handwriting, intervention…
ERIC Educational Resources Information Center
Puhan, Gautam
2009-01-01
The purpose of this study is to determine the extent of scale drift on a test that employs cut scores. It was essential to examine scale drift for this testing program because new forms in this testing program are often put on scale through a series of intermediate equatings (known as equating chains). This process may cause equating error to…
1985-04-01
AFHRL-TR-84-64, Air Force Human Resources Laboratory. Equipercentile Test Equating: The Effects of Presmoothing and… [scanned report text garbled]. Recoverable fragments describe seven methods of presmoothing score distributions, including a combined (compound) presmoother and a presmoothing method based on a particular model of test scores, as well as a procedure that smooths a sequence of differences from the unsmoothed distributions by the same compound method and then adds the smoothed differences back.
ERIC Educational Resources Information Center
Livingston, Samuel A.; Chen, Haiwen H.
2015-01-01
Quantitative information about test score reliability can be presented in terms of the distribution of equated scores on an alternate form of the test for test takers with a given score on the form taken. In this paper, we describe a procedure for estimating that distribution, for any specified score on the test form taken, by estimating the joint…
Jenkinson, Toni-Marie; Muncer, Steven; Wheeler, Miranda; Brechin, Don; Evans, Stephen
2018-06-01
Neuropsychological assessment requires accurate estimation of an individual's premorbid cognitive abilities. Oral word reading tests, such as the test of premorbid functioning (TOPF), and demographic variables, such as age, sex, and level of education, provide a reasonable indication of premorbid intelligence, but their ability to predict other related cognitive abilities is less well understood. This study aimed to develop regression equations, based on the TOPF and demographic variables, to predict scores on tests of verbal fluency and naming ability. A sample of 119 healthy adults provided demographic information and were tested using the TOPF, FAS, animal naming test (ANT), and graded naming test (GNT). Multiple regression analyses, using the TOPF and demographics as predictor variables, were used to estimate verbal fluency and naming ability test scores. Change scores and cases of significant impairment were calculated for two clinical samples with diagnosed neurological conditions (TBI and meningioma) using the method in Knight, McMahon, Green, and Skeaff (). Demographic variables provided a significant contribution to the prediction of all verbal fluency and naming ability test scores; however, adding TOPF score to the equation considerably improved prediction beyond that afforded by demographic variables alone. The percentage of variance accounted for by demographic variables and/or TOPF score was 19 per cent (FAS), 28 per cent (ANT), and 41 per cent (GNT). Change scores revealed significant differences in performance in the clinical groups, particularly the TBI group. Demographic variables, particularly education level, and scores on the TOPF should be taken into consideration when interpreting performance on tests of verbal fluency and naming ability. © 2017 The British Psychological Society.
Hays, Ron D; Revicki, Dennis A; Feeny, David; Fayers, Peter; Spritzer, Karen L; Cella, David
2016-10-01
Preference-based health-related quality of life (HR-QOL) scores are useful as outcome measures in clinical studies, for monitoring the health of populations, and for estimating quality-adjusted life-years. This was a secondary analysis of data collected in an internet survey as part of the Patient-Reported Outcomes Measurement Information System (PROMIS(®)) project. To estimate Health Utilities Index Mark 3 (HUI-3) preference scores, we used the ten PROMIS(®) global health items, the PROMIS-29 V2.0 single pain intensity item and seven multi-item scales (physical functioning, fatigue, pain interference, depressive symptoms, anxiety, ability to participate in social roles and activities, sleep disturbance), and the PROMIS-29 V2.0 items. Linear regression analyses were used to identify significant predictors, followed by simple linear equating to avoid regression to the mean. The regression models explained 48 % (global health items), 61 % (PROMIS-29 V2.0 scales), and 64 % (PROMIS-29 V2.0 items) of the variance in the HUI-3 preference score. Linear equated scores were similar to observed scores, although differences tended to be larger for older study participants. HUI-3 preference scores can be estimated from the PROMIS(®) global health items or PROMIS-29 V2.0. The estimated HUI-3 scores from the PROMIS(®) health measures can be used for economic applications and as a measure of overall HR-QOL in research.
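The "simple linear equating" step used to avoid regression to the mean can be sketched as a mean-sigma transformation: the score is mapped so that the source and target distributions have matching means and standard deviations. The data below are illustrative, not PROMIS or HUI-3 data.

```python
import statistics

def linear_equate(x, source_scores, target_scores):
    """Simple linear (mean-sigma) equating: map score x from the source
    metric onto the target metric by matching means and SDs,
        y = mu_T + (sd_T / sd_S) * (x - mu_S).
    Unlike a fitted regression line, this preserves the full spread of
    the target distribution rather than shrinking predictions toward
    the mean.
    """
    mu_s, sd_s = statistics.mean(source_scores), statistics.pstdev(source_scores)
    mu_t, sd_t = statistics.mean(target_scores), statistics.pstdev(target_scores)
    return mu_t + (sd_t / sd_s) * (x - mu_s)

# Hypothetical toy data: regression-predicted scores and observed preferences
src = [40, 45, 50, 55, 60]
tgt = [0.2, 0.4, 0.6, 0.8, 1.0]
```

By construction a score at the source mean maps to the target mean, and one source SD above the mean maps to one target SD above the mean.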
Kojima, Shinya; Suzuki, Kazufumi; Hirata, Masami; Shinohara, Hiroyuki; Ueno, Eiko
2013-03-01
To assess the ability of magnetic resonance imaging (MRI) to depict the semicircular canals of the inner ear by comparing results from the sampling perfection with application-optimized contrasts by using different flip angle evolutions (SPACE) sequence with those from the true fast imaging with steady-state precession (TrueFISP) sequence. A 1.5-T MRI system was used to perform an in vivo study of 10 healthy volunteers and 17 patients. A three-point visual score was employed for assessing the depiction of the semicircular canals and the facial and vestibulocochlear nerves, and the contrast-to-noise ratio (CNR) was computed for the vestibule and pons on images with the SPACE and TrueFISP sequences. There were no susceptibility artifact-related filling defects with the SPACE sequence. However, the TrueFISP sequence showed filling defects for at least one semicircular canal on both sides in seven cases for healthy subjects and in 10 cases for patients. The CNR with the SPACE sequence was significantly higher than with the TrueFISP sequence (P < 0.05). There was no statistically significant difference in depicting the facial and vestibulocochlear nerves (P = 0.32). For depiction of the semicircular canals, the SPACE sequence is superior to the TrueFISP sequence. Copyright © 2012 Wiley Periodicals, Inc.
12 CFR Appendix B to Subpart A of... - Conversion of Scorecard Measures into Score
Code of Federal Regulations, 2014 CFR
2014-01-01
... 327—Conversion of Scorecard Measures into Score 1. Weighted Average CAMELS Rating Weighted average CAMELS ratings between 1 and 3.5 are assigned a score between 25 and 100 according to the following equation: S = 25 + [(20/3) * (C² − 1)], where: S = the weighted average CAMELS score; and C = the weighted...
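The conversion formula can be checked numerically: the quadratic form maps a rating of 1 to the floor score of 25 and a rating of 3.5 to the ceiling of 100. A minimal sketch:

```python
def camels_score(c):
    """Convert a weighted average CAMELS rating C (between 1 and 3.5)
    into a score S between 25 and 100 per 12 CFR 327, Appendix B to
    Subpart A:
        S = 25 + (20/3) * (C**2 - 1)
    """
    if not 1.0 <= c <= 3.5:
        raise ValueError("weighted average CAMELS rating must be in [1, 3.5]")
    return 25.0 + (20.0 / 3.0) * (c**2 - 1.0)
```

Because the mapping is quadratic in C, score increments grow as the rating worsens: moving from 1.0 to 1.5 adds about 8 points, while moving from 3.0 to 3.5 adds about 22.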
12 CFR Appendix B to Subpart A of... - Conversion of Scorecard Measures into Score
Code of Federal Regulations, 2013 CFR
2013-01-01
... 327—Conversion of Scorecard Measures into Score 1. Weighted Average CAMELS Rating Weighted average CAMELS ratings between 1 and 3.5 are assigned a score between 25 and 100 according to the following equation: S = 25 + [(20/3) * (C² − 1)], where: S = the weighted average CAMELS score; and C = the weighted...
12 CFR Appendix B to Subpart A of... - Conversion of Scorecard Measures into Score
Code of Federal Regulations, 2012 CFR
2012-01-01
... 327—Conversion of Scorecard Measures into Score 1. Weighted Average CAMELS Rating Weighted average CAMELS ratings between 1 and 3.5 are assigned a score between 25 and 100 according to the following equation: S = 25 + [(20/3) * (C² − 1)], where: S = the weighted average CAMELS score; and C = the weighted...
Wu, Ching-yi; Chuang, Li-ling; Lin, Keh-chung; Lee, Shin-da; Hong, Wei-hsien
2011-08-01
To determine the responsiveness, minimal detectable change (MDC), and minimal clinically important differences (MCIDs) of the Nottingham Extended Activities of Daily Living (NEADL) scale and to assess the percentages of patients' change scores exceeding the MDC and MCID after stroke rehabilitation. Secondary analyses of patients who received stroke rehabilitation therapy. Medical centers. Patients with stroke (N=78). Secondary analyses of patients who received 1 of 4 rehabilitation interventions. Responsiveness (standardized response mean [SRM]), the threshold above which a change score can be regarded with 90% confidence as true change rather than measurement error (MDC(90)), and MCID on the NEADL score, and percentages of patients exceeding the MDC(90) and MCID. The SRM of the total NEADL scale was 1.3. The MDC(90) value for the total NEADL scale was 4.9, whereas the minimum and maximum MCID for the total NEADL score were 2.4 and 6.1 points, respectively. Percentages of patients exceeding the MDC(90), the minimum MCID, and the maximum MCID of the total NEADL score were 50.0%, 73.1%, and 32.1%, respectively. The NEADL is a responsive instrument relevant for measuring change in instrumental activities of daily living after stroke rehabilitation. A patient's change score has to reach 4.9 points on the total scale to indicate a true change. The mean change score of a stroke group on the total NEADL scale should achieve 6.1 points to be regarded as clinically important. Our findings are based on patients with improved NEADL performance after they received specific interventions. Future research with larger sample sizes is warranted to validate these estimates. Copyright © 2011 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
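The MDC(90) threshold is conventionally derived from the standard error of measurement of the scale. A minimal sketch of the standard formula, using illustrative inputs rather than the NEADL study's data:

```python
import math

def mdc(sd_baseline, test_retest_r, z=1.65):
    """Minimal detectable change at 90% confidence (MDC90):
        SEM = SD * sqrt(1 - r)       (standard error of measurement)
        MDC = z * SEM * sqrt(2)      (sqrt(2) for two measurements)
    A change score must exceed this threshold to be read as true change
    rather than measurement error. The SD and reliability r below are
    illustrative placeholders, not the NEADL study's values.
    """
    sem = sd_baseline * math.sqrt(1.0 - test_retest_r)
    return z * sem * math.sqrt(2.0)

change_threshold = mdc(sd_baseline=6.0, test_retest_r=0.88)
```

Using z = 1.96 in place of 1.65 gives the stricter MDC(95) variant of the same threshold.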
O'Connor, Jean E; Coyle, Joseph; Bogue, Conor; Spence, Liam D; Last, Jason
2014-01-01
Age estimation in living subjects is primarily achieved through assessment of a hand-wrist radiograph and comparison with a standard reference atlas. Recently, maturation of other regions of the skeleton has also been assessed in an attempt to refine the age estimates. The current study presents a method to predict bone age directly from the knee in a modern Irish sample. Ten maturity indicators (A-J) at the knee were examined from radiographs of 221 subjects (137 males; 84 females). Each indicator was assigned a maturity score. Scores for indicators A-G, H-J and A-J, respectively, were totalled to provide a cumulative maturity score for change in morphology of the epiphyses (AG), epiphyseal union (HJ) and the combination of both (AJ). Linear regression equations to predict age from the maturity scores (AG, HJ, AJ) were constructed for males and females. For males, equation-AJ demonstrated the greatest predictive capability (R² = 0.775) while for females equation-HJ had the strongest capacity for prediction (R² = 0.815). When equation-AJ for males and equation-HJ for females were applied to the current sample, the predicted age of 90% of subjects was within ±1.5 years of actual age for male subjects and within +2.0 to -1.9 years of actual age for female subjects. The regression formulae and associated charts represent the most contemporary method of age prediction currently available for an Irish population, and provide a further technique which can contribute to a multifactorial approach to age estimation in non-adults. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.
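Regression equations of this kind take the form age = b0 + b1 × maturity score, fit by least squares. A minimal sketch on synthetic data (not the Irish sample):

```python
import numpy as np

def fit_age_regression(scores, ages):
    """Least-squares fit of  age = b0 + b1 * maturity_score,  the form
    of equation used to predict bone age from a cumulative maturity
    score. The data passed below are synthetic, for illustration only.
    """
    b1, b0 = np.polyfit(scores, ages, 1)  # returns slope first
    return b0, b1

# Synthetic example where maturity score is exactly linear in age
scores = np.array([5, 8, 11, 14, 17, 20])
ages = np.array([9.0, 10.5, 12.0, 13.5, 15.0, 16.5])
b0, b1 = fit_age_regression(scores, ages)
predicted = b0 + b1 * 11  # predicted age for a maturity score of 11
```

In the study, separate fits of this form were produced for each score composite (AG, HJ, AJ) and each sex, and the fit with the highest R² was selected for prediction.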
Yamashita, Hiroshi
2013-01-01
The present paper reviews the theoretical and empirical literature on children and adolescents with gender identity disorder (GID). The organizational framework underlying this review presents gender behavior in children and adolescents as a continuum rather than as a dichotomy of normal versus abnormal categories. Theories of normative gender development, prevalence, assessment, developmental trajectories, and comorbidity were investigated. There is greater fluidity and likelihood of change in the pre-pubertal period, and the majority of affected children have been reported to eventually develop a homosexual orientation. As an approach to determining the prevalence of GID in clinical samples in our child psychiatry clinic, screening instruments that include items on cross-gender or cross-sex identification were used. We applied the Child Behavior Checklist (CBCL). Of the 113 items in the Japanese version of the CBCL, two measure cross-gender identification: "behaves like opposite sex" and "wishes to be opposite sex." Like the other items, they are scored on a 3-point scale: 0 - not true, 1 - somewhat true, and 2 - very true. In our study of 323 clinically referred children aged 4-15 years, 9.6% of the boys assigned a score of 1 (somewhat true) or 2 (very true) to the two items; the corresponding rate for the clinically referred girls was 24.5%. The endorsement rate of these items in our clinical sample was significantly higher than the 2-5% reported for non-referred children using the same method. Two clinical case histories of screened children are also presented; both were diagnosed with PDDNOS. Together with the literature review, most of the gender-related symptoms in autism spectrum disorders (ASD) could be related to the behavioral and psychological characteristics of autism, as shown in the case histories.
ASD subjects in adolescence can sometimes develop a unique confusion of identity that occasionally escalates into gender-related problems. However, these views do not explain all cases; true comorbidity of ASD and GID should also be considered. A full assessment, including evaluation of the family, school, and social environment, is essential, as other emotional and behavioral problems are very common and unresolved issues in the child's environment (e.g., loss) are often present. Separation problems are particularly common in the younger group. Intervention should aim to assist development, particularly that of gender identity. It should focus on ameliorating the comorbid problems and difficulties in the child's life and reducing the distress experienced by the child.
Evaluating Equity at the Local Level Using Bootstrap Tests. Research Report 2016-4
ERIC Educational Resources Information Center
Kim, YoungKoung; DeCarlo, Lawrence T.
2016-01-01
Because of concerns about test security, different test forms are typically used across different testing occasions. As a result, equating is necessary in order to get scores from the different test forms that can be used interchangeably. In order to assure the quality of equating, multiple equating methods are often examined. Various equity…
ERIC Educational Resources Information Center
Moses, Tim
2008-01-01
Nine statistical strategies for selecting equating functions in an equivalent groups design were evaluated. The strategies of interest were likelihood ratio chi-square tests, regression tests, Kolmogorov-Smirnov tests, and significance tests for equated score differences. The most accurate strategies in the study were the likelihood ratio tests…
Exploring Equity Properties in Equating Using AP® Examinations. Research Report No. 2012-4
ERIC Educational Resources Information Center
Lee, Eunjung; Lee, Won-Chan; Brennan, Robert L.
2012-01-01
In almost all high-stakes testing programs, test equating is necessary to ensure that test scores across multiple test administrations are equivalent and can be used interchangeably. Test equating becomes even more challenging in mixed-format tests, such as Advanced Placement Program® (AP®) Exams, that contain both multiple-choice and constructed…
New Results on the Linear Equating Methods for the Non-Equivalent-Groups Design
ERIC Educational Resources Information Center
von Davier, Alina A.
2008-01-01
The two most common observed-score equating functions are the linear and equipercentile functions. These are often seen as different methods, but von Davier, Holland, and Thayer showed that any equipercentile equating function can be decomposed into linear and nonlinear parts. They emphasized the dominant role of the linear part of the nonlinear…
Collateral Information for Equating in Small Samples: A Preliminary Investigation
ERIC Educational Resources Information Center
Kim, Sooyeon; Livingston, Samuel A.; Lewis, Charles
2011-01-01
This article describes a preliminary investigation of an empirical Bayes (EB) procedure for using collateral information to improve equating of scores on test forms taken by small numbers of examinees. Resampling studies were done on two different forms of the same test. In each study, EB and non-EB versions of two equating methods--chained linear…
Shankar, Prasad R; Kaza, Ravi K; Al-Hawary, Mahmoud M; Masch, William R; Curci, Nicole E; Mendiratta-Lala, Mishal; Sakala, Michelle D; Johnson, Timothy D; Davenport, Matthew S
2018-04-17
Purpose To assess the impact of clinical history on the maximum Prostate Imaging Reporting and Data System (PI-RADS) version 2 (v2) score assigned to multiparametric magnetic resonance (MR) imaging of the prostate. Materials and Methods This retrospective cohort study included 120 consecutively selected multiparametric prostate MR imaging studies performed between November 1, 2016, and December 31, 2016. Sham clinical data in four domains (digital rectal examination, prostate-specific antigen level, plan for biopsy, prior prostate cancer history) were randomly assigned to each case by using a balanced orthogonal design. Six fellowship-trained abdominal radiologists independently reviewed the sham data, actual patient age, and each examination while they were blinded to interreader scoring, true clinical data, and histologic findings. Readers were told the constant sham histories were true, believed the study to be primarily investigating interrater agreement, and were asked to assign a maximum PI-RADS v2 score to each case. Linear regression was performed to assess the association between clinical variables and maximum PI-RADS v2 score designation. Intraclass correlation coefficients (ICCs) were obtained to compare interreader scoring. Results Clinical information had no significant effect on maximum PI-RADS v2 scoring for any of the six readers (P = .09-.99, 42 reader-variable pairs). Distributions of maximum PI-RADS v2 scores in the research context were similar to the distribution of the scores assigned clinically and had fair-to-excellent pairwise interrater agreement (ICC range: 0.53-0.76). Overall interrater agreement was good (ICC: 0.64; 95% confidence interval: 0.57, 0.71). Conclusion Clinical history does not appear to be a substantial bias in maximum PI-RADS v2 score assignment.
This is potentially important for clinical nomograms that plan to incorporate PI-RADS v2 score and clinical data into their algorithms (ie, PI-RADS v2 scoring is not confounded by clinical data). © RSNA, 2018 Online supplemental material is available for this article.
Carias, D; Cioccia, A M; Hevia, P
1995-06-01
Protein digestibility is a key factor in the determination of protein quality using the chemical score. Since several methods are available for determining protein digestibility, the purpose of this study was to compare three in vitro methods (pH drop, pH stat and pepsin digestibility) and two in vivo methods (true and apparent digestibility in rats) in the determination of the protein digestibility of casein, soy protein isolate, fish meal, black beans, corn meal and wheat flour. The results showed that in the case of highly digestible proteins all methods agreed very well. However, this agreement was much less apparent for proteins with digestibilities below 85%. As a result, the chemical score of these proteins varied substantially depending upon the method used to determine their digestibility. Thus, when the chemical score of the proteins analyzed was corrected by the true protein digestibility measured in rats, they ranked as: casein 83.56, soy 76.11, corn-bean mixture (1:1) 58.14, fish meal 55.25, black beans 47.93, corn meal 46.06 and wheat flour 32.77. In contrast, when the chemical score of these proteins was corrected by the pepsin digestibility method, the lowest quality was assigned to fish meal. In summary, these results indicate that for non-conventional proteins, or for known proteins that have been subjected to processing, protein digestibility should be measured in vivo.
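The correction step described, multiplying the chemical (amino acid) score by a measured digestibility, can be sketched as follows. The scores and digestibilities below are illustrative placeholders, not the study's measured values:

```python
# Digestibility-corrected chemical score: the uncorrected amino acid
# (chemical) score is multiplied by the protein's true digestibility.
def corrected_score(chemical_score: float, true_digestibility: float) -> float:
    return chemical_score * true_digestibility

# (chemical score, true digestibility) pairs - hypothetical values
proteins = {
    "casein":      (90.0, 0.93),
    "black beans": (62.0, 0.77),
    "wheat flour": (43.0, 0.76),
}
ranked = sorted(proteins, key=lambda p: corrected_score(*proteins[p]), reverse=True)
print(ranked)
```

The ranking shifts whenever the digestibility estimate shifts, which is exactly why the in vitro and in vivo methods disagree for the less digestible proteins.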
Conclusion of LOD-score analysis for family data generated under two-locus models.
Dizier, M H; Babron, M C; Clerget-Darpoux, F
1996-06-01
The power to detect linkage by the LOD-score method is investigated here for diseases that depend on the effects of two genes. The classical strategy is, first, to detect a major-gene (MG) effect by segregation analysis and, second, to test for linkage with genetic markers by the LOD-score method using the MG parameters. We have already shown that segregation analysis can lead to evidence for an MG effect for many two-locus models, with the estimates of the MG parameters being very different from those of the two genes involved in the disease. We show here that use of these MG parameter estimates in the LOD-score analysis may lead to a failure to detect linkage for some two-locus models. For these models, use of the sib-pair method gives a non-negligible increase in power to detect linkage. The linkage-homogeneity test among subsamples differing in familial disease distribution provides evidence of parameter misspecification when the MG parameters are used. Moreover, for most of the models, use of the MG parameters in LOD-score analysis leads to a large bias in estimation of the recombination fraction and sometimes also to rejection of linkage at the true recombination fraction. A final important point is that strong evidence of an MG effect, obtained by segregation analysis, does not necessarily imply that linkage will be detected for at least one of the two genes, even with the true parameters and a close informative marker.
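For orientation, a two-point LOD score is log10 of the likelihood ratio L(θ)/L(θ = 0.5). A minimal sketch for the simplest phase-known case, with hypothetical recombinant counts (the paper's family data and two-locus likelihoods are far more involved):

```python
import math

def lod(theta: float, recombinants: int, nonrecombinants: int) -> float:
    """Two-point LOD score for phase-known meioses:
    log10 of L(theta) / L(theta = 0.5)."""
    n = recombinants + nonrecombinants
    if theta == 0.0 and recombinants > 0:
        return float("-inf")  # a recombinant excludes theta = 0
    l_theta = (theta ** recombinants) * ((1 - theta) ** nonrecombinants)
    l_null = 0.5 ** n
    return math.log10(l_theta / l_null)

# Hypothetical: 2 recombinants out of 20 informative meioses, theta = 0.1
lod_val = lod(0.1, 2, 18)
print(round(lod_val, 2))
```

Misspecified disease-model parameters enter through the likelihood, which is what biases the recombination-fraction estimate in the scenarios described above.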
Comparing interval estimates for small sample ordinal CFA models
Natesan, Prathiba
2015-01-01
Robust maximum likelihood (RML) and asymptotically generalized least squares (AGLS) methods have been recommended for fitting ordinal structural equation models. Studies show that some of these methods underestimate standard errors. However, these studies have not investigated the coverage and bias of interval estimates. An estimate with a reasonable standard error could still be severely biased; this can only be known by systematically investigating the interval estimates. The present study compares Bayesian, RML, and AGLS interval estimates of factor correlations in ordinal confirmatory factor analysis (CFA) models for small sample data. Six sample sizes, 3 factor correlations, and 2 factor score distributions (multivariate normal and multivariate mildly skewed) were studied. Two Bayesian prior specifications, informative and relatively less informative, were studied. Undercoverage of confidence intervals and underestimation of standard errors were common in non-Bayesian methods. Underestimated standard errors may lead to inflated Type-I error rates. Non-Bayesian intervals were more often positively than negatively biased; that is, most intervals that did not contain the true value were greater than the true value. Some non-Bayesian methods had non-converging and inadmissible solutions for small samples and non-normal data. Bayesian empirical standard error estimates for informative and relatively less informative priors were closer to the average standard errors of the estimates. The coverage of Bayesian credibility intervals was closer to what was expected, with overcoverage in a few cases. Although some Bayesian credibility intervals were wider, they reflected the nature of statistical uncertainty that comes with the data (e.g., small sample). Bayesian point estimates were also more accurate than non-Bayesian estimates. The results illustrate the importance of analyzing coverage and bias of interval estimates, and how ignoring interval estimates can be misleading.
Therefore, editors and policymakers should continue to emphasize the inclusion of interval estimates in research. PMID:26579002
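The undercoverage phenomenon described above can be illustrated with a toy Monte Carlo check. This is a deliberately simple stand-in (a normal-theory interval for a mean, not an ordinal CFA): using the large-sample critical value 1.96 with a small n yields empirical coverage below the nominal 95%.

```python
import random
import statistics

def coverage_of_mean_ci(true_mean=0.0, true_sd=1.0, n=10, reps=2000, z=1.96, seed=42):
    """Monte Carlo estimate of coverage for a z-based 95% CI of a mean.
    With n = 10 the correct critical value is t(9) = 2.262, so using
    z = 1.96 produces undercoverage - the pattern the study describes."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(reps):
        sample = [rng.gauss(true_mean, true_sd) for _ in range(n)]
        m = statistics.fmean(sample)
        se = statistics.stdev(sample) / n ** 0.5
        if m - z * se <= true_mean <= m + z * se:
            hits += 1
    return hits / reps

c = coverage_of_mean_ci()
print(c)  # noticeably below the nominal 0.95
```

Systematically tabulating such coverage rates, rather than standard errors alone, is precisely the kind of analysis the study argues for.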
Emotional content enhances true but not false memory for categorized stimuli.
Choi, Hae-Yoon; Kensinger, Elizabeth A; Rajaram, Suparna
2013-04-01
Past research has shown that emotion enhances true memory, but that emotion can either increase or decrease false memory. Two theoretical possibilities, the distinctiveness of emotional stimuli and the conceptual relatedness of emotional content, have been implicated as being responsible for influencing both true and false memory for emotional content. In the present study, we sought to identify the mechanisms that underlie these mixed findings by equating the thematic relatedness of the study materials across each type of valence used (negative, positive, or neutral). In three experiments, categorically bound stimuli (e.g., funeral, pets, and office items) were used for this purpose. When the encoding task required the processing of thematic relatedness, a significant true-memory enhancement for emotional content emerged in recognition memory, but no emotional boost to false memory (Exp. 1). This pattern persisted for true memory with a longer retention interval between study and test (24 h), and false recognition was reduced for emotional items (Exp. 2). Finally, better recognition memory for emotional items once again emerged when the encoding task (arousal ratings) required the processing of the emotional aspect of the study items, with no emotional boost to false recognition (Exp. 3). Together, these findings suggest that when emotional and neutral stimuli are equivalently high in thematic relatedness, emotion continues to improve true memory, but it does not override other types of grouping to increase false memory.
Calculating the True and Observed Rates of Complex Heterogeneous Catalytic Reactions
NASA Astrophysics Data System (ADS)
Avetisov, A. K.; Zyskin, A. G.
2018-06-01
Equations of the theory of steady-state complex reactions are considered in matrix form. A set of stage stationarity equations is given, and an algorithm is described for deriving the canonical set of stationarity equations with appropriate corrections for the existence of fast stages in a mechanism. A formula for calculating the number of key compounds is presented. The applicability of the Gibbs rule to estimating the number of independent compounds in a complex reaction is analyzed. Some matrix equations relating the rates of dependent and key substances are derived. They are used as a basis to determine the general diffusion stoichiometry relationships between temperature, the concentrations of dependent reaction participants, and the concentrations of key reaction participants in a catalyst grain. An algorithm is described for calculating heat and mass transfer in a catalyst grain for arbitrary complex heterogeneous catalytic reactions.
Visco-acoustic wave-equation traveltime inversion and its sensitivity to attenuation errors
NASA Astrophysics Data System (ADS)
Yu, Han; Chen, Yuqing; Hanafy, Sherif M.; Huang, Jiangping
2018-04-01
A visco-acoustic wave-equation traveltime inversion method is presented that inverts for the shallow subsurface velocity distribution. Similar to classical wave-equation traveltime inversion, this method finds the velocity model that minimizes the squared sum of the traveltime residuals. Although wave-equation traveltime inversion can partly avoid the cycle-skipping problem, a good initial velocity model is still required for the inversion to converge to a reasonable tomogram under different attenuation profiles. When the Q model is far from the true model, the final tomogram is very sensitive to the starting velocity model. Nevertheless, a minor or moderate perturbation of the Q model from the true one does not strongly affect the inversion, provided the low-wavenumber information of the initial velocity model is mostly correct. These claims are validated with numerical tests on both synthetic and field data sets.
NASA Astrophysics Data System (ADS)
Gambetta, Jay; Wiseman, H. M.
2002-07-01
Do stochastic Schrödinger equations, also known as unravelings, have a physical interpretation? In the Markovian limit, where the system on average obeys a master equation, the answer is yes. Markovian stochastic Schrödinger equations generate quantum trajectories for the system state conditioned on continuously monitoring the bath. For a given master equation, there are many different unravelings, corresponding to different sorts of measurement on the bath. In this paper we address the non-Markovian case, and in particular the sort of stochastic Schrödinger equation introduced by Strunz, Diósi, and Gisin [Phys. Rev. Lett. 82, 1801 (1999)]. Using a quantum-measurement theory approach, we rederive their unraveling that involves complex-valued Gaussian noise. We also derive an unraveling involving real-valued Gaussian noise. We show that in the Markovian limit, these two unravelings correspond to heterodyne and homodyne detection, respectively. Although we use quantum-measurement theory to define these unravelings, we conclude that the stochastic evolution of the system state is not a true quantum trajectory, as the identity of the state through time is a fiction.
The Reliability of Difference Scores in Populations and Samples
ERIC Educational Resources Information Center
Zimmerman, Donald W.
2009-01-01
This study was an investigation of the relation between the reliability of difference scores, considered as a parameter characterizing a population of examinees, and the reliability estimates obtained from random samples from the population. The parameters in familiar equations for the reliability of difference scores were redefined in such a way…
Quantum and electromagnetic propagation with the conjugate symmetric Lanczos method.
Acevedo, Ramiro; Lombardini, Richard; Turner, Matthew A; Kinsey, James L; Johnson, Bruce R
2008-02-14
The conjugate symmetric Lanczos (CSL) method is introduced for the solution of the time-dependent Schrödinger equation. This remarkably simple and efficient time-domain algorithm is a low-order polynomial expansion of the quantum propagator for time-independent Hamiltonians and derives from the time-reversal symmetry of the Schrödinger equation. The CSL algorithm gives forward solutions by simply complex conjugating backward polynomial expansion coefficients. Interestingly, the expansion coefficients are the same for each uniform time step, a fact that is only spoiled by basis incompleteness and finite precision. This is true for the Krylov basis and, with further investigation, is also found to be true for the Lanczos basis, important for efficient orthogonal projection-based algorithms. The CSL method errors roughly track those of the short iterative Lanczos method while requiring fewer matrix-vector products than the Chebyshev method. With the CSL method, only a few vectors need to be stored at a time, there is no need to estimate the Hamiltonian spectral range, and only matrix-vector and vector-vector products are required. Applications using localized wavelet bases are made to harmonic oscillator and anharmonic Morse oscillator systems as well as electrodynamic pulse propagation using the Hamiltonian form of Maxwell's equations. For gold with a Drude dielectric function, the latter is non-Hermitian, requiring consideration of corrections to the CSL algorithm.
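The Krylov-space propagation underlying short-iterative-Lanczos methods can be sketched as below. This is a generic Lanczos propagator, not the CSL algorithm itself; the tridiagonal Hamiltonian, step size, and Krylov dimension are arbitrary illustrations. Stepping forward and then backward recovers the initial state, reflecting the time-reversal symmetry that CSL exploits.

```python
import numpy as np

def lanczos_step(H, psi, dt, m=10):
    """One short-iterative-Lanczos step psi -> exp(-i H dt) psi for
    Hermitian H. Krylov breakdown (beta ~ 0) is not handled here."""
    n = len(psi)
    V = np.zeros((n, m), dtype=complex)   # Lanczos (Krylov) basis
    alpha = np.zeros(m)                   # tridiagonal diagonal
    beta = np.zeros(m - 1)                # tridiagonal off-diagonal
    V[:, 0] = psi / np.linalg.norm(psi)
    w = H @ V[:, 0]
    alpha[0] = np.real(np.vdot(V[:, 0], w))
    w = w - alpha[0] * V[:, 0]
    for j in range(1, m):
        beta[j - 1] = np.linalg.norm(w)
        V[:, j] = w / beta[j - 1]
        w = H @ V[:, j]
        alpha[j] = np.real(np.vdot(V[:, j], w))
        w = w - alpha[j] * V[:, j] - beta[j - 1] * V[:, j - 1]
    T = np.diag(alpha) + np.diag(beta, 1) + np.diag(beta, -1)
    evals, evecs = np.linalg.eigh(T)
    # exp(-i T dt) applied to e1, mapped back to the original basis
    coeffs = evecs @ (np.exp(-1j * evals * dt) * evecs[0, :].conj())
    return np.linalg.norm(psi) * (V @ coeffs)

# Illustrative tridiagonal Hamiltonian (harmonic-oscillator-like ladder)
n = 32
H = np.diag(np.arange(n, dtype=float)) \
    + 0.1 * (np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1))
psi0 = np.zeros(n, dtype=complex)
psi0[0] = 1.0
psi1 = lanczos_step(H, psi0, 0.05)
# Time-reversal check: forward then backward recovers the initial state
back = lanczos_step(H, psi1, -0.05)
print(abs(np.vdot(psi0, back)))
```

Only matrix-vector and vector-vector products appear, matching the storage and cost profile the abstract describes.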
Key Factors that Influence Recruiting Young Chinese Students
ERIC Educational Resources Information Center
Wang, Zhenmin
2007-01-01
The discussion in this paper is based on the assumption that international education is equated to recruiting and educating international students, even though its true concept goes far beyond this narrow understanding. The purpose of this research is to look at the key factors that influence recruiting young Chinese students, and make sure all…
Modeling Dynamic Functional Neuroimaging Data Using Structural Equation Modeling
ERIC Educational Resources Information Center
Price, Larry R.; Laird, Angela R.; Fox, Peter T.; Ingham, Roger J.
2009-01-01
The aims of this study were to present a method for developing a path analytic network model using data acquired from positron emission tomography. Regions of interest within the human brain were identified through quantitative activation likelihood estimation meta-analysis. Using this information, a "true" or population path model was then…
A Multilevel CFA-MTMM Model for Nested Structurally Different Methods
ERIC Educational Resources Information Center
Koch, Tobias; Schultze, Martin; Burrus, Jeremy; Roberts, Richard D.; Eid, Michael
2015-01-01
The numerous advantages of structural equation modeling (SEM) for the analysis of multitrait-multimethod (MTMM) data are well known. MTMM-SEMs allow researchers to explicitly model the measurement error, to examine the true convergent and discriminant validity of the given measures, and to relate external variables to the latent trait as well as…
ERIC Educational Resources Information Center
Pallone, Nathaniel J.; Hennessy, James J.; Voelbel, Gerald T.
1998-01-01
A scientifically sound methodology for identifying offenders about whose presence the community should be notified is demonstrated. A stepwise multiple regression was calculated among incarcerated pedophiles (N=52) including both psychological and legal data; a precision-weighted equation produced 90.4% "true positives." This methodology can be…
The Representational Status of Pretence: Evidence from Typical Development and Autism
ERIC Educational Resources Information Center
Jarrold, Christopher; Mansergh, Ruth; Whiting, Claire
2010-01-01
The question of whether understanding pretend play requires meta-representational skill was examined among typically developing children and individuals with autism. Participants were presented with closely equated true and false pretence trials in which they had to judge a protagonist's pretend reading of a situation, which either matched or…
Zhang, Xinyu; Cao, Jiguo; Carroll, Raymond J
2015-03-01
We consider model selection and estimation in a context where there are competing ordinary differential equation (ODE) models, and all the models are special cases of a "full" model. We propose a computationally inexpensive approach that employs statistical estimation of the full model, followed by a combination of a least squares approximation (LSA) and the adaptive Lasso. We show the resulting method, here called the LSA method, to be an (asymptotically) oracle model selection method. The finite sample performance of the proposed LSA method is investigated with Monte Carlo simulations, in which we examine the percentage of selecting true ODE models, the efficiency of the parameter estimation compared to simply using the full and true models, and coverage probabilities of the estimated confidence intervals for ODE parameters, all of which have satisfactory performances. Our method is also demonstrated by selecting the best predator-prey ODE to model a lynx and hare population dynamical system among some well-known and biologically interpretable ODE models. © 2014, The International Biometric Society.
Determining the Leaf Emissivity of Three Crops by Infrared Thermometry
Chen, Chiachung
2015-01-01
Plant temperature can provide important physiological information for crop management. Non-contact measurement with an infrared thermometer is useful for detecting leaf temperatures. In this study, a novel technique was developed to measure leaf emissivity using an infrared thermometer with an infrared sensor and a thermocouple wire. The measured values were transformed into true temperatures by calibration equations to improve the measurement accuracy. The relationship between the two kinds of measured temperatures and the emissivity settings was derived as a model for calculating the true emissivity. The emissivities of leaves of three crops were calculated by the mathematical equation developed in this study. The mean emissivities were 0.9809, 0.9783, 0.981 and 0.9848 for Phalaenopsis mature and new leaves and Paphiopedilum and Malabar chestnut leaves, respectively. Emissivity differed significantly between leaves of Malabar chestnut and the two orchids. The range of emissivities determined in this study was similar to that in the literature. The precision of the measurement is acceptable. The method developed in this study is a real-time, in situ technique and could be used for agricultural and forestry plants. PMID:25988870
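The study's own calibration model is not reproduced in the abstract. Under a simplified graybody/Stefan-Boltzmann assumption (an assumption of this sketch, not necessarily the paper's model), an instrument set to emissivity ε_s that reports temperature T_r for an object whose contact-measured temperature is T satisfies ε_s·T_r⁴ = ε·T⁴ in kelvin, giving ε = ε_s·(T_r/T)⁴:

```python
def true_emissivity(t_reading_c: float, t_contact_c: float,
                    emissivity_setting: float) -> float:
    """Simplified graybody estimate of true emissivity.

    Assumes the radiometer infers temperature from
    emissivity_setting * sigma * T_reading^4 = eps * sigma * T_true^4,
    so eps = emissivity_setting * (T_reading / T_true)^4 (kelvin).
    This is a first-order sketch, NOT the study's calibration model.
    """
    t_r = t_reading_c + 273.15
    t_t = t_contact_c + 273.15
    return emissivity_setting * (t_r / t_t) ** 4

# Hypothetical readings: IR reads 24.7 C at setting 0.95; thermocouple 25.0 C
eps = true_emissivity(24.7, 25.0, 0.95)
print(round(eps, 4))
```

Small reading differences translate into emissivity estimates in the ~0.95-0.99 range typical of plant leaves.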
Zhang, Lei; Feng, Xiao; Wang, Xin; Liu, Changyong
2014-01-01
The nitrogen-containing austenitic stainless steel 316LN has been chosen as the material for the nuclear main pipe, one of the key components of third-generation nuclear power plants. In this research, a constitutive model of nitrogen-containing austenitic stainless steel is developed. The true stress-true strain curves obtained from isothermal hot compression tests over a wide range of temperatures (900-1250°C) and strain rates (10⁻³-10 s⁻¹) were employed to study the dynamic deformational behavior of, and recrystallization in, 316LN steels. The constitutive model is developed through multiple linear regressions performed on the experimental data and is based on an Arrhenius-type equation and Zener-Hollomon theory. The influence of strain was incorporated in the developed constitutive equation by considering the effect of strain on the various material constants. The reliability and accuracy of the model are verified through comparison of predicted flow stress curves with experimental curves. Possible reasons for deviation are also discussed based on the characteristics of the modeling process. PMID:25375345
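Arrhenius-type constitutive models of this family typically combine the Zener-Hollomon parameter Z = ε̇·exp(Q/RT) with a hyperbolic-sine stress function, which inverts to a closed-form flow stress. The constants below are illustrative placeholders, not the fitted 316LN values:

```python
import math

R = 8.314  # gas constant, J/(mol K)

def flow_stress(strain_rate: float, temp_k: float,
                A: float, alpha: float, n: float, Q: float) -> float:
    """Flow stress (MPa) from the sinh-type Arrhenius equation
        strain_rate = A * sinh(alpha * sigma)^n * exp(-Q / (R T)),
    inverted via the Zener-Hollomon parameter
        Z = strain_rate * exp(Q / (R T)):
        sigma = (1/alpha) * ln[(Z/A)^(1/n) + sqrt((Z/A)^(2/n) + 1)].
    A, alpha, n, Q are material constants fitted by regression;
    the values passed below are placeholders."""
    Z = strain_rate * math.exp(Q / (R * temp_k))
    x = (Z / A) ** (1.0 / n)
    return math.log(x + math.sqrt(x * x + 1.0)) / alpha

# Hypothetical constants; strain rate 0.1 /s at 1100 C (1373.15 K)
sigma = flow_stress(0.1, 1373.15, A=1.0e13, alpha=0.012, n=4.5, Q=450e3)
print(round(sigma, 1))
```

Making A, alpha, n and Q polynomial functions of strain is the usual way the strain dependence mentioned above is incorporated.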
Knapp, M; Seuchter, S A; Baur, M P
1994-01-01
It is believed that the main advantage of affected-sib-pair tests is that their application requires no information about the underlying genetic mechanism of the disease. However, here it is proved that the mean test, which can be considered the most prominent of the affected-sib-pair tests, is equivalent to lod-score analysis for an assumed recessive mode of inheritance, irrespective of the true mode of the disease. Further relationships between certain sib-pair tests and lod-score analysis under specific assumed genetic modes are investigated.
MetaMQAP: a meta-server for the quality assessment of protein models.
Pawlowski, Marcin; Gajda, Michal J; Matlak, Ryszard; Bujnicki, Janusz M
2008-09-29
Computational models of protein structure are usually inaccurate and exhibit significant deviations from the true structure. The utility of models depends on the degree of these deviations. A number of predictive methods have been developed to discriminate between the globally incorrect and approximately correct models. However, only a few methods predict correctness of different parts of computational models. Several Model Quality Assessment Programs (MQAPs) have been developed to detect local inaccuracies in unrefined crystallographic models, but it is not known if they are useful for computational models, which usually exhibit different and much more severe errors. The ability to identify local errors in models was tested for eight MQAPs: VERIFY3D, PROSA, BALA, ANOLEA, PROVE, TUNE, REFINER, PROQRES on 8251 models from the CASP-5 and CASP-6 experiments, by calculating the Spearman's rank correlation coefficients between per-residue scores of these methods and local deviations between C-alpha atoms in the models vs. experimental structures. As a reference, we calculated the value of correlation between the local deviations and trivial features that can be calculated for each residue directly from the models, i.e. solvent accessibility, depth in the structure, and the number of local and non-local neighbours. We found that absolute correlations of scores returned by the MQAPs and local deviations were poor for all methods. In addition, scores of PROQRES and several other MQAPs strongly correlate with 'trivial' features. Therefore, we developed MetaMQAP, a meta-predictor based on a multivariate regression model, which uses scores of the above-mentioned methods, but in which trivial parameters are controlled. MetaMQAP predicts the absolute deviation (in Angströms) of individual C-alpha atoms between the model and the unknown true structure as well as global deviations (expressed as root mean square deviation and GDT_TS scores). 
Local model accuracy predicted by MetaMQAP shows an impressive correlation coefficient of 0.7 with true deviations from native structures, a significant improvement over all constituent primary MQAP scores. The global MetaMQAP score is correlated with model GDT_TS at the level of 0.89. Finally, we compared our method with the MQAPs that scored best in the 7th edition of CASP, using CASP7 server models (not included in the MetaMQAP training set) as the test data. In our benchmark, MetaMQAP is outperformed only by PCONS6 and method QA_556, methods that require comparison of multiple alternative models and score each of them depending on its similarity to the others. MetaMQAP is, however, the best among methods capable of evaluating single models. We implemented MetaMQAP as a web server, available for free use by all academic users at https://genesilico.pl/toolkit/
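The per-residue evaluation described rests on Spearman rank correlations between MQAP scores and local Cα deviations. A minimal, tie-free sketch (the per-residue data are invented for illustration):

```python
def spearman_rho(xs, ys):
    """Spearman rank correlation via the classic formula
    rho = 1 - 6 * sum(d^2) / (n (n^2 - 1)); no tie correction,
    so inputs are assumed to contain no duplicate values."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order):
            r[i] = rank + 1
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))

# Hypothetical per-residue MQAP scores and observed C-alpha deviations (A)
scores = [0.2, 0.5, 0.9, 0.4, 0.7]
devs   = [0.8, 1.9, 3.1, 1.2, 2.6]
rho = spearman_rho(scores, devs)
print(rho)
```

Rank correlation is the natural choice here because MQAP scores and deviations live on different, often nonlinear, scales.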
Development and validation of a predictive equation for lean body mass in children and adolescents.
Foster, Bethany J; Platt, Robert W; Zemel, Babette S
2012-05-01
Lean body mass (LBM) is not easy to measure directly in the field or clinical setting. Equations to predict LBM from simple anthropometric measures, which account for the differing contributions of fat and lean to body weight at different ages and levels of adiposity, would be useful to both human biologists and clinicians. To develop and validate equations to predict LBM in children and adolescents across the entire range of the adiposity spectrum. Dual energy X-ray absorptiometry was used to measure LBM in 836 healthy children (437 females) and linear regression was used to develop sex-specific equations to estimate LBM from height, weight, age, body mass index (BMI) for age z-score and population ancestry. Equations were validated using bootstrapping methods and in a local independent sample of 332 children and in national data collected by NHANES. The mean difference between measured and predicted LBM was -0.12% (95% limits of agreement, -11.3% to 8.5%) for males and -0.14% (-11.9% to 10.9%) for females. Equations performed equally well across the entire adiposity spectrum, as estimated by BMI z-score. Validation indicated no over-fitting. LBM was predicted within 5% of measured LBM in the validation sample. The equations estimate LBM accurately from simple anthropometric measures.
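The agreement figures quoted (a mean difference with 95% limits of agreement) follow the standard Bland-Altman construction: mean of the paired differences plus or minus 1.96 standard deviations. A minimal sketch with invented percent-difference data, not the study's:

```python
def bland_altman_limits(diffs):
    """Mean difference (bias) and 95% limits of agreement (mean +/- 1.96 SD)."""
    n = len(diffs)
    mean = sum(diffs) / n
    # sample standard deviation (n - 1 denominator)
    sd = (sum((d - mean) ** 2 for d in diffs) / (n - 1)) ** 0.5
    return mean, mean - 1.96 * sd, mean + 1.96 * sd

bias, lower, upper = bland_altman_limits([-2.0, -1.0, 0.0, 1.0, 2.0])
```

About 95% of measured-vs-predicted differences are expected to fall between the two limits, which is why the study reports the interval alongside the near-zero bias.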
A Case of Inconsistent Equatings: How the Man with Four Watches Decides What Time It Is
ERIC Educational Resources Information Center
Livingston, Samuel A.; Antal, Judit
2010-01-01
A simultaneous equating of four new test forms to each other and to one previous form was accomplished through a complex design incorporating seven separate equating links. Each new form was linked to the reference form by four different paths, and each path produced a different score conversion. The procedure used to resolve these inconsistencies…
Sound velocity in five-component air mixtures of various densities
NASA Astrophysics Data System (ADS)
Bogdanova, N. V.; Rydalevskaya, M. A.
2018-05-01
The local equilibrium flows of five-component air mixtures are considered. Gas dynamic equations are derived from the kinetic equations for aggregate values of collision invariants. It is shown that the traditional formula for sound velocity remains valid in the air mixtures considered, which include chemical reactions and internal degrees of freedom. This formula connects the square of the sound velocity with pressure and density. However, the adiabatic coefficient is not constant under the conditions considered. An analytical expression for this coefficient is obtained. Examples of its calculation in air mixtures of various densities are presented.
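The "traditional formula" referred to is presumably the usual relation between sound speed, pressure, and density, with the adiabatic coefficient (non-constant here) as the proportionality factor:

```latex
a^{2} = \gamma \, \frac{p}{\rho}
```

The paper's contribution is the analytical expression for the effective gamma when reactions and internal degrees of freedom are active.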
NASA Technical Reports Server (NTRS)
Gullbrand, Jessica
2003-01-01
In this paper, turbulence-closure models are evaluated using the 'true' LES approach in turbulent channel flow. The study is an extension of the work presented by Gullbrand (2001), where fourth-order commutative filter functions are applied in three dimensions in a fourth-order finite-difference code. The true LES solution is the grid-independent solution to the filtered governing equations. The solution is obtained by keeping the filter width constant while the computational grid is refined. As the grid is refined, the solution converges towards the true LES solution. The true LES solution will depend on the filter width used, but will be independent of the grid resolution. In traditional LES, because the filter is implicit and directly connected to the grid spacing, the solution converges towards a direct numerical simulation (DNS) as the grid is refined, and not towards the solution of the filtered Navier-Stokes equations. The effect of turbulence-closure models is therefore difficult to determine in traditional LES because, as the grid is refined, more turbulence length scales are resolved and less influence from the models is expected. In contrast, in the true LES formulation, the explicit filter eliminates all scales that are smaller than the filter cutoff, regardless of the grid resolution. This ensures that the resolved length-scales do not vary as the grid resolution is changed. In true LES, the cell size must be smaller than or equal to the cutoff length scale of the filter function. The turbulence-closure models investigated are the dynamic Smagorinsky model (DSM), the dynamic mixed model (DMM), and the dynamic reconstruction model (DRM). These turbulence models were previously studied using two-dimensional explicit filtering in turbulent channel flow by Gullbrand & Chow (2002). The DSM by Germano et al. (1991) is used as the USFS model in all the simulations. This enables evaluation of different reconstruction models for the RSFS stresses. 
The DMM consists of the scale-similarity model (SSM) by Bardina et al. (1983), which is an RSFS model, in linear combination with the DSM. In the DRM, the RSFS stresses are modeled by using an estimate of the unfiltered velocity in the unclosed term, while the USFS stresses are modeled by the DSM. The DSM and the DMM are two commonly used turbulence-closure models, while the DRM is a more recent model.
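For readers unfamiliar with the SSM mentioned above: in Bardina's scale-similarity model the subfilter stress is approximated from the resolved field itself, with overbars denoting the explicit filter. A standard statement of the model (generic, not this paper's specific formulation) is:

```latex
\tau_{ij} \;\approx\; \overline{\bar{u}_i \,\bar{u}_j} \;-\; \bar{\bar{u}}_i \,\bar{\bar{u}}_j
```

In the DMM this term is combined linearly with the dynamic Smagorinsky eddy-viscosity closure.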
Younes, Magdy; Kuna, Samuel T; Pack, Allan I; Walsh, James K; Kushida, Clete A; Staley, Bethany; Pien, Grace W
2018-02-15
The American Academy of Sleep Medicine has published manuals for scoring polysomnograms that recommend time spent in non-rapid eye movement sleep stages (stage N1, N2, and N3 sleep) be reported. Given the well-established large interrater variability in scoring stage N1 and N3 sleep, we determined the range of time in stage N1 and N3 sleep scored by a large number of technologists when compared to reasonably estimated true values. Polysomnograms of 70 females were scored by 10 highly trained sleep technologists, two each from five different academic sleep laboratories. Range and confidence interval (CI = difference between the 5th and 95th percentiles) of the 10 times spent in stage N1 and N3 sleep assigned in each polysomnogram were determined. Average values of times spent in stage N1 and N3 sleep generated by the 10 technologists in each polysomnogram were considered representative of the true values for the individual polysomnogram. Accuracy of different technologists in estimating delta wave duration was determined by comparing their scores to digitally determined durations. The CI range of the ten N1 scores was 4 to 39 percent of total sleep time (% TST) in different polysomnograms (mean CI ± standard deviation = 11.1 ± 7.1 % TST). Corresponding range for N3 was 1 to 28 % TST (14.4 ± 6.1 % TST). For stage N1 and N3 sleep, very low or very high values were reported for virtually all polysomnograms by different technologists. Technologists varied widely in their assignment of stage N3 sleep, scoring that stage when the digitally determined time of delta waves ranged from 3 to 17 seconds. Manual scoring of non-rapid eye movement sleep stages is highly unreliable among highly trained, experienced technologists. Measures of sleep continuity and depth that are reliable and clinically relevant should be a focus of clinical research. © 2018 American Academy of Sleep Medicine
DOE Office of Scientific and Technical Information (OSTI.GOV)
Noguera, Norman, E-mail: norman.noguera@ucr.ac.cr; Rózga, Krzysztof, E-mail: krzysztof.rozga@upr.edu
In this work, we provide a justification of the condition that is usually imposed on the parameters of the hypergeometric equation, related to the solutions of the stationary Schrödinger equation for the harmonic oscillator in two-dimensional constant-curvature spaces, in order to determine which solutions are square-integrable. We prove that in the case of negative curvature it is a necessary condition for square integrability, and in the case of positive curvature a necessary condition for regularity. The proof is based on the analytic continuation formulas for the hypergeometric function. It is also observed that the same holds for a slightly more general potential than that of the harmonic oscillator.
ERIC Educational Resources Information Center
Kim, Hyung Jin; Brennan, Robert L.; Lee, Won-Chan
2017-01-01
In equating, when common items are internal and scoring is conducted in terms of the number of correct items, some pairs of total scores ("X") and common-item scores ("V") can never be observed in a bivariate distribution of "X" and "V"; these pairs are called "structural zeros." This simulation…
Sources of Score Scale Inconsistency. Research Report. ETS RR-11-10
ERIC Educational Resources Information Center
Haberman, Shelby J.; Dorans, Neil J.
2011-01-01
For testing programs that administer multiple forms within a year and across years, score equating is used to ensure that scores can be used interchangeably. In an ideal world, samples sizes are large and representative of populations that hardly change over time, and very reliable alternate test forms are built with nearly identical psychometric…
Prenatal Sonographic Predictors of Neonatal Coarctation of the Aorta.
Anuwutnavin, Sanitra; Satou, Gary; Chang, Ruey-Kang; DeVore, Greggory R; Abuel, Ashley; Sklansky, Mark
2016-11-01
To identify practical prenatal sonographic markers for the postnatal diagnosis of coarctation of the aorta. We reviewed the fetal echocardiograms and postnatal outcomes of fetal cases of suspected coarctation of the aorta seen at a single institution between 2010 and 2014. True- and false-positive cases were compared. Logistic regression analysis was used to determine echocardiographic predictors of coarctation of the aorta. Optimal cutoffs for these markers and a multivariable threshold scoring system were derived to discriminate fetuses with coarctation of the aorta from those without coarctation of the aorta. Among 35 patients with prenatal suspicion of coarctation of the aorta, the diagnosis was confirmed postnatally in 9 neonates (25.7% true-positive rate). Significant predictors identified from multivariate analysis were as follows: Z score for the ascending aorta diameter of -2 or less (P < .001), Z score for the mitral valve annulus of -2 or less (P = .033), Z score for the transverse aortic arch diameter of -2 or less (P = .028), and abnormal aortic valve morphologic features (P = .026). Among all variables studied, the ascending aortic Z score had the highest sensitivity (78%) and specificity (92%) for detection of coarctation of the aorta. A multivariable threshold scoring system identified fetuses with coarctation of the aorta with still greater sensitivity (89%) and only mildly decreased specificity (88%). The finding of a diminutive ascending aorta represents a powerful and practical prenatal predictor of neonatal coarctation of the aorta. A multivariable scoring system, including dimensions of the ascending and transverse aortas, mitral valve annulus, and morphologic features of the aortic valve, provides excellent sensitivity and specificity. The use of these practical sonographic markers may improve prenatal detection of coarctation of the aorta. © 2016 by the American Institute of Ultrasound in Medicine.
Sainz de Baranda, Pilar; Rodríguez-Iniesta, María; Ayala, Francisco; Santonja, Fernando; Cejudo, Antonio
2014-07-01
To examine the criterion-related validity of the horizontal hip joint angle (H-HJA) test and vertical hip joint angle (V-HJA) test for estimating hamstring flexibility measured through the passive straight-leg raise (PSLR) test using contemporary statistical measures. Validity study. Controlled laboratory environment. One hundred thirty-eight professional trampoline gymnasts (61 women and 77 men). Hamstring flexibility. Each participant performed 2 trials of H-HJA, V-HJA, and PSLR tests in a randomized order. The criterion-related validity of H-HJA and V-HJA tests was measured through the estimation equation, typical error of the estimate (TEEST), validity correlation (β), and their respective confidence limits. The findings from this study suggest that although H-HJA and V-HJA tests showed moderate to high validity scores for estimating hamstring flexibility (standardized TEEST = 0.63; β = 0.80), the TEEST statistic reported for both tests was not narrow enough for clinical purposes (H-HJA = 10.3 degrees; V-HJA = 9.5 degrees). Subsequently, the predicted likely thresholds for the true values that were generated were too wide (H-HJA = predicted value ± 13.2 degrees; V-HJA = predicted value ± 12.2 degrees). The results suggest that although the HJA test showed moderate to high validity scores for estimating hamstring flexibility, the prediction intervals between the HJA and PSLR tests are not strong enough to suggest that clinicians and sport medicine practitioners should use the HJA and PSLR tests interchangeably as gold standard measurement tools to evaluate and detect short hamstring muscle flexibility.
Perryman, K R; Masey O'Neill, H V; Bedford, M R; Dozier, W A
2016-05-01
An experiment utilizing 960 Ross × Ross 708 male broilers was conducted to determine the effects of Ca feeding strategy on true ileal (prececal) P digestibility (TIPD) and true P retention (TPR) of corn. Experimental diets were formulated with 1 of 3 dietary Ca feeding strategies (0.95%, 0.13%, or variable Ca concentrations to maintain a 2.1:1 Ca:P ratio) and contained 0, 25, 50, or 75% corn. A practical corn-soybean meal diet (1.4:1 Ca:P ratio) was fed as a control. After receiving a common starter diet, experimental diets were fed from 19 to 26 d of age. After a 48-h dietary adaptation period, a 48-h retention assay was conducted. At 25 and 26 d of age, ileal digesta were collected from 8 birds per cage. Broilers consuming the control diet had higher (P<0.001) BW gain, feed intake, digesta P, and excreta P than broilers consuming the corn titration diets. Digesta and excreta P increased (linear, P<0.05) with graded increases of corn. True ileal P digestibility and TPR were highest (P<0.05) for diets with 0.13% Ca (57.3 and 69.5%, respectively) compared with diets formulated with a 2.1:1 Ca:P ratio (41.2 and 37.8%, respectively) or 0.95% Ca (25.4 and 39.0%, respectively). Values for TPR were higher (P<0.05) than those for TIPD except when the dietary Ca:P ratio was fixed. Additionally, negative endogenous P losses were predicted by regression equations when TPR was estimated for birds fed titration diets with the fixed Ca:P ratio. Changing the Ca concentration of the diets to maintain a fixed Ca:P ratio influenced (P<0.001) apparent P retention, which affected the estimate for TPR due to the prediction of negative endogenous P losses. These data demonstrated that regression analysis may have limitations when estimating the TIPD or TPR of corn when formulating diets with different Ca feeding strategies. More research is necessary to elucidate the factors that contributed to regression equations predicting negative endogenous P losses.
© 2016 Poultry Science Association Inc.
Soriano, Vincent V; Tesoro, Eljim P; Kane, Sean P
2017-08-01
The Winter-Tozer (WT) equation has been shown to reliably predict free phenytoin levels in healthy patients. In patients with end-stage renal disease (ESRD), phenytoin-albumin binding is altered and, thus, affects interpretation of total serum levels. Although an ESRD WT equation was historically proposed for this population, there is a lack of data evaluating its accuracy. The objective of this study was to determine the accuracy of the ESRD WT equation in predicting free serum phenytoin concentration in patients with ESRD on hemodialysis (HD). A retrospective analysis of adult patients with ESRD on HD and concurrent free and total phenytoin concentrations was conducted. Each patient's true free phenytoin concentration was compared with a calculated value using the ESRD WT equation and a revised version of the ESRD WT equation. A total of 21 patients were included for analysis. The ESRD WT equation produced a percentage error of 75% and a root mean square error of 1.76 µg/mL. Additionally, 67% of the samples had an error >50% when using the ESRD WT equation. A revised equation was found to have high predictive accuracy, with only 5% of the samples demonstrating >50% error. The ESRD WT equation was not accurate in predicting free phenytoin concentration in patients with ESRD on HD. A revised ESRD WT equation was found to be significantly more accurate. Given the small study sample, further studies are required to fully evaluate the clinical utility of the revised ESRD WT equation.
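The abstract does not reproduce the equations themselves. The commonly cited textbook forms are sketched below (albumin in g/dL); these are the classic coefficients, not the study's revised equation, so treat them as illustrative only:

```python
def winter_tozer(total_phenytoin, albumin):
    """Classic Winter-Tozer: normalize total phenytoin for hypoalbuminemia.

    Returns the total concentration expected if albumin binding were normal;
    free phenytoin is then commonly approximated as ~10% of this value.
    """
    return total_phenytoin / (0.2 * albumin + 0.1)

def winter_tozer_esrd(total_phenytoin, albumin):
    """ESRD variant: binding coefficient halved (0.2 -> 0.1) because uremia
    displaces phenytoin from albumin."""
    return total_phenytoin / (0.1 * albumin + 0.1)
```

For a dialysis patient with low albumin, the ESRD variant yields a substantially higher normalized concentration than the classic form, which is exactly the adjustment whose accuracy the study questions.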
Preda, Adrian; Nguyen, Dana D; Bustillo, Juan R; Belger, Aysenil; O'Leary, Daniel S; McEwen, Sarah; Ling, Shichun; Faziola, Lawrence; Mathalon, Daniel H; Ford, Judith M; Potkin, Steven G; van Erp, Theo G M
2018-06-20
To provide quantitative conversions between commonly used scales for the assessment of negative symptoms in schizophrenia. Linear regression analyses generated conversion equations between symptom scores from the Scale for the Assessment of Negative Symptoms (SANS), the Schedule for the Deficit Syndrome (SDS), the Positive and Negative Syndrome Scale (PANSS), or the Negative Symptoms Assessment (NSA) based on a cross sectional sample of 176 individuals with schizophrenia. Intraclass correlations assessed the rating conversion accuracy based on a separate sub-sample of 29 patients who took part in the initial study as well as an independent sample of 28 additional subjects with schizophrenia. Between-scale negative symptom ratings were moderately to highly correlated (r = 0.73-0.91). Intraclass correlations between the original negative symptom rating scores and those obtained via using the conversion equations were in the range of 0.61-0.79. While there is a degree of non-overlap, several negative symptoms scores reflect measures of similar constructs and may be reliably converted between some scales. The conversion equations are provided at http://www.converteasy.org and may be used for meta- and mega-analyses that examine negative symptoms. Copyright © 2018 Elsevier B.V. All rights reserved.
Napoli, Anthony M
2014-04-01
Cardiology consensus guidelines recommend use of the Diamond and Forrester (D&F) score to augment the decision to pursue stress testing. However, recent work has reported no association between pretest probability of coronary artery disease (CAD) as measured by D&F and physician discretion in stress test utilization for inpatients. The author hypothesized that D&F pretest probability would predict the likelihood of acute coronary syndrome (ACS) and a positive stress test and that there would be limited yield to diagnostic testing of patients categorized as low pretest probability by D&F score who are admitted to a chest pain observation unit (CPU). This was a prospective observational cohort study of consecutively admitted CPU patients in a large-volume academic urban emergency department (ED). Cardiologists rounded on all patients and stress test utilization was driven by their recommendations. Inclusion criteria were as follows: age>18 years, American Heart Association (AHA) low/intermediate risk, nondynamic electrocardiograms (ECGs), and normal initial troponin I. Exclusion criteria were as follows: age older than 75 years with a history of CAD. A D&F score for likelihood of CAD was calculated on each patient independent of patient care. Based on the D&F score, patients were assigned a priori to low-, intermediate-, and high-risk groups (<10, 10 to 90, and >90%, respectively). ACS was defined by ischemia on stress test, coronary artery occlusion of ≥70% in at least one vessel, or elevations in troponin I consistent with consensus guidelines. A true-positive stress test was defined by evidence of reversible ischemia and subsequent angiographic evidence of critical stenosis or a discharge diagnosis of ACS. An estimated 3,500 patients would be necessary to have 1% precision around a potential 0.3% event rate in low-pretest-probability patients. Categorical comparisons were made using Pearson chi-square testing. 
A total of 3,552 patients with index visits were enrolled over a 29-month period. The mean (±standard deviation [SD]) age was 51.3 (±9.3) years. Forty-nine percent of patients received stress testing. Pretest probability based on D&F score was associated with stress test utilization (p<0.01), risk of ACS (p<0.01), and true-positive stress tests (p=0.03). No patients with low pretest probability were subsequently diagnosed with ACS (95% CI=0 to 0.66%) or had a true-positive stress test (95% CI=0 to 1.6%). Physician discretionary decision-making regarding stress test use is associated with pretest probability of CAD. However, based on the D&F score, low-pretest-probability patients who meet CPU admission criteria are very unlikely to have a true-positive stress test or eventually receive a diagnosis of ACS, such that observation and stress test utilization may be obviated. © 2014 by the Society for Academic Emergency Medicine.
Hsieh, Jui-Hua; Yin, Shuangye; Wang, Xiang S; Liu, Shubin; Dokholyan, Nikolay V; Tropsha, Alexander
2012-01-23
Poor performance of scoring functions is a well-known bottleneck in structure-based virtual screening (VS), which is most frequently manifested in the scoring functions' inability to discriminate between true ligands and known nonbinders (therefore designated as binding decoys). This deficiency leads to a large number of false positive hits resulting from VS. We have hypothesized that filtering out or penalizing docking poses recognized as non-native (i.e., pose decoys) should improve the performance of VS in terms of improved identification of true binders. Using several concepts from the field of cheminformatics, we have developed a novel approach to identifying pose decoys from an ensemble of poses generated by computational docking procedures. We demonstrate that the use of a target-specific pose (scoring) filter in combination with a physical force field-based scoring function (MedusaScore) leads to significant improvement of hit rates in VS studies for 12 of the 13 benchmark sets from the clustered version of the Database of Useful Decoys (DUD). This new hybrid scoring function outperforms several conventional structure-based scoring functions, including XSCORE::HMSCORE, ChemScore, PLP, and Chemgauss3, in 6 out of 13 data sets at the early stage of VS (up to 1% of the screening database). We compare our hybrid method with several novel VS methods that were recently reported to perform well on the same DUD data sets. We find that the retrieved ligands using our method are chemically more diverse in comparison with two ligand-based methods (FieldScreen and FLAP::LBX). We also compare our method with FLAP::RBLB, a high-performance VS method that also utilizes both the receptor and the cognate ligand structures. Interestingly, we find that the top ligands retrieved using our method are highly complementary to those retrieved using FLAP::RBLB, hinting at effective directions for best VS applications.
We suggest that this integrative VS approach combining cheminformatics and molecular mechanics methodologies may be applied to a broad variety of protein targets to improve the outcome of structure-based drug discovery studies.
40 CFR 88.204-94 - Sales requirements for the California Pilot Test Program.
Code of Federal Regulations, 2014 CFR
2014-07-01
... equation rounded to the nearest whole number: ER03JA96.003 Where: RMS = a manufacturer's required sales in... Sales requirements for the California Pilot Test Program. (a) The total annual required minimum sales...
Upscaling from particle models to entropic gradient flows
NASA Astrophysics Data System (ADS)
Dirr, Nicolas; Laschos, Vaios; Zimmer, Johannes
2012-06-01
We prove that, for the case of Gaussians on the real line, the functional derived by a time discretization of the diffusion equation as entropic gradient flow is asymptotically equivalent to the rate functional derived from the underlying microscopic process. This result strengthens the conjecture that the same statement is true for all measures with finite second moment.
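The time discretization referred to is the Jordan-Kinderlehrer-Otto (JKO) scheme, in which each step minimizes the entropy penalized by the squared Wasserstein distance to the previous state:

```latex
\rho_{k+1} \in \operatorname*{arg\,min}_{\rho}
\left\{ \frac{1}{2\tau} W_2^2(\rho, \rho_k) + \int \rho \log \rho \, dx \right\}
```

As the step size tau tends to zero, the minimizing movement recovers the diffusion (heat) equation, which is what makes the comparison with the microscopic rate functional meaningful.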
ERIC Educational Resources Information Center
Teston, George
2008-01-01
When asked about individual perceptions of "technology," 68% of Americans primarily equate the term to the computer. Although this perception underrepresents the true breadth of the field, the statistic does speak to the ubiquitous role the computer plays across many technology disciplines. Software has become the building block of all major…
Hirsch index and truth survival in clinical research.
Poynard, Thierry; Thabut, Dominique; Munteanu, Mona; Ratziu, Vlad; Benhamou, Yves; Deckmyn, Olivier
2010-08-06
Factors associated with the survival of truth of clinical conclusions in the medical literature are unknown. We hypothesized that publications whose first author has a higher Hirsch index value (h-I), which quantifies and predicts an individual's scientific research output, should have a longer half-life. 474 original articles concerning cirrhosis or hepatitis published from 1945 to 1999 were selected. The survival of the main conclusions was updated in 2009. Truth survival was assessed by time-dependent methods (Kaplan-Meier method and Cox). A conclusion was considered true, obsolete or false when three or more observers out of six stated it to be so. 284 of the 474 conclusions (60%) were still considered true, 90 (19%) were considered obsolete and 100 (21%) false. The median h-I was 24 (range 1-85). Authors with true conclusions had a significantly higher h-I (median=28) than those with obsolete (h-I=19; P=0.002) or false conclusions (h-I=19; P=0.01). The factors associated (P<0.0001) with h-I were: scientific life (h-I=33 for >30 years vs. 16 for <30 years), methodological quality score (h-I=36 for high vs. 20 for low scores), and positive predictive value combining power, the ratio of true to not-true relationships, and bias (h-I=33 for high vs. 20 for low values). In multivariate analysis, the risk ratio of h-I was 1.003 (95%CI, 0.994-1.011) and was not significant (P=0.56). In a subgroup restricted to 111 articles with a negative conclusion, we observed a significant independent prognostic value of h-I (risk ratio=1.033; 95%CI, 1.008-1.059; P=0.009). Using an extrapolation of h-I to the time of article publication, there was a significant and independent prognostic value of baseline h-I (risk ratio=0.027; P=0.0001). The present study failed to clearly demonstrate that the h-index of authors is a prognostic factor for truth survival.
However, the h-index was associated with true conclusions, the methodological quality of trials, and positive predictive values.
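The h-index used as the predictor here has a simple operational definition: the largest h such that the author has h papers with at least h citations each. A minimal sketch:

```python
def h_index(citations):
    """Largest h such that h papers each have at least h citations."""
    h = 0
    for rank, cites in enumerate(sorted(citations, reverse=True), start=1):
        if cites >= rank:
            h = rank  # this paper still clears the threshold
        else:
            break  # sorted descending, so no later paper can
    return h
```

For example, an author with papers cited 10, 8, 5, 4, and 3 times has h = 4: four papers with at least four citations, but not five with at least five.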
Undergraduate paramedic students cannot do drug calculations.
Eastwood, Kathryn; Boyle, Malcolm J; Williams, Brett
2012-01-01
Previous investigation of drug calculation skills of qualified paramedics has highlighted poor mathematical ability, with no published studies having been undertaken on undergraduate paramedics. There are three major error classifications. Conceptual errors involve an inability to formulate an equation from information given, arithmetical errors involve an inability to operate a given equation, and finally computation errors are simple errors of addition, subtraction, division and multiplication. The objective of this study was to determine if undergraduate paramedics at a large Australian university could accurately perform common drug calculations and basic mathematical equations normally required in the workplace. A cross-sectional study methodology using a paper-based questionnaire was administered to undergraduate paramedic students to collect demographic data, student attitudes regarding their drug calculation performance, and answers to a series of basic mathematical and drug calculation questions. Ethics approval was granted. The mean score of correct answers was 39.5%, with one student scoring 100%, 3.3% of students (n=3) scoring greater than 90%, and 63% (n=58) scoring 50% or less, despite 62% (n=57) of the students stating they 'did not have any drug calculations issues'. On average, those who completed a minimum of year 12 Specialist Maths achieved scores over 50%. Conceptual errors made up 48.5%, arithmetical 31.1% and computational 17.4%. This study suggests undergraduate paramedics have deficiencies in performing accurate calculations, with conceptual errors indicating a fundamental lack of mathematical understanding. The results suggest an unacceptable level of mathematical competence to practice safely in the unpredictable prehospital environment.
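As an illustration of the "conceptual" step the study found most error-prone — setting up the equation before any arithmetic — here is a minimal sketch of a standard volume-to-administer calculation (all values invented for illustration):

```python
def volume_to_administer(desired_dose_mg, stock_dose_mg, stock_volume_ml):
    """Volume (mL) containing the desired dose.

    Conceptual step: volume = desired dose / stock concentration,
    where concentration = stock dose / stock volume.
    """
    concentration_mg_per_ml = stock_dose_mg / stock_volume_ml
    return desired_dose_mg / concentration_mg_per_ml
```

For instance, drawing a 50 mg dose from an ampoule labelled 100 mg in 2 mL requires 1 mL; the conceptual error class corresponds to writing this ratio down wrongly, regardless of arithmetic skill.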
Conclusion of LOD-score analysis for family data generated under two-locus models.
Dizier, M. H.; Babron, M. C.; Clerget-Darpoux, F.
1996-01-01
The power to detect linkage by the LOD-score method is investigated here for diseases that depend on the effects of two genes. The classical strategy is, first, to detect a major-gene (MG) effect by segregation analysis and, second, to search for linkage with genetic markers by the LOD-score method using the MG parameters. We already showed that segregation analysis can lead to evidence for an MG effect for many two-locus models, with the estimates of the MG parameters being very different from those of the two genes involved in the disease. We show here that use of these MG parameter estimates in the LOD-score analysis may lead to a failure to detect linkage for some two-locus models. For these models, use of the sib-pair method gives a non-negligible increase of power to detect linkage. The linkage-homogeneity test among subsamples differing in the familial disease distribution provides evidence of parameter misspecification when the MG parameters are used. Moreover, for most of the models, use of the MG parameters in LOD-score analysis leads to a large bias in estimation of the recombination fraction and sometimes also to a rejection of linkage for the true recombination fraction. A final important point is that strong evidence of an MG effect, obtained by segregation analysis, does not necessarily imply that linkage will be detected for at least one of the two genes, even with the true parameters and with a close informative marker. PMID:8651311
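For reference, the LOD score compared across these strategies is the base-10 log-likelihood ratio for linkage at recombination fraction theta against free recombination (theta = 1/2):

```latex
\mathrm{LOD}(\theta) = \log_{10} \frac{L(\theta)}{L\!\left(\tfrac{1}{2}\right)}
```

Misspecified MG parameters distort the likelihood L(theta), which is how they bias the estimated recombination fraction and can mask true linkage.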
He, Hua; McDermott, Michael P.
2012-01-01
Sensitivity and specificity are common measures of the accuracy of a diagnostic test. The usual estimators of these quantities are unbiased if data on the diagnostic test result and the true disease status are obtained from all subjects in an appropriately selected sample. In some studies, verification of the true disease status is performed only for a subset of subjects, possibly depending on the result of the diagnostic test and other characteristics of the subjects. Estimators of sensitivity and specificity based on this subset of subjects are typically biased; this is known as verification bias. Methods have been proposed to correct verification bias under the assumption that the missing data on disease status are missing at random (MAR), that is, the probability of missingness depends on the true (missing) disease status only through the test result and observed covariate information. When some of the covariates are continuous, or the number of covariates is relatively large, the existing methods require parametric models for the probability of disease or the probability of verification (given the test result and covariates), and hence are subject to model misspecification. We propose a new method for correcting verification bias based on the propensity score, defined as the predicted probability of verification given the test result and observed covariates. This is estimated separately for those with positive and negative test results. The new method classifies the verified sample into several subsamples that have homogeneous propensity scores and allows correction for verification bias. Simulation studies demonstrate that the new estimators are more robust to model misspecification than existing methods, but still perform well when the models for the probability of disease and probability of verification are correctly specified. PMID:21856650
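Under MAR, verification bias can be corrected by weighting each verified subject by the inverse of its verification probability (the propensity score the authors stratify on). The sketch below shows an inverse-probability-weighted estimate of sensitivity — the general idea, not the authors' stratified estimator, and with the verification probabilities assumed known rather than estimated:

```python
def ipw_sensitivity(records):
    """Inverse-probability-weighted sensitivity.

    records: iterable of (test_positive, verified, diseased, p_verify) tuples;
    `diseased` is only meaningful when `verified` is True.
    """
    num = den = 0.0
    for test_pos, verified, diseased, p_verify in records:
        if not verified:
            continue  # disease status unobserved; handled via the weights
        w = 1.0 / p_verify  # verified subjects stand in for similar unverified ones
        if diseased:
            den += w
            if test_pos:
                num += w
    return num / den
```

When everyone is verified (all weights 1) this reduces to the usual naive sensitivity; differential verification of test-positives is what the weights undo.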
Avoiding and Correcting Bias in Score-Based Latent Variable Regression with Discrete Manifest Items
ERIC Educational Resources Information Center
Lu, Irene R. R.; Thomas, D. Roland
2008-01-01
This article considers models involving a single structural equation with latent explanatory and/or latent dependent variables where discrete items are used to measure the latent variables. Our primary focus is the use of scores as proxies for the latent variables and carrying out ordinary least squares (OLS) regression on such scores to estimate…
Aligning Scales of Certification Tests. Research Report. ETS RR-10-07
ERIC Educational Resources Information Center
Dorans, Neil J.; Liang, Longjuan; Puhan, Gautam
2010-01-01
Scores are the most visible and widely used products of a testing program. The choice of score scale has implications for test specifications, equating, and test reliability and validity, as well as for test interpretation. At the same time, the score scale should be viewed as infrastructure likely to require repair at some point. In this report…
GFR Estimation: From Physiology to Public Health
Levey, Andrew S.; Inker, Lesley A.; Coresh, Josef
2014-01-01
Estimating glomerular filtration rate (GFR) is essential for clinical practice, research, and public health. Appropriate interpretation of estimated GFR (eGFR) requires understanding the principles of physiology, laboratory medicine, epidemiology and biostatistics used in the development and validation of GFR estimating equations. Equations developed in diverse populations are less biased at higher GFR than equations developed in CKD populations and are more appropriate for general use. Equations that include multiple endogenous filtration markers are more precise than equations including a single filtration marker. The Chronic Kidney Disease Epidemiology Collaboration (CKD-EPI) equations are the most accurate GFR estimating equations that have been evaluated in large, diverse populations and are applicable for general clinical use. The 2009 CKD-EPI creatinine equation is more accurate in estimating GFR and prognosis than the 2006 Modification of Diet in Renal Disease (MDRD) Study equation and provides lower estimates of prevalence of decreased eGFR. It is useful as a “first” test for decreased eGFR and should replace the MDRD Study equation for routine reporting of serum creatinine–based eGFR by clinical laboratories. The 2012 CKD-EPI cystatin C equation is as accurate as the 2009 CKD-EPI creatinine equation in estimating eGFR, does not require specification of race, and may be more accurate in patients with decreased muscle mass. The 2012 CKD-EPI creatinine–cystatin C equation is more accurate than the 2009 CKD-EPI creatinine and 2012 CKD-EPI cystatin C equations and is useful as a confirmatory test for decreased eGFR as determined by an equation based on serum creatinine. Further improvement in GFR estimating equations will require development in more broadly representative populations, including diverse racial and ethnic groups, use of multiple filtration markers, and evaluation using statistical techniques to compare eGFR to “true GFR”. PMID:24485147
Four Bootstrap Confidence Intervals for the Binomial-Error Model.
ERIC Educational Resources Information Center
Lin, Miao-Hsiang; Hsiung, Chao A.
1992-01-01
Four bootstrap methods are identified for constructing confidence intervals for the binomial-error model. The extent to which the methods yield similar results is discussed, along with the theoretical foundation of each method and its relevance and range for modeling true score uncertainty. (SLD)
Noninvasive Uterine Electromyography For Prediction of Preterm Delivery*
LUCOVNIK, Miha; MANER, William L.; CHAMBLISS, Linda R.; BLUMRICK, Richard; BALDUCCI, James; NOVAK-ANTOLIC, Ziva; GARFIELD, Robert E.
2011-01-01
Objective Power spectrum (PS) of uterine electromyography (EMG) can identify true labor. EMG propagation velocity (PV) to diagnose labor has not been reported. The objective was to compare uterine EMG against current methods to predict preterm delivery. Study design EMG was recorded in 116 patients (preterm labor, n=20; preterm non-labor, n=68; term labor, n=22; term non-labor, n=6). Student’s t-test was used to compare EMG values for labor vs. non-labor (P<0.05 significant). Predictive values of EMG, Bishop score, contractions on tocogram, and transvaginal cervical length were calculated using receiver-operating-characteristic analysis. Results PV was higher in preterm and term labor compared with non-labor (P<0.001). Combined PV and PS peak frequency predicted preterm delivery within 7 days with area-under-the-curve (AUC) = 0.96. Bishop score, contractions, and cervical length had AUCs of 0.72, 0.67, and 0.54, respectively. Conclusions Uterine EMG PV and PS peak frequency identify true preterm labor more accurately than clinical methods. PMID:21145033
An Entropy-Based Approach to Nonlinear Stability
NASA Technical Reports Server (NTRS)
Merriam, Marshal L.
1989-01-01
Many numerical methods used in computational fluid dynamics (CFD) incorporate an artificial dissipation term to suppress spurious oscillations and control nonlinear instabilities. The same effect can be accomplished by using upwind techniques, sometimes augmented with limiters to form Total Variation Diminishing (TVD) schemes. An analysis based on numerical satisfaction of the second law of thermodynamics allows many such methods to be compared and improved upon. A nonlinear stability proof is given for discrete scalar equations arising from a conservation law. Solutions to such equations are bounded in the L2 norm if the second law of thermodynamics is satisfied in a global sense over a periodic domain. It is conjectured that an analogous statement is true for discrete equations arising from systems of conservation laws. Analysis and numerical experiments suggest that a more restrictive condition, a positive entropy production rate in each cell, is sufficient to exclude unphysical phenomena such as oscillations and expansion shocks. Construction of schemes which satisfy this condition is demonstrated for linear and nonlinear wave equations and for the one-dimensional Euler equations.
Till, Andrew T.; Warsa, James S.; Morel, Jim E.
2018-06-15
The thermal radiative transfer (TRT) equations comprise a radiation equation coupled to the material internal energy equation. Linearization of these equations produces effective, thermally-redistributed scattering through absorption-reemission. In this paper, we investigate the effectiveness and efficiency of Linear-Multi-Frequency-Grey (LMFG) acceleration that has been reformulated for use as a preconditioner to Krylov iterative solution methods. We introduce two general frameworks, the scalar flux formulation (SFF) and the absorption rate formulation (ARF), and investigate their iterative properties in the absence and presence of true scattering. SFF has a group-dependent state size but may be formulated without inner iterations in the presence of scattering, while ARF has a group-independent state size but requires inner iterations when scattering is present. We compare and evaluate the computational cost and efficiency of LMFG applied to these two formulations using a direct solver for the preconditioners. Finally, this work is novel because the use of LMFG for the radiation transport equation, in conjunction with Krylov methods, involves special considerations not required for radiation diffusion.
Oblate-Earth Effects on the Calculation of Ec During Spacecraft Reentry
NASA Technical Reports Server (NTRS)
Bacon, John B.; Matney, Mark J.
2017-01-01
The bulge in the Earth at its equator has been shown to lead to a clustering of natural decays biased to occur towards the equator and away from the orbit's extreme latitudes. Such clustering must be considered when predicting the Expectation of Casualty (Ec) during a natural decay because of the clustering of the human population in the same lower latitudes. This study expands upon prior work, and formalizes the correction that must be made to the calculation of the average exposed population density as a result of this effect. Although a generic equation can be derived from this work to approximate the effects of gravitational and atmospheric perturbations on a final decay, such an equation averages certain important subtleties in achieving a best fit over all conditions. The authors recommend that direct simulation be used to calculate the true Ec for any specific entry as a more accurate method. A generic equation is provided, represented as a function of ballistic number and inclination of the entering spacecraft over the credible range of ballistic numbers.
A fast and well-conditioned spectral method for singular integral equations
NASA Astrophysics Data System (ADS)
Slevinsky, Richard Mikael; Olver, Sheehan
2017-03-01
We develop a spectral method for solving univariate singular integral equations over unions of intervals by utilizing Chebyshev and ultraspherical polynomials to reformulate the equations as almost-banded infinite-dimensional systems. This is accomplished by utilizing low rank approximations for sparse representations of the bivariate kernels. The resulting system can be solved in O(m²n) operations using an adaptive QR factorization, where m is the bandwidth and n is the optimal number of unknowns needed to resolve the true solution. The complexity is reduced to O(mn) operations by pre-caching the QR factorization when the same operator is used for multiple right-hand sides. Stability is proved by showing that the resulting linear operator can be diagonally preconditioned to be a compact perturbation of the identity. Applications considered include the Faraday cage, and acoustic scattering for the Helmholtz and gravity Helmholtz equations, including spectrally accurate numerical evaluation of the far- and near-field solution. The JULIA software package SingularIntegralEquations.jl implements our method with a convenient, user-friendly interface.
NASA Astrophysics Data System (ADS)
Albert, Julian; Hader, Kilian; Engel, Volker
2017-12-01
It is commonly assumed that the time-dependent electron flux calculated within the Born-Oppenheimer (BO) approximation vanishes. This is not necessarily true if the flux is directly determined from the continuity equation obeyed by the electron density. This finding is illustrated for a one-dimensional model of coupled electronic-nuclear dynamics. There, the BO flux is in perfect agreement with the one calculated from a solution of the time-dependent Schrödinger equation for the coupled motion. A reflection principle is derived where the nuclear BO flux is mapped onto the electronic flux.
Hromadka, T.V.; Guymon, G.L.
1985-01-01
An algorithm is presented for the numerical solution of the Laplace equation boundary-value problem, which is assumed to apply to soil freezing or thawing. The Laplace equation is numerically approximated by the complex-variable boundary-element method. The algorithm aids in reducing integrated relative error by providing a true measure of modeling error along the solution domain boundary. This measure of error can be used to select locations for adding, removing, or relocating nodal points on the boundary or to provide bounds for the integrated relative error of unknown nodal variable values along the boundary.
NASA Astrophysics Data System (ADS)
Li, Jianping; Xia, Xiangsheng
2015-09-01
In order to improve the understanding of the hot deformation and dynamic recrystallization (DRX) behaviors of large-scaled AZ80 magnesium alloy fabricated by semi-continuous casting, compression tests were carried out in the temperature range from 250 to 400 °C and the strain rate range from 0.001 to 0.1 s-1 on a Gleeble 1500 thermo-mechanical machine. The effects of temperature and strain rate on the hot deformation behavior are expressed by means of the conventional hyperbolic sine equation, and the influence of strain is incorporated by considering its effect on the different material constants of the equation. In addition, the DRX behavior is discussed. The results show that the deformation temperature and strain rate exert remarkable influences on the flow stress. A constitutive equation for hot deformation of large-scaled AZ80 magnesium alloy at the steady-state stage (ɛ = 0.5) was established, and the true stress-true strain curves predicted by the extracted model were in good agreement with the experimental results, confirming the validity of the developed constitutive relation. The DRX kinetic model of large-scaled AZ80 magnesium alloy was established as X_d = 1 - exp[-0.95((ɛ - ɛc)/ɛ*)^2.4904]. The rate of DRX increases with increasing deformation temperature, and high temperature is beneficial for achieving complete DRX in the large-scaled AZ80 magnesium alloy.
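The DRX kinetic model quoted in this abstract is an Avrami-type expression and can be evaluated directly. A minimal sketch, assuming only the constants stated above (0.95 and 2.4904); the critical strain eps_c and characteristic strain eps_star values used in the test are illustrative, not taken from the paper:

```python
import math

def drx_fraction(strain, eps_c, eps_star):
    """DRX volume fraction per the abstract's kinetic model:
    X_d = 1 - exp[-0.95 * ((eps - eps_c) / eps*)^2.4904].
    Returns 0 before the critical strain eps_c is reached."""
    if strain <= eps_c:
        return 0.0
    return 1.0 - math.exp(-0.95 * ((strain - eps_c) / eps_star) ** 2.4904)
```

At strain = eps_c + eps_star the normalized term equals 1, so X_d = 1 - exp(-0.95) ≈ 0.613, and X_d rises monotonically toward 1 with further strain.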
Chen, Cheng-Yi; Pan, Chi-Feng; Wu, Chih-Jen; Chen, Han-Hsiang; Chen, Yu-Wei
2014-07-01
The prognosis of critically ill patients with cirrhosis is poor. Our aim was to identify an objective variable that can improve the prognostic value of the Model for End-Stage Liver Disease (MELD) score in patients who have cirrhosis and are admitted to the intensive care unit (ICU). This retrospective cohort study included 177 patients who had liver cirrhosis and were admitted to the ICU. Data pertaining to arterial blood gas-related parameters and other variables were obtained on the day of ICU admission. The overall ICU mortality rate was 36.2%. The bicarbonate (HCO3) level was found to be an independent predictor of ICU mortality (odds ratio, 2.3; 95% confidence interval [CI], 1.0-4.8; p = 0.038). A new equation (MELD-Bicarbonate) was constructed by replacing total bilirubin with HCO3 in the original MELD score. The area under the receiver operating characteristic curve for predicting ICU mortality was 0.76 (95% CI, 0.69-0.84) for the MELD-Bicarbonate equation, 0.73 (95% CI, 0.65-0.81) for the MELD score, and 0.71 (95% CI, 0.63-0.80) for the Acute Physiology and Chronic Health Evaluation II score. Bicarbonate level assessment, as an objective and reproducible laboratory test, has significant predictive value in critically ill patients with cirrhosis. In contrast, the predictive value of total bilirubin is not as prominent in this setting. The MELD-Bicarbonate equation, which includes three variables (international normalized ratio, creatinine level, and HCO3 level), showed better prognostic value than the original MELD score in critically ill patients with cirrhosis.
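For orientation, the original MELD score that the abstract modifies can be sketched as follows. The coefficients are the widely published UNOS formula, not stated in this abstract; the MELD-Bicarbonate coefficients are not reported here either, so only the baseline score is shown:

```python
import math

def meld_score(bilirubin_mg_dl, inr, creatinine_mg_dl):
    """Standard MELD score (UNOS coefficients, assumed here, not taken
    from the abstract). Inputs below 1.0 are floored at 1.0, as in
    common practice, so the logarithms stay non-negative."""
    bili = max(bilirubin_mg_dl, 1.0)
    inr = max(inr, 1.0)
    cr = max(creatinine_mg_dl, 1.0)
    return (3.78 * math.log(bili)
            + 11.2 * math.log(inr)
            + 9.57 * math.log(cr)
            + 6.43)
```

The abstract's MELD-Bicarbonate variant swaps the bilirubin term for an HCO3 term; since its fitted coefficients are not given above, that substitution is left unimplemented rather than guessed at.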
An examination of the rheology of flocculated clay suspensions
NASA Astrophysics Data System (ADS)
Spearman, Jeremy
2017-04-01
A dense cohesive sediment suspension, sometimes referred to as fluid mud, is a thixotropic fluid with a true yield stress. Current rheological formulations struggle to reconcile the structural dynamics of cohesive sediment suspensions with the equilibrium behaviour of these suspensions across the range of concentrations and shear. This paper is concerned with establishing a rheological framework for the range of sediment concentrations from the yield point to Newtonian flow. The shear stress equation is based on floc fractal theory, put forward by Mills and Snabre (1988). This results in a Casson-like rheology equation. Additional structural dynamics is then added, using a theory on the self-similarity of clay suspensions proposed by Coussot (1995), giving an equation which has the ability to match the equilibrium and time-dependent viscous rheology of a wide range of suspensions of different concentration and mineralogy.
Villodre, Celia; Rebasa, Pere; Estrada, José Luís; Zaragoza, Carmen; Zapater, Pedro; Mena, Luís; Lluís, Félix
2016-11-01
In a previous study, we found that Physiological and Operative Severity Score for the enUmeration of Mortality and Morbidity (POSSUM) overpredicts morbidity risk in emergency gastrointestinal surgery. Our aim was to find a POSSUM equation adjustment. A prospective observational study was performed on 2,361 patients presenting with a community-acquired gastrointestinal surgical emergency. The first 1,000 surgeries constituted the development cohort, the second 1,000 events were the first validation intramural cohort, and the remaining 361 cases belonged to a second validation extramural cohort. (1) A modified POSSUM equation was obtained. (2) Logistic regression was used to yield a statistically significant equation that included age, hemoglobin, white cell count, sodium and operative severity. (3) A chi-square automatic interaction detector decision tree analysis yielded a statistically significant equation with 4 variables, namely cardiac failure, sodium, operative severity, and peritoneal soiling. A modified POSSUM equation and a simplified scoring system (aLicante sUrgical Community Emergencies New Tool for the enUmeration of Morbidities [LUCENTUM]) are described. Both tools significantly improve prediction of surgical morbidity in community-acquired gastrointestinal surgical emergencies. Copyright © 2016 Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Enghauser, Michael
2016-02-01
The goal of the Domestic Nuclear Detection Office (DNDO) Algorithm Improvement Program (AIP) is to facilitate gamma-radiation detector nuclide identification algorithm development, improvement, and validation. Accordingly, scoring criteria have been developed to objectively assess the performance of nuclide identification algorithms. In addition, a Microsoft Excel spreadsheet application for automated nuclide identification scoring has been developed. This report provides an overview of the equations, nuclide weighting factors, nuclide equivalencies, and configuration weighting factors used by the application for scoring nuclide identification algorithm performance. Furthermore, this report presents a general overview of the nuclide identification algorithm scoring application including illustrative examples.
Ethnic differences in the Goodenough-Harris draw-a-man and draw-a-woman tests.
Dugdale, A E; Chen, S T
1979-11-01
The draw-a-man (DAM) and draw-a-woman (DAW) tests were given to 307 schoolchildren in Petaling Jaya, Malaysia. The children were ethnically Malay, Chinese, or Indian (Tamil), and all came from lower socioeconomic groups. The standard scores of the Chinese children averaged 118 in the DAM and 112 in the DAW tests. These scores were significantly better than the American standards. Malay children scored significantly lower than Chinese, and Tamil children scored lower again. The nutritional status of the children had no influence on the scores. Chinese and Tamil children scored better in the DAM than the DAW, while in Malay boys the reverse was true. Malay children tended to emphasise clothing in the DAM, but Chinese and Tamil children scored better on items relating to facial features and body proportions. The Goodenough-Harris draw-a-person tests are obviously not culture-free, but the causes of ethnic differences have not been elucidated.
NASA Astrophysics Data System (ADS)
Small, Michael
2015-12-01
Mean field compartmental models of disease transmission have been successfully applied to a host of different scenarios, and the Kermack-McKendrick equations are now a staple of mathematical biology text books. In Susceptible-Infected-Removed format these equations provide three coupled first order ordinary differential equations with a very mild nonlinearity and they are very well understood. However, underpinning these equations are two important assumptions: that the population is (a) homogeneous, and (b) well-mixed. These assumptions become closest to being true for diseases infecting a large portion of the population for which inevitable individual effects can be averaged away. Emerging infectious disease (such as, in recent times, SARS, avian influenza, swine flu and ebola) typically does not conform to this scenario. Individual contacts and peculiarities of the transmission network play a vital role in understanding the dynamics of such relatively rare infections - particularly during the early stages of an outbreak.
Maintaining Equivalent Cut Scores for Small Sample Test Forms
ERIC Educational Resources Information Center
Dwyer, Andrew C.
2016-01-01
This study examines the effectiveness of three approaches for maintaining equivalent performance standards across test forms with small samples: (1) common-item equating, (2) resetting the standard, and (3) rescaling the standard. Rescaling the standard (i.e., applying common-item equating methodology to standard setting ratings to account for…
Comparison of total body water estimates from O-18 and bioelectrical response prediction equations
NASA Technical Reports Server (NTRS)
Barrows, Linda H.; Inners, L. Daniel; Stricklin, Marcella D.; Klein, Peter D.; Wong, William W.; Siconolfi, Steven F.
1993-01-01
Identification of an indirect, rapid means to measure total body water (TBW) during space flight may aid in quantifying hydration status and assist in countermeasure development. Bioelectrical response testing and hydrostatic weighing were performed on 27 subjects who ingested O-18, a naturally occurring isotope of oxygen, to measure true TBW. TBW estimates from three bioelectrical response prediction equations and fat-free mass (FFM) were compared to TBW measured from O-18. A repeated measures MANOVA with post-hoc Dunnett's Test indicated a significant (p less than 0.05) difference between TBW estimates from two of the three bioelectrical response prediction equations and O-18. TBW estimates from FFM and the Kushner & Schoeller (1986) equation yielded results that were similar to those given by O-18. Strong correlations existed between each prediction method and O-18; however, standard errors, identified through regression analyses, were higher for the bioelectrical response prediction equations compared to those derived from FFM. These findings suggest (1) the Kushner & Schoeller (1986) equation may provide a valid measure of TBW, (2) other TBW prediction equations need to be identified that have variability similar to that of FFM, and (3) bioelectrical estimates of TBW may prove valuable in quantifying hydration status during space flight.
Counterintuitive Behaviour of a Particle under the Action of an Oscillating Force
ERIC Educational Resources Information Center
Mohazzabi, Pirooz; Greenebaum, Ben
2011-01-01
When a free particle initially at rest is acted on by an oscillating force, it is intuitively expected to oscillate in place with the frequency of the force. However, careful solution of the classical equation of motion shows that this is only true for particular initial phases of the force; otherwise a steady drift is superimposed on the…
The Centrifugal Simulation of Blast Parameters.
1983-12-01
a is to be experimentally evaluated. The terms that remain in Equation (1) are not nondimensional; that is, they are not true pi-terms. This study is concerned with the use of a centrifuge as an experimental device on which free-field blast parameters can be simulated.
Son, H S; Hong, Y S; Park, W M; Yu, M A; Lee, C H
2009-03-01
To estimate true Brix and alcoholic strength of must and wines without distillation, a novel approach using a refractometer and a hydrometer was developed. Initial Brix (I.B.), apparent refractometer Brix (A.R.), and apparent hydrometer Brix (A.H.) of must were measured by refractometer and hydrometer, respectively. Alcohol content (A) was determined with a hydrometer after distillation and true Brix (T.B.) was measured in distilled wines using a refractometer. Strong proportional correlations among A.R., A.H., T.B., and A in sugar solutions containing varying alcohol concentrations were observed in preliminary experiments. Similar proportional relationships among the parameters were also observed in must, which is a far more complex system than the sugar solution. To estimate T.B. and A of must during alcoholic fermentation, a total of 6 planar equations were empirically derived from the relationships among the experimental parameters. The empirical equations were then tested to estimate T.B. and A in 17 wine products, and resulted in good estimations of both quality factors. This novel approach was rapid, easy, and practical for use in routine analyses or for monitoring quality of must during fermentation and final wine products in a winery and/or laboratory.
Kim, Ryul; Kim, Han-Joon; Kim, Aryun; Jang, Mi-Hee; Kim, Hyun Jeong; Jeon, Beomseok
2018-01-01
Objective Two conversion tables between the Mini-Mental State Examination (MMSE) and Montreal Cognitive Assessment (MoCA) have recently been established for Parkinson’s disease (PD). This study aimed to validate them in Korean patients with PD and to evaluate whether they could be influenced by educational level. Methods A total of 391 patients with PD who undertook both the Korean MMSE and the Korean MoCA during the same session were retrospectively assessed. The mean, median, and root mean squared error (RMSE) of the difference between the true and converted MMSE scores and the intraclass correlation coefficient (ICC) were calculated according to educational level (6 or fewer years, 7–12 years, or 13 or more years). Results Both conversions had a median value of 0, with a small mean and RMSE of differences, and a high correlation between the true and converted MMSE scores. In the classification according to educational level, all groups had roughly similar values of the median, mean, RMSE, and ICC both within and between the conversions. Conclusion Our findings suggest that both MMSE-MoCA conversion tables are useful instruments for transforming MoCA scores into converted MMSE scores in Korean patients with PD, regardless of educational level. These will greatly enhance the utility of the existing cognitive data from the Korean PD population in clinical and research settings. PMID:29316782
Fatigue Assessment: Subjective Peer-to-Peer Fatigue Scoring (Reprint)
2013-10-01
important role of fatigue in aviation safety and flight performance, the compelling tasks of predicting dangerous fatigued states and quantifying ... risk and associated performance deficits have been immoderately difficult. This is particularly true for an individual functioning within the
ERIC Educational Resources Information Center
Schneider, Jack; Feldman, Joe; French, Dan
2016-01-01
Relying on teachers' assessments for the information currently provided by standardized test scores would save instructional time, better capture the true abilities of diverse students, and reduce the problem of teaching to the test. A California high school is implementing standards-based reporting, ensuring that teacher-issued grades function as…
Cluster Stability Estimation Based on a Minimal Spanning Trees Approach
NASA Astrophysics Data System (ADS)
Volkovich, Zeev (Vladimir); Barzily, Zeev; Weber, Gerhard-Wilhelm; Toledano-Kitai, Dvora
2009-08-01
Among the areas of data and text mining which are employed today in science, economy and technology, clustering theory serves as a preprocessing step in data analysis. However, many open questions still await theoretical and practical treatment; e.g., the problem of determining the true number of clusters has not been satisfactorily solved. In the current paper, this problem is addressed by the cluster stability approach. For several possible numbers of clusters we estimate the stability of partitions obtained from clustering of samples. Partitions are considered consistent if their clusters are stable. Cluster validity is measured as the total number of edges, in the clusters' minimal spanning trees, connecting points from different samples. Actually, we use the Friedman and Rafsky two-sample test statistic. The homogeneity hypothesis, of well-mingled samples within the clusters, leads to an asymptotic normal distribution of the considered statistic. Resting upon this fact, the standard score of the mentioned edge quantity is set, and the partition quality is represented by the worst cluster, corresponding to the minimal standard score value. It is natural to expect that the true number of clusters can be characterized by the empirical distribution having the shortest left tail. The proposed methodology sequentially creates the described value distribution and estimates its left-asymmetry. Numerical experiments, presented in the paper, demonstrate the ability of the approach to detect the true number of clusters.
Accurate indel prediction using paired-end short reads
2013-01-01
Background One of the major open challenges in next generation sequencing (NGS) is the accurate identification of structural variants such as insertions and deletions (indels). Current methods for indel calling assign scores to different types of evidence or counter-evidence for the presence of an indel, such as the number of split read alignments spanning the boundaries of a deletion candidate or reads that map within a putative deletion. Candidates with a score above a manually defined threshold are then predicted to be true indels. As a consequence, structural variants detected in this manner contain many false positives. Results Here, we present a machine learning based method which is able to discover and distinguish true from false indel candidates in order to reduce the false positive rate. Our method identifies indel candidates using a discriminative classifier based on features of split read alignment profiles and trained on true and false indel candidates that were validated by Sanger sequencing. We demonstrate the usefulness of our method with paired-end Illumina reads from 80 genomes of the first phase of the 1001 Genomes Project ( http://www.1001genomes.org) in Arabidopsis thaliana. Conclusion In this work we show that indel classification is a necessary step to reduce the number of false positive candidates. We demonstrate that missing classification may lead to spurious biological interpretations. The software is available at: http://agkb.is.tuebingen.mpg.de/Forschung/SV-M/. PMID:23442375
Setyonugroho, Winny; Kropmans, Thomas; Murphy, Ruth; Hayes, Peter; van Dalen, Jan; Kennedy, Kieran M
2018-01-01
Comparing outcomes of clinical skills assessment is challenging. This study proposes a reliable and valid comparison of communication skills (CS) assessment as practiced in Objective Structured Clinical Examinations (OSCEs). The aim of the present study is to compare CS assessment, as standardized according to the MAAS-Global, between stations in a single undergraduate medical year. An OSCE delivered in an Irish undergraduate curriculum was studied. We chose the MAAS-Global as an internationally recognized and validated instrument to calibrate the OSCE station items. The MAAS-Global proportion is the percentage of station checklist items that can be considered 'true' CS. The reliability of the OSCE was calculated with G-Theory analysis, and nested ANOVA was used to compare mean scores of all years. MAAS-Global scores in psychiatry stations were significantly higher than those in other disciplines (p<0.03) and above the initial pass mark of 50%. The higher students' scores in psychiatry stations were related to higher MAAS-Global proportions when compared to the general practice stations. Comparison of outcome measurements, using the MAAS-Global as a standardization instrument, between interdisciplinary station checklists was valid and reliable. The MAAS-Global was used as a single validated instrument and is suggested as a gold standard. Copyright © 2017. Published by Elsevier B.V.
Mapping of Synaptic-Neuronal Impairment on the Brain Surface through Fluctuation Analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Musha, Toshimitsu; Kurachi, Takayoshi; Suzuki, Naohoro
2005-08-25
The year-by-year increase of the demented population is becoming a serious social problem that must be addressed urgently. The most effective way to block this increase is early detection by means of an inexpensive, non-invasive, sensitive, reliable and easy-to-operate diagnostic method. We have developed a method satisfying these requirements by using scalp potential fluctuations. We have collected 21-channel EEG and SPECT data from 25 patients with very mild Alzheimer's disease (AD) (MMSE = 26 ± 1.8), patients with moderately severe AD (MMSE = 15.3 ± 6.4), and age-matched normal controls. As AD progresses, local synaptic-neuronal activity becomes abnormal, either more unstable or more inactive than in the normal state. Such abnormality is detected in terms of the normalized power variance (NPV) of a scalp potential recorded with a scalp electrode. The z-score is defined by z = ((NPV of a subject) - (mean NPV of normal subjects)) / (standard deviation of NPV of normal subjects). Correlation of a measured z-score map with the mean z-score map for AD patients characterizes the likelihood of AD, in terms of which AD is discriminated from normal with a 75% true positive and 25% false negative rate. By introducing two thresholds, we achieve 90% true positive and 10% false negative discrimination.
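The z-score definition in the abstract can be sketched directly. The windowing scheme for computing the normalized power variance (fixed, non-overlapping windows of 128 samples) is an assumption for illustration; the abstract does not specify how power fluctuations are windowed.

```python
import numpy as np

def normalized_power_variance(signal, window=128):
    """Normalized power variance (NPV): variance of per-window mean
    power divided by the squared mean power. Window length is an
    assumption; the abstract does not state one."""
    n = len(signal) // window
    powers = np.array([np.mean(signal[i * window:(i + 1) * window] ** 2)
                       for i in range(n)])
    return np.var(powers) / np.mean(powers) ** 2

def npv_z_score(subject_npv, control_npvs):
    """z = (NPV of subject - mean NPV of controls) / SD of control
    NPVs, exactly as defined in the abstract."""
    return (subject_npv - np.mean(control_npvs)) / np.std(control_npvs)
```

A subject whose NPV equals the control mean gets z = 0; a subject one control standard deviation above it gets z = 1.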
Saitone, T L; Sexton, R J; Sexton Ward, A
2018-01-01
The Affordable Care Act (ACA) established the Hospital-Acquired Condition (HAC) Reduction Program. The Centers for Medicare and Medicaid Services (CMS) established a total HAC scoring methodology to rank hospitals based upon their HAC performance. Hospitals that rank in the lowest quartile based on their HAC score are subject to a 1% reduction in their total Medicare reimbursements. In FY 2017, 769 hospitals incurred payment reductions totaling $430 million. This study analyzes how improvements in the rate of catheter-associated urinary tract infections (CAUTI), based on the implementation of a cranberry-treatment regimen, impact hospitals' HAC scores and likelihood of avoiding the Medicare-reimbursement penalty. A simulation model is developed and implemented using public data from the CMS' Hospital Compare website to determine how hospitals' unilateral and simultaneous adoption of cranberry to improve CAUTI outcomes can affect HAC scores and the likelihood of a hospital incurring the Medicare payment reduction, given results on cranberry effectiveness in preventing CAUTI based on scientific trials. The simulation framework can be adapted to consider other initiatives to improve hospitals' HAC scores. Nearly all simulated hospitals improved their overall HAC score by adopting cranberry as a CAUTI preventative, assuming mean effectiveness from scientific trials. Many hospitals with HAC scores in the lowest quartile of the HAC-score distribution and subject to Medicare reimbursement reductions can improve their scores sufficiently through adopting a cranberry-treatment regimen to avoid payment reduction. The study was unable to replicate exactly the data used by CMS to establish HAC scores for FY 2018. The study assumes that hospitals subject to the Medicare payment reduction were not using cranberry as a prophylactic treatment for their catheterized patients, but is unable to confirm that this is true in all cases. 
The study also assumes that hospitalized catheter patients would be able to consume cranberry in either juice or capsule form, but this may not be true in all cases. Most hospitals can improve their HAC scores, and many can avoid Medicare reimbursement reductions, if they are able to attain a percentage reduction in CAUTI comparable to that documented for cranberry-treatment regimens in the existing literature.
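The penalty rule described above (worst quartile of total HAC scores incurs a 1% Medicare reimbursement reduction) can be sketched as follows. We assume here that higher HAC scores indicate worse performance, per CMS convention; the exact CMS tie-breaking and data-cleaning rules are not modeled.

```python
import numpy as np

def penalized_hospitals(hac_scores, worst_fraction=0.25):
    """Flag hospitals in the worst-performing quartile of total HAC
    scores (assumed to be the highest scores)."""
    threshold = np.quantile(hac_scores, 1.0 - worst_fraction)
    return hac_scores > threshold

def medicare_payment(base_payment, is_penalized):
    """Apply the 1% reimbursement reduction described in the abstract."""
    return base_payment * 0.99 if is_penalized else base_payment
```

Re-running the flagging step with CAUTI-improved scores is the essence of the simulation: a hospital avoids the penalty if its improved score drops out of the worst quartile.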
Undergraduate paramedic students cannot do drug calculations
Eastwood, Kathryn; Boyle, Malcolm J; Williams, Brett
2012-01-01
BACKGROUND: Previous investigation of the drug calculation skills of qualified paramedics has highlighted poor mathematical ability, with no published studies having been undertaken on undergraduate paramedics. There are three major error classifications. Conceptual errors involve an inability to formulate an equation from information given, arithmetical errors involve an inability to operate a given equation, and computation errors are simple errors of addition, subtraction, division and multiplication. The objective of this study was to determine if undergraduate paramedics at a large Australian university could accurately perform common drug calculations and basic mathematical equations normally required in the workplace. METHODS: A cross-sectional study using a paper-based questionnaire was administered to undergraduate paramedic students to collect demographic data, student attitudes regarding their drug calculation performance, and answers to a series of basic mathematical and drug calculation questions. Ethics approval was granted. RESULTS: The mean score of correct answers was 39.5%, with one student scoring 100%, 3.3% of students (n=3) scoring greater than 90%, and 63% (n=58) scoring 50% or less, despite 62% (n=57) of the students stating they ‘did not have any drug calculations issues’. On average, those who had completed a minimum of year 12 Specialist Maths achieved scores over 50%. Conceptual errors made up 48.5% of errors, arithmetical errors 31.1% and computational errors 17.4%. CONCLUSIONS: This study suggests undergraduate paramedics have deficiencies in performing accurate calculations, with conceptual errors indicating a fundamental lack of mathematical understanding. The results suggest an unacceptable level of mathematical competence to practice safely in the unpredictable prehospital environment. PMID:25215067
Simulation of Charged Systems in Heterogeneous Dielectric Media via a True Energy Functional
NASA Astrophysics Data System (ADS)
Jadhao, Vikram; Solis, Francisco J.; de la Cruz, Monica Olvera
2012-11-01
For charged systems in heterogeneous dielectric media, a key obstacle for molecular dynamics (MD) simulations is the need to solve the Poisson equation in the media. This obstacle can be bypassed using MD methods that treat the local polarization charge density as a dynamic variable, but such approaches require access to a true free energy functional, one that evaluates to the equilibrium electrostatic energy at its minimum. In this Letter, we derive the needed functional. As an application, we develop a Car-Parrinello MD method for the simulation of free charges present near a spherical emulsion droplet separating two immiscible liquids with different dielectric constants. Our results show the presence of nonmonotonic ionic profiles in the dielectric with a lower dielectric constant.
Tebbe, A W; Faulkner, M J; Weiss, W P
2017-08-01
Many nutrition models rely on summative equations to estimate feed and diet energy concentrations. These models partition feed into nutrient fractions and multiply the fractions by their estimated true digestibility, and the digestible mass provided by each fraction is then summed and converted to an energy value. Nonfiber carbohydrate (NFC) is used in many models. Although it behaves as a nutritionally uniform fraction, it is a heterogeneous mixture of components. To reduce the heterogeneity, we partitioned NFC into starch and residual organic matter (ROM), which is calculated as 100 - CP - LCFA - ash - starch - NDF, where crude protein (CP), long-chain fatty acids (LCFA), ash, starch, and neutral detergent fiber (NDF) are expressed as a percentage of dry matter (DM). However, the true digestibility of ROM is unknown, and because NDF is contaminated with both ash and CP, those components are subtracted twice. The effect of ash and CP contamination of NDF on in vivo digestibility of NDF and ROM was evaluated using data from 2 total-collection digestibility experiments using lactating dairy cows. Digestibility of NDF was greater when it was corrected for ash and CP than without correction. Conversely, ROM apparent digestibility decreased when NDF was corrected for contamination. Although correcting for contamination statistically increased NDF digestibility, the effect was small; the average increase was 3.4%. The decrease in ROM digestibility was 7.4%. True digestibility of ROM is needed to incorporate ROM into summative equations. Data from multiple digestibility experiments (38 diets) using dairy cows were collated, and concentrations of digestible ROM were regressed on ROM concentrations (ROM was calculated without adjusting for ash and CP contamination). The estimated true digestibility coefficient of ROM was 0.96 (SE = 0.021), and metabolic fecal ROM was 3.43 g/100 g of dry matter intake (SE = 0.30).
Using a smaller data set (7 diets), estimated true digestibility of ROM when calculated using NDF corrected for ash and CP contamination was 0.87 (SE = 0.025), and metabolic fecal ROM was 3.76 g/100 g (SE = 0.60). Regardless of NDF method, ROM exhibited nutritional uniformity. The ROM fraction also had lower errors associated with the estimated true digestibility and its metabolic fecal fraction than did NFC. Therefore, ROM may result in more accurate estimates of available energy if integrated into models. Copyright © 2017 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
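The two computations above can be sketched in a few lines: the ROM partition is a direct subtraction, and the true digestibility and metabolic fecal fraction are the slope and negated intercept of a Lucas-type regression of digestible ROM concentration on ROM concentration. The numbers used below are synthetic, constructed to match the reported coefficients, not the paper's collated data.

```python
import numpy as np

def residual_organic_matter(cp, lcfa, ash, starch, ndf):
    """ROM = 100 - CP - LCFA - ash - starch - NDF, all expressed
    as a percentage of dry matter, as defined in the abstract."""
    return 100.0 - cp - lcfa - ash - starch - ndf

def lucas_regression(rom, digestible_rom):
    """Lucas-type regression: the slope estimates true digestibility
    and the negated intercept the metabolic fecal fraction
    (g/100 g of dry matter intake). Illustrative only."""
    slope, intercept = np.polyfit(rom, digestible_rom, 1)
    return slope, -intercept
```

With noise-free synthetic data built from the reported values (slope 0.96, metabolic fecal ROM 3.43), the regression recovers both parameters exactly.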
Snoad, Christian; Nagel, Corey; Bhattacharya, Animesh; Thomas, Evan
2017-01-01
The use of sanitary inspections combined with periodic water quality testing has been recommended in some cases as screening tools for fecal contamination. We conducted sanitary inspections and tested for thermotolerant coliforms (TTCs), a fecal indicator bacteria, among 7,317 unique water sources in West Bengal, India. Our results indicate that the sanitary inspection score has poor ability to identify TTC-contaminated sources. Among deep and shallow hand pumps, the area under curve (AUC) for prediction of TTC > 0 was 0.58 (95% confidence interval [CI] = 0.53–0.61) and 0.58 (95% CI = 0.54–0.62), respectively, indicating that the sanitary inspection score was only marginally better than chance in discriminating between contaminated and uncontaminated sources of this type. A slightly higher AUC value of 0.64 (95% CI=0.57–0.71) was observed when the sanitary inspection score was used for prediction of TTC > 0 among the gravity-fed piped sources. Among unprotected springs (AUC = 0.48, 95% CI = 0.38–0.55) and unprotected dug wells (AUC = 0.41, 95% CI = 0.20–0.66), the sanitary inspection score performed more poorly than chance in discriminating between sites with TTC < 1 and TTC > 0. Aggregating over all source types, the sensitivity (true positive rate) of a high/very high sanitary inspection score for TTC contamination (TTC > 1 CFU/100 mL) was 29.4% and the specificity (true negative rate) was 77.9%, resulting in substantial misclassification of the sites when using the established risk categories. These findings suggest that sanitary surveys are inappropriate screening tools for identifying TTC contamination at water points. PMID:28115676
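The sensitivity and specificity figures reported above (29.4% and 77.9%) follow from a 2x2 table of screening flags against true contamination status. A minimal sketch of that tally, with hypothetical counts rather than the study's data:

```python
def confusion_counts(risk_flags, contaminated):
    """Tally a 2x2 table for a binary screening flag (e.g. high/very
    high sanitary inspection score) against true contamination."""
    tp = sum(r and c for r, c in zip(risk_flags, contaminated))
    fn = sum((not r) and c for r, c in zip(risk_flags, contaminated))
    tn = sum((not r) and (not c) for r, c in zip(risk_flags, contaminated))
    fp = sum(r and (not c) for r, c in zip(risk_flags, contaminated))
    return tp, fn, tn, fp

def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity (true positive rate) and specificity (true
    negative rate), the metrics reported in the abstract."""
    return tp / (tp + fn), tn / (tn + fp)
```

A sensitivity near 29% means roughly seven in ten contaminated sources carried a low-risk inspection score, which is the misclassification problem the abstract describes.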
Callens, Etienne; Graba, Sémia; Essalhi, Mohamed; Gillet-Juvin, Karine; Chevalier-Bidaud, Brigitte; Chenu, Romain; Mahut, Bruno; Delclaux, Christophe
2014-09-01
The first objective of our study was to assess whether patients diagnosed with cardio-respiratory disorders overestimate or underestimate on recall (Medical Research Council (MRC) dyspnea scale) their true functional capacity (walked distance during a 6-minute walk test (6MWT)). The second objective was to assess whether the measurement of breathlessness at the end of a 6MWT (Borg score) may help to identify dyspneic patients on recall. The 6MWTs of 746 patients aged from 40 to 80 years who were diagnosed with either chronic obstructive pulmonary disease (COPD, n = 355), diffuse parenchymal lung disease (n = 140), pulmonary vascular diseases (n = 188) or congestive heart failure (n = 63) were selected from a prospective Clinical Database Warehouse. The percentages of patients who overestimated (MRC ≤ 2 with distance < lower limit of normal (LLN); 61/746, 8%; 95% confidence interval (CI): 6 to 10%) or underestimated (MRC > 2 with distance ≥ LLN; 121/746, 16%; 95% CI: 14 to 19%) their capacity on recall were elevated. The overestimation seemed related to self-limitation, while the underestimation seemed related to patients who "work through" their breathing discomfort. These two latter groups of patients were mainly diagnosed with COPD. A Borg dyspnea score > 3 (upper limit of normal) at the end of the 6MWT had 84% specificity for the prediction of an MRC score > 1. Almost one fourth of patients suffering from cardio-pulmonary disorders overestimate or underestimate their true functional capacity on recall. An elevated Borg dyspnea score at the end of the 6MWT has good specificity for predicting dyspnea on recall.
Dong, Hong-ba; Yang, Yan-wen; Wang, Ying; Hong, Li
2012-11-01
Energy metabolism in critically ill children has its own characteristics, especially in those undergoing mechanical ventilation. We aimed to assess energy expenditure status and to evaluate the use of predictive equations in such children; in addition, the characteristics of energy metabolism in various situations were explored. Fifty critically ill children undergoing mechanical ventilation were selected for this study. Data produced during the first 24 hours of mechanical ventilation were collected for computation of severity of illness. Measured resting energy expenditure (MREE) was determined at 24 hours after the start of mechanical ventilation. Predicted resting energy expenditure (PREE) was calculated for each subject using age-appropriate equations (Schofield-HTWT, White). The study was approved by the hospital medical ethics committee, and parental written informed consent was obtained. The pediatric risk of mortality score 3 (PRISM3) and pediatric critical illness score (PCIS) were (7 ± 3) and (82 ± 4), respectively. MREE, Schofield-HTWT PREE and White PREE were (404.80 ± 178.28), (462.82 ± 160.38) and (427.97 ± 152.30) kcal/d, respectively; 70% of the children were hypometabolic and 10% were hypermetabolic. PREE calculated using both the Schofield-HTWT equation and the White equation was higher than MREE (P = 0.029). Correlation analyses between PRISM3, PCIS and MREE showed no statistically significant correlations (P > 0.05). A hypometabolic response is apparent in critically ill children undergoing mechanical ventilation; the Schofield-HTWT and White equations could not predict energy requirements within acceptable clinical accuracy. In critically ill children undergoing mechanical ventilation, energy expenditure is not correlated with severity of illness.
A template-finding algorithm and a comprehensive benchmark for homology modeling of proteins
Vallat, Brinda Kizhakke; Pillardy, Jaroslaw; Elber, Ron
2010-01-01
The first step in homology modeling is to identify a template protein for the target sequence. The template structure is used in later phases of the calculation to construct an atomically detailed model for the target. We have built from the Protein Data Bank a large-scale learning set that includes tens of millions of pair matches, each of which can be either a true template or a false one. Discriminative learning (learning from positive and negative examples) is employed to train a decision tree. Each branch of the tree is a mathematical programming model. The decision tree is tested on an independent set of PDB entries and on the sequences of CASP7. It provides significant enrichment of true templates (between 50 and 100 percent) when compared to PSI-BLAST. The model is further verified by building atomically detailed structures for each of the tentative true templates with MODELLER. The probability that a true match does not yield an acceptable structural model (within 6 Å RMSD from the native structure) decays linearly as a function of the TM structural-alignment score. PMID:18300226
LD Score Regression Distinguishes Confounding from Polygenicity in Genome-Wide Association Studies
Bulik-Sullivan, Brendan K.; Loh, Po-Ru; Finucane, Hilary; Ripke, Stephan; Yang, Jian; Patterson, Nick; Daly, Mark J.; Price, Alkes L.; Neale, Benjamin M.
2015-01-01
Both polygenicity (i.e., many small genetic effects) and confounding biases, such as cryptic relatedness and population stratification, can yield an inflated distribution of test statistics in genome-wide association studies (GWAS). However, current methods cannot distinguish between inflation from true polygenic signal and bias. We have developed an approach, LD Score regression, that quantifies the contribution of each by examining the relationship between test statistics and linkage disequilibrium (LD). The LD Score regression intercept can be used to estimate a more powerful and accurate correction factor than genomic control. We find strong evidence that polygenicity accounts for the majority of test statistic inflation in many GWAS of large sample size. PMID:25642630
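The core of the approach above is a regression of per-SNP chi-square statistics on per-SNP LD scores: the intercept captures confounding inflation and the slope captures polygenic signal. A minimal sketch with simulated statistics; the published method additionally uses a weighted regression and sample-size scaling, which are omitted here.

```python
import numpy as np

def ld_score_regression(chi2, ld_scores):
    """Regress chi-square GWAS test statistics on LD scores.
    The intercept estimates confounding inflation (1.0 means none);
    the slope reflects polygenic signal. Unweighted least squares
    is used for illustration only."""
    slope, intercept = np.polyfit(ld_scores, chi2, 1)
    return intercept, slope
```

For noise-free statistics generated as chi2 = 1 + 0.02 * LD (purely polygenic, no confounding), the fit recovers an intercept of exactly 1.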
Do Personality Scale Items Function Differently in People with High and Low IQ?
ERIC Educational Resources Information Center
Waiyavutti, Chakadee; Johnson, Wendy; Deary, Ian J.
2012-01-01
Intelligence differences might contribute to true differences in personality traits. It is also possible that intelligence might contribute to differences in understanding and interpreting personality items. Previous studies have not distinguished clearly between these possibilities. Before it can be accepted that scale score differences actually…
Vygotsky's Zone of Proximal Development: Implications for Gifted Education.
ERIC Educational Resources Information Center
Shaughnessy, Michael F.
This paper reviews Lev Vygotsky's theories concerning optimizing of potential through assistance, support, or instruction. The paper notes that there is a "zone of proximal development" or a band around intelligence quotient (IQ) scores reflecting one's true potential. IQ tests are generally well-standardized and "static,"…
Estimation of Graded Response Model Parameters Using MULTILOG.
ERIC Educational Resources Information Center
Baker, Frank B.
1997-01-01
Describes an idiosyncrasy of the MULTILOG (D. Thissen, 1991) parameter estimation process discovered during a simulation study involving the graded response model. A misordering reflected in boundary function location parameter estimates resulted in a large negative contribution to the true score followed by a large positive contribution. These…
Test Design Project: Studies in Test Adequacy. Annual Report.
ERIC Educational Resources Information Center
Wilcox, Rand R.
These studies in test adequacy focus on two problems: procedures for estimating reliability, and techniques for identifying ineffective distractors. Fourteen papers are presented on recent advances in measuring achievement (a response to Molenaar); "an extension of the Dirichlet-multinomial model that allows true score and guessing to be…
The Earthquake Information Test: Validating an Instrument for Determining Student Misconceptions.
ERIC Educational Resources Information Center
Ross, Katharyn E. K.; Shuell, Thomas J.
Some pre-instructional misconceptions held by children can persist through scientific instruction and resist change. Identifying these misconceptions would be beneficial for science instruction. In this preliminary study, scores on a 60-item true-false test of knowledge and misconceptions about earthquakes were compared with previous interview…
Guidelines for Interpreting and Reporting Subscores
ERIC Educational Resources Information Center
Feinberg, Richard A.; Jurich, Daniel P.
2017-01-01
Recent research has proposed a criterion to evaluate the reportability of subscores. This criterion is a value-added ratio ("VAR"), where values greater than 1 suggest that the true subscore is better approximated by the observed subscore than by the total score. This research extends the existing literature by quantifying statistical…
Stature estimation equations for South Asian skeletons based on DXA scans of contemporary adults.
Pomeroy, Emma; Mushrif-Tripathy, Veena; Wells, Jonathan C K; Kulkarni, Bharati; Kinra, Sanjay; Stock, Jay T
2018-05-03
Stature estimation from the skeleton is a classic anthropological problem, and recent years have seen the proliferation of population-specific regression equations. Many rely on the anatomical reconstruction of stature from archaeological skeletons to derive regression equations based on long bone lengths, but this requires a collection with very good preservation. In some regions, for example, South Asia, typical environmental conditions preclude the sufficient preservation of skeletal remains. Large-scale epidemiological studies that include medical imaging of the skeleton by techniques such as dual-energy X-ray absorptiometry (DXA) offer new potential datasets for developing such equations. We derived estimation equations based on known height and bone lengths measured from DXA scans from the Andhra Pradesh Children and Parents Study (Hyderabad, India). Given debates on the most appropriate regression model to use, multiple methods were compared, and the performance of the equations was tested on a published skeletal dataset of individuals with known stature. The equations have standard errors of estimate and prediction errors similar to those derived using anatomical reconstruction or from cadaveric datasets. As measured by the number of significant differences between true and estimated stature, and by the prediction errors, the new equations perform as well as, and generally better than, published equations commonly used on South Asian skeletons or based on Indian cadaveric datasets. This study demonstrates the utility of DXA scans as a data source for developing stature estimation equations and offers a new set of equations for use with South Asian datasets. © 2018 Wiley Periodicals, Inc.
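A stature estimation equation of the kind derived above has the form stature = a * bone_length + b, fit to paired measurements and reported with its standard error of estimate (SEE). The abstract compares several regression models; ordinary least squares is shown below as the simplest case, on synthetic data rather than the study's DXA measurements.

```python
import numpy as np

def fit_stature_equation(bone_length_cm, stature_cm):
    """Ordinary least-squares fit of stature = a * bone_length + b,
    returning the coefficients and the standard error of estimate
    (SEE, with n - 2 degrees of freedom). Illustrative sketch."""
    a, b = np.polyfit(bone_length_cm, stature_cm, 1)
    resid = stature_cm - (a * bone_length_cm + b)
    see = np.sqrt(np.sum(resid ** 2) / (len(stature_cm) - 2))
    return a, b, see
```

The SEE is the figure of merit the abstract uses to compare its equations against anatomical-reconstruction and cadaveric-dataset equations: a smaller SEE means tighter stature predictions for a new skeleton.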
ERIC Educational Resources Information Center
Guo, Hongwen; Puhan, Gautam; Walker, Michael
2013-01-01
In this study we investigated when an equating conversion line is problematic in terms of gaps and clumps. We suggest using the conditional standard error of measurement (CSEM) to identify scale scores that are inappropriate in the overall raw-to-scale transformation.
Application of Exploratory Structural Equation Modeling to Evaluate the Academic Motivation Scale
ERIC Educational Resources Information Center
Guay, Frédéric; Morin, Alexandre J. S.; Litalien, David; Valois, Pierre; Vallerand, Robert J.
2015-01-01
In this research, the authors examined the construct validity of scores of the Academic Motivation Scale using exploratory structural equation modeling. Study 1 and Study 2 involved 1,416 college students and 4,498 high school students, respectively. First, results of both studies indicated that the factor structure tested with exploratory…
An Extension of IRT-Based Equating to the Dichotomous Testlet Response Theory Model
ERIC Educational Resources Information Center
Tao, Wei; Cao, Yi
2016-01-01
Current procedures for equating number-correct scores using traditional item response theory (IRT) methods assume local independence. However, when tests are constructed using testlets, one concern is the violation of the local item independence assumption. The testlet response theory (TRT) model is one way to accommodate local item dependence.…
Meta-Analytic Structural Equation Modeling: A Two-Stage Approach
ERIC Educational Resources Information Center
Cheung, Mike W. L.; Chan, Wai
2005-01-01
To synthesize studies that use structural equation modeling (SEM), researchers usually use Pearson correlations (univariate r), Fisher z scores (univariate z), or generalized least squares (GLS) to combine the correlation matrices. The pooled correlation matrix is then analyzed by the use of SEM. Questionable inferences may occur for these ad hoc…
Using Kernel Equating to Assess Item Order Effects on Test Scores
ERIC Educational Resources Information Center
Moses, Tim; Yang, Wen-Ling; Wilson, Christine
2007-01-01
This study explored the use of kernel equating for integrating and extending two procedures proposed for assessing item order effects in test forms that have been administered to randomly equivalent groups. When these procedures are used together, they can provide complementary information about the extent to which item order effects impact test…
ERIC Educational Resources Information Center
Moses, Tim; Holland, Paul W.
2010-01-01
In this study, eight statistical strategies were evaluated for selecting the parameterizations of loglinear models for smoothing the bivariate test score distributions used in nonequivalent groups with anchor test (NEAT) equating. Four of the strategies were based on significance tests of chi-square statistics (Likelihood Ratio, Pearson,…
ERIC Educational Resources Information Center
Steed, Teneka C.
2013-01-01
Evaluating the psychometric properties of a newly developed instrument is critical to understanding how well an instrument measures what it intends to measure, and ensuring proposed use and interpretation of questionnaire scores are valid. The current study uses Structural Equation Modeling (SEM) techniques to examine the factorial structure and…
DOE Office of Scientific and Technical Information (OSTI.GOV)
Enghauser, Michael
2015-02-01
The goal of the Domestic Nuclear Detection Office (DNDO) Algorithm Improvement Program (AIP) is to facilitate gamma-radiation detector nuclide identification algorithm development, improvement, and validation. Accordingly, scoring criteria have been developed to objectively assess the performance of nuclide identification algorithms. In addition, a Microsoft Excel spreadsheet application for automated nuclide identification scoring has been developed. This report provides an overview of the equations, nuclide weighting factors, nuclide equivalencies, and configuration weighting factors used by the application for scoring nuclide identification algorithm performance. Furthermore, this report presents a general overview of the nuclide identification algorithm scoring application including illustrative examples.
Tomita, Hirofumi; Masugi, Yohei; Hoshino, Ken; Fuchimoto, Yasushi; Fujino, Akihiro; Shimojima, Naoki; Ebinuma, Hirotoshi; Saito, Hidetsugu; Sakamoto, Michiie; Kuroda, Tatsuo
2014-06-01
Although liver fibrosis is an important predictor of outcomes for biliary atresia (BA), postsurgical native liver histology has not been well reported. Here, we retrospectively evaluated postsurgical native liver histology, and developed and assessed a novel scoring system, the BA liver fibrosis (BALF) score, for non-invasively predicting liver fibrosis grades. We identified 259 native liver specimens from 91 BA patients. Of these, 180 specimens, obtained from 62 patients aged ≥ 1 year at examination, were used to develop the BALF scoring system. The BALF score equation was determined according to the prediction of histological fibrosis grades by multivariate ordered logistic regression analysis. The diagnostic powers of the BALF score and several non-invasive markers were assessed by area under the receiver operating characteristic curve (AUROC) analyses. Natural logarithms of the serum total bilirubin, γ-glutamyltransferase, and albumin levels, and age were selected as significantly independent variables for the BALF score equation. The BALF score had good diagnostic power (AUROCs = 0.86-0.94, p < 0.001) and good diagnostic accuracy (79.4-93.3%) for each fibrosis grade. The BALF score showed a strong correlation with fibrosis grade (r = 0.77, p < 0.001) and was the preferable non-invasive marker for diagnosing fibrosis grades ≥ F2. In a serial liver histology subgroup analysis, 7/15 patients exhibited liver fibrosis improvement, with BALF scores equivalent to histological fibrosis grades of F0-1. In postsurgical BA patients aged ≥ 1 year, the BALF score is a potential non-invasive marker of native liver fibrosis. Copyright © 2014 European Association for the Study of the Liver. Published by Elsevier B.V. All rights reserved.
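The abstract names the variables entering the BALF score equation (natural logs of serum total bilirubin, γ-glutamyltransferase and albumin, plus age) but not the fitted coefficients. The sketch below shows the shape of such a linear predictor; the coefficients and constant are hypothetical placeholders, not the published values fitted by ordered logistic regression.

```python
import math

def balf_linear_predictor(total_bilirubin, ggt, albumin, age_years,
                          coef=(1.0, 0.5, -2.0, 0.05), constant=0.0):
    """Linear predictor of the form used by the BALF score:
    natural logs of total bilirubin, GGT and albumin, plus age.
    NOTE: coef and constant are hypothetical placeholders, not the
    coefficients published for the BALF score."""
    b_bil, b_ggt, b_alb, b_age = coef
    return (constant
            + b_bil * math.log(total_bilirubin)
            + b_ggt * math.log(ggt)
            + b_alb * math.log(albumin)
            + b_age * age_years)
```

Under any positive bilirubin coefficient, a higher bilirubin level raises the predicted fibrosis grade, which is the qualitative direction the abstract implies for cholestatic BA patients.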
Predictors of the pathogenicity of methicillin-resistant Staphylococcus aureus nosocomial pneumonia.
Nagaoka, Kentaro; Yanagihara, Katsunori; Harada, Yosuke; Yamada, Koichi; Migiyama, Yohei; Morinaga, Yoshitomo; Izumikawa, Koichi; Kakeya, Hiroshi; Yamamoto, Yoshihiro; Nishimura, Masaharu; Kohno, Shigeru
2014-05-01
The clinical characteristics of patients with nosocomial pneumonia (NP) associated with methicillin-resistant Staphylococcus aureus (MRSA) infection are not well characterized. Three hundred and thirty-seven consecutive patients with MRSA isolated from respiratory specimens who attended our hospital between April 2007 and March 2011 were enrolled. The characteristics of patients diagnosed with 'true' MRSA-NP were described with regard to clinical and microbiological features, radiological findings, and genetic characteristics of the isolates. The diagnosis of 'true' MRSA-NP was confirmed by anti-MRSA treatment effects, Gram staining or bronchoalveolar lavage fluid culture. Thirty-six patients were diagnosed with 'true' MRSA-NP, whereas 34 were diagnosed with NP with MRSA colonization. Patients with 'true' MRSA-NP more frequently had a Pneumonia Patient Outcomes Research Team score of 5 (58.3% vs 23.5%), single cultivation of MRSA (83.3% vs 38.2%), MRSA quantitative cultivation yielding more than 10^6 CFU/mL (80.6% vs 47.1%), radiological findings other than lobar pneumonia (66.7% vs 26.5%), and a history of head, neck, oesophageal or stomach surgery (30.6% vs 11.8%). These factors were shown to be independent predictors of the pathogenicity of 'true' MRSA-NP by multivariate analysis (P < 0.05). 'True' MRSA-NP shows distinct clinical and radiological features from NP with MRSA colonization. © 2014 Asian Pacific Society of Respirology.
Lager, Anton CJ; Modin, Bitte E; De Stavola, Bianca L; Vågerö, Denny H
2012-01-01
Background Intelligence at a single time-point has been linked to health outcomes. An individual's IQ increases with longer schooling, but the validity of such increase is unclear. In this study, we assess the hypothesis that individual change in the performance on IQ tests between ages 10 and 20 years is associated with mortality later in life. Methods The analyses are based on a cohort of Swedish boys born in 1928 (n = 610) for whom social background data were collected in 1937, IQ tests were carried out in 1938 and 1948 and own education and mortality were recorded up to 2006. Structural equation models were used to estimate the extent to which two latent intelligence scores, at ages 10 and 20 years, manifested by results on the IQ tests, are related to paternal and own education, and how all these variables are linked to all-cause mortality. Results Intelligence at the age of 20 years was associated with lower mortality in adulthood, after controlling for intelligence at the age of 10 years. The increases in intelligence partly mediated the link between longer schooling and lower mortality. Social background differences in adult intelligence (and consequently in mortality) were partly explained by the tendency for sons of more educated fathers to receive longer schooling, even when initial intelligence levels had been accounted for. Conclusions The results are consistent with a causal link from change in intelligence to mortality, and further, that schooling-induced changes in IQ scores are true and bring about lasting changes in intelligence. In addition, if both these interpretations are correct, social differences in access to longer schooling have consequences for social differences in both adult intelligence and adult health. PMID:22493324
GalaxyDock BP2 score: a hybrid scoring function for accurate protein-ligand docking
NASA Astrophysics Data System (ADS)
Baek, Minkyung; Shin, Woong-Hee; Chung, Hwan Won; Seok, Chaok
2017-07-01
Protein-ligand docking is a useful tool for providing atomic-level understanding of protein functions in nature and design principles for artificial ligands or proteins with desired properties. The ability to identify the true binding pose of a ligand to a target protein among numerous possible candidate poses is an essential requirement for successful protein-ligand docking. Many previously developed docking scoring functions were trained to reproduce experimental binding affinities and were also used for scoring binding poses. However, in this study, we developed a new docking scoring function, called GalaxyDock BP2 Score, by directly training the scoring power of binding poses. This function is a hybrid of physics-based, empirical, and knowledge-based score terms that are balanced to strengthen the advantages of each component. The performance of the new scoring function exhibits significant improvement over existing scoring functions in decoy pose discrimination tests. In addition, when the score was used with the GalaxyDock2 protein-ligand docking program, it outperformed other state-of-the-art docking programs in docking tests on the Astex diverse set, the Cross2009 benchmark set, and the Astex non-native set. GalaxyDock BP2 Score and GalaxyDock2 with this score are freely available at http://galaxy.seoklab.org/softwares/galaxydock.html.
Berndl, K; von Cranach, M; Grüsser, O J
1986-01-01
The perception and recognition of faces, mimic expression and gestures were investigated in normal subjects and schizophrenic patients by means of a movie test described in a previous report (Berndl et al. 1986). The error scores were compared with results from a semi-quantitative evaluation of psychopathological symptoms and with some data from the case histories. The overall error scores found in the three groups of schizophrenic patients (paranoic, hebephrenic, schizo-affective) were significantly increased (7-fold) over those of normals. No significant difference in the distribution of the error scores in the three different patient groups was found. In 10 different sub-tests following the movie the deficiencies found in the schizophrenic patients were analysed in detail. The error score for the averbal test was on average higher in paranoic patients than in the two other groups of patients, while the opposite was true for the error scores found in the verbal tests. Age and sex had some impact on the test results. In normals, female subjects were somewhat better than male. In schizophrenic patients the reverse was true. Thus female patients were more affected by the disease than male patients with respect to the task performance. The correlation between duration of the disease and error score was small; less than 10% of the error scores could be attributed to factors related to the duration of illness. Evaluation of psychopathological symptoms indicated that the stronger the schizophrenic defect, the higher the error score, but again this relationship was responsible for not more than 10% of the errors. The estimated degree of acute psychosis and overall sum of psychopathological abnormalities as scored in a semi-quantitative exploration did not correlate with the error score, but with each other. Similarly, treatment with psychopharmaceuticals, previous misuse of drugs or of alcohol had practically no effect on the outcome of the test data. 
The analysis of performance and test data of schizophrenic patients indicated that our findings are most likely not due to a "non-specific" impairment of cognitive function in schizophrenia, but point to a fairly selective defect in elementary cognitive visual functions necessary for averbal social communication. Some possible explanations of the data are discussed in relation to neuropsychological and neurophysiological findings on "face-specific" cortical areas located in the primate temporal lobe.
Multiply Your Child's Success: Math and Science Can Make Dreams Come True. A Parent's Guide
ERIC Educational Resources Information Center
National Math and Science Initiative, 2012
2012-01-01
In today's high-tech world, math and science matter. Of the 10 fastest growing occupations, eight are science, math or technology-related. Whatever a child wants to do--join the military, join the workforce, or go on to college--math and science skills will be important. Become part of the equation to help one's child succeed now and in the…
NASA Astrophysics Data System (ADS)
Tero, A.; Kobayashi, R.; Nakagaki, T.
2005-06-01
Experiments on the fusion and partial separation of plasmodia of the true slime mold Physarum polycephalum are described, concentrating on the spatio-temporal phase patterns of rhythmic amoeboid movement. On the basis of these experimental results we introduce a new model of coupled oscillators with one conserved quantity. Simulations using the model equations reproduce the experimental results well.
NASA Technical Reports Server (NTRS)
Stein, Alexander
1988-01-01
A method of determining the emissivity of a hot target from a laser-based reflectance measurement which is conducted simultaneously with a measurement of the target radiance is described. Once the correct radiance and emissivity are determined, one calculates the true target temperature from these parameters via the Planck equations. The design and performance of a laser pyrometer is described. The accuracy of laser pyrometry and the effect of ambient radiance are addressed.
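The two-step procedure the abstract outlines (recover the blackbody radiance by dividing the measured target radiance by the laser-derived emissivity, then invert Planck's law for temperature) can be sketched as follows. The wavelength and emissivity values are illustrative assumptions, not parameters from the paper.

```python
import math

# Physical constants (SI units)
H = 6.62607015e-34   # Planck constant, J s
C = 2.99792458e8     # speed of light, m/s
K_B = 1.380649e-23   # Boltzmann constant, J/K

def planck_radiance(wavelength, temp):
    """Blackbody spectral radiance B(lambda, T) in W / (m^2 sr m)."""
    a = 2.0 * H * C ** 2 / wavelength ** 5
    b = H * C / (wavelength * K_B * temp)
    return a / math.expm1(b)

def true_temperature(measured_radiance, emissivity, wavelength):
    """Divide out the measured emissivity, then invert Planck's law for T."""
    blackbody = measured_radiance / emissivity
    a = 2.0 * H * C ** 2 / wavelength ** 5
    return H * C / (wavelength * K_B * math.log1p(a / blackbody))

# Round-trip check: a 1500 K target with emissivity 0.6 viewed at 1 micron
radiance = 0.6 * planck_radiance(1e-6, 1500.0)
print(round(true_temperature(radiance, 0.6, 1e-6), 1))  # -> 1500.0
```

The `expm1`/`log1p` pair keeps the inversion numerically stable and makes the round trip exact to floating-point precision.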
Relationship between population dynamics and the self-energy in driven non-equilibrium systems
Kemper, Alexander F.; Freericks, James K.
2016-05-13
We compare the decay rates of excited populations directly calculated within a Keldysh formalism to the equation of motion of the population itself for a Hubbard-Holstein model in two dimensions. While it is true that these two approaches must give the same answer, it is common to make a number of simplifying assumptions, within the differential equation for the populations, that allow one to interpret the decay in terms of hot electrons interacting with a phonon bath. Furthermore, we show how care must be taken to ensure an accurate treatment of the equation of motion for the populations due to the fact that there are identities that require cancellations of terms that naively look like they contribute to the decay rates. In particular, the average time dependence of the Green's functions and self-energies plays a pivotal role in determining these decay rates.
NASA Astrophysics Data System (ADS)
Iturriaga, Leonelo; Massa, Eugenio
2018-01-01
In this paper, we propose a counterexample to the validity of the comparison principle and of the sub- and supersolution method for nonlocal problems like the stationary Kirchhoff equation. This counterexample shows that in general smooth bounded domains in any dimension, these properties cannot hold true if the nonlinear nonlocal term M (∥u∥ 2 ) is somewhere increasing with respect to the H01-norm of the solution. Comparing with the existing results, this fills a gap between known conditions on M that guarantee or prevent these properties and leads to a condition that is necessary and sufficient for the validity of the comparison principle. It is worth noting that equations similar to the one considered here have gained interest recently for appearing in models of thermo-convective flows of non-Newtonian fluids or of electrorheological fluids, among others.
Pappas, George; Apostolatos, Theocharis A
2014-03-28
Recently, it was shown that slowly rotating neutron stars exhibit an interesting correlation between their moment of inertia I, their quadrupole moment Q, and their tidal deformation Love number λ (the I-Love-Q relations), independently of the equation of state of the compact object. In the present Letter a similar, more general, universality is shown to hold true for all rotating neutron stars within general relativity; the first four multipole moments of the neutron star are related in a way independent of the nuclear matter equation of state we assume. By exploiting this relation, we can describe quite accurately the geometry around a neutron star with fewer parameters, even if we don't know precisely the equation of state. Furthermore, this universal behavior displayed by neutron stars could promote them to a more promising class of candidates (next to black holes) for testing theories of gravity.
Power law expansion of the early universe for a V(a) = ka^n potential
NASA Astrophysics Data System (ADS)
Freitas, Augusto S.
2018-01-01
In a recent paper, He, Gao and Cai [Phys. Rev. D 89, 083510 (2014)] found a rigorous proof, based on analytical solutions of the Wheeler-DeWitt (WDWE) equation, of the spontaneous creation of the universe from nothing. The solutions were obtained from a classical potential V = ka^2, where a is the scale factor. In this paper, we present a complementary (to that of He, Gao and Cai) solution to the WDWE equation with V = ka^n. We found an exponential expansion of the true vacuum bubble for all scenarios. In all scenarios, we also found a power-law behavior of the scale factor, in agreement with other studies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ismail, Norilmi Amilia, E-mail: aenorilmi@usm.my
The motorized momentum exchange tether (MMET) is capable of generating useful velocity increments through spin–orbit coupling. This study presents a comparative study of the velocity increments between the rigid body and flexible models of MMET. The equations of motion of both models in the time domain are transformed into functions of true anomaly. The equations of motion are integrated, and the responses in terms of the velocity increment of the rigid body and flexible models are compared and analysed. Results show that the initial conditions, eccentricity, and flexibility of the tether have significant effects on the velocity increments of the tether.
Tracking fronts in solutions of the shallow-water equations
NASA Astrophysics Data System (ADS)
Bennett, Andrew F.; Cummins, Patrick F.
1988-02-01
A front-tracking algorithm of Chern et al. (1986) is tested on the shallow-water equations, using the Parrett and Cullen (1984) and Williams and Hori (1970) initial state, consisting of smooth finite amplitude waves depending on one space dimension alone. At high resolution the solution is almost indistinguishable from that obtained with the Glimm algorithm. The latter is known to converge to the true frontal solution, but is 20 times less efficient at the same resolution. The solutions obtained using the front-tracking algorithm at 8 times coarser resolution are quite acceptable, indicating a very substantial gain in efficiency, which encourages application in realistic ocean models possessing two or three space dimensions.
Performance evaluation of an infrared thermocouple.
Chen, Chiachung; Weng, Yu-Kai; Shen, Te-Ching
2010-01-01
The measurement of the leaf temperature of forests or agricultural plants is an important technique for the monitoring of the physiological state of crops. The infrared thermometer is a convenient device due to its fast response and nondestructive measurement technique. Nowadays, a novel infrared thermocouple, developed with the same measurement principle as the infrared thermometer but using a different detector, has been commercialized for non-contact temperature measurement. The performance of two kinds of infrared thermocouples was evaluated in this study. The standard temperature was maintained by a temperature calibrator and a special black cavity device. The results indicated that both types of infrared thermocouples had good precision. The error distribution ranged from -1.8 °C to 18 °C when the reading values were taken as the true values. Within the range from 13 °C to 37 °C, the adequate calibration equations were high-order polynomial equations. Within the narrower range from 20 °C to 35 °C, the adequate equation was a linear equation for one sensor and a second-order polynomial equation for the other sensor. The accuracy of the two kinds of infrared thermocouples was improved by nearly 0.4 °C with the calibration equations. These devices could serve as mobile monitoring tools for in situ and real time routine estimation of leaf temperatures.
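A minimal sketch of deriving a linear calibration equation of the kind described, using made-up sensor readings against calibrator set points (the paper's actual data are not reproduced here):

```python
def fit_linear_calibration(readings, true_temps):
    """Ordinary least-squares fit of true = a * reading + b."""
    n = len(readings)
    mx = sum(readings) / n
    my = sum(true_temps) / n
    sxx = sum((x - mx) ** 2 for x in readings)
    sxy = sum((x - mx) * (y - my) for x, y in zip(readings, true_temps))
    a = sxy / sxx
    b = my - a * mx
    return a, b

# Hypothetical sensor readings against calibrator temperatures (20-35 C)
readings   = [20.4, 25.1, 29.8, 34.6]
true_temps = [20.0, 25.0, 30.0, 35.0]
a, b = fit_linear_calibration(readings, true_temps)
corrected = a * 27.5 + b  # apply the calibration equation to a new reading
```

Once fitted, the same equation corrects every subsequent reading without re-calibrating, which is the role the calibration equations play in the study.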
NASA Astrophysics Data System (ADS)
Dvoeglazov, V. V.
2017-05-01
We present three explicit examples of generalizations in relativistic quantum mechanics. First of all, we discuss the generalized spin-1/2 equations for neutrinos. They have been obtained by means of the Gersten-Sakurai method for derivations of arbitrary-spin relativistic equations. Possible physical consequences are discussed. Next, it is easy to check that both Dirac algebraic equations Det(p̂ - m) = 0 and Det(p̂ + m) = 0 for u- and v- 4-spinors have solutions with p_0 = ±E_p = ±√(p² + m²). The same is true for higher-spin equations. Meanwhile, every book considers the equality p_0 = E_p for both u- and v- spinors of the (1/2, 0) ⊕ (0, 1/2) representation only, thus applying the Dirac-Feynman-Stueckelberg procedure for elimination of the negative-energy solutions. The recent Ziino works (and, independently, the articles of several others) show that the Fock space can be doubled. We re-consider this possibility on the quantum field level for both S = 1/2 and higher spin particles. The third example is: we postulate the non-commutativity of 4-momenta, and we derive the mass splitting in the Dirac equation. Some applications are discussed.
Chen, I L; Chen, J T; Kuo, S R; Liang, M T
2001-03-01
Integral equation methods have been widely used to solve interior eigenproblems and exterior acoustic problems (radiation and scattering). It was recently found that the real-part boundary element method (BEM) for the interior problem results in spurious eigensolutions if the singular (UT) or the hypersingular (LM) equation is used alone. The real-part BEM results in spurious solutions for interior problems in a similar way that the singular integral equation (UT method) results in fictitious solutions for the exterior problem. To solve this problem, a Combined Helmholtz Exterior integral Equation Formulation method (CHEEF) is proposed. Based on the CHEEF method, the spurious solutions can be filtered out if additional constraints from the exterior points are chosen carefully. Finally, two examples for the eigensolutions of circular and rectangular cavities are considered. The optimum numbers and proper positions for selecting the points in the exterior domain are analytically studied. Also, numerical experiments were designed to verify the analytical results. It is worth pointing out that the nodal line of radiation mode of a circle can be rotated due to symmetry, while the nodal line of the rectangular is on a fixed position.
Errors introduced by dose scaling for relative dosimetry
Watanabe, Yoichi; Hayashi, Naoki
2012-01-01
Some dosimeters require a relationship between detector signal and delivered dose. The relationship (characteristic curve or calibration equation) usually depends on the environment under which the dosimeters are manufactured or stored. To compensate for the difference in radiation response among different batches of dosimeters, the measured dose can be scaled by normalizing the measured dose to a specific dose. Such a procedure, often called “relative dosimetry”, allows us to skip the time‐consuming production of a calibration curve for each irradiation. In this study, the magnitudes of errors due to the dose scaling procedure were evaluated by using the characteristic curves of BANG3 polymer gel dosimeter, radiographic EDR2 films, and GAFCHROMIC EBT2 films. Several sets of calibration data were obtained for each type of dosimeters, and a calibration equation of one set of data was used to estimate doses of the other dosimeters from different batches. The scaled doses were then compared with expected doses, which were obtained by using the true calibration equation specific to each batch. In general, the magnitude of errors increased with increasing deviation of the dose scaling factor from unity. Also, the errors strongly depended on the difference in the shape of the true and reference calibration curves. For example, for the BANG3 polymer gel, of which the characteristic curve can be approximated with a linear equation, the error for a batch requiring a dose scaling factor of 0.87 was larger than the errors for other batches requiring smaller magnitudes of dose scaling, or scaling factors of 0.93 or 1.02. The characteristic curves of EDR2 and EBT2 films required nonlinear equations. With those dosimeters, errors larger than 5% were commonly observed in the dose ranges of below 50% and above 150% of the normalization dose. 
In conclusion, the dose scaling for relative dosimetry introduces large errors in the measured doses when a large dose scaling is applied, and this procedure should be applied with special care. PACS numbers: 87.56.Da, 06.20.Dk, 06.20.fb PMID:22955658
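The error mechanism described above, where a scaled reference calibration curve disagrees with the batch-specific true curve away from the normalization dose, can be illustrated with hypothetical characteristic curves. All coefficients and doses here are invented for illustration and are not taken from the study.

```python
def invert(curve, signal, lo=0.0, hi=500.0, tol=1e-8):
    """Numerically invert a monotonic signal-vs-dose curve by bisection."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if curve(mid) < signal:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Hypothetical characteristic curves: signal as a function of dose (cGy)
true_cal = lambda d: 0.012 * d - 8e-6 * d * d   # new batch, mildly nonlinear
ref_cal  = lambda d: 0.010 * d                  # reference batch, linear

D_NORM = 200.0  # normalization dose
# The scaling factor makes the reference-curve estimate exact at D_NORM
scale = D_NORM / invert(ref_cal, true_cal(D_NORM))

for dose in (50.0, 200.0, 300.0):
    est = scale * invert(ref_cal, true_cal(dose))
    err = 100.0 * (est - dose) / dose
    print(f"dose {dose:5.0f} cGy -> scaled estimate {est:6.1f} ({err:+.1f}%)")
```

By construction the error vanishes at the normalization dose and grows as the dose moves away from it, mirroring the >5% errors the study reports below 50% and above 150% of the normalization dose.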
Shackelford, S D; Wheeler, T L; Koohmaraie, M
2003-01-01
The present experiment was conducted to evaluate the ability of the U.S. Meat Animal Research Center's beef carcass image analysis system to predict calculated yield grade, longissimus muscle area, preliminary yield grade, adjusted preliminary yield grade, and marbling score under commercial beef processing conditions. In two commercial beef-processing facilities, image analysis was conducted on 800 carcasses on the beef-grading chain immediately after the conventional USDA beef quality and yield grades were applied. Carcasses were blocked by plant and observed calculated yield grade. The carcasses were then separated, with 400 carcasses assigned to a calibration data set that was used to develop regression equations, and the remaining 400 carcasses assigned to a prediction data set used to validate the regression equations. Prediction equations, which included image analysis variables and hot carcass weight, accounted for 90, 88, 90, 88, and 76% of the variation in calculated yield grade, longissimus muscle area, preliminary yield grade, adjusted preliminary yield grade, and marbling score, respectively, in the prediction data set. In comparison, the official USDA yield grade as applied by online graders accounted for 73% of the variation in calculated yield grade. The technology described herein could be used by the beef industry to more accurately determine beef yield grades; however, this system does not provide an accurate enough prediction of marbling score to be used without USDA grader interaction for USDA quality grading.
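The validation step described above, judging a regression equation fitted on a calibration set by the share of variation it accounts for in a held-out prediction set, amounts to computing R². A minimal sketch with made-up yield-grade values (not the study's data):

```python
def r_squared(actual, predicted):
    """Proportion of variance in `actual` accounted for by `predicted`."""
    mean = sum(actual) / len(actual)
    ss_tot = sum((y - mean) ** 2 for y in actual)
    ss_res = sum((y - p) ** 2 for y, p in zip(actual, predicted))
    return 1.0 - ss_res / ss_tot

# Hypothetical calculated vs predicted yield grades for a validation set
observed  = [2.1, 3.4, 2.8, 4.0, 3.1, 2.5]
predicted = [2.3, 3.2, 2.9, 3.8, 3.3, 2.4]
print(f"R^2 = {r_squared(observed, predicted):.2f}")  # -> R^2 = 0.92
```

Computing R² on a prediction set separate from the calibration set, as the study does, guards against the equation merely memorizing its fitting data.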
Consequences of Violated Equating Assumptions under the Equivalent Groups Design
ERIC Educational Resources Information Center
Lyren, Per-Erik; Hambleton, Ronald K.
2011-01-01
The equal ability distribution assumption associated with the equivalent groups equating design was investigated in the context of a selection test for admission to higher education. The purpose was to assess the consequences for the test-takers in terms of receiving improperly high or low scores compared to their peers, and to find strong…
ERIC Educational Resources Information Center
Schweizer, Karl
2008-01-01
Structural equation modeling provides the framework for investigating experimental effects on the basis of variances and covariances in repeated measurements. A special type of confirmatory factor analysis as part of this framework enables the appropriate representation of the experimental effect and the separation of experimental and…
ERIC Educational Resources Information Center
Yavuz, Mustafa
2009-01-01
Discovering what determines students' success in the Secondary Education Institutional Exam is very important to parents and it is also critical for students, teachers, directors, and researchers. Research was carried out by studying the related literature and structural equation modeling techniques. A structural model was created that consisted…
Score Equating and Item Response Theory: Some Practical Considerations.
ERIC Educational Resources Information Center
Cook, Linda L.; Eignor, Daniel R.
The purposes of this paper are five-fold to discuss: (1) when item response theory (IRT) equating methods should provide better results than traditional methods; (2) which IRT model, the three-parameter logistic or the one-parameter logistic (Rasch), is the most reasonable to use; (3) what unique contributions IRT methods can offer the equating…
Use of Item Parceling in Structural Equation Modeling with Missing Data
ERIC Educational Resources Information Center
Orcan, Fatih
2013-01-01
Parceling is referred to as a procedure for computing sums or average scores across multiple items. Parcels instead of individual items are then used as indicators of latent factors in the structural equation modeling analysis (Bandalos 2002, 2008; Little et al., 2002; Yang, Nay, & Hoyle, 2010). Item parceling may be applied to alleviate some…
Fitting Data to Model: Structural Equation Modeling Diagnosis Using Two Scatter Plots
ERIC Educational Resources Information Center
Yuan, Ke-Hai; Hayashi, Kentaro
2010-01-01
This article introduces two simple scatter plots for model diagnosis in structural equation modeling. One plot contrasts a residual-based M-distance of the structural model with the M-distance for the factor score. It contains information on outliers, good leverage observations, bad leverage observations, and normal cases. The other plot contrasts…
Flipping an Algebra Classroom: Analyzing, Modeling, and Solving Systems of Linear Equations
ERIC Educational Resources Information Center
Kirvan, Rebecca; Rakes, Christopher R.; Zamora, Regie
2015-01-01
The present study investigated whether flipping an algebra classroom led to a stronger focus on conceptual understanding and improved learning of systems of linear equations for 54 seventh- and eighth-grade students using teacher journal data and district-mandated unit exam items. Multivariate analysis of covariance was used to compare scores on…
A Primer-Test Centered Equating Method for Setting Cut-Off Scores
ERIC Educational Resources Information Center
Zhu, Weimo; Plowman, Sharon Ann; Park, Youngsik
2010-01-01
This study evaluated the use of a new primary field test method based on test equating to address inconsistent classification among field tests. We analyzed students' information on the Progressive Aerobic Cardiovascular Endurance Run (PACER), mile run (MR), and VO[subscript 2]max from three data sets (college: n = 94; middle school: n = 39;…
Observed-Score Equating as a Test Assembly Problem.
ERIC Educational Resources Information Center
van der Linden, Wim J.; Luecht, Richard M.
1998-01-01
Derives a set of linear conditions of item-response functions that guarantees identical observed-score distributions on two test forms. The conditions can be added as constraints to a linear programming model for test assembly. An example illustrates the use of the model for an item pool from the Law School Admissions Test (LSAT). (SLD)
ERIC Educational Resources Information Center
Taylor, Zachary W.
2017-01-01
A recent Educational Testing Services report (2016) found that international graduate students with a TOEFL score of 80--the minimum average TOEFL score for graduate admission in the United States--usually possess reading subscores of 20, equating to a 12th-grade reading comprehension level. However, one public flagship university's international…
ERIC Educational Resources Information Center
Ehrlich, Stacy B.; Gwynne, Julia A.; Stitziel Pareja, Amber; Allensworth, Elaine M.; Moore, Paul; Jagesic, Sanja; Sorice, Elizabeth
2014-01-01
Significant attention is currently focused on ensuring that children are enrolled in preschool. However, regular attendance is also critically important. Children with better preschool attendance have higher kindergarten readiness scores; this is especially true for students entering with low skills. Unfortunately, many preschool-aged children are…
ERIC Educational Resources Information Center
Meyer, J. Patrick; Cash, Anne H.; Mashburn, Andrew
2011-01-01
Student-teacher interactions are dynamic relationships that change and evolve over the course of a school year. Measuring classroom quality through observations that focus on these interactions presents challenges when observations are conducted throughout the school year. Variability in observed scores could reflect true changes in the quality of…
Coping with University-Related Problems: A Cross-cultural Comparison.
ERIC Educational Resources Information Center
Essau, Cecilia Ahmoi; Trommsdorff, Gisela
1996-01-01
Compares problem- and emotion-focused coping in students from North America, Germany, and Malaysia to determine the association between coping and physical symptoms. Results with 365 undergraduates found that North Americans and Germans with higher scores on emotion-focused coping had fewer symptoms, although the reverse was true for Malaysians.…
An Illustrative Example of Propensity Score Matching with Education Research
ERIC Educational Resources Information Center
Lane, Forrest C.; To, Yen M.; Shelley, Kyna; Henson, Robin K.
2012-01-01
Researchers may be interested in examining the impact of programs that prepare youth and adults for successful careers but unable to implement experimental designs with true randomization of participants. As a result, these studies can be compromised by underlying factors that impact group selection and thus lead to potentially biased results.…
Born to Burnout: A Meta-Analytic Path Model of Personality, Job Burnout, and Work Outcomes
ERIC Educational Resources Information Center
Swider, Brian W.; Zimmerman, Ryan D.
2010-01-01
We quantitatively summarized the relationship between Five-Factor Model personality traits, job burnout dimensions (emotional exhaustion, depersonalization, and personal accomplishment), and absenteeism, turnover, and job performance. All five of the Five-Factor Model personality traits had multiple true score correlations of 0.57 with emotional…
ERIC Educational Resources Information Center
Morgan, Grant B.; Zhu, Min; Johnson, Robert L.; Hodge, Kari J.
2014-01-01
Common estimators of interrater reliability include Pearson product-moment correlation coefficients, Spearman rank-order correlations, and the generalizability coefficient. The purpose of this study was to examine the accuracy of estimators of interrater reliability when varying the true reliability, number of scale categories, and number of…
ERIC Educational Resources Information Center
Ayyad, Fatma
2011-01-01
When factorial invariance is established across translated forms of an instrument, the meaning of the construct crosses language/cultures. If factorial invariance is not established, score discrepancies may represent true language group differences or faulty translation. This study seeks to disentangle this by determining whether…
Fuzzy expert system for diagnosing diabetic neuropathy.
Rahmani Katigari, Meysam; Ayatollahi, Haleh; Malek, Mojtaba; Kamkar Haghighi, Mehran
2017-02-15
To design a fuzzy expert system to help detect and diagnose the severity of diabetic neuropathy. The research was completed in 2014 and consisted of two main phases. In the first phase, the diagnostic parameters were determined based on the literature review and by investigating specialists' perspectives (n = 8). In the second phase, 244 medical records related to the patients who were visited in an endocrinology and metabolism research centre during the first six months of 2014 and were primarily diagnosed with diabetic neuropathy were used to test the sensitivity, specificity, and accuracy of the fuzzy expert system. The final diagnostic parameters included the duration of diabetes, the score of a symptom examination based on the Michigan questionnaire, the score of a sign examination based on the Michigan questionnaire, the glycosylated haemoglobin level, fasting blood sugar, blood creatinine, and albuminuria. The output variable was the severity of diabetic neuropathy, which was expressed as a number between zero and 10 and divided into four categories: absence of the disease, mild, moderate, and severe. The interface of the system was designed in ASP.Net (Active Server Pages Network Enabled Technology), and the system function was tested in terms of sensitivity (true positive rate) (89%), specificity (true negative rate) (98%), and accuracy (the proportion of true results, both positive and negative) (93%). The system designed in this study can help specialists and general practitioners to diagnose the disease more quickly to improve the quality of care for patients.
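The three reported figures follow from the standard confusion-matrix definitions; a minimal sketch with illustrative counts (the study's actual confusion matrix is not given in the abstract):

```python
def classification_metrics(tp, fp, tn, fn):
    """Sensitivity, specificity and accuracy from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)                  # true positive rate
    specificity = tn / (tn + fp)                  # true negative rate
    accuracy = (tp + tn) / (tp + fp + tn + fn)    # proportion of true results
    return sensitivity, specificity, accuracy

# Illustrative counts for a 244-record validation set (invented values)
sens, spec, acc = classification_metrics(tp=160, fp=1, tn=63, fn=20)
print(f"sensitivity={sens:.2f} specificity={spec:.2f} accuracy={acc:.2f}")
```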
Salvador, Renato; Pesenti, Elisa; Gobbi, Laura; Capovilla, Giovanni; Spadotto, Lorenzo; Voltarel, Guerrino; Cavallin, Francesco; Nicoletti, Loredana; Valmasoni, Michele; Ruol, Alberto; Merigliano, Stefano; Costantini, Mario
2017-01-01
The most common complication after laparoscopic Heller-Dor (LHD) is gastroesophageal reflux disease (GERD). The present study aimed (a) to analyze the true incidence of postoperative reflux by objectively assessing a large group of LHD patients and (b) to see whether the presence of typical GERD symptoms correlates with the real incidence of postoperative reflux. After LHD, patients were assessed by means of a symptom score, endoscopy, esophageal manometry, and 24-h pH monitoring. Patients were assigned to three groups: those did not accept to perform 24-h pH monitoring (group NP); those with normal postoperative pH findings (group A); and those with pathological postoperative acid exposure (group B). Four hundred sixty-three of the 806 LHD patients agreed to undergo follow-up 24-h pH monitoring. Normal pH findings were seen in 423 patients (group A, 91.4 %), while 40 (8.6 %) had a pathological acid exposure (group B). The median symptom scores were similar: 3.0 (IQR 0-8) in group A and 6.0 (IQR 0-10) in group B (p = 0.29). At endoscopy, the percentage of esophagitis was also similar (11 % in group A, 19 % in group B; p = 0.28). This study demonstrated that, after LHD was performed by experienced surgeons, the true incidence of postoperative GERD is very low. The incidence of this possible complication should be assessed by pH monitoring because endoscopic findings and symptoms may be misleading.
Objective Assessment of Listening Effort: Coregistration of Pupillometry and EEG.
Miles, Kelly; McMahon, Catherine; Boisvert, Isabelle; Ibrahim, Ronny; de Lissa, Peter; Graham, Petra; Lyxell, Björn
2017-01-01
Listening to speech in noise is effortful, particularly for people with hearing impairment. While it is known that effort is related to a complex interplay between bottom-up and top-down processes, the cognitive and neurophysiological mechanisms contributing to effortful listening remain unknown. Therefore, a reliable physiological measure to assess effort remains elusive. This study aimed to determine whether pupil dilation and alpha power change, two physiological measures suggested to index listening effort, assess similar processes. Listening effort was manipulated by parametrically varying spectral resolution (16- and 6-channel noise vocoding) and speech reception thresholds (SRT; 50% and 80%) while 19 young, normal-hearing adults performed a speech recognition task in noise. Results of off-line sentence scoring showed discrepancies between the target SRTs and the true performance obtained during the speech recognition task. For example, in the SRT80% condition, participants scored an average of 64.7%. Participants' true performance levels were therefore used for subsequent statistical modelling. Results showed that both measures appeared to be sensitive to changes in spectral resolution (channel vocoding), while only pupil dilation was also significantly related to true performance levels (%) and task accuracy (i.e., whether the response was correctly or partially recalled). The two measures were not correlated, suggesting they each may reflect different cognitive processes involved in listening effort. This combination of findings contributes to a growing body of research aiming to develop an objective measure of listening effort.
Magnetic Moment Quantifications of Small Spherical Objects in MRI
Cheng, Yu-Chung N.; Hsieh, Ching-Yi; Tackett, Ronald; Kokeny, Paul; Regmi, Rajesh Kumar; Lawes, Gavin
2014-01-01
Purpose The purpose of this work is to develop a method for accurately quantifying effective magnetic moments of spherical-like small objects from magnetic resonance imaging (MRI). A standard 3D gradient echo sequence with only one echo time is intended for our approach to measure the effective magnetic moment of a given object of interest. Methods Our method sums over complex MR signals around the object and equates those sums to equations derived from the magnetostatic theory. With those equations, our method is able to determine the center of the object with subpixel precision. By rewriting those equations, the effective magnetic moment of the object becomes the only unknown to be solved. Each quantified effective magnetic moment has an uncertainty that is derived from the error propagation method. If the volume of the object can be measured from spin echo images, the susceptibility difference between the object and its surrounding can be further quantified from the effective magnetic moment. Numerical simulations, a variety of glass beads in phantom studies with different MR imaging parameters from a 1.5 T machine, and measurements from a SQUID (superconducting quantum interference device) based magnetometer have been conducted to test the robustness of our method. Results Quantified effective magnetic moments and susceptibility differences from different imaging parameters and methods all agree with each other within two standard deviations of estimated uncertainties. Conclusion An MRI method is developed to accurately quantify the effective magnetic moment of a given small object of interest. Most results are accurate within 10% of true values and roughly half of the total results are accurate within 5% of true values using very reasonable imaging parameters. Our method is minimally affected by the partial volume, dephasing, and phase aliasing effects. Our next goal is to apply this method to in vivo studies. PMID:25490517
Magnetic moment quantifications of small spherical objects in MRI.
Cheng, Yu-Chung N; Hsieh, Ching-Yi; Tackett, Ronald; Kokeny, Paul; Regmi, Rajesh Kumar; Lawes, Gavin
2015-07-01
The purpose of this work is to develop a method for accurately quantifying effective magnetic moments of spherical-like small objects from magnetic resonance imaging (MRI). A standard 3D gradient echo sequence with only one echo time is intended for our approach to measure the effective magnetic moment of a given object of interest. Our method sums over complex MR signals around the object and equates those sums to equations derived from the magnetostatic theory. With those equations, our method is able to determine the center of the object with subpixel precision. By rewriting those equations, the effective magnetic moment of the object becomes the only unknown to be solved. Each quantified effective magnetic moment has an uncertainty that is derived from the error propagation method. If the volume of the object can be measured from spin echo images, the susceptibility difference between the object and its surrounding can be further quantified from the effective magnetic moment. Numerical simulations, a variety of glass beads in phantom studies with different MR imaging parameters from a 1.5T machine, and measurements from a SQUID (superconducting quantum interference device) based magnetometer have been conducted to test the robustness of our method. Quantified effective magnetic moments and susceptibility differences from different imaging parameters and methods all agree with each other within two standard deviations of estimated uncertainties. An MRI method is developed to accurately quantify the effective magnetic moment of a given small object of interest. Most results are accurate within 10% of true values, and roughly half of the total results are accurate within 5% of true values using very reasonable imaging parameters. Our method is minimally affected by the partial volume, dephasing, and phase aliasing effects. Our next goal is to apply this method to in vivo studies. Copyright © 2015 Elsevier Inc. All rights reserved.
Camps, Vicente J; Piñero, David P; Caravaca-Arens, Esteban; de Fez, Dolores; Pérez-Cambrodí, Rafael J; Artola, Alberto
2014-09-01
The aim of this study was to obtain the exact value of the keratometric index (nkexact) and to clinically validate a variable keratometric index (nkadj) that minimizes this error. The nkexact value was determined by setting the difference (ΔPc) between keratometric corneal power (Pk) and Gaussian corneal power (here denoted P_Gauss) to 0. The nkexact was defined as the value associated with an equivalent difference in the magnitude of ΔPc for extreme values of posterior corneal radius (r2c) for each anterior corneal radius value (r1c). This nkadj was considered for the calculation of the adjusted corneal power (Pkadj). Values of r1c ∈ (4.2, 8.5) mm and r2c ∈ (3.1, 8.2) mm were considered. Differences of True Net Power with P_Gauss, Pkadj, and Pk(1.3375) were calculated in a clinical sample of 44 eyes with keratoconus. nkexact ranged from 1.3153 to 1.3396 and nkadj from 1.3190 to 1.3339 depending on the eye model analyzed. All the nkadj values fitted perfectly to 8 linear algorithms. Differences between Pkadj and P_Gauss did not exceed ±0.7 D (diopters). Clinically, nk = 1.3375 was not valid in any case. Pkadj and True Net Power and Pk(1.3375) and Pkadj were statistically different (P < 0.01), whereas no differences were found between P_Gauss and Pkadj (P > 0.01). The use of a single value of nk for the calculation of the total corneal power in keratoconus has been shown to be imprecise, leading to inaccuracies in the detection and classification of this corneal condition. Furthermore, our study shows the relevance of corneal thickness in corneal power calculations in keratoconus.
Crescendo: A Protein Sequence Database Search Engine for Tandem Mass Spectra.
Wang, Jianqi; Zhang, Yajie; Yu, Yonghao
2015-07-01
A search engine that reliably discovers more peptides is essential to the progress of computational proteomics. We propose two new scoring functions (L- and P-scores), which aim to capture characteristics of a peptide-spectrum match (PSM) similar to those captured by Sequest and Comet. Crescendo, introduced here, is a software program that implements these two scores for peptide identification. We applied Crescendo to test datasets and compared its performance with widely used search engines, including Mascot, Sequest, and Comet. The results indicate that Crescendo identifies a similar or larger number of peptides at various predefined false discovery rates (FDR). Importantly, it also provides better separation between true and decoy PSMs, warranting the future development of a companion post-processing filtering algorithm.
Hollis, Geoff
2018-04-01
Best-worst scaling is a judgment format in which participants are presented with a set of items and have to choose the superior and inferior items in the set. Best-worst scaling generates a large quantity of information per judgment because each judgment allows for inferences about the rank value of all unjudged items. This property of best-worst scaling makes it a promising judgment format for research in psychology and natural language processing concerned with estimating the semantic properties of tens of thousands of words. A variety of different scoring algorithms have been devised in the previous literature on best-worst scaling. However, due to problems of computational efficiency, these scoring algorithms cannot be applied efficiently to cases in which thousands of items need to be scored. New algorithms are presented here for converting responses from best-worst scaling into item scores for thousands of items (many-item scoring problems). These scoring algorithms are validated through simulation and empirical experiments, and considerations related to noise, the underlying distribution of true values, and trial design are identified that can affect the relative quality of the derived item scores. The newly introduced scoring algorithms consistently outperformed scoring algorithms used in the previous literature on scoring many-item best-worst data.
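The simplest scorer for best-worst data is the common "best minus worst" counting baseline (this is a generic illustration of the judgment format, not the new algorithms the Hollis paper introduces; the item words in the example are made up):

```python
from collections import defaultdict

def best_minus_worst_scores(trials):
    """trials: iterable of (presented_items, best_choice, worst_choice).
    Returns each item's (best count - worst count) / appearances."""
    best, worst, seen = defaultdict(int), defaultdict(int), defaultdict(int)
    for items, b, w in trials:
        for item in items:
            seen[item] += 1
        best[b] += 1
        worst[w] += 1
    return {item: (best[item] - worst[item]) / seen[item] for item in seen}

# Two illustrative trials over the same four hypothetical items:
trials = [(("calm", "happy", "angry", "sad"), "happy", "sad"),
          (("calm", "happy", "angry", "sad"), "happy", "angry")]
scores = best_minus_worst_scores(trials)
```

Each item's score is its number of "best" choices minus its "worst" choices, normalized by how often it appeared; the scoring algorithms evaluated in the paper are alternatives to exactly this kind of counting scheme for many-item problems.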
NASA Astrophysics Data System (ADS)
Brenner, Howard
2011-10-01
Linear irreversible thermodynamic principles are used to demonstrate, by counterexample, the existence of a fundamental incompleteness in the basic pre-constitutive mass, momentum, and energy equations governing fluid mechanics and transport phenomena in continua. The demonstration is effected by addressing the elementary case of steady-state heat conduction (and transport processes in general) occurring in quiescent fluids. The counterexample questions the universal assumption of equality of the four physically different velocities entering into the basic pre-constitutive mass, momentum, and energy conservation equations. Explicitly, it is argued that such equality is an implicit constitutive assumption rather than an established empirical fact of unquestioned authority. Such equality, if indeed true, would require formal proof of its validity, currently absent from the literature. In fact, our counterexample shows the assumption of equality to be false. As the current set of pre-constitutive conservation equations appearing in textbooks are regarded as applicable both to continua and noncontinua (e.g., rarefied gases), our elementary counterexample negating belief in the equality of all four velocities impacts on all aspects of fluid mechanics and transport processes, continua and noncontinua alike.
ERIC Educational Resources Information Center
Marcovitz, Alan B., Ed.
A particularly difficult area for many engineering students is the approximate nature of the relation between models and physical systems. This is notably true when the models consist of differential equations. An approach applied to this problem has been to use analog computers to assist in portraying the output of a model as it is progressively…
Cloud cover estimation optical package: New facility, algorithms and techniques
NASA Astrophysics Data System (ADS)
Krinitskiy, Mikhail
2017-02-01
Short- and long-wave radiation are important components of the surface heat budget over sea and land. Estimating them requires accurate observations of cloud cover. While cloud cover is widely observed visually, building accurate parameterizations also requires that it be quantified with precise instrumental measurements. Major disadvantages of most existing cloud cameras are their complicated design and the inaccuracy of post-processing algorithms, which typically result in uncertainties of 20% to 30% in camera-based estimates of cloud cover. The accuracy of these types of algorithm in terms of true scoring compared to human-observed values is typically less than 10%. We developed a new-generation package for cloud cover estimation, which provides much more accurate results and also allows additional characteristics to be measured. A new algorithm, SAIL GrIx, based on a routine approach, was also developed for this package. It uses a synthetic controlling index (the "grayness rate index") that suppresses the background sunburn effect, making it possible to increase the reliability of detecting optically thin clouds. The accuracy of this algorithm in terms of true scoring reached 30%. Another approach, SAIL GrIx ML, uses machine learning along with other signal-processing techniques to further increase cloud cover estimation accuracy. Sun disk condition appears to be a strong feature in this kind of model. An artificial neural network model demonstrates the best quality, with accuracy in terms of true scoring increasing up to 95.5%. Application of the new algorithm let us modify the design of the optical sensing package and avoid the use of solar trackers, which made the design of the cloud camera much more compact. The new cloud camera has already been tested in several missions across the Atlantic and Indian oceans aboard IORAS research vessels.
Temporal response improvement for computed tomography fluoroscopy
NASA Astrophysics Data System (ADS)
Hsieh, Jiang
1997-10-01
Computed tomography fluoroscopy (CTF) has attracted significant attention recently, mainly due to the growing clinical application of CTF in interventional procedures such as guided biopsy. Although many studies have examined its clinical efficacy, little attention has been paid to the temporal response and inherent limitations of the CTF system. For example, during a biopsy operation, when the needle is inserted at a relatively high speed, the true needle position will not be correctly depicted in the CTF image due to the time delay. This could result in an overshoot or misplacement of the biopsy needle by the operator. In this paper, we first perform a detailed analysis of the temporal response of CTF by deriving a set of equations that describe the average location of a moving object observed by the CTF system. The accuracy of the equations is verified by computer simulations and experiments. We show that the CT reconstruction process acts as a low-pass filter on the motion function. As a result, there is an inherent time delay in the CTF depiction of the true biopsy needle motion and location. Based on this study, we propose a generalized underscan weighting scheme that significantly improves the performance of CTF in terms of time lag and delay.
The frequency-difference and frequency-sum acoustic-field autoproducts.
Worthmann, Brian M; Dowling, David R
2017-06-01
The frequency-difference and frequency-sum autoproducts are quadratic products of solutions of the Helmholtz equation at two different frequencies (ω+ and ω−), and may be constructed from the Fourier transform of any time-domain acoustic field. Interestingly, the autoproducts may carry wave-field information at the difference (ω+ − ω−) and sum (ω+ + ω−) frequencies even though these frequencies may not be present in the original acoustic field. This paper provides analytical and simulation results that justify and illustrate this possibility, and indicate its limitations. The analysis is based on the inhomogeneous Helmholtz equation and its solutions while the simulations are for a point source in a homogeneous half-space bounded by a perfectly reflecting surface. The analysis suggests that the autoproducts have a spatial phase structure similar to that of a true acoustic field at the difference and sum frequencies if the in-band acoustic field is a plane or spherical wave. For multi-ray-path environments, this phase structure similarity persists in portions of the autoproduct fields that are not suppressed by bandwidth averaging. Discrepancies between the bandwidth-averaged autoproducts and true out-of-band acoustic fields (with potentially modified boundary conditions) scale inversely with the product of the bandwidth and ray-path arrival time differences.
Does the Aristotle Score predict outcome in congenital heart surgery?
Kang, Nicholas; Tsang, Victor T; Elliott, Martin J; de Leval, Marc R; Cole, Timothy J
2006-06-01
The Aristotle Score has been proposed as a measure of 'complexity' in congenital heart surgery, and a tool for comparing performance amongst different centres. To date, however, it remains unvalidated. We examined whether the Basic Aristotle Score was a useful predictor of mortality following open-heart surgery, and compared it to the Risk Adjustment in Congenital Heart Surgery (RACHS-1) system. We also examined the ability of the Aristotle Score to measure performance. The Basic Aristotle Score and RACHS-1 risk categories were assigned retrospectively to 1085 operations involving cardiopulmonary bypass in children less than 18 years of age. Multiple logistic regression analysis was used to determine the significance of the Aristotle Score and RACHS-1 category as independent predictors of in-hospital mortality. Operative performance was calculated using the Aristotle equation: performance = complexity x survival. Multiple logistic regression identified RACHS-1 category to be a powerful predictor of mortality (Wald 17.7, p < 0.0001), whereas Aristotle Score was only weakly associated with mortality (Wald 4.8, p = 0.03). Age at operation and bypass time were also highly significant predictors of postoperative death (Wald 13.7 and 33.8, respectively, p < 0.0001 for both). Operative performance was measured at 7.52 units. The Basic Aristotle Score was only weakly associated with postoperative mortality in this series. Operative performance appeared to be inflated by the fact that the overall complexity of cases was relatively high in this series. An alternative equation (performance = complexity/mortality) is proposed as a fairer and more logical method of risk-adjustment.
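The two performance formulas discussed in this abstract (the Aristotle equation and the authors' proposed alternative) reduce to one-line computations. A minimal sketch; the complexity, survival, and mortality figures in the example are made up for illustration, not this series' data:

```python
def aristotle_performance(mean_complexity, survival_rate):
    # Aristotle equation: performance = complexity x survival
    return mean_complexity * survival_rate

def alternative_performance(mean_complexity, mortality_rate):
    # Alternative proposed in the paper: performance = complexity / mortality
    return mean_complexity / mortality_rate

# Illustrative figures only:
p_aristotle = aristotle_performance(mean_complexity=8.0, survival_rate=0.94)
p_alternative = alternative_performance(mean_complexity=8.0, mortality_rate=0.05)
```

The paper's objection is visible in the first formula: because survival is close to 1 whenever complexity is high, performance tracks complexity almost directly, which is why a high-complexity caseload can inflate the measured performance.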
A Simple Equation to Predict a Subscore's Value
ERIC Educational Resources Information Center
Feinberg, Richard A.; Wainer, Howard
2014-01-01
Subscores are often used to indicate test-takers' relative strengths and weaknesses and so help focus remediation. But a subscore is not worth reporting if it is too unreliable to believe or if it contains no information that is not already contained in the total score. It is possible, through the use of a simple linear equation provided in…
Bruno, Rosa Maria; Grassi, Guido; Seravalle, Gino; Savoia, Carmine; Rizzoni, Damiano; Virdis, Agostino
2018-04-23
Small-artery remodeling is an early feature of target organ damage in hypertension and retains a negative prognostic value. The aim of the study is to establish age- and sex-specific reference values for media/lumen in small arteries obtained in humans by biopsy. Data from 91 healthy individuals and 200 individuals with cardiovascular risk factors in primary prevention from 4 Italian centers were pooled. Sex-specific equations for media/lumen in the healthy subpopulation, modeled as a function of age, were calculated. These equations were used to calculate predicted media/lumen values in individuals with risk factors and Z scores. The association between classical risk factors and Z scores was then explored by multiple regression analysis. A second-degree polynomial equation model was chosen to obtain sex-specific equations for media/lumen as a function of age. In the population with risk factors (111 men, age 50.5±14.0 years, hypertension 80.5%), media/lumen Z scores were independently associated with body mass index (standardized β=0.293, P =0.0001), total cholesterol (β=0.191, P =0.031), current smoking (β=0.238, P =0.0005), fasting blood glucose (β=0.204, P =0.003), systolic blood pressure (β=0.233, P =0.023), and female sex (β=0.799, P =0.038). A significant interaction between female sex and total cholesterol was found (β=-0.979, P =0.014). Results were substantially similar in the hypertensive subgroup. A method to calculate individual values of remodeling and growth index based on reference values was also presented. Age- and sex-specific percentiles of media/lumen in a healthy population were estimated. In a predominantly hypertensive population, media/lumen Z scores were associated with major cardiovascular risk factors, including body mass index, cholesterol, smoking, glucose, and systolic blood pressure. Significant sex differences were observed. © 2018 American Heart Association, Inc.
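A Z score of media/lumen against an age-predicted norm, as described in this abstract, can be sketched as follows. The second-degree polynomial form matches the abstract, but the coefficients and residual SD below are placeholders, not the paper's fitted values:

```python
def media_lumen_z(observed_ml, age, coeffs, residual_sd):
    """Z score of an observed media/lumen ratio against the age-predicted
    reference value. coeffs = (c0, c1, c2) of a second-degree polynomial
    in age; all numeric values used below are illustrative only."""
    c0, c1, c2 = coeffs
    predicted = c0 + c1 * age + c2 * age ** 2
    return (observed_ml - predicted) / residual_sd

# Placeholder coefficients and SD, for illustration only:
z = media_lumen_z(observed_ml=1.8, age=50,
                  coeffs=(1.0, 0.01, 0.0), residual_sd=0.2)
```

A positive Z score means the measured media/lumen exceeds the healthy-population prediction for that age and sex, which is how the risk-factor associations in the abstract are expressed.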
Chan, Hiok Yang; Chen, Jerry Yongqiang; Zainul-Abidin, Suraya; Ying, Hao; Koo, Kevin; Rikhraj, Inderjeet Singh
2017-05-01
The American Orthopaedic Foot & Ankle Society (AOFAS) score is one of the most common and adapted outcome scales in hallux valgus surgery. However, AOFAS is predominantly physician based and not patient based. Although it may be straightforward to derive statistical significance, it may not equate to the true subjective benefit of the patient's experience. There is a paucity of literature defining MCID for AOFAS in hallux valgus surgery although it could have a great impact on the accuracy of analyzing surgical outcomes. Hence, the primary aim of this study was to define the Minimal Clinically Important Difference (MCID) for the AOFAS score in these patients, and the secondary aim was to correlate patients' demographics to the MCID. We conducted a retrospective cross-sectional study. A total of 446 patients were reviewed preoperatively and followed up for 2 years. An anchor question was asked 2 years postoperation: "How would you rate the overall results of your treatment for your foot and ankle condition?" (excellent, very good, good, fair, poor, terrible). The MCID was derived using 4 methods, 3 from an anchor-based approach and 1 from a distribution-based approach. Anchor-based approaches were (1) mean difference in 2-year AOFAS scores of patients who answered "good" versus "fair" based on the anchor question; (2) mean change of AOFAS score preoperatively and at 2-year follow-up in patients who answered good; (3) receiver operating characteristic (ROC) curves method, where the area under the curve (AUC) represented the likelihood that the scoring system would accurately discriminate these 2 groups of patients. The distribution-based approach used to calculate MCID was the effect size method. There were 405 (90.8%) females and 41 (9.2%) males. Mean age was 51.2 (standard deviation [SD] = 13) years, mean preoperative BMI was 24.2 (SD = 4.1). 
Mean preoperative AOFAS score was 55.6 (SD = 16.8), with significant improvement to 85.7 (SD = 14.4) in 2 years ( P value < .001). There were no statistical differences between demographics or preoperative AOFAS scores of patients with good versus fair satisfaction levels. At 2 years, patients who had good satisfaction had higher AOFAS scores than fair satisfaction (83.9 vs 78.1, P < .001) and higher mean change (30.2 vs 22.3, P = .015). Mean change in AOFAS score in patients with good satisfaction was 30.2 (SD = 19.8). Mean difference in good versus fair satisfaction was 7.9. Using ROC analysis, the cut-off point is 29.0, with an area under the curve (AUC) of 0.62. Effect size method derived an MCID of 8.4 with a moderate effect size of 0.5. Multiple linear regression demonstrated increasing age (β = -0.129, CI = -0.245, -0.013, P = .030) and higher preoperative AOFAS score (β = -0.874, CI = -0.644, -0.081, P < .001) to significantly decrease the amount of change in the AOFAS score. The MCID of AOFAS score in hallux valgus surgery was 7.9 to 30.2. The MCID can ensure clinical improvement from a patient's perspective and also aid in interpreting results from clinical trials and other studies. Level III, retrospective comparative series.
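Two of the four MCID approaches described above (the distribution-based effect-size method and the anchor-based mean-difference method) reduce to simple formulas. A sketch using the abstract's preoperative SD of 16.8; the group score lists are illustrative, not the study's patient data:

```python
def mcid_effect_size(baseline_sd, effect_size=0.5):
    # Distribution-based MCID: effect size (0.5 = moderate) times the SD
    # of the preoperative scores.
    return effect_size * baseline_sd

def mcid_anchor_mean_diff(scores_good, scores_fair):
    # Anchor-based MCID: difference in mean 2-year scores between the
    # "good" and "fair" satisfaction groups.
    mean = lambda xs: sum(xs) / len(xs)
    return mean(scores_good) - mean(scores_fair)

mcid_dist = mcid_effect_size(16.8)                    # 8.4, as in the abstract
mcid_anchor = mcid_anchor_mean_diff([84, 84], [78, 74])  # illustrative scores
```

With the reported SD of 16.8, the effect-size method reproduces the abstract's MCID of 8.4; the anchor-based variants instead compare satisfaction groups directly, which is why the study reports a range (7.9 to 30.2) rather than a single value.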
Test Accommodations and Equating Invariance on a Fifth-Grade Science Exam
ERIC Educational Resources Information Center
Huggins, Anne Corinne; Elbaum, Batya
2013-01-01
The purpose of this study is to utilize Score Equity Assessment (SEA) to examine measurement comparability and equity in reported scores on a statewide fifth-grade science assessment with respect to groups of students defined by disability status, English Language Learner status and use of test accommodations. Benefits of SEA include a focus on…
ERIC Educational Resources Information Center
Wu, Amery D.; Stone, Jake E.
2016-01-01
This article explores an approach for test score validation that examines test takers' strategies for taking a reading comprehension test. The authors formulated three working hypotheses about score validity pertaining to three types of test-taking strategy (comprehending meaning, test management, and test-wiseness). These hypotheses were…
Properties of the Narrative Scoring Scheme Using Narrative Retells in Young School-Age Children
ERIC Educational Resources Information Center
Heilmann, John; Miller, Jon F.; Nockerts, Ann; Dunaway, Claudia
2010-01-01
Purpose: To evaluate the clinical utility of the narrative scoring scheme (NSS) as an index of narrative macrostructure for young school-age children. Method: Oral retells of a wordless picture book were elicited from 129 typically developing children, ages 5-7. A series of correlations and hierarchical regression equations were completed using…
Demographically Adjusted Groups for Equating Test Scores. Research Report. ETS RR-14-30
ERIC Educational Resources Information Center
Livingston, Samuel A.
2014-01-01
In this study, I investigated 2 procedures intended to create test-taker groups of equal ability by poststratifying on a composite variable created from demographic information. In one procedure, the stratifying variable was the composite variable that best predicted the test score. In the other procedure, the stratifying variable was the…
ERIC Educational Resources Information Center
Moses, Tim; Oh, Hyeonjoo J.
2009-01-01
Pseudo Bayes probability estimates are weighted averages of raw and modeled probabilities; these estimates have been studied primarily in nonpsychometric contexts. The purpose of this study was to evaluate pseudo Bayes probability estimates as applied to the estimation of psychometric test score distributions and chained equipercentile equating…
Cho, Sun-Joo; Preacher, Kristopher J.; Bottge, Brian A.
2015-01-01
Multilevel modeling (MLM) is frequently used to detect group differences, such as an intervention effect in a pre-test–post-test cluster-randomized design. Group differences on the post-test scores are detected by controlling for pre-test scores as a proxy variable for unobserved factors that predict future attributes. The pre-test and post-test scores that are most often used in MLM are summed item responses (or total scores). In prior research, there have been concerns regarding measurement error in the use of total scores in using MLM. To correct for measurement error in the covariate and outcome, a theoretical justification for the use of multilevel structural equation modeling (MSEM) has been established. However, MSEM for binary responses has not been widely applied to detect intervention effects (group differences) in intervention studies. In this article, the use of MSEM for intervention studies is demonstrated and the performance of MSEM is evaluated via a simulation study. Furthermore, the consequences of using MLM instead of MSEM are shown in detecting group differences. Results of the simulation study showed that MSEM performed adequately as the number of clusters, cluster size, and intraclass correlation increased and outperformed MLM for the detection of group differences. PMID:29881032
Cho, Sun-Joo; Preacher, Kristopher J; Bottge, Brian A
2015-11-01
Multilevel modeling (MLM) is frequently used to detect group differences, such as an intervention effect in a pre-test-post-test cluster-randomized design. Group differences on the post-test scores are detected by controlling for pre-test scores as a proxy variable for unobserved factors that predict future attributes. The pre-test and post-test scores that are most often used in MLM are summed item responses (or total scores). In prior research, there have been concerns regarding measurement error in the use of total scores in using MLM. To correct for measurement error in the covariate and outcome, a theoretical justification for the use of multilevel structural equation modeling (MSEM) has been established. However, MSEM for binary responses has not been widely applied to detect intervention effects (group differences) in intervention studies. In this article, the use of MSEM for intervention studies is demonstrated and the performance of MSEM is evaluated via a simulation study. Furthermore, the consequences of using MLM instead of MSEM are shown in detecting group differences. Results of the simulation study showed that MSEM performed adequately as the number of clusters, cluster size, and intraclass correlation increased and outperformed MLM for the detection of group differences.
Software for determining the true displacement of faults
NASA Astrophysics Data System (ADS)
Nieto-Fuentes, R.; Nieto-Samaniego, Á. F.; Xu, S.-S.; Alaniz-Álvarez, S. A.
2014-03-01
One of the most important parameters of faults is the true (or net) displacement, which is measured by restoring two originally adjacent points, called “piercing points”, to their original positions. This measurement is not typically applicable because it is rare to observe piercing points in natural outcrops. Much more common is the measurement of the apparent displacement of a marker. Methods to calculate the true displacement of faults using descriptive geometry, trigonometry or vector algebra are common in the literature, and most of them solve a specific situation from a large number of possible combinations of the fault parameters. True displacements are not routinely calculated because it is a tedious and tiring task, despite their importance and the relatively simple methodology. We believe that the solution is to develop software capable of performing this work. In a previous publication, our research group proposed a method to calculate the true displacement of faults by solving most combinations of fault parameters using simple trigonometric equations. The purpose of this contribution is to present a computer program for calculating the true displacement of faults. The input data are the dip of the fault; the pitch angles of the markers, slickenlines and observation lines; and the marker separation. To avoid the common difficulties involved in switching between operating systems, the software is developed in the Java programming language. The computer program could be used as a tool in education and will also be useful for the calculation of the true fault displacement in geological and engineering works. The application resolves the cases with known direction of net slip, which commonly is assumed parallel to the slickenlines. This assumption is not always valid and must be used with caution, because the slickenlines are formed during a step of the incremental displacement on the fault surface, whereas the net slip is related to the finite slip.
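When piercing-point coordinates are actually available, the true (net) displacement is simply the vector between the matched points, as the abstract's definition implies. A minimal sketch of that base case (the program described above instead solves the harder, more common case of apparent separations, dips, and pitch angles; the coordinates below are made up):

```python
def net_slip(piercing_hw, piercing_fw):
    """True (net) displacement from a matched pair of piercing points on
    the hanging wall and footwall, given as (x, y, z) coordinates.
    Returns the slip vector and its magnitude (the true displacement)."""
    dx, dy, dz = (a - b for a, b in zip(piercing_hw, piercing_fw))
    magnitude = (dx ** 2 + dy ** 2 + dz ** 2) ** 0.5
    return (dx, dy, dz), magnitude

# Illustrative coordinates only:
vec, mag = net_slip((3.0, 4.0, 0.0), (0.0, 0.0, 0.0))
```

Decomposing `vec` onto the fault plane's strike and dip directions would give the strike-slip and dip-slip components; the trigonometric method in the paper reconstructs the same vector without ever observing the piercing points.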
Bruen, Catherine; Kreiter, Clarence; Wade, Vincent; Pawlikowska, Teresa
2017-01-01
Experience with simulated patients supports undergraduate learning of medical consultation skills, and adaptive simulations are being introduced into this environment. The authors investigate whether adaptive simulations can underpin valid and reliable assessment by conducting a generalizability analysis of IT data analytics from the interaction of medical students (in psychiatry) with adaptive simulations, exploring their feasibility for supporting automated learning and assessment. The generalizability (G) study focused on two clinically relevant variables: clinical decision points and communication skills. While the G study of the communication skills score yielded low levels of true score variance, the decision points, which index clinical decision-making and confirm user knowledge of the Calgary-Cambridge model of consultation, produced reliability levels similar to what might be expected with rater-based scoring. The findings indicate that adaptive simulations have potential as a teaching and assessment tool for medical consultations.
Relationship between Calcium Score and Myocardial Scintigraphy in the Diagnosis of Coronary Disease
Siqueira, Fabio Paiva Rossini; Mesquita, Claudio Tinoco; dos Santos, Alair Augusto Sarmet M. Damas; Nacif, Marcelo Souto
2016-01-01
Half of patients with coronary artery disease present with sudden death or acute infarction as the first symptom, making early diagnosis pivotal. Myocardial perfusion scintigraphy is frequently used in the assessment of these patients, but it does not detect disease without flow restriction, exposes the patient to high levels of radiation and is costly. On the other hand, with less radiological exposure, the calcium score correlates directly with the presence and extent of coronary atherosclerosis, and also with the risk of cardiovascular events. Even though the calcium score is a tried-and-true method for stratification of asymptomatic patients, its use remains limited in this context, since current guidelines are contradictory about its use in symptomatic patients. The aim of this review is to identify, in patients under investigation for coronary artery disease, the main evidence on the use of the calcium score in association with functional evaluation and scintigraphy. PMID:27437867
Rice, J P; Saccone, N L; Corbett, J
2001-01-01
The lod score method originated in a seminal article by Newton Morton in 1955. The method is broadly concerned with issues of power and the posterior probability of linkage, ensuring that a reported linkage has a high probability of being a true linkage. In addition, the method is sequential, so that pedigrees or lod curves may be combined from published reports to pool data for analysis. This approach has been remarkably successful for 50 years in identifying disease genes for Mendelian disorders. After discussing these issues, we consider the situation for complex disorders, where the maximum lod score (MLS) statistic shares some of the advantages of the traditional lod score approach but is limited by unknown power and the lack of sharing of the primary data needed to optimally combine analytic results. We may still learn from the lod score method as we explore new methods in molecular biology and genetic analysis to utilize the complete human DNA sequence and the cataloging of all human genes.
Equations for Scoring Rules When Data Are Missing
NASA Technical Reports Server (NTRS)
James, Mark
2006-01-01
A document presents equations for scoring rules in a diagnostic and/or prognostic artificial-intelligence software system of the rule-based inference-engine type. The equations define a set of metrics that characterize the evaluation of a rule when data required for the antecedent clause(s) of the rule are missing. The metrics include a primary measure denoted the rule completeness metric (RCM) plus a number of subsidiary measures that contribute to the RCM. The RCM is derived from an analysis of a rule with respect to its truth and a measure of the completeness of its input data. The derivation is such that the truth value of an antecedent is independent of the measure of its completeness. The RCM can be used to compare the degree of completeness of two or more rules with respect to a given set of data. Hence, the RCM can be used as a guide to choosing among rules during the rule-selection phase of operation of the artificial-intelligence system.
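The abstract does not give the RCM formula, but the separation it describes, truth evaluated independently of data completeness, can be sketched minimally. In this illustration (all names and the decomposition are our assumptions, not the document's), completeness is the fraction of antecedent clauses whose data are present, and truth is evaluated only over the present clauses:

```python
def rule_completeness(antecedents, data):
    """Sketch of an RCM-style metric (the exact formula is not given in
    the abstract). Each antecedent is a (key, predicate) pair evaluated
    against a data dict. Completeness is the fraction of antecedents
    whose required datum is present; truth is evaluated only over the
    present ones, so the two measures are independent, as described.
    """
    present = [(k, p) for k, p in antecedents if k in data]
    completeness = len(present) / len(antecedents)
    truth = all(p(data[k]) for k, p in present)
    return truth, completeness
```

Two rules can then be compared on completeness with respect to the same data set, as the abstract suggests for rule selection.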
Evaluating Bias of Sequential Mixed-Mode Designs against Benchmark Surveys
ERIC Educational Resources Information Center
Klausch, Thomas; Schouten, Barry; Hox, Joop J.
2017-01-01
This study evaluated three types of bias--total, measurement, and selection bias (SB)--in three sequential mixed-mode designs of the Dutch Crime Victimization Survey: telephone, mail, and web, where nonrespondents were followed up face-to-face (F2F). In the absence of true scores, all biases were estimated as mode effects against two different…
ERIC Educational Resources Information Center
Chapman, Michael; McBride, Michelle L.
1992-01-01
Children of 4 to 10 years of age were given 2 class inclusion tasks. Younger children's performance was inflated by guessing. Scores were higher in the marked task than in the unmarked task as a result of differing rates of inclusion logic. Children's verbal justifications closely approximated estimates of their true competence. (GLR)
USDA-ARS?s Scientific Manuscript database
Physical activity (PA) protects against coronary heart disease (CHD) by favorably altering several CHD risk factors. In order to best understand the true nature of the relationship between PA and CHD, the impact different PA assessment methods have on the relationships must first be clarified. The p...
Japan's Teachers Earn Tenure on Day One
ERIC Educational Resources Information Center
Ahn, Ruth; Asanuma, Shigeru; Mori, Hisayoshi
2016-01-01
Teachers in Japan earn tenure on their first day of employment--not after two years of experience based on evaluations of teaching performance or student test scores. This is almost too good to be true. If tenure is so easy to attain, how do the Japanese make sure their teachers, especially novice teachers hired with little teaching experience,…
Type I Error Inflation for Detecting DIF in the Presence of Impact
ERIC Educational Resources Information Center
DeMars, Christine E.
2010-01-01
In this brief explication, two challenges for using differential item functioning (DIF) measures when there are large group differences in true proficiency are illustrated. Each of these difficulties may lead to inflated Type I error rates, for very different reasons. One problem is that groups matched on observed score are not necessarily well…
Helping Students Prepare for Qualifying Exams; A Summary of WCRA Institute III.
ERIC Educational Resources Information Center
Parmer, Lorraine
This paper describes several learning laboratory program approaches to teaching students how to prepare for professional school admission exams. That these exams are true aptitude tests is a myth repeatedly deflated when students study for the tests and manage to score significantly higher on a second testing. Factors in addition to intelligence…
An Approach to Biased Item Identification Using Latent Trait Measurement Theory.
ERIC Educational Resources Information Center
Rudner, Lawrence M.
Because it is a true score model employing item parameters which are independent of the examined sample, item characteristic curve theory (ICC) offers several advantages over classical measurement theory. In this paper an approach to biased item identification using ICC theory is described and applied. The ICC theory approach is attractive in that…
Validity of a Jump Mat for assessing Countermovement Jump Performance in Elite Rugby Players.
Dobbin, Nick; Hunwicks, Richard; Highton, Jamie; Twist, Craig
2017-02-01
This study determined the validity of the Just Jump System® (JJS) for measuring flight time, jump height and peak power output (PPO) in elite rugby league players. 37 elite rugby league players performed 6 countermovement jumps (CMJ; 3 with and 3 without arms) on a jump mat and a force platform. A sub-sample (n=28) was used to cross-validate the equations for flight time, jump height and PPO. The JJS systematically overestimated flight time and jump height compared to the force platform (P<0.05), but demonstrated strong associations for flight time (with arms R²=0.938; without R²=0.972) and jump height (with R²=0.945; without R²=0.987). Our equations revealed no systematic difference between corrected and force platform scores and improved the agreement for flight time (ratio limits of agreement: with 1.00 vs. 1.36; without 1.00 vs. 1.16) and jump height (with 1.01 vs. 1.34; without 1.01 vs. 1.15), meaning that our equations can be used to correct JJS scores for elite rugby players. While our equation improved the estimation of PPO (with 1.02; without 1.01) compared with existing equations (Harman: 1.20; Sayers: 1.04), it accounted for only 64 and 69% of PPO. © Georg Thieme Verlag KG Stuttgart · New York.
Parallel But Not Equivalent: Challenges and Solutions for Repeated Assessment of Cognition over Time
Gross, Alden L.; Inouye, Sharon K.; Rebok, George W.; Brandt, Jason; Crane, Paul K.; Parisi, Jeanine M.; Tommet, Doug; Bandeen-Roche, Karen; Carlson, Michelle C.; Jones, Richard N.
2013-01-01
Objective: Analyses of individual differences in change may be unintentionally biased when versions of a neuropsychological test used at different follow-ups are not of equivalent difficulty. This study’s objective was to compare mean, linear, and equipercentile equating methods and demonstrate their utility in longitudinal research. Study Design and Setting: The Advanced Cognitive Training for Independent and Vital Elderly (ACTIVE, N=1,401) study is a longitudinal randomized trial of cognitive training. The Alzheimer’s Disease Neuroimaging Initiative (ADNI, n=819) is an observational cohort study. Nonequivalent alternate versions of the Auditory Verbal Learning Test (AVLT) were administered in both studies. Results: Using visual displays, raw and mean-equated AVLT scores in both studies showed obvious nonlinear trajectories in reference groups that should show minimal change, poor equivalence over time (ps≤0.001), and raw scores demonstrated poor fits in models of within-person change (RMSEAs>0.12). Linear and equipercentile equating produced more similar means in reference groups (ps≥0.09) and performed better in growth models (RMSEAs<0.05). Conclusion: Equipercentile equating is the preferred equating method because it accommodates tests more difficult than a reference test at different percentiles of performance and performs well in models of within-person trajectory. The method has broad applications in both clinical and research settings to enhance the ability to use nonequivalent test forms. PMID:22540849
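The core of equipercentile equating, mapping a score on one form to the score on another form that has the same percentile rank, can be sketched as follows. This is a minimal single-group illustration with empirical distributions and linear interpolation, not the smoothed procedure any of the cited studies actually used; all names are ours:

```python
import numpy as np

def equipercentile_equate(x_scores, y_scores, x_new):
    """Map scores on form X to the form-Y scale by matching percentile ranks.

    Compute the empirical percentile rank of x_new within the form-X
    score distribution, then invert the empirical distribution of form Y
    at that rank via linear interpolation.
    """
    x_sorted = np.sort(x_scores)
    y_sorted = np.sort(y_scores)
    # Percentile rank of x_new within the form-X distribution (0..1].
    ranks = np.searchsorted(x_sorted, x_new, side="right") / len(x_sorted)
    # Invert the form-Y empirical CDF at those ranks.
    y_ranks = np.arange(1, len(y_sorted) + 1) / len(y_sorted)
    return np.interp(ranks, y_ranks, y_sorted)
```

For example, if form Y is uniformly 5 points easier than form X, a form-X score maps to roughly that score plus 5 on the form-Y scale.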
1990-02-01
Equipercentile equating was accomplished by obtaining the score distributions for the experimental and 8a subtests and defining scores that cut off ... of current applicants to previous applicants and to provide a consistent meaning for the cutting scores used in selection and classification of
Gambling scores for earthquake predictions and forecasts
NASA Astrophysics Data System (ADS)
Zhuang, Jiancang
2010-04-01
This paper presents a new method, the gambling score, for scoring the performance of earthquake forecasts or predictions. Unlike most other scoring procedures, which require a regular forecasting scheme and treat each earthquake equally regardless of magnitude, this new scoring method compensates for the risk the forecaster has taken. Starting with a certain number of reputation points, a forecaster who makes a prediction or forecast is assumed to have bet some of those reputation points. The reference model, which plays the role of the house, determines how many reputation points the forecaster gains on success, according to a fair rule, and takes away the points bet by the forecaster on failure. The method is also extended to the continuous case of point-process models, where the reputation points bet by the forecaster become a continuous mass on the space-time-magnitude range of interest. We also calculate the upper bound of the gambling score when the true model is a renewal process, the stress release model or the ETAS model and the reference model is the Poisson model.
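The bookkeeping described above can be sketched in a few lines. The abstract says only that the payout follows "a fair rule" against the reference model; the specific fair-odds payout used below, win r(1-p0)/p0 on success so that the expected gain is zero under the reference probability p0, is our assumption for illustration:

```python
def gambling_score(bets):
    """Accumulate reputation points over a sequence of forecasts.

    bets: iterable of (r, p0, success), where r is the reputation
    wagered, p0 is the reference model's probability of the predicted
    event, and success says whether the prediction verified.
    Assumed fair rule: win r*(1-p0)/p0 on success, lose r on failure,
    so the expected change is zero if the reference model is true.
    """
    total = 0.0
    for r, p0, success in bets:
        total += r * (1.0 - p0) / p0 if success else -r
    return total
```

A forecaster betting against a low reference probability is thus compensated for the risk taken: a verified prediction of an event the reference model deemed unlikely earns many points.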
Coon, Scott A; Stevens, Vanessa W; Brown, Jack E; Wolff, Stephen E; Wrobel, Mark J
2015-01-01
To determine pharmacists' and health food store employees' knowledge about the safety and efficacy of common, nonvitamin, nonmineral dietary supplements in a retail setting and confidence in discussing, recommending, and acquiring knowledge about complementary and alternative medicine (CAM). Cross-sectional survey. Central and western New York in May and June 2012. Knowledge and confidence survey scores based on true/false and Likert scale responses. Pharmacists' mean knowledge score was significantly higher than that of health food store employees (8.42 vs. 6.15 items of 15 total knowledge questions). Adjusting for differences in experience, education, occupation, and confidence, knowledge scores were significantly higher for pharmacists and those with a higher total confidence score. Pharmacists were significantly less confident about the safety and efficacy of CAM comparatively (13 vs. 16 items of 20 total questions). Pharmacists scored significantly higher than health food store employees on a survey assessing knowledge of dietary supplements' safety and efficacy. Despite the significant difference, scores were unacceptably low for pharmacists, highlighting a knowledge deficit in subject matter.
Thibodeau, Michel A; Leonard, Rachel C; Abramowitz, Jonathan S; Riemann, Bradley C
2015-12-01
The Dimensional Obsessive-Compulsive Scale (DOCS) is a promising measure of obsessive-compulsive disorder (OCD) symptoms but has received minimal psychometric attention. We evaluated the utility and reliability of DOCS scores. The study included 832 students and 300 patients with OCD. Confirmatory factor analysis supported the originally proposed four-factor structure. DOCS total and subscale scores exhibited good to excellent internal consistency in both samples (α = .82 to α = .96). Patient DOCS total scores reduced substantially during treatment (t = 16.01, d = 1.02). DOCS total scores discriminated between students and patients (sensitivity = 0.76, 1 - specificity = 0.23). The measure did not exhibit gender-based differential item functioning as tested by Mantel-Haenszel chi-square tests. Expected response options for each item were plotted as a function of item response theory and demonstrated that DOCS scores incrementally discriminate OCD symptoms ranging from low to extremely high severity. Incremental differences in DOCS scores appear to represent unbiased and reliable differences in true OCD symptom severity. © The Author(s) 2014.
NASA Astrophysics Data System (ADS)
Okawa, Shinpei; Hirasawa, Takeshi; Kushibiki, Toshihiro; Ishihara, Miya
2017-12-01
Quantitative photoacoustic tomography (QPAT) employing a light propagation model will play an important role in medical diagnoses by quantifying the concentration of hemoglobin or a contrast agent. However, QPAT with a light propagation model based on the three-dimensional (3D) radiative transfer equation (RTE) requires a huge computational load in the iterative forward calculations involved in the updating process to reconstruct the absorption coefficient. Approximations of the light propagation improve the efficiency of the image reconstruction for QPAT. In this study, we compared the 3D/two-dimensional (2D) photon diffusion equation (PDE) approximating the 3D RTE with a Monte Carlo simulation based on the 3D RTE. The errors in a 2D PDE-based linearized image reconstruction caused by these approximations were then quantitatively demonstrated and discussed in numerical simulations. It was clearly observed that the approximations affected the reconstructed absorption coefficient. The 2D PDE-based linearized algorithm succeeded in the image reconstruction of the region with a large absorption coefficient in the 3D phantom. The value reconstructed in the phantom experiment agreed with that in the numerical simulation, validating that the numerical simulation of the image reconstruction predicts the relationship between the true absorption coefficient of the target in the 3D medium and the value reconstructed with the 2D PDE-based linearized algorithm. Moreover, the true absorption coefficient in the 3D medium was estimated from the 2D reconstructed image on the basis of the prediction by the numerical simulation. The estimation was successful in the phantom experiment, although some limitations were revealed.
NASA Astrophysics Data System (ADS)
Ji, Cheng; Wang, Zilin; Wu, Chenhui; Zhu, Miaoyong
2018-04-01
According to the calculation results of a 3D thermomechanical-coupled finite-element (FE) model of GCr15 bearing steel bloom during a heavy reduction (HR) process, the variation ranges in the strain rate and strain under HR were described. In addition, the hot deformation behavior of the GCr15 bearing steel was studied over the temperature range from 1023 K to 1573 K (750 °C to 1300 °C) with strain rates of 0.001, 0.01, and 0.1 s-1 in single-pass thermosimulation compression experiments. To ensure the accuracy of the constitutive model, the temperature range was divided into two temperature intervals according to the fully austenitic temperature of GCr15 steel [1173 K (900 °C)]. Two sets of material parameters for the constitutive model were derived based on the true stress-strain curves of the two temperature intervals. A flow stress constitutive model was established using a revised Arrhenius-type constitutive equation, which considers the relationships among the material parameters and true strain. This equation describes dynamic softening during hot compression processes. Considering the effect of glide and climb on the deformation mechanism, the Arrhenius-type constitutive equation was modified by a physically based approach. This model is the most accurate over the temperatures ranging from 1173 K to 1573 K (900 °C to 1300 °C) under HR deformation conditions (ignoring the range from 1273 K to 1573 K (1000 °C to 1300 °C) with a strain rate of 0.1 s-1). To ensure the convergence of the FE calculation, an approximated method was used to estimate the flow stress at temperatures greater than 1573 K (1300 °C).
NASA Technical Reports Server (NTRS)
Yee, H. C.; Sweby, P. K.
1995-01-01
The global asymptotic nonlinear behavior of 11 explicit and implicit time discretizations for four 2 x 2 systems of first-order autonomous nonlinear ordinary differential equations (ODEs) is analyzed. The objectives are to gain a basic understanding of the difference in the dynamics of numerics between the scalars and systems of nonlinear autonomous ODEs and to set a baseline global asymptotic solution behavior of these schemes for practical computations in computational fluid dynamics. We show how 'numerical' basins of attraction can complement the bifurcation diagrams in gaining more detailed global asymptotic behavior of time discretizations for nonlinear differential equations (DEs). We show how in the presence of spurious asymptotes the basins of the true stable steady states can be segmented by the basins of the spurious stable and unstable asymptotes. One major consequence of this phenomenon which is not commonly known is that this spurious behavior can result in a dramatic distortion and, in most cases, a dramatic shrinkage and segmentation of the basin of attraction of the true solution for finite time steps. Such distortion, shrinkage and segmentation of the numerical basins of attraction will occur regardless of the stability of the spurious asymptotes, and will occur for unconditionally stable implicit linear multistep methods. In other words, for the same (common) steady-state solution the associated basin of attraction of the DE might be very different from the discretized counterparts and the numerical basin of attraction can be very different from numerical method to numerical method. The results can be used as an explanation for possible causes of error, and slow convergence and nonconvergence of steady-state numerical solutions when using the time-dependent approach for nonlinear hyperbolic or parabolic PDEs.
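The spurious-asymptote phenomenon described above can be illustrated in the simplest possible setting. The paper analyzes 2 x 2 systems under 11 schemes; the scalar sketch below, explicit Euler applied to the logistic ODE, is our own illustrative stand-in, not the paper's test problems. For step sizes h > 2 the true stable steady state u = 1 loses stability in the discrete map and a spurious period-2 cycle appears that the continuous problem does not possess:

```python
def euler_logistic(u0, h, n):
    """Explicit Euler applied to du/dt = u*(1 - u).

    The ODE's only stable steady state is u = 1, and for small h the
    iteration converges to it. For h > 2, however, the discrete map
    u <- u + h*u*(1 - u) has a stable spurious period-2 cycle, so the
    numerics settle on an asymptote the differential equation does not
    have: a scalar example of spurious dynamics of time discretizations.
    """
    u = u0
    for _ in range(n):
        u = u + h * u * (1.0 - u)
    return u
```

With h = 0.1 the iterates approach 1; with h = 2.2 they lock onto a period-2 cycle well away from the true steady state.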
Performance Evaluation of an Infrared Thermocouple
Chen, Chiachung; Weng, Yu-Kai; Shen, Te-Ching
2010-01-01
The measurement of the leaf temperature of forests or agricultural plants is an important technique for monitoring the physiological state of crops. The infrared thermometer is a convenient device owing to its fast response and nondestructive measurement technique. Recently, a novel infrared thermocouple, developed on the same measurement principle as the infrared thermometer but using a different detector, has been commercialized for non-contact temperature measurement. The performance of two kinds of infrared thermocouples was evaluated in this study. The standard temperature was maintained by a temperature calibrator and a special black cavity device. The results indicated that both types of infrared thermocouples had good precision. The error distribution ranged from −1.8 °C to 18 °C when the reading values served as the true values. Within the range from 13 °C to 37 °C, the adequate calibration equations were high-order polynomial equations. Within the narrower range from 20 °C to 35 °C, the adequate equation was a linear equation for one sensor and a second-order polynomial equation for the other. With the calibration equations, the accuracy of the two kinds of infrared thermocouple was improved by nearly 0.4 °C. These devices could serve as mobile monitoring tools for in situ, real-time routine estimation of leaf temperatures. PMID:22163458
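Fitting a second-order polynomial calibration equation of the kind described above is a one-liner with least squares. The paired readings below are hypothetical, generated from an assumed quadratic sensor response purely to illustrate the procedure; the study's actual calibration data and coefficients are not given in the abstract:

```python
import numpy as np

# Hypothetical calibration data: sensor readings (°C) and reference
# temperatures generated from an assumed quadratic response.
sensor_c = np.array([13.0, 17.0, 21.0, 25.0, 29.0, 33.0, 37.0])
true_c = 0.002 * sensor_c**2 + 0.95 * sensor_c + 0.5

# Fit the second-order polynomial calibration equation by least squares.
coeffs = np.polyfit(sensor_c, true_c, deg=2)
calibrate = np.poly1d(coeffs)

# Apply the calibration to a new raw reading.
corrected = calibrate(30.0)
```

The same call with deg=1 gives the linear calibration that sufficed for one of the sensors in the narrower 20-35 °C range.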
Solution of underdetermined systems of equations with gridded a priori constraints.
Stiros, Stathis C; Saltogianni, Vasso
2014-01-01
The TOPINV, Topological Inversion algorithm (or TGS, Topological Grid Search) initially developed for the inversion of highly non-linear redundant systems of equations, can solve a wide range of underdetermined systems of non-linear equations. This approach is a generalization of a previous conclusion that this algorithm can be used for the solution of certain integer ambiguity problems in Geodesy. The overall approach is based on additional (a priori) information for the unknown variables. In the past, such information was used either to linearize equations around approximate solutions, or to expand systems of observation equations solved on the basis of generalized inverses. In the proposed algorithm, the a priori additional information is used in a third way, as topological constraints to the unknown n variables, leading to an R(n) grid containing an approximation of the real solution. The TOPINV algorithm does not focus on point-solutions, but exploits the structural and topological constraints in each system of underdetermined equations in order to identify an optimal closed space in the R(n) containing the real solution. The centre of gravity of the grid points defining this space corresponds to global, minimum-norm solutions. The rationale and validity of the overall approach are demonstrated on the basis of examples and case studies, including fault modelling, in comparison with SVD solutions and true (reference) values, in an accuracy-oriented approach.
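The grid-search idea described above can be sketched compactly. This is our minimal reading of the algorithm, evaluate every grid point inside the a priori bounds, keep those whose misfit is within tolerance for all equations, and return the centre of gravity of the accepted region; the function name, API, and acceptance rule are our assumptions, not the authors' implementation:

```python
import itertools
import numpy as np

def topinv_sketch(equations, bounds, steps, tol):
    """Grid search over a priori bounds for an underdetermined system.

    equations: callables f_i(x) that should be ~0 at the true solution.
    bounds: (lo, hi) per unknown; steps: grid points per axis;
    tol: misfit tolerance defining the accepted closed region.
    Returns the centre of gravity of all accepted grid points
    (a minimum-norm-style point solution), or None if none qualify.
    """
    axes = [np.linspace(lo, hi, steps) for lo, hi in bounds]
    accepted = []
    for point in itertools.product(*axes):
        x = np.asarray(point)
        if all(abs(f(x)) <= tol for f in equations):
            accepted.append(x)
    return np.mean(accepted, axis=0) if accepted else None
```

For one equation in two unknowns, the accepted region is a band around the solution curve, and the centroid summarizes it with a single representative point.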
Verochana, Karune; Prapayasatok, Sangsom; Mahasantipiya, Phattaranant May; Korwanich, Narumanas
2016-01-01
Purpose: This study assessed the accuracy of age estimates produced by a regression equation derived from lower third molar development in a Thai population. Materials and Methods: The first part of this study relied on measurements taken from panoramic radiographs of 614 Thai patients aged from 9 to 20. The stage of lower left and right third molar development was observed in each radiograph and a modified Gat score was assigned. Linear regression on this data produced the following equation: Y=9.309+1.673 mG+0.303S (Y=age; mG=modified Gat score; S=sex). In the second part of this study, the predictive accuracy of this equation was evaluated using data from a second set of panoramic radiographs (539 Thai subjects, 9 to 24 years old). Each subject's age was estimated using the above equation and compared against age calculated from a provided date of birth. Estimated and known age data were analyzed using the Pearson correlation coefficient and descriptive statistics. Results: Ages estimated from lower left and lower right third molar development stage were significantly correlated with the known ages (r=0.818, 0.808, respectively, P≤0.01). 50% of age estimates in the second part of the study fell within a range of error of ±1 year, while 75% fell within a range of error of ±2 years. The study found that the equation tends to estimate age accurately when individuals are 9 to 20 years of age. Conclusion: The equation can be used for age estimation for Thai populations when the individuals are 9 to 20 years of age. PMID:27051633
Verochana, Karune; Prapayasatok, Sangsom; Janhom, Apirum; Mahasantipiya, Phattaranant May; Korwanich, Narumanas
2016-03-01
This study assessed the accuracy of age estimates produced by a regression equation derived from lower third molar development in a Thai population. The first part of this study relied on measurements taken from panoramic radiographs of 614 Thai patients aged from 9 to 20. The stage of lower left and right third molar development was observed in each radiograph and a modified Gat score was assigned. Linear regression on this data produced the following equation: Y=9.309+1.673 mG+0.303S (Y=age; mG=modified Gat score; S=sex). In the second part of this study, the predictive accuracy of this equation was evaluated using data from a second set of panoramic radiographs (539 Thai subjects, 9 to 24 years old). Each subject's age was estimated using the above equation and compared against age calculated from a provided date of birth. Estimated and known age data were analyzed using the Pearson correlation coefficient and descriptive statistics. Ages estimated from lower left and lower right third molar development stage were significantly correlated with the known ages (r=0.818, 0.808, respectively, P≤0.01). 50% of age estimates in the second part of the study fell within a range of error of ±1 year, while 75% fell within a range of error of ±2 years. The study found that the equation tends to estimate age accurately when individuals are 9 to 20 years of age. The equation can be used for age estimation for Thai populations when the individuals are 9 to 20 years of age.
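The reported regression equation is simple enough to apply directly. Note that the abstract does not state how sex is coded in S; the 0/1 coding below is our assumption for illustration only:

```python
def estimate_age(modified_gat_score, sex):
    """Estimated age from the reported regression equation
    Y = 9.309 + 1.673*mG + 0.303*S.

    modified_gat_score: the modified Gat score (mG) for the lower
    third molar. sex: the S term; the abstract does not give the
    coding, so a 0/1 indicator is assumed here for illustration.
    Valid for individuals 9 to 20 years of age, per the study.
    """
    return 9.309 + 1.673 * modified_gat_score + 0.303 * sex
```

For example, a modified Gat score of 5 with S = 1 yields an estimated age of about 18 years, which should be read with the study's reported ±1 to ±2 year error range in mind.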
Approximate effective nonlinear coefficient of second-harmonic generation in KTiOPO(4).
Asaumi, K
1993-10-20
A simplified approximate expression for the effective nonlinear coefficient of type-II second-harmonic generation in KTiOPO(4) was obtained by observing that the difference between the refractive indices n(x) and n(y) is 1 order of magnitude smaller than the difference between n(z) and n(y) (or n(x)). The agreement of this approximate equation with the true definition is good, with a maximum discrepancy of 4%.
True covariance simulation of the EUVE update filter
NASA Technical Reports Server (NTRS)
Bar-Itzhack, Itzhack Y.; Harman, R. R.
1989-01-01
A covariance analysis of the performance and sensitivity of the attitude determination Extended Kalman Filter (EKF) used by the On Board Computer (OBC) of the Extreme Ultra Violet Explorer (EUVE) spacecraft is presented. The linearized dynamics and measurement equations of the error states are derived which constitute the truth model describing the real behavior of the systems involved. The design model used by the OBC EKF is then obtained by reducing the order of the truth model. The covariance matrix of the EKF which uses the reduced order model is not the correct covariance of the EKF estimation error. A true covariance analysis has to be carried out in order to evaluate the correct accuracy of the OBC generated estimates. The results of such analysis are presented which indicate both the performance and the sensitivity of the OBC EKF.
Development of a new score to estimate clinical East Coast Fever in experimentally infected cattle.
Schetters, Th P M; Arts, G; Niessen, R; Schaap, D
2010-02-10
East Coast Fever is a tick-transmitted disease of cattle caused by Theileria parva protozoan parasites. The clinical disease can be quantified from a number of variables derived from parasitological, haematological and rectal temperature measurements, as described by Rowlands et al. (2000). From a total of 13 parameters a single ECF-score is calculated that allows categorization of infected cattle into five classes that correlate with the severity of clinical signs. This score is complicated not only because it requires estimation of 13 parameters but also because of the subsequent mathematics; moreover, since the values are normalised over a range of 0-10 for each experiment, results from different experiments cannot be compared. Here we present an alternative score, based on the packed cell volume and the number of circulating piroplasms, that is calculated with a simple equation: ECF-score=PCV(relday0)/log(PE+10). In this equation the packed cell volume is expressed relative to its value on the day of infection (PCV(relday0)), and PE is the number of piroplasm-infected red blood cells in a total of 1000 red blood cells; +10 is added in the denominator so that PE may be 0. We analysed a data set of 54 cattle from a previous experiment and found a statistically significant linear correlation between the ECF-score reached during the post-infection period and the Rowlands' score. The new score is much more practical than the Rowlands score, as it requires only daily blood sampling; from these samples both the PCV and the number of piroplasms can be determined, and the score can be calculated daily. This allows monitoring of the development of ECF after infection, which was hitherto not possible. In addition, the new score allows easy comparison of results from different experiments.
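The equation above is straightforward to compute. The abstract writes "log"; base 10 is assumed here because the +10 offset then makes the denominator exactly 1 when no piroplasms are seen, so the score reduces to the relative PCV:

```python
import math

def ecf_score(pcv_today, pcv_day0, parasitized_per_1000):
    """ECF-score = PCV(relday0) / log(PE + 10).

    pcv_today / pcv_day0 gives the packed cell volume relative to the
    day of infection; parasitized_per_1000 is PE, the number of
    piroplasm-infected red cells per 1000. Log base 10 is assumed
    (with it, the denominator is 1 when PE = 0).
    """
    pcv_rel = pcv_today / pcv_day0
    return pcv_rel / math.log10(parasitized_per_1000 + 10)
```

With an unchanged PCV and no piroplasms the score is 1; both anaemia (falling relative PCV) and rising parasitaemia drive the score down.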
NASA Astrophysics Data System (ADS)
Bonacci, Ognjen; Željković, Ivana
2018-01-01
Different countries use varied methods for calculating the daily mean temperature. None of them assesses precisely the true daily mean temperature, which is defined as the integral of continuous temperature measurements over a day. It is of special scientific as well as practical importance to find out how temperatures calculated by different methods and approaches deviate from the true daily mean. Five daily mean temperatures (T0, T1, T2, T3, T4) were calculated using five different equations. The mean of the 24 hourly temperature observations during the calendar day is accepted as the true daily mean, T0. The differences Δi between T0 and the four other daily means T1, T2, T3, and T4 were calculated and analysed. The analyses used hourly data measured from 1 January 1999 to 31 December 2014 (149,016 h, 192 months, 16 years) at three Croatian meteorological stations situated in distinct climatological areas: Zagreb Grič in a mild climate, Zavižan in the cold mountain region and Dubrovnik in the hot Mediterranean. The influence of fog on temperature is analysed, and special attention is given to the extreme (maximum and minimum) daily differences that occurred at the three stations. The selection of the fixed local hours used for calculating the daily mean temperature plays a crucial role in diminishing the bias from the true daily mean temperature.
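The comparison is easy to reproduce. A sketch under stated assumptions: T0 is taken as the average of the 24 hourly values, and the fixed-hour formula shown, (T07 + T14 + 2·T21)/4, is only one common example of such methods; the abstract does not spell out the five equations actually used:

```python
import math

def true_daily_mean(hourly):
    """T0: the mean of the 24 hourly observations of a calendar day."""
    assert len(hourly) == 24
    return sum(hourly) / 24.0

def fixed_hours_mean(hourly):
    """An illustrative fixed-hour estimate: (T07 + T14 + 2*T21) / 4,
    with hours indexed 0-23 in local time."""
    return (hourly[7] + hourly[14] + 2 * hourly[21]) / 4.0

# Deviation from T0 for a synthetic sinusoidal diurnal cycle
hourly = [15 + 5 * math.sin(2 * math.pi * (h - 9) / 24) for h in range(24)]
delta = fixed_hours_mean(hourly) - true_daily_mean(hourly)
```

For a constant temperature series the two estimates coincide; any asymmetry in the diurnal cycle produces a nonzero Δ, which is exactly the bias the paper quantifies.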
The use of an essay examination in evaluating medical students during the surgical clerkship.
Smart, Blair J; Rinewalt, Daniel; Daly, Shaun C; Janssen, Imke; Luu, Minh B; Myers, Jonathan A
2016-01-01
Third-year medical students are graded according to subjective performance evaluations and standardized tests written by the National Board of Medical Examiners (NBME). Many "poor" standardized test takers believe the heavily weighted NBME does not evaluate their true fund of knowledge and would prefer a more open-ended forum to display their individualized learning experiences. Our study examined the use of an essay examination as part of the surgical clerkship evaluation. We retrospectively examined the final surgical clerkship grades of 781 consecutive medical students enrolled in a large urban academic medical center from 2005 to 2011. We compared final grades with and without the inclusion of the essay examination for all students using a paired t test and then assessed the relationship between the essay and the NBME using Pearson correlations. The final average with and without the essay examination was 72.2% vs 71.3% (P < .001), with the essay examination increasing average scores by .4, 1.8, and 2.5 points for those receiving high pass, pass, and fail, respectively; it decreased the average score for those earning honors by .4. Essay scores correlated positively with the NBME overall (r = .32, P < .001). The inclusion of an essay examination in the third-year surgical core clerkship final increased the final grade to a modest degree, especially for those with lower scores who may identify themselves as "poor" standardized test takers. A more open-ended forum may allow these students an opportunity to overcome this deficiency and reveal their true fund of surgical knowledge. Copyright © 2016 Elsevier Inc. All rights reserved.
ERIC Educational Resources Information Center
Andrews, Benjamin James
2011-01-01
The equity properties can be used to assess the quality of an equating. The degree to which expected scores conditional on ability are similar between test forms is referred to as first-order equity. Second-order equity is the degree to which conditional standard errors of measurement are similar between test forms after equating. The purpose of…
ERIC Educational Resources Information Center
Duong, Minh Quang
2011-01-01
Testing programs often use multiple test forms of the same test to control item exposure and to ensure test security. Although test forms are constructed to be as similar as possible, they often differ. Test equating techniques are those statistical methods used to adjust scores obtained on different test forms of the same test so that they are…
Anestis, Joye C; Finn, Jacob A; Gottfried, Emily; Arbisi, Paul A; Joiner, Thomas E
2015-06-01
This study examined the utility of the Minnesota Multiphasic Personality Inventory-2 Restructured Form (MMPI-2-RF) Validity Scales in prediction of premature termination in a sample of 511 individuals seeking services from a university-based psychology clinic. Higher scores on True Response Inconsistency-Revised and Infrequent Psychopathology Responses increased the risk of premature termination, whereas higher scores on Adjustment Validity lowered the risk of premature termination. Additionally, when compared with individuals who did not prematurely terminate, individuals who prematurely terminated treatment had lower Global Assessment of Functioning scores at both intake and termination and made fewer improvements. Implications of these findings for the use of the MMPI-2-RF Validity Scales in promoting treatment compliance are discussed. © The Author(s) 2014.
Flow Behavior and Constitutive Equation of Ti-6.5Al-2Sn-4Zr-4Mo-1W-0.2Si Titanium Alloy
NASA Astrophysics Data System (ADS)
Yang, Xuemei; Guo, Hongzhen; Liang, Houquan; Yao, Zekun; Yuan, Shichong
2016-04-01
In order to obtain a reliable constitutive equation for finite element simulation, the flow behavior of Ti-6.5Al-2Sn-4Zr-4Mo-1W-0.2Si alloy at high temperature was investigated by carrying out a series of isothermal compression tests at temperatures of 1153-1293 K and strain rates of 0.01-10.0 s-1 on a Gleeble-1500 simulator. Results showed that the true stress-strain curves exhibited peaks at small strains, after which the flow stress decreased monotonically. Ultimately, the flow curves reached a steady state at a strain of 0.6, showing a dynamic flow softening phenomenon. The effects of strain rate, temperature, and strain on the flow behavior were investigated by establishing a constitutive equation. The relations among the stress exponent, deformation activation energy, and strain were preliminarily discussed by using the strain rate sensitivity exponent and the dynamic recrystallization kinetics curve. Stress values predicted by the modified constitutive equation showed good agreement with the experimental ones. The correlation coefficient (R) and average absolute relative error (AARE) were 98.2% and 4.88%, respectively, confirming that the modified constitutive equation gives an accurate estimation of the flow stress for this BT25y titanium alloy.
Variational principle for the Navier-Stokes equations.
Kerswell, R R
1999-05-01
A variational principle is presented for the Navier-Stokes equations in the case of a contained boundary-driven, homogeneous, incompressible, viscous fluid. Based upon making the fluid's total viscous dissipation over a given time interval stationary subject to the constraint of the Navier-Stokes equations, the variational problem looks overconstrained and intractable. However, introducing a nonunique velocity decomposition, u(x,t)=phi(x,t) + nu(x,t), "opens up" the variational problem so that what is presumed a single allowable point over the velocity domain u corresponding to the unique solution of the Navier-Stokes equations becomes a surface with a saddle point over the extended domain (phi,nu). Complementary or dual variational problems can then be constructed to estimate this saddle point value strictly from above as part of a minimization process or below via a maximization procedure. One of these reduced variational principles is the natural and ultimate generalization of the upper bounding problem developed by Doering and Constantin. The other corresponds to the ultimate Busse problem which now acts to lower bound the true dissipation. Crucially, these reduced variational problems require only the solution of a series of linear problems to produce bounds even though their unique intersection is conjectured to correspond to a solution of the nonlinear Navier-Stokes equations.
Item Selection and Pre-equating with Empirical Item Characteristic Curves.
ERIC Educational Resources Information Center
Livingston, Samuel A.
An empirical item characteristic curve shows the probability of a correct response as a function of the student's total test score. These curves can be estimated from large-scale pretest data. They enable test developers to select items that discriminate well in the score region where decisions are made. A similar set of curves can be used to…
Nonlinear Dynamics, Artificial Cognition and Galactic Export
NASA Astrophysics Data System (ADS)
Rössler, Otto E.
2004-08-01
The field of nonlinear dynamics focuses on function rather than structure. Evolution and brain function are examples. An equation for a brain, described in 1973, is explained. Then, a principle of interactional function change between two coupled equations of this type is described. However, all of this is not done in an abstract manner but in close contact with the meaning of these equations in a biological context. Ethological motivation theory and Batesonian interaction theory are reencountered. So is a fairly unknown finding by van Hooff on the indistinguishability of smile and laughter in a single primate species. Personhood and evil, two human characteristics, are described abstractly. Therapies and the question of whether it is ethically allowed to export benevolence are discussed. The whole dynamic approach is couched in terms of the Cartesian narrative, invented in the 17th century and later called Enlightenment. Whether or not it is true that a "second Enlightenment" is around the corner is the main question raised in the present paper.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Waltz, J., E-mail: jwaltz@lanl.gov; Canfield, T.R.; Morgan, N.R.
2014-06-15
We present a set of manufactured solutions for the three-dimensional (3D) Euler equations. The purpose of these solutions is to allow for code verification against true 3D flows with physical relevance, as opposed to 3D simulations of lower-dimensional problems or manufactured solutions that lack physical relevance. Of particular interest are solutions with relevance to Inertial Confinement Fusion (ICF) capsules. While ICF capsules are designed for spherical symmetry, they are hypothesized to become highly 3D at late time due to phenomena such as Rayleigh–Taylor instability, drive asymmetry, and vortex decay. ICF capsules also involve highly nonlinear coupling between the fluid dynamics and other physics, such as radiation transport and thermonuclear fusion. The manufactured solutions we present are specifically designed to test the terms and couplings in the Euler equations that are relevant to these phenomena. Example numerical results generated with a 3D Finite Element hydrodynamics code are presented, including mesh convergence studies.
'Constraint consistency' at all orders in cosmological perturbation theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nandi, Debottam; Shankaranarayanan, S., E-mail: debottam@iisertvm.ac.in, E-mail: shanki@iisertvm.ac.in
2015-08-01
We study the equivalence of two approaches to cosmological perturbation theory at all orders, order-by-order Einstein's equations and the reduced action, for different models of inflation. We point out a crucial consistency check, which we refer to as the 'constraint consistency' condition, that needs to be satisfied in order for the two approaches to lead to an identical single-variable equation of motion. The method we propose here is a quick and efficient way to check consistency for any model, including modified gravity models. Our analysis points out an important feature which is crucial for inflationary model building: all 'constraint'-inconsistent models have higher-order Ostrogradsky instabilities, but the reverse is not true. In other words, a model can have a constrained Lapse function and Shift vector and still suffer from Ostrogradsky instabilities. We also obtain the single-variable equation for a non-canonical scalar field in the limit of power-law inflation for the second-order perturbed variables.
Dynamic Recrystallization Behavior of AISI 422 Stainless Steel During Hot Deformation Processes
NASA Astrophysics Data System (ADS)
Ahmadabadi, R. Mohammadi; Naderi, M.; Mohandesi, J. Aghazadeh; Cabrera, Jose Maria
2018-02-01
In this work, hot compression tests were performed to investigate the dynamic recrystallization (DRX) process of a martensitic stainless steel (AISI 422) at temperatures of 950, 1000, 1050, 1100 and 1150 °C and strain rates of 0.01, 0.1 and 1 s-1. The dependency of strain-hardening rate on flow stress was used to estimate the critical stress for the onset of DRX. Accordingly, the critical stress to peak stress ratio was calculated as 0.84. Moreover, the effect of true strain was examined by fitting stress values to an Arrhenius type constitutive equation, and then considering material constants as a function of strain by using a third-order polynomial equation. Finally, two constitutive models were used to investigate the competency of the strain-dependent constitutive equations to predict the flow stress curves of the studied steel. It was concluded that one model offers better precision on the flow stress values after the peak stress, while the other model gives more accurate results before the peak stress.
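The Arrhenius-type constitutive form referred to above can be inverted for the flow stress through the Zener-Hollomon parameter; a minimal sketch with placeholder material constants (the fitted, strain-dependent values from the paper are not given in the abstract):

```python
import math

R = 8.314  # universal gas constant, J/(mol K)

def flow_stress(strain_rate, T, A, alpha, n, Q):
    """Flow stress from a hyperbolic-sine Arrhenius constitutive law:
        strain_rate = A * sinh(alpha*sigma)**n * exp(-Q/(R*T)).
    Inverting via the Zener-Hollomon parameter Z = strain_rate*exp(Q/(R*T)):
        sigma = (1/alpha) * asinh((Z/A)**(1/n)).
    A, alpha, n, Q are illustrative placeholders; in the strain-dependent
    variant each is a polynomial in true strain."""
    Z = strain_rate * math.exp(Q / (R * T))
    return (1.0 / alpha) * math.asinh((Z / A) ** (1.0 / n))
```

Fitting a third-order polynomial to each constant as a function of strain, as described above, then turns this single equation into a full strain-dependent model.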
Distribution theory for Schrödinger’s integral equation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lange, Rutger-Jan, E-mail: rutger-jan.lange@cantab.net
2015-12-15
Much of the literature on point interactions in quantum mechanics has focused on the differential form of Schrödinger’s equation. This paper, in contrast, investigates the integral form of Schrödinger’s equation. While both forms are known to be equivalent for smooth potentials, this is not true for distributional potentials. Here, we assume that the potential is given by a distribution defined on the space of discontinuous test functions. First, by using Schrödinger’s integral equation, we confirm a seminal result by Kurasov, which was originally obtained in the context of Schrödinger’s differential equation. This hints at a possible deeper connection between both forms of the equation. We also sketch a generalisation of Kurasov’s [J. Math. Anal. Appl. 201(1), 297–323 (1996)] result to hypersurfaces. Second, we derive a new closed-form solution to Schrödinger’s integral equation with a delta prime potential. This potential has attracted considerable attention, including some controversy. Interestingly, the derived propagator satisfies boundary conditions that were previously derived using Schrödinger’s differential equation. Third, we derive boundary conditions for “super-singular” potentials given by higher-order derivatives of the delta potential. These boundary conditions cannot be incorporated into the normal framework of self-adjoint extensions. We show that the boundary conditions depend on the energy of the solution and that probability is conserved. This paper thereby confirms several seminal results and derives some new ones. In sum, it shows that Schrödinger’s integral equation is a viable tool for studying singular interactions in quantum mechanics.
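Schematically, the one-dimensional integral form in question is the textbook Lippmann-Schwinger-type rearrangement (not the paper's exact notation):

```latex
\psi(x) \;=\; \psi_0(x) \;+\; \frac{2m}{\hbar^2}\int G_k(x,x')\,V(x')\,\psi(x')\,\mathrm{d}x',
\qquad
G_k(x,x') \;=\; \frac{e^{ik|x-x'|}}{2ik},
```

where $\psi_0$ solves the free equation at energy $E = \hbar^2 k^2/2m$ and $G_k$ satisfies $(\partial_x^2 + k^2)\,G_k = \delta(x-x')$. For smooth $V$ this is equivalent to the differential equation; for distributional $V$, as the abstract notes, the two forms can differ.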
Offerman, Theo; Palley, Asa B
2016-01-01
Strictly proper scoring rules are designed to truthfully elicit subjective probabilistic beliefs from risk neutral agents. Previous experimental studies have identified two problems with this method: (i) risk aversion causes agents to bias their reports toward the probability of [Formula: see text], and (ii) for moderate beliefs agents simply report [Formula: see text]. Applying a prospect theory model of risk preferences, we show that loss aversion can explain both of these behavioral phenomena. Using the insights of this model, we develop a simple off-the-shelf probability assessment mechanism that encourages loss-averse agents to report true beliefs. In an experiment, we demonstrate the effectiveness of this modification in both eliminating uninformative reports and eliciting true probabilistic beliefs.
Forecasting the value of credit scoring
NASA Astrophysics Data System (ADS)
Saad, Shakila; Ahmad, Noryati; Jaffar, Maheran Mohd
2017-08-01
Nowadays, credit scoring systems play an important role in the banking sector. The process is important in assessing the creditworthiness of customers requesting credit from banks or other financial institutions, and is usually applied when a customer submits an application for credit facilities. Based on the credit score, the bank is able to segregate "good" clients from "bad" clients. However, in most cases the score is useful only at that specific time and cannot be used to forecast the creditworthiness of the same applicant afterwards. Hence, the bank will not know whether "good" clients will remain good, or whether "bad" clients may become "good" clients after a certain time. To fill this gap, this study proposes an equation to forecast the credit score of potential borrowers at a certain time, using the historical score and the related assumptions. The Mean Absolute Percentage Error (MAPE) is used to measure the accuracy of the forecast score. Results show the forecast score is highly accurate compared with the actual credit score.
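MAPE, the accuracy measure named above, has a standard definition; a minimal sketch with hypothetical score data:

```python
def mape(actual, forecast):
    """Mean Absolute Percentage Error, in percent:
    MAPE = (100/n) * sum(|A_t - F_t| / |A_t|)."""
    pairs = list(zip(actual, forecast))
    return 100.0 / len(pairs) * sum(abs(a - f) / abs(a) for a, f in pairs)

actual_scores = [620.0, 655.0, 700.0]     # hypothetical credit scores
forecast_scores = [610.0, 660.0, 690.0]
error_pct = mape(actual_scores, forecast_scores)
```

Lower MAPE means the forecast equation tracks the realised scores more closely; values under roughly 10% are usually read as highly accurate.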
A Simulation Study Comparing Procedures for Assessing Individual Educational Growth. Report No. 182.
ERIC Educational Resources Information Center
Richards, James M., Jr.
A computer simulation procedure was developed to reproduce the overall pattern of results obtained in the Educational Testing Service Growth Study. Then simulated data for seven sets of 10,000 to 15,000 cases were analyzed, and findings compared on the basis of correlations between estimated and true growth scores. Findings showed that growth was…
David T. Butry
2009-01-01
This paper examines the effect wildfire mitigation has on broad-scale wildfire behavior. Each year, hundreds of millions of dollars are spent on fire suppression and fuels management applications, yet little is known, quantitatively, about the returns to these programs in terms of their impact on wildfire extent and intensity. This is especially true when considering that...
A Measure for the Reliability of a Rating Scale Based on Longitudinal Clinical Trial Data
ERIC Educational Resources Information Center
Laenen, Annouschka; Alonso, Ariel; Molenberghs, Geert
2007-01-01
A new measure for reliability of a rating scale is introduced, based on the classical definition of reliability, as the ratio of the true score variance and the total variance. Clinical trial data can be employed to estimate the reliability of the scale in use, whenever repeated measurements are taken. The reliability is estimated from the…
ERIC Educational Resources Information Center
Van Duzer, Eric
2011-01-01
This report introduces a short, hands-on activity that addresses a key challenge in teaching quantitative methods to students who lack confidence or experience with statistical analysis. Used near the beginning of the course, this activity helps students develop an intuitive insight regarding a number of abstract concepts which are key to…
ERIC Educational Resources Information Center
Loving, Kirk Anthony
2017-01-01
As students continue to experience low test scores on national civics assessments, it is important to identify curriculum which can increase their civic capabilities. This is especially true for the quickly growing Hispanic population, which suffers a civic achievement gap. The purpose of this quantitative quasi-experimental nonequivalent…
Han, Dianwei; Zhang, Jun; Tang, Guiliang
2012-01-01
An accurate prediction of the pre-microRNA secondary structure is important in miRNA informatics. Based on a recently proposed model, nucleotide cyclic motifs (NCM), to predict RNA secondary structure, we propose and implement a Modified NCM (MNCM) model with a physics-based scoring strategy to tackle the problem of pre-microRNA folding. Our microRNAfold is implemented using a global optimal algorithm based on the bottom-up local optimal solutions. Our experimental results show that microRNAfold outperforms the current leading prediction tools in terms of True Negative rate, False Negative rate, Specificity, and Matthews coefficient ratio.
Determining beef carcass retail product and fat yields within 1 hour postmortem.
Apple, J K; Dikeman, M E; Cundiff, L V; Wise, J W
1991-12-01
Hot carcasses from 220 steers (progeny of Hereford or Angus dams mated to Angus, Charolais, Galloway, Gelbvieh, Hereford, Longhorn, Nellore, Piedmontese, Pinzgauer, Salers, or Shorthorn sires) were used to develop equations to estimate weights and percentages of retail product (RP) and trimmable fat (TF) yields. Independent variables examined were 1) 12-13th rib fat probe (12RFD), 2) 10-11th rib fat probe (10RFD), 3) external fat score (EFS), 4) percentage of internal fat estimated hot (H%KPH), 5) hindquarter muscling score (HQMS), and 6) hot carcass weight (HCW). Right sides of the carcasses were fabricated into boneless retail cuts, trimmed to .76 cm of subcutaneous and visible intermuscular fat, and weighed. Cuts were trimmed to 0 cm of subcutaneous and visible intermuscular fat and reweighed. Multiple linear regression equations containing 12RFD, EFS, H%KPH, and HCW accounted for 95 and 89% of the variation in weight of total RP at .76 and 0 cm of fat trim, respectively. When weights of RP from the four primal cuts (.76 and 0 cm of fat trim) were the dependent variables, equations consisting of 12RFD, EFS, H%KPH, and HCW accounted for 93 to 84% of the variation. Hot carcass equations accounted for 83% of the variation in weight of total TF at both .76 and 0 cm of fat trim. Furthermore, equations from hot carcass data accounted for 54 and 51% of the variation in percentage of total RP and 57 and 50% of the variation in percentage of RP from the four primal cuts at .76 and 0 cm of fat trim, respectively. Hot carcass prediction equations accounted for 72% of the variation in percentage of total TF at both fat trim levels. Hot carcass equations were equivalent or superior to equations formulated from chilled carcass traits.
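The prediction equations above are ordinary multiple linear regressions; a generic sketch of fitting one via the normal equations (the predictors and toy data are illustrative, not the study's measurements):

```python
def ols_coefficients(X, y):
    """Ordinary least squares via the normal equations (X'X)b = X'y,
    solved by Gauss-Jordan elimination with partial pivoting.
    Each row of X is [1, x1, x2, ...] (leading 1 for the intercept)."""
    k = len(X[0])
    # Augmented normal-equations matrix [X'X | X'y]
    M = [[sum(r[i] * r[j] for r in X) for j in range(k)]
         + [sum(r[i] * yi for r, yi in zip(X, y))] for i in range(k)]
    for c in range(k):
        p = max(range(c, k), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(k):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [a - f * b for a, b in zip(M[r], M[c])]
    return [M[i][k] / M[i][i] for i in range(k)]
```

With columns for fat depth, fat score, internal fat and carcass weight, the same routine would reproduce the structure (though not the coefficients) of the equations the abstract reports R² values for.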
Anomalous diffusion and long-range correlations in the score evolution of the game of cricket
NASA Astrophysics Data System (ADS)
Ribeiro, Haroldo V.; Mukherjee, Satyam; Zeng, Xiao Han T.
2012-08-01
We investigate the time evolution of the scores of the second most popular sport in the world: the game of cricket. By analyzing, event by event, the scores of more than 2000 matches, we point out that the score dynamics is an anomalous diffusive process. Our analysis reveals that the variance of the process is described by a power-law dependence with a superdiffusive exponent, that the scores are statistically self-similar following a universal Gaussian distribution, and that there are long-range correlations in the score evolution. We employ a generalized Langevin equation with a power-law correlated noise that describes all the empirical findings very well. These observations suggest that competition among agents may be a mechanism leading to anomalous diffusion and long-range correlation.
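The mechanism can be made concrete with the simplest Langevin construction consistent with these findings (the paper's actual equation is more general):

```latex
\frac{\mathrm{d}x}{\mathrm{d}t} = \eta(t),
\qquad
\langle \eta(t)\,\eta(t') \rangle \sim |t-t'|^{-\lambda}, \quad 0 < \lambda < 1,
```

for which $\langle x^2(t)\rangle = \int_0^t\!\!\int_0^t \langle\eta(s)\,\eta(s')\rangle\,\mathrm{d}s\,\mathrm{d}s' \sim t^{\,2-\lambda}$. The exponent $2-\lambda$ exceeds 1, i.e. the long-range correlated noise alone produces the superdiffusive power-law variance observed in the score data.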
ERIC Educational Resources Information Center
Puhan, Gautam
2010-01-01
This study used real data to construct testing conditions for comparing results of chained linear, Tucker, and Levine-observed score equatings. The comparisons were made under conditions where the new- and old-form samples were similar in ability and when they differed in ability. The length of the anchor test was also varied to enable examination…
ERIC Educational Resources Information Center
Wang, Wei
2013-01-01
Mixed-format tests containing both multiple-choice (MC) items and constructed-response (CR) items are now widely used in many testing programs. Mixed-format tests often are considered to be superior to tests containing only MC items although the use of multiple item formats leads to measurement challenges in the context of equating conducted under…
Ferrario, Marco M; Veronesi, Giovanni; Chambless, Lloyd E; Tunstall-Pedoe, Hugh; Kuulasmaa, Kari; Salomaa, Veikko; Borglykke, Anders; Hart, Nigel; Söderberg, Stefan; Cesana, Giancarlo
2014-08-01
To assess whether educational class, an index of socioeconomic position, improves the accuracy of the SCORE cardiovascular disease (CVD) risk prediction equation. In a pooled analysis of 68 455 40-64-year-old men and women, free from coronary heart disease at baseline, from 47 prospective population-based cohorts from Nordic countries (Finland, Denmark, Sweden), the UK (Northern Ireland, Scotland), Central Europe (France, Germany, Italy) and Eastern Europe (Lithuania, Poland) and Russia, we assessed improvements in discrimination and in risk classification (net reclassification improvement (NRI)) when education was added to models including the SCORE risk equation. The lowest educational class was associated with higher CVD mortality in men (pooled age-adjusted HR=1.64, 95% CI 1.42 to 1.90) and women (HR=1.31, 1.02 to 1.68). In men, the HRs ranged from 1.3 (Central Europe) to 2.1 (Eastern Europe and Russia). After adjustment for the SCORE risk, the association remained statistically significant overall, in the UK and Eastern Europe and Russia. Education significantly improved discrimination in all European regions and classification in Nordic countries (clinical NRI=5.3%) and in Eastern Europe and Russia (NRI=24.7%). In women, after SCORE risk adjustment, the association was not statistically significant, but the reduced number of deaths plays a major role, and the addition of education led to improvements in discrimination and classification in the Nordic countries only. We recommend the inclusion of education in SCORE CVD risk equation in men, particularly in Nordic and East European countries, to improve social equity in primary prevention. Weaker evidence for women warrants the need for further investigations. Published by the BMJ Publishing Group Limited.
de Broglie-Proca and Bopp-Podolsky massive photon gases in cosmology
NASA Astrophysics Data System (ADS)
Cuzinatto, R. R.; de Morais, E. M.; Medeiros, L. G.; Naldoni de Souza, C.; Pimentel, B. M.
2017-04-01
We investigate the influence of massive photons on the evolution of the expanding universe. Two particular models of generalized electrodynamics are considered, namely de Broglie-Proca and Bopp-Podolsky electrodynamics. We obtain the equation of state (EOS) P=P(\varepsilon) for each case using dispersion relations derived from both theories. The EOS are inputted into the Friedmann equations of a homogeneous and isotropic space-time to determine the cosmic scale factor a(t). It is shown that the photon's nonzero mass does not significantly alter the result a ∝ t^{1/2} valid for a massless photon gas; this is true both in de Broglie-Proca's case (where the photon mass m is extremely small) and in Bopp-Podolsky theory (for which m is extremely large).
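The quoted scaling is the standard radiation-era solution. For a flat FRW model with equation of state $P = w\varepsilon$, energy conservation and the Friedmann equation give

```latex
\dot\varepsilon + 3\frac{\dot a}{a}\,(\varepsilon + P) = 0
\;\Rightarrow\;
\varepsilon \propto a^{-3(1+w)},
\qquad
\left(\frac{\dot a}{a}\right)^{2} = \frac{8\pi G}{3}\,\varepsilon
\;\Rightarrow\;
a \propto t^{\frac{2}{3(1+w)}},
```

so a massless photon gas ($w = 1/3$) yields $a \propto t^{1/2}$. The abstract's point is that the massive-photon EOS departs from $w = 1/3$ too little, in either theory, to change this noticeably.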
NASA Technical Reports Server (NTRS)
Kleinstein, G. G.; Gunzburger, M. D.
1976-01-01
An integral conservation law for wave numbers is considered. In order to test the validity of the proposed conservation law, a complete solution for the reflection and transmission of an acoustic wave impinging normally on a material interface moving at a constant speed is derived. The agreement between the frequency condition thus deduced from the dynamic equations of motion and the frequency condition derived from the jump condition associated with the integral equation supports the proposed law as a true conservation law. Additional comparisons, such as amplitude discontinuities and Snell's law in a moving medium, further confirm the stated proposition. Results are stated concerning frequency and wave number relations across a shock front as predicted by the proposed conservation law.
NASA Astrophysics Data System (ADS)
Cannoni, Mirco
2015-03-01
We show that the standard theory of thermal production and chemical decoupling of WIMPs is incomplete. The hypothesis that WIMPs are produced and decouple from a thermal bath implies that the rate equation for the bath particles interacting with the WIMPs is an algebraic equation that constrains the actual WIMP abundance to have a precise analytical form down to a certain temperature. That point, which coincides with the stationary point of the equation for the relevant quantity, is where the maximum departure of the WIMP abundance from the thermal value is reached. For each mass and total annihilation cross section, the temperature and the actual WIMP abundance at that point are exactly known. This value provides the true initial condition for the usual differential equation that has to be integrated over the remaining interval. The matching of the two abundances at that point is continuous and differentiable. The dependence of the present relic abundance on the abundance at an intermediate temperature is an exact result. The exact theory suggests a new analytical approximation that furnishes the relic abundance accurate at the level of 1-2% in the case of -wave and -wave scattering cross sections. We conclude the paper by studying the evolution of the WIMP chemical potential and the entropy production using methods of non-equilibrium thermodynamics.
Kinetic Alfvén solitary and rogue waves in superthermal plasmas
NASA Astrophysics Data System (ADS)
Bains, A. S.; Li, Bo; Xia, Li-Dong
2014-03-01
We investigate the small but finite amplitude solitary Kinetic Alfvén waves (KAWs) in low β plasmas with superthermal electrons modeled by a kappa-type distribution. A nonlinear Korteweg-de Vries (KdV) equation describing the evolution of KAWs is derived by using the standard reductive perturbation method. Examining the dependence of the nonlinear and dispersion coefficients of the KdV equation on the superthermal parameter κ, plasma β, and obliqueness of propagation, we show that these parameters may change substantially the shape and size of solitary KAW pulses. Only sub-Alfvénic, compressive solitons are supported. We then extend the study to examine kinetic Alfvén rogue waves by deriving a nonlinear Schrödinger equation from the KdV equation. Rational solutions that form rogue wave envelopes are obtained. We examine how the behavior of rogue waves depends on the plasma parameters in question, finding that the rogue envelopes are lowered with increasing electron superthermality whereas the opposite is true when the plasma β increases. The findings of this study may find applications to low β plasmas in astrophysical environments where particles are superthermally distributed.
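Schematically, the reductive perturbation procedure leads to a KdV equation of the standard form, where A and B here stand in for the κ-, β- and obliqueness-dependent coefficients derived in the paper:

```latex
\frac{\partial \phi}{\partial \tau}
+ A\,\phi\,\frac{\partial \phi}{\partial \xi}
+ B\,\frac{\partial^{3} \phi}{\partial \xi^{3}} = 0,
\qquad
\phi = \phi_m \operatorname{sech}^{2}\!\left[\frac{\xi - u\tau}{W}\right],
\quad
\phi_m = \frac{3u}{A}, \; W = \sqrt{\frac{4B}{u}},
```

so any parameter that changes A or B directly rescales the soliton's amplitude and width, which is how superthermality, β and propagation angle reshape the pulses.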
Survey of Army/NASA Rotorcraft Aeroelastic Stability Research
1988-10-01
modal analysis of aeroelastic stability of uniform cantilever rotor blades that clearly illustrated the significant influence of the nonlinear bending... In Reference 8, the Newtonian approach does not necessarily yield a symmetric structural operator and, although the equations from the two methods are not... Reference 69 to a true finite-element form so that the generalized coordinates were actual displacements and slopes at the ends of the element. In addition to the
Pulsed Thrust Method for Hover Formation Flying
NASA Technical Reports Server (NTRS)
Hope, Alan; Trask, Aaron
2003-01-01
A non-continuous thrust method for hover type formation flying has been developed. This method differs from a true hover which requires constant range and bearing from a reference vehicle. The new method uses a pulsed loop, or pogo, maneuver sequence that keeps the follower spacecraft within a defined box in a near hover situation. Equations are developed for the hover maintenance maneuvers. The constraints on the hover location, pulse interval, and maximum/minimum ranges are discussed.
Problems of interaction longitudinal shear waves with V-shape tunnels defect
NASA Astrophysics Data System (ADS)
Popov, V. G.
2018-04-01
The problem of determining the two-dimensional dynamic stress state near a tunnel defect of V-shaped cross-section is solved. The defect is located in an infinite elastic medium, where harmonic longitudinal shear waves are propagating. The initial problem is reduced to a system of two singular integral or integro-differential equations with fixed singularities. A numerical method for solving these systems with regard to the true asymptotics of the unknown functions is developed.
ERIC Educational Resources Information Center
Culpepper, Steven Andrew
2013-01-01
A classic topic in the fields of psychometrics and measurement has been the impact of the number of scale categories on test score reliability. This study builds on previous research by further articulating the relationship between item response theory (IRT) and classical test theory (CTT). Equations are presented for comparing the reliability and…