An Elasto-Plastic Damage Model for Rocks Based on a New Nonlinear Strength Criterion
NASA Astrophysics Data System (ADS)
Huang, Jingqi; Zhao, Mi; Du, Xiuli; Dai, Feng; Ma, Chao; Liu, Jingbo
2018-05-01
The strength and deformation characteristics of rocks are among the most important mechanical properties for rock engineering. A new nonlinear strength criterion is developed for rocks by combining the Hoek-Brown (HB) criterion with the nonlinear unified strength criterion (NUSC). Unlike the HB criterion, the proposed criterion accounts for the intermediate principal stress effect, and unlike the NUSC, it is nonlinear in the meridian plane. Only three parameters need to be determined by experiments, including the two HB parameters σ_c and m_i. The failure surface of the proposed criterion is continuous, smooth and convex. The proposed criterion fits true triaxial test data well and performs better than three other existing criteria. By introducing the Geological Strength Index, the proposed criterion is then extended to rock masses and predicts the test data well. Finally, based on the proposed criterion, a triaxial elasto-plastic damage model for intact rock is developed. The plastic part is formulated in terms of the effective stress, with a yield function derived from the proposed criterion; for the damage part, the evolution function is assumed to take an exponential form. The constitutive model shows good agreement with the results of experimental tests.
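The abstract does not give the closed form of the combined criterion; as background, the Hoek-Brown criterion it starts from can be sketched in a few lines (generalized form; s = 1 and a = 0.5 recover the intact-rock case, and the parameter names follow standard HB usage rather than this paper):

```python
def hoek_brown_sigma1(sigma3, sigma_c, m_b, s=1.0, a=0.5):
    """Major principal stress at failure under the generalized
    Hoek-Brown criterion; s=1, a=0.5 gives the intact-rock form
    with m_b = m_i."""
    return sigma3 + sigma_c * (m_b * sigma3 / sigma_c + s) ** a
```

At zero confinement (sigma3 = 0) the expression reduces to the uniaxial compressive strength sigma_c, and the predicted strength grows nonlinearly with confinement.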
Robust signal recovery using the prolate spheroidal wave functions and maximum correntropy criterion
NASA Astrophysics Data System (ADS)
Zou, Cuiming; Kou, Kit Ian
2018-05-01
Signal recovery is one of the most important problems in signal processing. This paper proposes a novel signal recovery method based on prolate spheroidal wave functions (PSWFs). PSWFs are a family of special functions that have been shown to perform well in signal recovery. However, existing PSWF-based recovery methods use the mean square error (MSE) criterion, which relies on the assumption that the noise is Gaussian. Under non-Gaussian noise, such as impulsive noise or outliers, the MSE criterion is sensitive and may lead to large reconstruction errors. Unlike existing PSWF-based recovery methods, the proposed method employs the maximum correntropy criterion (MCC), which does not depend on the noise distribution and can therefore reduce the impact of large, non-Gaussian noise. Experimental results on synthetic signals corrupted by various types of noise show that the proposed MCC-based signal recovery method is more robust than other existing methods.
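A minimal sketch of the MSE-versus-MCC contrast described above, using a cosine basis as a stand-in for PSWFs (the basis, kernel width, and half-quadratic solver here are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy basis (cosines stand in for PSWFs) and sparse impulsive noise.
n, k = 200, 5
t = np.linspace(0, 1, n)
Phi = np.cos(np.outer(t, np.arange(1, k + 1)) * np.pi)   # n x k design matrix
coef_true = np.array([1.0, -0.5, 0.3, 0.0, 0.2])
y_noisy = Phi @ coef_true
idx = rng.choice(n, size=10, replace=False)
y_noisy[idx] += rng.normal(0, 20, size=10)               # large outliers

# MSE (ordinary least squares) solution.
coef_mse, *_ = np.linalg.lstsq(Phi, y_noisy, rcond=None)

# MCC via half-quadratic iterations: reweighted least squares with
# Gaussian-kernel weights w_i = exp(-e_i^2 / (2*sigma^2)).
sigma = 1.0
coef_mcc = coef_mse.copy()
for _ in range(50):
    e = y_noisy - Phi @ coef_mcc
    w = np.exp(-e**2 / (2 * sigma**2))
    W = Phi * w[:, None]                                 # diag(w) @ Phi
    coef_mcc = np.linalg.solve(Phi.T @ W, W.T @ y_noisy)

err_mse = np.linalg.norm(coef_mse - coef_true)
err_mcc = np.linalg.norm(coef_mcc - coef_true)
```

The Gaussian-kernel weights drive the influence of large-residual observations (the outliers) toward zero, so the MCC fit stays close to the true coefficients while the least-squares fit is pulled away.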
A machine-learned computational functional genomics-based approach to drug classification.
Lötsch, Jörn; Ultsch, Alfred
2016-12-01
The public accessibility of "big data" about the molecular targets of drugs and the biological functions of genes allows novel data science-based approaches to pharmacology that link drugs directly with their effects on pathophysiologic processes. This provides a phenotypic path to drug discovery and repurposing. This paper compares the performance of a functional genomics-based criterion to the traditional drug target-based classification. Knowledge discovery in the DrugBank and Gene Ontology databases allowed the construction of a "drug target versus biological process" matrix as a combination of "drug versus genes" and "genes versus biological processes" matrices. As a canonical example, such matrices were constructed for classical analgesic drugs. These matrices were projected onto a toroid grid of 50 × 82 artificial neurons using a self-organizing map (SOM). The distance and cluster structure of the high-dimensional feature space of the matrices were visualized on top of this SOM using a U-matrix. The cluster structure emerging on the U-matrix provided a correct classification of the analgesics into two main classes of opioid and non-opioid analgesics. The classification was flawless with both the functional genomics and the traditional target-based criterion. The functional genomics approach inherently included the drugs' modulatory effects on biological processes. The main pharmacological actions known from pharmacological science were captured, e.g., actions on lipid signaling for non-opioid analgesics, which comprised many NSAIDs, and actions on neuronal signal transmission for opioid analgesics. Using machine-learning techniques for computational drug classification in a comparative assessment, a functional genomics-based criterion was found to be as suitable for drug classification as the traditional target-based criterion. 
This supports the utility of functional genomics-based approaches to computational systems pharmacology for drug discovery and repurposing.
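The SOM projection step can be illustrated with a toy version (a small rectangular grid and a synthetic binary "drug versus process" matrix; the paper's 50 × 82 toroid, the U-matrix computation, and the real DrugBank/GO data are beyond this sketch):

```python
import numpy as np

def train_som(data, rows=4, cols=4, iters=500, seed=0):
    """Minimal rectangular-grid SOM; the paper's toroidal grid only
    changes the neighbourhood distance function."""
    rng = np.random.default_rng(seed)
    n, d = data.shape
    codebook = rng.normal(size=(rows * cols, d))
    grid = np.array([(r, c) for r in range(rows) for c in range(cols)], float)
    for it in range(iters):
        x = data[rng.integers(n)]
        bmu = int(np.argmin(((codebook - x) ** 2).sum(axis=1)))  # best-matching unit
        frac = it / iters
        lr = 0.5 * (1.0 - frac)                        # decaying learning rate
        radius = 0.5 + (max(rows, cols) / 2) * (1.0 - frac)
        gdist2 = ((grid - grid[bmu]) ** 2).sum(axis=1)
        h = np.exp(-gdist2 / (2 * radius ** 2))        # neighbourhood kernel
        codebook += (lr * h)[:, None] * (x - codebook)
    return codebook

# Synthetic "drug x process" matrix: two groups with disjoint process profiles.
data = np.vstack([np.tile([1, 1, 1, 0, 0, 0], (5, 1)),
                  np.tile([0, 0, 0, 1, 1, 1], (5, 1))]).astype(float)
codebook = train_som(data)
bmu_a = int(np.argmin(((codebook - data[0]) ** 2).sum(axis=1)))
bmu_b = int(np.argmin(((codebook - data[-1]) ** 2).sum(axis=1)))
```

After training, the two groups map to different best-matching units; this separation in map space is what the U-matrix then visualizes as cluster structure.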
Base-Rate Neglect as a Function of Base Rates in Probabilistic Contingency Learning
ERIC Educational Resources Information Center
Kutzner, Florian; Freytag, Peter; Vogel, Tobias; Fiedler, Klaus
2008-01-01
When humans predict criterion events based on probabilistic predictors, they often lend excessive weight to the predictor and insufficient weight to the base rate of the criterion event. In an operant analysis, using a matching-to-sample paradigm, Goodie and Fantino (1996) showed that humans exhibit base-rate neglect when predictors are associated…
Criterion for estimation of stress-deformed state of SD-materials
NASA Astrophysics Data System (ADS)
Orekhov, Andrey V.
2018-05-01
A criterion is proposed that determines the moment when the growth pattern of the monotonic numerical sequence varies from the linear to the parabolic one. The criterion is based on the comparison of squares of errors for the linear and the incomplete quadratic approximation. The approximating functions are constructed locally, only at those points that are located near a possible change in nature of the increase in the sequence.
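A compact version of the comparison can be sketched, taking "incomplete quadratic" to mean a·x² + b (an interpretation from context; the paper's exact form may differ):

```python
import numpy as np

def growth_change(y):
    """Compare the SSE of a linear fit a*x + b against an 'incomplete
    quadratic' fit a*x^2 + b over the same points; a smaller quadratic
    SSE suggests the sequence has switched from linear to parabolic
    growth."""
    x = np.arange(len(y), dtype=float)
    A_lin = np.column_stack([x, np.ones_like(x)])
    A_quad = np.column_stack([x**2, np.ones_like(x)])
    sse = lambda A: np.sum((A @ np.linalg.lstsq(A, y, rcond=None)[0] - y) ** 2)
    return sse(A_quad) < sse(A_lin)
```

For a parabolic sequence such as 0, 1, 4, 9, ... the quadratic SSE vanishes and the test fires; for a linear sequence the linear fit wins and it does not.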
Development of an updated tensile neck injury criterion.
Parr, Jeffrey C; Miller, Michael E; Schubert Kabban, Christine M; Pellettiere, Joseph A; Perry, Chris E
2014-10-01
Ejection neck safety remains a concern in military aviation with the growing use of helmet-mounted displays (HMDs) worn for entire mission durations. The original USAF tensile neck injury criterion proposed by Carter et al. (4) is updated, and an injury protection limit for tensile loading is presented to evaluate escape system and HMD safety. The existing tensile neck injury criterion was updated through the addition of newer postmortem human subject (PMHS) tensile loading and injury data and the application of survival analysis to account for censoring in these data. The updated risk function was constructed with a combined human subject (N = 208) and PMHS (N = 22) data set. An updated AIS 3+ tensile neck injury criterion is proposed based upon human and PMHS data. This limit is significantly more conservative than the criterion proposed by Carter in 2000, yielding a 5% risk of AIS 3+ injury at a force of 1136 N, as compared to a corresponding force of 1559 N. The inclusion of recent PMHS data into the original tensile neck injury criterion results in an injury protection limit that is significantly more conservative, as the recent PMHS data are substantially less censored than the PMHS data included in the earlier criterion. The updated tensile risk function developed in this work is consistent with the tensile risk function published by the Federal Aviation Administration, used as the basis for its neck injury criterion for side-facing aircraft seats.
What Is True Halving in the Payoff Matrix of Game Theory?
Hasegawa, Eisuke; Yoshimura, Jin
2016-01-01
In game theory, there are two social interpretations of rewards (payoffs) for decision-making strategies: (1) the interpretation based on the utility criterion derived from expected utility theory and (2) the interpretation based on the quantitative criterion (amount of gain) derived from validity in the empirical context. A dynamic decision theory has recently been developed in which dynamic utility is a conditional (state) variable that is a function of the current wealth of a decision maker. We applied dynamic utility to the equal division in dove-dove contests in the hawk-dove game. Our results indicate that under the utility criterion, the half-share of utility becomes proportional to a player’s current wealth. Our results are consistent with studies of the sense of fairness in animals, which indicate that the quantitative criterion has greater validity than the utility criterion. We also find that traditional analyses of repeated games must be reevaluated. PMID:27487194
What Is True Halving in the Payoff Matrix of Game Theory?
Ito, Hiromu; Katsumata, Yuki; Hasegawa, Eisuke; Yoshimura, Jin
2016-01-01
In game theory, there are two social interpretations of rewards (payoffs) for decision-making strategies: (1) the interpretation based on the utility criterion derived from expected utility theory and (2) the interpretation based on the quantitative criterion (amount of gain) derived from validity in the empirical context. A dynamic decision theory has recently been developed in which dynamic utility is a conditional (state) variable that is a function of the current wealth of a decision maker. We applied dynamic utility to the equal division in dove-dove contests in the hawk-dove game. Our results indicate that under the utility criterion, the half-share of utility becomes proportional to a player's current wealth. Our results are consistent with studies of the sense of fairness in animals, which indicate that the quantitative criterion has greater validity than the utility criterion. We also find that traditional analyses of repeated games must be reevaluated.
Job shop scheduling problem with late work criterion
NASA Astrophysics Data System (ADS)
Piroozfard, Hamed; Wong, Kuan Yew
2015-05-01
Scheduling is a key task in many domains, such as project scheduling, crew scheduling, flight scheduling, and machine scheduling. Within the machine scheduling area, job shop scheduling problems are important and highly complex; they are NP-hard. This paper addresses job shop scheduling problems with the late work criterion and non-preemptive jobs. The late work criterion is a fairly new objective function. It is a qualitative measure that concerns the late parts of the jobs, unlike classical objective functions, which are quantitative measures. In this work, simulated annealing is applied to solve the scheduling problem. An operation-based representation is used to encode solutions, and a neighbourhood search structure is employed to generate new solutions. The case studies are Lawrence instances taken from the Operations Research Library. Computational results of this probabilistic meta-heuristic algorithm are compared with those of a conventional genetic algorithm, and conclusions are drawn from the comparison.
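For reference, the late work of a job is the part of its processing performed after its due date, capped at its processing time. A one-machine sketch of the objective (the job shop version applies the same per-job measure to the schedule's completion times):

```python
def total_late_work(jobs):
    """jobs: list of (processing_time, due_date) pairs executed in list
    order on one machine. Late work of a job is the portion of its
    processing done after its due date, capped at its processing time."""
    t, late = 0, 0
    for p, d in jobs:
        t += p                          # completion time C_j
        late += min(p, max(0, t - d))   # late work Y_j
    return late
```

For example, jobs (p=2, d=2) then (p=3, d=4) complete at times 2 and 5, so only one unit of the second job's processing is late, giving total late work 1.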
Steele, Catriona M.; Namasivayam-MacDonald, Ashwini M.; Guida, Brittany T.; Cichero, Julie A.; Duivestein, Janice; Hanson, Ben; Lam, Peter; Riquelme, Luis F.
2018-01-01
Objective: To assess consensual validity, interrater reliability, and criterion validity of the International Dysphagia Diet Standardisation Initiative Functional Diet Scale, a new functional outcome scale intended to capture the severity of oropharyngeal dysphagia, as represented by the degree of diet texture restriction recommended for the patient. Design: Participants assigned International Dysphagia Diet Standardisation Initiative Functional Diet Scale scores to 16 clinical cases. Consensual validity was measured against reference scores determined by an author reference panel. Interrater reliability was measured overall and across quartile subsets of the dataset. Criterion validity was evaluated versus Functional Oral Intake Scale (FOIS) scores assigned by survey respondents to the same case scenarios. Feedback was requested regarding ease and likelihood of use. Setting: Web-based survey. Participants: Respondents (N=170) from 29 countries. Interventions: Not applicable. Main Outcome Measures: Consensual validity (percent agreement and Kendall τ), criterion validity (Spearman rank correlation), and interrater reliability (Kendall concordance and intraclass coefficients). Results: The International Dysphagia Diet Standardisation Initiative Functional Diet Scale showed strong consensual validity, criterion validity, and interrater reliability. Scenarios involving liquid-only diets, transition from nonoral feeding, or trial diet advances in therapy showed the poorest consensus, indicating a need for clear instructions on how to score these situations. The International Dysphagia Diet Standardisation Initiative Functional Diet Scale showed greater sensitivity than the FOIS to specific changes in diet. Most (>70%) respondents indicated enthusiasm for implementing the International Dysphagia Diet Standardisation Initiative Functional Diet Scale. 
Conclusions: This initial validation study suggests that the International Dysphagia Diet Standardisation Initiative Functional Diet Scale has strong consensual and criterion validity and can be used reliably by clinicians to capture diet texture restriction and progression in people with dysphagia. PMID:29428348
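Kendall's coefficient of concordance, one of the interrater-reliability statistics named above, can be sketched as follows (tie-free ranks assumed; the usual correction for tied ranks is omitted):

```python
import numpy as np

def kendalls_w(ratings):
    """ratings: m raters x n cases. Returns Kendall's coefficient of
    concordance W (1 = perfect agreement on the ordering of cases,
    0 = rank sums identical, i.e. no concordance)."""
    ratings = np.asarray(ratings, dtype=float)
    m, n = ratings.shape
    # Rank each rater's scores across the cases (no ties assumed).
    ranks = ratings.argsort(axis=1).argsort(axis=1) + 1
    R = ranks.sum(axis=0)                 # rank sum per case
    S = ((R - R.mean()) ** 2).sum()       # deviation of rank sums
    return 12 * S / (m**2 * (n**3 - n))
```

Two raters who order the cases identically give W = 1; two raters with exactly opposite orderings give W = 0.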
Steele, Catriona M; Namasivayam-MacDonald, Ashwini M; Guida, Brittany T; Cichero, Julie A; Duivestein, Janice; Hanson, Ben; Lam, Peter; Riquelme, Luis F
2018-05-01
To assess consensual validity, interrater reliability, and criterion validity of the International Dysphagia Diet Standardisation Initiative Functional Diet Scale, a new functional outcome scale intended to capture the severity of oropharyngeal dysphagia, as represented by the degree of diet texture restriction recommended for the patient. Participants assigned International Dysphagia Diet Standardisation Initiative Functional Diet Scale scores to 16 clinical cases. Consensual validity was measured against reference scores determined by an author reference panel. Interrater reliability was measured overall and across quartile subsets of the dataset. Criterion validity was evaluated versus Functional Oral Intake Scale (FOIS) scores assigned by survey respondents to the same case scenarios. Feedback was requested regarding ease and likelihood of use. Setting: Web-based survey. Participants: Respondents (N=170) from 29 countries. Interventions: Not applicable. Main outcome measures: Consensual validity (percent agreement and Kendall τ), criterion validity (Spearman rank correlation), and interrater reliability (Kendall concordance and intraclass coefficients). The International Dysphagia Diet Standardisation Initiative Functional Diet Scale showed strong consensual validity, criterion validity, and interrater reliability. Scenarios involving liquid-only diets, transition from nonoral feeding, or trial diet advances in therapy showed the poorest consensus, indicating a need for clear instructions on how to score these situations. The International Dysphagia Diet Standardisation Initiative Functional Diet Scale showed greater sensitivity than the FOIS to specific changes in diet. Most (>70%) respondents indicated enthusiasm for implementing the International Dysphagia Diet Standardisation Initiative Functional Diet Scale. 
This initial validation study suggests that the International Dysphagia Diet Standardisation Initiative Functional Diet Scale has strong consensual and criterion validity and can be used reliably by clinicians to capture diet texture restriction and progression in people with dysphagia.
Why noise is useful in functional and neural mechanisms of interval timing?
2013-01-01
Background The ability to estimate durations in the seconds-to-minutes range - interval timing - is essential for survival and adaptation, and its impairment leads to severe cognitive and/or motor dysfunctions. The response rate near a memorized duration has a Gaussian shape centered on the to-be-timed interval (criterion time). The width of the Gaussian-like distribution of responses increases linearly with the criterion time, i.e., interval timing obeys the scalar property. Results We presented analytical and numerical results based on the striatal beat frequency (SBF) model showing that parameter variability (noise) mimics behavioral data. A key functional block of the SBF model is the set of oscillators that provide the time base for the entire timing network. Implementing the oscillator block with simplified phase (cosine) oscillators has the additional advantage of being analytically tractable. We also checked numerically that the scalar property emerges in the presence of memory variability by using biophysically realistic Morris-Lecar oscillators. First, we predicted analytically and tested numerically that in a noise-free SBF model the output function can be approximated by a Gaussian. However, in a noise-free SBF model the width of the Gaussian envelope is independent of the criterion time, which violates the scalar property. We showed analytically and verified numerically that small fluctuations of the memorized criterion time lead to the scalar property of interval timing. Conclusions Noise is ubiquitous in the form of small fluctuations of the intrinsic frequencies of the neural oscillators, errors in recording/retrieving stored information related to the criterion time, fluctuations in neurotransmitter concentrations, etc. Our model suggests that biological noise plays an essential functional role in SBF interval timing. PMID:23924391
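The mechanism in the abstract can be shown with one line of arithmetic: if a single trial's response is a Gaussian of fixed width w centered at a jittered criterion time T' = T(1 + ε), with ε ~ N(0, σ²), then the trial-averaged response has variance w² + (Tσ)², so its width grows linearly in T once memory jitter dominates. The numbers below are illustrative, not fit to data:

```python
import math

def averaged_width(T, w=0.1, sigma_mem=0.1):
    """Std of the trial-averaged response: fixed single-trial width w
    combined with memory jitter of the criterion time, T' = T*(1+eps)."""
    return math.sqrt(w**2 + (T * sigma_mem) ** 2)

# Doubling the criterion time roughly doubles the response width
# (scalar property) when T*sigma_mem >> w.
ratio = averaged_width(20.0) / averaged_width(10.0)
```

With w = 0.1 and σ = 0.1, the width at T = 20 is very nearly twice the width at T = 10, which is the scalar property; in a noise-free model (σ = 0) the width stays at w for every T.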
Robust stability of bidirectional associative memory neural networks with time delays
NASA Astrophysics Data System (ADS)
Park, Ju H.
2006-01-01
Based on Lyapunov-Krasovskii functionals combined with the linear matrix inequality approach, a novel delay-dependent criterion for the asymptotic stability of bidirectional associative memory neural networks with time delays is proposed. The criterion is given in terms of linear matrix inequalities, which can be solved easily by various optimization algorithms.
Performance index and meta-optimization of a direct search optimization method
NASA Astrophysics Data System (ADS)
Krus, P.; Ölvander, J.
2013-10-01
Design optimization is becoming an increasingly important tool for design, often using simulation as part of the evaluation of the objective function. A measure of the efficiency of an optimization algorithm is of great importance when comparing methods. The main contribution of this article is the introduction of a single performance criterion, the entropy rate index, based on Shannon's information theory, which takes both reliability and rate of convergence into account. It can also be used to characterize the difficulty of different optimization problems. Such a performance criterion can likewise be used to optimize the optimization algorithm itself. In this article the Complex-RF optimization method is described and its performance evaluated and optimized using the established performance criterion. Finally, in order to be able to predict the resources needed for optimization, an objective function temperament factor is defined that indicates the degree of difficulty of the objective function.
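The abstract does not spell out the entropy rate index. One plausible Shannon-style formulation, stated here purely as an assumption for illustration and not as the authors' definition, scores the bits of failure probability removed per objective-function evaluation, so that both reliability and convergence cost enter the score:

```python
import math

def entropy_rate_index(p_success, n_evals):
    """Hypothetical performance index (illustrative assumption):
    -log2(1 - p) bits of residual failure probability removed,
    normalized by the number of objective-function evaluations."""
    return -math.log2(1.0 - p_success) / n_evals
```

Under this formulation a method that succeeds more reliably, or that needs fewer evaluations to do so, receives a higher index, which matches the two ingredients the abstract says the criterion combines.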
Development of Internet-Based Tasks for the Executive Function Performance Test.
Rand, Debbie; Lee Ben-Haim, Keren; Malka, Rachel; Portnoy, Sigal
The Executive Function Performance Test (EFPT) is a reliable and valid performance-based tool to assess executive functions (EFs). This study's objective was to develop and verify two Internet-based tasks for the EFPT. A cross-sectional study assessed the alternate-form reliability of the Internet-based bill-paying and telephone-use tasks in healthy adults and people with subacute stroke (Study 1). It also sought to establish the tasks' criterion validity for assessing EF deficits by correlating performance with that on the Trail Making Test in five groups: healthy young adults, healthy older adults, people with subacute stroke, people with chronic stroke, and young adults with attention deficit hyperactivity disorder (Study 2). The alternate-form reliability and initial construct validity of the Internet-based bill-paying task were verified. Criterion validity was established for both tasks. The Internet-based tasks are comparable to the original EFPT tasks and can be used for assessment of EF deficits.
Suboptimal Decision Criteria Are Predicted by Subjectively Weighted Probabilities and Rewards
Ackermann, John F.; Landy, Michael S.
2014-01-01
Subjects performed a visual detection task in which the probability of target occurrence at each of the two possible locations, and the rewards for correct responses for each, were varied across conditions. To maximize monetary gain, observers should bias their responses, choosing one location more often than the other in line with the varied probabilities and rewards. Typically, and in our task, observers do not bias their responses to the extent they should, and instead distribute their responses more evenly across locations, a phenomenon referred to as ‘conservatism.’ We investigated several hypotheses regarding the source of the conservatism. We measured utility and probability weighting functions under Prospect Theory for each subject in an independent economic choice task and used the weighting-function parameters to calculate each subject’s subjective utility (SU(c)) as a function of the criterion c, and the corresponding weighted optimal criteria (wc_opt). Subjects’ criteria were not close to optimal relative to wc_opt. The slope of SU(c) and of expected gain EG(c) at the neutral criterion corresponding to β = 1 were both predictive of subjects’ criteria. The slope of SU(c) was a better predictor of observers’ decision criteria overall. Thus, rather than behaving optimally, subjects move their criterion away from the neutral criterion by estimating how much they stand to gain by such a change based on the slope of subjective gain as a function of criterion, using inherently distorted probabilities and values. PMID:25366822
Suboptimal decision criteria are predicted by subjectively weighted probabilities and rewards.
Ackermann, John F; Landy, Michael S
2015-02-01
Subjects performed a visual detection task in which the probability of target occurrence at each of the two possible locations, and the rewards for correct responses for each, were varied across conditions. To maximize monetary gain, observers should bias their responses, choosing one location more often than the other in line with the varied probabilities and rewards. Typically, and in our task, observers do not bias their responses to the extent they should, and instead distribute their responses more evenly across locations, a phenomenon referred to as 'conservatism.' We investigated several hypotheses regarding the source of the conservatism. We measured utility and probability weighting functions under Prospect Theory for each subject in an independent economic choice task and used the weighting-function parameters to calculate each subject's subjective utility (SU(c)) as a function of the criterion c, and the corresponding weighted optimal criteria (wc_opt). Subjects' criteria were not close to optimal relative to wc_opt. The slope of SU(c) and of expected gain EG(c) at the neutral criterion corresponding to β = 1 were both predictive of the subjects' criteria. The slope of SU(c) was a better predictor of observers' decision criteria overall. Thus, rather than behaving optimally, subjects move their criterion away from the neutral criterion by estimating how much they stand to gain by such a change based on the slope of subjective gain as a function of criterion, using inherently distorted probabilities and values.
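The expected-gain curve EG(c) referred to above can be written down for an equal-variance Gaussian model (a standard signal-detection sketch; the sensitivity d, the priors, and the rewards below are illustrative, not the study's values):

```python
import math

def expected_gain(c, d=1.0, prior1=0.7, r1=1.0, r2=1.0):
    """EG(c) for a two-location task: decision variable x ~ N(+d/2, 1)
    when the target is at location 1 (prior prior1), N(-d/2, 1)
    otherwise; respond '1' when x > c; correct responses earn r1/r2."""
    Phi = lambda z: 0.5 * (1 + math.erf(z / math.sqrt(2)))  # normal CDF
    hit = 1 - Phi(c - d / 2)          # P(respond 1 | location 1)
    cr = Phi(c + d / 2)               # P(respond 2 | location 2)
    return prior1 * r1 * hit + (1 - prior1) * r2 * cr

# Numeric optimum vs the analytic optimum c* = ln(((1-p)*r2)/(p*r1)) / d,
# obtained from the likelihood-ratio rule.
grid = [i / 1000 for i in range(-3000, 3001)]
c_num = max(grid, key=expected_gain)
c_ana = math.log((0.3 * 1.0) / (0.7 * 1.0)) / 1.0
```

With a 0.7 prior on location 1, the gain-maximizing criterion sits well below the neutral criterion c = 0; 'conservatism' in the abstract's sense is a criterion that stays between 0 and this optimum.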
ERIC Educational Resources Information Center
Maljaars, Jarymke; Noens, Ilse; Scholte, Evert; van Berckelaer-Onnes, Ina
2012-01-01
The Diagnostic Interview for Social and Communication Disorders (DISCO; Wing, 2006) is a standardized, semi-structured and interviewer-based schedule for diagnosis of autism spectrum disorder (ASD). The objective of this study was to evaluate the criterion and convergent validity of the DISCO-11 ICD-10 algorithm in young and low-functioning…
Optimization of equivalent uniform dose using the L-curve criterion.
Chvetsov, Alexei V; Dempsey, James F; Palta, Jatinder R
2007-10-07
Optimization of equivalent uniform dose (EUD) in inverse planning for intensity-modulated radiation therapy (IMRT) prevents variation in radiobiological effect between different radiotherapy treatment plans that arises from variation in the pattern of dose nonuniformity. For instance, the survival fraction of clonogens would be consistent with the prescription when the optimized EUD is equal to the prescribed EUD. One of the problems in the practical implementation of this approach is that the spatial dose distribution in EUD-based inverse planning is underdetermined, because an unlimited number of nonuniform dose distributions can be computed for a prescribed value of EUD. Together with the ill-posedness of the underlying integral equation, this may significantly increase the dose nonuniformity. To optimize EUD and keep dose nonuniformity within reasonable limits, we implemented into an EUD-based objective function an additional criterion which ensures the smoothness of the beam intensity functions. This approach is similar to the variational regularization technique which was previously studied for dose-based least-squares optimization. We show that variational regularization together with the L-curve criterion for the regularization parameter can significantly reduce dose nonuniformity in EUD-based inverse planning.
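The L-curve recipe itself is generic and can be sketched on a synthetic problem (Tikhonov regularization of an ill-conditioned smoothing operator; the kernel, noise level, and λ grid are made up for illustration and have nothing to do with the IMRT objective in the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
# Ill-conditioned forward operator (Gaussian smoothing kernel) and noisy data.
x = np.linspace(0, 1, n)
A = np.exp(-((x[:, None] - x[None, :]) ** 2) / 0.01)
f_true = np.sin(2 * np.pi * x)
g = A @ f_true + rng.normal(0, 0.01, n)

lams = np.logspace(-8, 2, 60)
rho, eta = [], []
for lam in lams:
    # Tikhonov solution: minimize ||A f - g||^2 + lam ||f||^2
    f = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ g)
    rho.append(np.log(np.linalg.norm(A @ f - g)))   # log residual norm
    eta.append(np.log(np.linalg.norm(f)))           # log solution norm
rho, eta = np.array(rho), np.array(eta)

# Discrete curvature of the parametric L-curve (rho, eta);
# the L-curve criterion picks lambda at the point of maximum curvature.
d1r, d1e = np.gradient(rho), np.gradient(eta)
d2r, d2e = np.gradient(d1r), np.gradient(d1e)
kappa = (d1r * d2e - d2r * d1e) / (d1r**2 + d1e**2) ** 1.5
lam_corner = lams[np.argmax(kappa)]
```

The corner balances the two competing terms: smaller λ trades residual for a wildly oscillating solution, larger λ over-smooths; the maximum-curvature point sits between the two branches.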
Flaw Tolerance In Lap Shear Brazed Joints. Part 2
NASA Technical Reports Server (NTRS)
Wang, Len; Flom, Yury
2003-01-01
This paper presents results of the second part of an ongoing effort to gain a better understanding of defect tolerance in braze joints. In the first part of this three-part series, we mechanically tested and modeled the strength of the lap joints as a function of the overlap distance. A failure criterion was established based on the damage zone theory, which predicts the dependence of the lap joint shear strength on the overlap distance based on the critical size of a finite damage zone, or overloaded region, in the joint. In this second part of the study, we experimentally verified the applicability of the damage zone criterion for predicting the shear strength of the lap joint and introduced controlled flaws into the lap joints. The purpose of the study was to evaluate the lap joint strength as a function of flaw size and location through mechanical testing and nonlinear finite element analysis (FEA) employing the damage zone criterion as the definition of failure. The results obtained from the second part of the investigation confirmed that failure of the ductile lap shear brazed joints occurs when the damage zone reaches approximately 10% of the overlap width. The same failure criterion was applicable to the lap joints containing flaws.
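The failure rule stated in the abstract reduces to a simple check on a stress profile (the 10% threshold is taken directly from the text; the stress profile itself would come from the FEA and is just an input array here):

```python
import numpy as np

def joint_fails(shear_stress, tau_yield, critical_fraction=0.10):
    """Damage-zone failure rule from the study: the lap joint is taken
    to fail once the overloaded region (stress >= yield) spans roughly
    10% of the overlap width. shear_stress: sampled stress profile
    along the overlap."""
    frac = np.mean(np.asarray(shear_stress) >= tau_yield)
    return frac >= critical_fraction
```

A profile with 5% of its points at or above yield passes; one with 15% at or above yield is predicted to fail.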
Revealing Hidden Einstein-Podolsky-Rosen Nonlocality
NASA Astrophysics Data System (ADS)
Walborn, S. P.; Salles, A.; Gomes, R. M.; Toscano, F.; Souto Ribeiro, P. H.
2011-04-01
Steering is a form of quantum nonlocality that is intimately related to the famous Einstein-Podolsky-Rosen (EPR) paradox that ignited the ongoing discussion of quantum correlations. Within the hierarchy of nonlocal correlations appearing in nature, EPR steering occupies an intermediate position between Bell nonlocality and entanglement. In continuous variable systems, EPR steering correlations have been observed by violation of Reid’s EPR inequality, which is based on inferred variances of complementary observables. Here we propose and experimentally test a new criterion based on entropy functions, and show that it is more powerful than the variance inequality for identifying EPR steering. Using the entropic criterion our experimental results show EPR steering, while the variance criterion does not. Our results open up the possibility of observing this type of nonlocality in a wider variety of quantum states.
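For reference, the two criteria being compared can be written out (with ħ = 1; violation of either inequality witnesses EPR steering, and the entropic bound implies the variance one because the Gaussian maximizes differential entropy at fixed variance, which is why the entropic test can succeed where the variance test fails):

```latex
% Reid variance criterion (inferred variances of Bob's quadratures
% given Alice's measurement outcomes):
\Delta^2_{\mathrm{inf}}(x_B)\,\Delta^2_{\mathrm{inf}}(p_B) \ge \tfrac{1}{4}

% Entropic steering criterion (conditional differential entropies):
h(x_B \mid x_A) + h(p_B \mid p_A) \ge \ln(\pi e)
```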
A Generalized Evolution Criterion in Nonequilibrium Convective Systems
NASA Astrophysics Data System (ADS)
Ichiyanagi, Masakazu; Nisizima, Kunisuke
1989-04-01
A general evolution criterion, applicable to transport processes such as heat conduction and mass diffusion, is obtained as a direct version of the Le Chatelier-Braun principle for stationary states. The present theory is not a radical departure from the conventional one. The generalized theory is made determinate by proposing balance equations for extensive thermodynamic variables that reflect the character of convective systems under the assumption of local equilibrium. As a consequence of the introduction of source terms in the balance equations, additional terms appear in the expression for the local entropy production that are bilinear in the intensive variables and the sources. In the present paper, we show that a dissipation function can be constructed for such general cases, which subsumes the premises of the Glansdorff-Prigogine theory. The new dissipation function permits us to formulate a generalized evolution criterion for convective systems.
Optimization of Thermal Object Nonlinear Control Systems by Energy Efficiency Criterion.
NASA Astrophysics Data System (ADS)
Velichkin, Vladimir A.; Zavyalov, Vladimir A.
2018-03-01
This article presents the results of an analysis of the control of thermal objects (heat exchangers, dryers, heat treatment chambers, etc.). The results were used to determine a mathematical model of a generalized thermal control object. An appropriate optimality criterion was chosen to make the control more energy-efficient. The mathematical programming task was formulated based on the chosen optimality criterion, the mathematical model of the control object, and the technological constraints. The “maximum energy efficiency” criterion made it possible to avoid solving a system of nonlinear differential equations and to solve the formulated mathematical programming problem analytically. It should be noted that in the case under review, the search for the optimal control and the optimal trajectory reduces to solving an algebraic system of equations. In addition, it is shown that the optimal trajectory does not depend on the dynamic characteristics of the control object.
Revealing hidden Einstein-Podolsky-Rosen nonlocality.
Walborn, S P; Salles, A; Gomes, R M; Toscano, F; Souto Ribeiro, P H
2011-04-01
Steering is a form of quantum nonlocality that is intimately related to the famous Einstein-Podolsky-Rosen (EPR) paradox that ignited the ongoing discussion of quantum correlations. Within the hierarchy of nonlocal correlations appearing in nature, EPR steering occupies an intermediate position between Bell nonlocality and entanglement. In continuous variable systems, EPR steering correlations have been observed by violation of Reid's EPR inequality, which is based on inferred variances of complementary observables. Here we propose and experimentally test a new criterion based on entropy functions, and show that it is more powerful than the variance inequality for identifying EPR steering. Using the entropic criterion our experimental results show EPR steering, while the variance criterion does not. Our results open up the possibility of observing this type of nonlocality in a wider variety of quantum states.
Lansing, Amy E.; Plante, Wendy Y.; Beck, Audrey N.
2016-01-01
Despite growing recognition that cumulative adversity (total stressor exposure), including complex trauma, increases the risk for psychopathology and impacts development, assessment strategies lag behind: Trauma-related mental health needs (symptoms, functional impairment, maladaptive coping) are typically assessed in response to only one qualifying Criterion-A event. This is especially problematic for youth at-risk for health and academic disparities who experience cumulative adversity, including non-qualifying events (parental separations) which may produce more impairing symptomatology. Data from 118 delinquent girls demonstrate: 1) an average of 14 adverse Criterion-A and non-Criterion event exposures; 2) serious maladaptive coping strategies (self-injury) directly in response to cumulative adversity; 3) more cumulative adversity-related than worst-event related symptomatology and functional impairment; and 4) comparable symptomatology, but greater functional impairment, in response to non-Criterion events. These data support the evaluation of mental health needs in response to cumulative adversity for optimal identification and tailoring of services in high-risk populations to reduce disparities. PMID:27745922
Lansing, Amy E; Plante, Wendy Y; Beck, Audrey N
2017-05-01
Despite growing recognition that cumulative adversity (total stressor exposure, including complex trauma), increases the risk for psychopathology and impacts development, assessment strategies lag behind: Adversity-related mental health needs (symptoms, functional impairment, maladaptive coping) are typically assessed in response to only one qualifying Criterion-A traumatic event. This is especially problematic for youth at-risk for health and academic disparities who experience cumulative adversity, including non-qualifying events (separation from caregivers) which may produce more impairing symptomatology. Data from 118 delinquent girls demonstrate: (1) an average of 14 adverse Criterion-A and non-Criterion event exposures; (2) serious maladaptive coping strategies (self-injury) directly in response to cumulative adversity; (3) more cumulative adversity-related than worst-event related symptomatology and functional impairment; and (4) comparable symptomatology, but greater functional impairment, in response to non-Criterion events. These data support the evaluation of mental health needs in response to cumulative adversity for optimal identification and tailoring of services in high-risk populations to reduce disparities. Copyright © 2016 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Zhu, Qi-Zhi
2017-02-01
A proper criterion describing when a material fails is essential for deep understanding and constitutive modeling of rock damage and failure by microcracking. Physically, such a criterion should be the global effect of local mechanical response and microstructure evolution inside the material. This paper aims at deriving a new mechanism-based failure criterion for brittle rocks, grounded in micromechanical unilateral damage-friction coupling analyses rather than in the basic results of classical linear elastic fracture mechanics. The failure functions describing the three failure modes (purely tensile, tensile-shear and compressive-shear) are obtained in a unified upscaling framework and illustrated in the Mohr plane as well as in the plane of principal stresses. The strength envelope is proved to be continuous and smooth, with a compressive-to-tensile strength ratio dependent on material properties. Comparisons with experimental data are finally carried out. This work also provides theoretical evidence for hybrid failure and for the smooth transition from tensile failure to compressive-shear failure.
A novel SURE-based criterion for parametric PSF estimation.
Xue, Feng; Blu, Thierry
2015-02-01
We propose an unbiased estimate of a filtered version of the mean squared error, the blur-SURE (Stein's unbiased risk estimate), as a novel criterion for estimating an unknown point spread function (PSF) from the degraded image only. The PSF is obtained by minimizing this new objective functional over a family of Wiener processings. Based on this estimated blur kernel, we then perform nonblind deconvolution using our recently developed algorithm. The SURE-based framework is exemplified with a number of parametric PSFs involving a scaling factor that controls the blur size; a typical example of such a parametrization is the Gaussian kernel. The experimental results demonstrate that minimizing the blur-SURE yields highly accurate estimates of the PSF parameters, which also result in a restoration quality very similar to that obtained with the exact PSF when plugged into our recent multi-Wiener SURE-LET deconvolution algorithm. These highly competitive results outline the great potential of developing more powerful blind deconvolution algorithms based on SURE-like estimates.
Rowe, Penny M; Neshyba, Steven P; Walden, Von P
2011-03-14
An analytical expression for the variance of the radiance measured by Fourier-transform infrared (FTIR) emission spectrometers exists only in the limit of low noise. Outside this limit, the variance needs to be calculated numerically. In addition, a criterion for low noise is needed to identify properly calibrated radiances and optimize the instrument bandwidth. In this work, the variance and the magnitude of a noise-dependent spectral bias are calculated as a function of the system responsivity (r) and the noise level in its estimate (σr). The criterion σr/r<0.3, applied to downwelling and upwelling FTIR emission spectra, shows that the instrument bandwidth is specified properly for one instrument but needs to be restricted for another.
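The quoted low-noise test is just a per-channel threshold on the relative uncertainty of the responsivity estimate; a minimal sketch follows (function and variable names are illustrative, not from the paper):

```python
import numpy as np

def low_noise_mask(responsivity, sigma_r, threshold=0.3):
    """Return True for spectral channels satisfying sigma_r / |r| < threshold,
    the low-noise criterion used to identify properly calibrated radiances."""
    r = np.asarray(responsivity, dtype=float)
    s = np.asarray(sigma_r, dtype=float)
    with np.errstate(divide="ignore", invalid="ignore"):
        ratio = np.where(r != 0, s / np.abs(r), np.inf)
    return ratio < threshold

# Restrict the usable bandwidth to channels that pass the criterion.
r = np.array([1.0, 0.5, 0.05, 0.2])
sigma = np.array([0.1, 0.1, 0.1, 0.1])
mask = low_noise_mask(r, sigma)   # ratios: 0.1, 0.2, 2.0, 0.5
```

Channels failing the test would be excluded when specifying the instrument bandwidth.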
Vasilyev, K N
2013-01-01
When developing new software products and adapting existing software, project leaders have to decide which functionalities to keep, adapt or develop. They also have to consider that the cost of making errors during the specification phase is extremely high. In this paper a formalised approach is proposed that considers the main criteria for selecting new software functions; its application minimises the chances of making errors in selecting the functions to implement. Based on work on software development and support projects in the area of water resources and flood damage evaluation in economic terms at CH2M HILL (the developers of the flood modelling package ISIS), the author has defined seven criteria for selecting functions to be included in a software product. The approach is based on evaluating the relative significance of the candidate functions: each criterion is considered in turn, its weighting coefficient is applied, and the resulting scores are normalised. This paper includes a description of the new approach and examples of its application in the development of new software products in the area of water resources management.
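A minimal sketch of such a weighted, normalised scoring scheme; the criterion names, weights and scores below are invented placeholders (the paper's seven criteria are not enumerated in this abstract):

```python
def rank_functions(scores, weights):
    """Rank candidate software functions by a normalised weighted sum.
    scores[f][c] is the raw score of function f on criterion c;
    weights[c] is the weighting coefficient of criterion c."""
    criteria = list(weights)
    lo = {c: min(s[c] for s in scores.values()) for c in criteria}
    hi = {c: max(s[c] for s in scores.values()) for c in criteria}

    def norm(v, c):  # rescale each criterion to [0, 1] across functions
        return 0.0 if hi[c] == lo[c] else (v - lo[c]) / (hi[c] - lo[c])

    totals = {f: sum(weights[c] * norm(s[c], c) for c in criteria)
              for f, s in scores.items()}
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)

weights = {"significance": 0.5, "reuse": 0.3, "urgency": 0.2}
scores = {
    "flood_mapping": {"significance": 9, "reuse": 4, "urgency": 3},
    "report_export": {"significance": 5, "reuse": 8, "urgency": 6},
    "batch_runs":    {"significance": 7, "reuse": 6, "urgency": 9},
}
ranking = rank_functions(scores, weights)
```

Per-criterion normalisation keeps any one raw scale from dominating the weighted sum, which is the point of the normalisation step described above.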
Detection of fallen trees in ALS point clouds using a Normalized Cut approach trained by simulation
NASA Astrophysics Data System (ADS)
Polewski, Przemyslaw; Yao, Wei; Heurich, Marco; Krzystek, Peter; Stilla, Uwe
2015-07-01
Downed dead wood is regarded as an important part of forest ecosystems from an ecological perspective, which drives the need for investigating its spatial distribution. Based on several studies, Airborne Laser Scanning (ALS) has proven to be a valuable remote sensing technique for obtaining such information. This paper describes a unified approach to the detection of fallen trees from ALS point clouds based on merging short segments into whole stems using the Normalized Cut algorithm. We introduce a new method of defining the segment similarity function for the clustering procedure, where the attribute weights are learned from labeled data. Based on a relationship between Normalized Cut's similarity function and a class of regression models, we show how to learn the similarity function by training a classifier. Furthermore, we propose using an appearance-based stopping criterion for the graph cut algorithm as an alternative to the standard Normalized Cut threshold approach. We set up a virtual fallen tree generation scheme to simulate complex forest scenarios with multiple overlapping fallen stems. This simulated data is then used as a basis to learn both the similarity function and the stopping criterion for Normalized Cut. We evaluate our approach on 5 plots from the strictly protected mixed mountain forest within the Bavarian Forest National Park using reference data obtained via a manual field inventory. The experimental results show that our method is able to detect up to 90% of fallen stems in plots having 30-40% overstory cover with a correctness exceeding 80%, even in quite complex forest scenes. Moreover, the performance for feature weights trained on simulated data is competitive with the case when the weights are calculated using a grid search on the test data, which indicates that the learned similarity function and stopping criterion can generalize well on new plots.
Event-based cluster synchronization of coupled genetic regulatory networks
NASA Astrophysics Data System (ADS)
Yue, Dandan; Guan, Zhi-Hong; Li, Tao; Liao, Rui-Quan; Liu, Feng; Lai, Qiang
2017-09-01
In this paper, the cluster synchronization of coupled genetic regulatory networks with a directed topology is studied by using the event-based strategy and pinning control. An event-triggered condition with a threshold consisting of the neighbors' discrete states at their own event time instants and a state-independent exponential decay function is proposed. The intra-cluster states information and extra-cluster states information are involved in the threshold in different ways. By using the Lyapunov function approach and the theories of matrices and inequalities, we establish the cluster synchronization criterion. It is shown that both the avoidance of continuous transmission of information and the exclusion of the Zeno behavior are ensured under the presented triggering condition. Explicit conditions on the parameters in the threshold are obtained for synchronization. The stability criterion of a single GRN is also given under the reduced triggering condition. Numerical examples are provided to validate the theoretical results.
Data mining in soft computing framework: a survey.
Mitra, S; Pal, S K; Mitra, P
2002-01-01
The present article provides a survey of the available literature on data mining using soft computing. A categorization has been provided based on the different soft computing tools and their hybridizations used, the data mining function implemented, and the preference criterion selected by the model. The utility of the different soft computing methodologies is highlighted. Generally fuzzy sets are suitable for handling the issues related to understandability of patterns, incomplete/noisy data, mixed media information and human interaction, and can provide approximate solutions faster. Neural networks are nonparametric, robust, and exhibit good learning and generalization capabilities in data-rich environments. Genetic algorithms provide efficient search algorithms to select a model, from mixed media data, based on some preference criterion/objective function. Rough sets are suitable for handling different types of uncertainty in data. Some challenges to data mining and the application of soft computing methodologies are indicated. An extensive bibliography is also included.
NASA Astrophysics Data System (ADS)
Divakar, L.; Babel, M. S.; Perret, S. R.; Gupta, A. Das
2011-04-01
The study develops a model for optimal bulk allocations of limited available water based on an economic criterion to competing use sectors such as agriculture, domestic, industry and hydropower. The model comprises a reservoir operation module (ROM) and a water allocation module (WAM). ROM determines the amount of water available for allocation, which is used as an input to WAM with an objective function to maximize the net economic benefits of bulk allocations to different use sectors. The total net benefit functions for agriculture and hydropower sectors and the marginal net benefit from domestic and industrial sectors are established and are categorically taken as fixed in the present study. The developed model is applied to the Chao Phraya basin in Thailand. The case study results indicate that the WAM can improve net economic returns compared to the current water allocation practices.
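The bulk-allocation step of a WAM-style module can be sketched as a greedy assignment of water units to the sector with the highest current marginal net benefit, assuming concave (diminishing-returns) benefit functions. The sector names follow the abstract, but the demands and benefit functions below are illustrative, not the calibrated Chao Phraya values:

```python
def allocate(total, demands, marginal_benefit, step=1.0):
    """Greedily allocate `total` units of water: each step goes to the
    sector with the highest current marginal net benefit, up to its
    demand cap. With decreasing marginal benefits this reproduces the
    economically optimal bulk allocation."""
    alloc = {s: 0.0 for s in demands}
    for _ in range(int(total / step)):
        open_sectors = [s for s in demands if alloc[s] + step <= demands[s]]
        if not open_sectors:
            break
        best = max(open_sectors, key=lambda s: marginal_benefit[s](alloc[s]))
        alloc[best] += step
    return alloc

demands = {"agriculture": 60, "domestic": 20, "industry": 30}
mb = {  # linear, decreasing marginal net benefit per unit allocated
    "agriculture": lambda x: 5.0 - 0.05 * x,
    "domestic":    lambda x: 8.0 - 0.10 * x,
    "industry":    lambda x: 6.0 - 0.08 * x,
}
alloc = allocate(100, demands, mb)
# domestic and industry hit their demand caps; agriculture absorbs the rest
```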
ERIC Educational Resources Information Center
Tan, Xuan; Xiang, Bihua; Dorans, Neil J.; Qu, Yanxuan
2010-01-01
The nature of the matching criterion (usually the total score) in the study of differential item functioning (DIF) has been shown to impact the accuracy of different DIF detection procedures. One of the topics related to the nature of the matching criterion is whether the studied item should be included. Although many studies exist that suggest…
NASA Technical Reports Server (NTRS)
Lindenmeyer, P. H.
1983-01-01
The fracture criteria upon which most fracture mechanics are based involve an energy balance that is not appropriate for the fracture mechanics of viscoelastic materials such as polymer matrix composites. A more appropriate criterion, based upon nonequilibrium thermodynamics and involving a power balance rather than an energy balance, is proposed. This criterion is based upon a reformulation of the second law of thermodynamics which focuses attention on the total Legendre transform of energy, expressed as a functional over time and space. This excess energy functional can be shown to be equivalent to the Rice J integral if the only irreversible process is the propagation of a single crack completely through the thickness of the specimen and if the crack propagation is assumed to be independent of time. For the more general case of more than one crack in a viscoelastic medium, integration over both time and space is required. Two experimentally measurable parameters are proposed which should permit the evaluation of this more general fracture criterion.
Escalante, Agustín; Haas, Roy W; del Rincón, Inmaculada
2004-01-01
Outcome assessment in patients with rheumatoid arthritis (RA) includes measurement of physical function. We derived a scale to quantify global physical function in RA, using three performance-based rheumatology function tests (RFTs). We measured grip strength, walking velocity, and shirt button speed in consecutive RA patients attending scheduled appointments at six rheumatology clinics, repeating these measurements after a median interval of 1 year. We extracted the underlying latent variable using principal component factor analysis. We used the Bayesian information criterion to assess the global physical function scale's cross-sectional fit to criterion standards. The criteria were joint tenderness, swelling, and deformity, pain, physical disability, current work status, and vital status at 6 years after study enrolment. We computed Guyatt's responsiveness statistic for improvement according to the American College of Rheumatology (ACR) definition. Baseline functional performance data were available for 777 patients, and follow-up data were available for 681. Mean ± standard deviation for each RFT at baseline were: grip strength, 14 ± 10 kg; walking velocity, 194 ± 82 ft/min; and shirt button speed, 7.1 ± 3.8 buttons/min. Grip strength and walking velocity departed significantly from normality. The three RFTs loaded strongly on a single factor that explained ≥70% of their combined variance. We rescaled the factor to vary from 0 to 100. Its mean ± standard deviation was 41 ± 20, with a normal distribution. The new global scale had a stronger fit than the primary RFT to most of the criterion standards. It correlated more strongly with physical disability at follow-up and was more responsive to improvement defined according to the ACR20 and ACR50 definitions. We conclude that a performance-based physical function scale extracted from three RFTs has acceptable distributional and measurement properties and is responsive to clinically meaningful change. 
It provides a parsimonious scale to measure global physical function in RA. PMID:15225367
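The scale construction described above (standardise the three performance tests, extract the first principal component, rescale to 0-100) can be sketched as follows; the data and the resulting loadings are illustrative, not the published ones:

```python
import numpy as np

def global_function_scale(grip, walk, button):
    """First-principal-component scale from three rheumatology function
    tests, rescaled to 0-100. Returns the scaled scores and the fraction
    of combined variance explained by the single factor."""
    X = np.column_stack([grip, walk, button]).astype(float)
    Z = (X - X.mean(axis=0)) / X.std(axis=0)      # standardise each test
    vals, vecs = np.linalg.eigh(np.cov(Z, rowvar=False))
    pc1 = vecs[:, np.argmax(vals)]                # first principal component
    if pc1.sum() < 0:                             # orient so higher = better
        pc1 = -pc1
    score = Z @ pc1
    scaled = 100 * (score - score.min()) / (score.max() - score.min())
    return scaled, vals.max() / vals.sum()

grip = [5, 10, 15, 20, 25]          # kg
walk = [100, 150, 200, 260, 300]    # ft/min
button = [3, 5, 7, 10, 12]          # buttons/min
scaled, explained = global_function_scale(grip, walk, button)
```

In the study, the single factor explained at least 70% of the combined variance of the three tests; `explained` reports the analogous quantity here.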
Characterizing the functional MRI response using Tikhonov regularization.
Vakorin, Vasily A; Borowsky, Ron; Sarty, Gordon E
2007-09-20
The problem of evaluating an averaged functional magnetic resonance imaging (fMRI) response for repeated block design experiments was considered within a semiparametric regression model with autocorrelated residuals. We applied functional data analysis (FDA) techniques that use a least-squares fitting of B-spline expansions with Tikhonov regularization. To deal with the noise autocorrelation, we proposed a regularization parameter selection method based on the idea of combining temporal smoothing with residual whitening. A criterion based on a generalized χ²-test of the residuals for white noise was compared with a generalized cross-validation scheme. We evaluated and compared the performance of the two criteria, based on their effect on the quality of the fMRI response. We found that the regularization parameter can be tuned to improve the noise autocorrelation structure, but the whitening criterion provides too much smoothing when compared with the cross-validation criterion. The ultimate goal of the proposed smoothing techniques is to facilitate the extraction of temporal features in the hemodynamic response for further analysis. In particular, these FDA methods allow us to compute derivatives and integrals of the fMRI signal so that fMRI data may be correlated with behavioral and physiological models. For example, positive and negative hemodynamic responses may be easily and robustly identified on the basis of the first derivative at an early time point in the response. Ultimately, these methods allow us to verify previously reported correlations between the hemodynamic response and the behavioral measures of accuracy and reaction time, showing the potential to recover new information from fMRI data. 2007 John Wiley & Sons, Ltd
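A stripped-down stand-in for the regularised fit (identity basis with a second-difference penalty instead of the paper's B-spline expansion; the parameter `lam` plays the role the whitening or cross-validation criteria would select):

```python
import numpy as np

def tikhonov_smooth(y, lam):
    """Solve min_x ||y - x||^2 + lam * ||D2 x||^2, where D2 is the
    second-difference operator; larger lam gives a smoother estimate."""
    n = len(y)
    D2 = np.diff(np.eye(n), n=2, axis=0)          # (n-2) x n operator
    A = np.eye(n) + lam * D2.T @ D2
    return np.linalg.solve(A, np.asarray(y, dtype=float))

t = np.linspace(0.0, 1.0, 50)
rng = np.random.default_rng(0)
noisy = np.sin(2 * np.pi * t) + 0.2 * rng.standard_normal(50)
smooth = tikhonov_smooth(noisy, lam=10.0)
```

Smoothed estimates of this kind also make derivative-based features of the hemodynamic response, like the early first derivative mentioned above, numerically stable.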
Spatiotemporal coding in the cortex: information flow-based learning in spiking neural networks.
Deco, G; Schürmann, B
1999-05-15
We introduce a learning paradigm for networks of integrate-and-fire spiking neurons that is based on an information-theoretic criterion. This criterion can be viewed as a first principle that accounts for the experimentally observed fact that cortical neurons fire synchronously for some stimuli and not for others. The principle postulates a nonparametric reconstruction method as the optimization criterion for learning the required functional connectivity, thereby justifying and explaining synchronous firing for feature binding as a mechanism of spatiotemporal coding. In information-theoretic terms, this corresponds to maximizing the discrimination ability between different sensory inputs in minimal time.
NASA Technical Reports Server (NTRS)
Bosi, F.; Pellegrino, S.
2017-01-01
A molecular formulation of the onset of plasticity is proposed to assess temperature and strain rate effects in anisotropic semi-crystalline rubbery films. The presented plane stress criterion is based on the strain rate-temperature superposition principle and the cooperative theory of yielding, where some parameters are assumed to be material constants, while others are considered to depend on specific modes of deformation. An orthotropic yield function is developed for a linear low density polyethylene thin film. Uniaxial and biaxial inflation experiments were carried out to determine the yield stress of the membrane via a strain recovery method. It is shown that the 3% offset method predicts the uniaxial elastoplastic transition with good accuracy. Both the tensile yield points along the two principal directions of the film and the biaxial yield stresses are found to obey the superposition principle. The proposed yield criterion is compared against experimental measurements, showing excellent agreement over a wide range of deformation rates and temperatures.
[Treatment of acromion base fractures with double-plate internal fixation].
Lü, Guo-Qiang; Zhu, Jun-Kun; Lan, Shu-Hua; Wu, Quan-Zhou; Zheng, Rong-Zong; Zheng, Chong-Wu
2013-09-01
To study the clinical effects of double-plate fixation for the treatment of acromion base fractures. From January 2010 to May 2012, 7 patients with acromion base fractures were treated surgically with double-plate open reduction and internal fixation (ORIF). There were 5 males and 2 females, with an average age of 36.3 years (range, 24 to 62 years). All fractures were acute, closed injuries. The mean interval from injury to surgery was 4.6 days (range, 2 to 10 days). The Hardegger functional criterion, Visual Analogue Scale (VAS) scores and complications were documented and analysed. All patients were followed up, for 4 to 13 months (mean, 8.9 months). Fractures healed in 8 to 14 weeks without infection, shoulder instability, subacromial impingement syndrome, nonunion or failure of internal fixation. At the latest follow-up, VAS scores ranged from 0 to 5. According to the Hardegger criterion, 2 patients had an excellent result, 4 good and 1 poor. Double-plate ORIF plays a positive role in the treatment of acromion base fractures, reducing complications and maximally restoring shoulder function.
Rossi, Gina; Debast, Inge; van Alphen, S P J
2017-07-01
The dimensional personality disorders model in the Diagnostic and Statistical Manual (DSM)-5 section III conceptually differentiates impaired personality functioning (criterion A) from the presence of pathological traits (criterion B). This study is the first to specifically address the measurement of criterion A in older adults. Moreover, the convergent/divergent validity of criterion A and criterion B will be compared in younger and older age groups. The Severity Indices of Personality Functioning - Short Form (SIPP-SF) was administered in older (N = 171) and younger adults (N = 210). The factorial structure was analyzed with exploratory structural equation modeling. Differences in convergent/divergent validity between personality functioning (SIPP-SF) and pathological traits (Personality Inventory for DSM-5; Dimensional Assessment of Personality Pathology-Basic Questionnaire) were examined across age groups. Identity Integration, Relational Capacities, Responsibility, Self-Control, and Social Concordance were corroborated as higher order domains. Although the SIPP-SF domains measured unique variation, some high correlations with pathological traits referred to overlapping constructs. Moreover, in older adults, personality functioning was more strongly related to Psychoticism, Disinhibition, Antagonism and Dissocial Behavior compared to younger adults. The SIPP-SF construct validity was demonstrated in terms of a structure of five higher order domains of personality functioning. The instrument is promising as a possible measure of impaired personality functioning in older adults. As such, it is a useful clinical tool to follow up effects of therapy on levels of personality functioning. Moreover, traits were associated with different degrees of personality functioning across age groups.
Debast, Inge; Rossi, Gina; van Alphen, S P J
2018-04-01
The alternative model for personality disorders in the fifth edition of the Diagnostic and Statistical Manual of Mental Disorders ( DSM-5) is considered an important step toward a possibly better conceptualization of personality pathology in older adulthood, by the introduction of levels of personality functioning (Criterion A) and trait dimensions (Criterion B). Our main aim was to examine age-neutrality of the Short Form of the Severity Indices of Personality Problems (SIPP-SF; Criterion A) and Personality Inventory for DSM-5-Brief Form (PID-5-BF; Criterion B). Differential item functioning (DIF) analyses and more specifically the impact on scale level through differential test functioning (DTF) analyses made clear that the SIPP-SF was more age-neutral (6% DIF, only one of four domains showed DTF) than the PID-5-BF (25% DIF, all four tested domains had DTF) in a community sample of older and younger adults. Age differences in convergent validity also point in the direction of differences in underlying constructs. Concurrent and criterion validity in geriatric psychiatry inpatients suggest that both the SIPP-SF scales measuring levels of personality functioning (especially self-functioning) and the PID-5-BF might be useful screening measures in older adults despite age-neutrality not being confirmed.
Strength-based criterion shifts in recognition memory.
Singer, Murray
2009-10-01
In manipulations of stimulus strength between lists, a more lenient signal detection criterion is more frequently applied to a weak than to a strong stimulus class. However, with randomly intermixed weak and strong test probes, such a criterion shift often does not result. A procedure that has yielded delay-based within-list criterion shifts was applied to strength manipulations in recognition memory for categorized word lists. When participants made semantic ratings about each stimulus word, strength-based criterion shifts emerged regardless of whether words from pairs of categories were studied in separate blocks (Experiment 1) or in intermixed blocks (Experiment 2). In Experiment 3, the criterion shift persisted under the semantic-rating study task, but not under rote memorization. These findings suggest that continually adjusting the recognition decision criterion is cognitively feasible. They provide a technique for manipulating the criterion shift, and they identify competing theoretical accounts of these effects.
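Under the equal-variance signal-detection model, the decision criterion is computed directly from hit and false-alarm rates; a minimal sketch (the rates below are made up for illustration):

```python
from statistics import NormalDist

def sdt_measures(hit_rate, fa_rate):
    """Equal-variance signal detection: sensitivity d' and criterion c.
    Negative c means a lenient criterion (bias toward responding 'old')."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(fa_rate), -(z(hit_rate) + z(fa_rate)) / 2

# Strong condition: conservative criterion (c > 0).
d_strong, c_strong = sdt_measures(0.90, 0.05)
# Weak condition: the same participant responds more leniently (c < 0).
d_weak, c_weak = sdt_measures(0.80, 0.25)
```

A strength-based criterion shift of the kind described above shows up as `c_weak < c_strong`.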
Examination of DSM-5 Section III avoidant personality disorder in a community sample.
Sellbom, Martin; Carmichael, Kieran L C; Liggett, Jacqueline
2017-11-01
The current research evaluated the continuity between DSM-5 Section II and Section III diagnostic operationalizations of avoidant personality disorder (AvPD). More specifically, the study had three aims: (1) to examine which personality constructs comprise the optimal trait constellation for AvPD; (2) to investigate the utility of the proposed structure of the Section III AvPD diagnosis, in regard to combining functional impairment (criterion A) and a dimensional measure of personality (criterion B) variables; and (3) to determine whether AvPD-specific impairment confers incremental meaningful contribution above and beyond general impairment in personality functioning. A mixed sample of 402 university and community participants was recruited, and they were administered multiple measures of Section II PD, personality traits, and personality impairment. A latent measurement model approach was used to analyse data. Results supported the general continuity between Section II and Section III of the DSM-5; however, three of the four main criterion B traits were the stronger predictors. There was also some support for the trait unassertiveness augmenting the criterion B trait profile. The combination of using functional impairment criteria (criterion A) and dimensional personality constructs (criterion B) in operationalizing AvPD was supported; however, the reliance of disorder-specific over general impairment for criterion A was not supported. Copyright © 2017 John Wiley & Sons, Ltd.
Assessing Upper Extremity Motor Function in Practice of Virtual Activities of Daily Living
Adams, Richard J.; Lichter, Matthew D.; Krepkovich, Eileen T.; Ellington, Allison; White, Marga; Diamond, Paul T.
2015-01-01
A study was conducted to investigate the criterion validity of measures of upper extremity (UE) motor function derived during practice of virtual activities of daily living (ADLs). Fourteen hemiparetic stroke patients employed a Virtual Occupational Therapy Assistant (VOTA), consisting of a high-fidelity virtual world and a Kinect™ sensor, in four sessions of approximately one hour in duration. An Unscented Kalman Filter-based human motion tracking algorithm estimated UE joint kinematics in real-time during performance of virtual ADL activities, enabling both animation of the user’s avatar and automated generation of metrics related to speed and smoothness of motion. These metrics, aggregated over discrete sub-task elements during performance of virtual ADLs, were compared to scores from an established assessment of UE motor performance, the Wolf Motor Function Test (WMFT). Spearman’s rank correlation analysis indicates a moderate correlation between VOTA-derived metrics and the time-based WMFT assessments, supporting the criterion validity of VOTA measures as a means of tracking patient progress during an UE rehabilitation program that includes practice of virtual ADLs. PMID:25265612
Assessing upper extremity motor function in practice of virtual activities of daily living.
Adams, Richard J; Lichter, Matthew D; Krepkovich, Eileen T; Ellington, Allison; White, Marga; Diamond, Paul T
2015-03-01
A study was conducted to investigate the criterion validity of measures of upper extremity (UE) motor function derived during practice of virtual activities of daily living (ADLs). Fourteen hemiparetic stroke patients employed a Virtual Occupational Therapy Assistant (VOTA), consisting of a high-fidelity virtual world and a Kinect™ sensor, in four sessions of approximately one hour in duration. An unscented Kalman Filter-based human motion tracking algorithm estimated UE joint kinematics in real-time during performance of virtual ADL activities, enabling both animation of the user's avatar and automated generation of metrics related to speed and smoothness of motion. These metrics, aggregated over discrete sub-task elements during performance of virtual ADLs, were compared to scores from an established assessment of UE motor performance, the Wolf Motor Function Test (WMFT). Spearman's rank correlation analysis indicates a moderate correlation between VOTA-derived metrics and the time-based WMFT assessments, supporting the criterion validity of VOTA measures as a means of tracking patient progress during an UE rehabilitation program that includes practice of virtual ADLs.
Adaptive tracking control for a class of stochastic switched systems
NASA Astrophysics Data System (ADS)
Zhang, Hui; Xia, Yuanqing
2018-02-01
In this paper, the problem of adaptive tracking is considered for a class of stochastic switched systems. As preliminaries, a criterion for global asymptotic practical stability in probability is first presented with the aid of the common Lyapunov function method. Based on this stability criterion, adaptive backstepping controllers are designed to guarantee that the closed-loop system has a unique global solution, which is globally asymptotically practically stable in probability, and that the tracking error in the fourth moment converges to an arbitrarily small neighbourhood of zero. Simulation examples are given to demonstrate the efficiency of the proposed schemes.
Lemly, A Dennis; Skorupa, Joseph P
2007-10-01
The US Environmental Protection Agency is developing a national water quality criterion for selenium that is based on concentrations of the element in fish tissue. Although this approach offers advantages over the current water-based regulations, it also presents new challenges with respect to implementation. A comprehensive protocol that answers the "what, where, and when" is essential with the new tissue-based approach in order to ensure proper acquisition of data that apply to the criterion. Dischargers will need to understand selenium transport, cycling, and bioaccumulation in order to effectively monitor for the criterion and, if necessary, develop site-specific standards. This paper discusses 11 key issues that affect the implementation of a tissue-based criterion, ranging from the selection of fish species to the importance of hydrological units in the sampling design. It also outlines a strategy that incorporates both water column and tissue-based approaches. A national generic safety-net water criterion could be combined with a fish tissue-based criterion for site-specific implementation. For the majority of waters nationwide, National Pollution Discharge Elimination System permitting and other activities associated with the Clean Water Act could continue without the increased expense of sampling and interpreting biological materials. Dischargers would do biotic sampling intermittently (not a routine monitoring burden) on fish tissue relative to the fish tissue criterion. Only when the fish tissue criterion is exceeded would a full site-specific analysis including development of intermedia translation factors be necessary.
NASA Astrophysics Data System (ADS)
Wang, Cong; Shang, De-Guang; Wang, Xiao-Wei
2015-02-01
An improved high-cycle multiaxial fatigue criterion based on the critical plane is proposed in this paper. The critical plane is defined as the plane of maximum shear stress (MSS) in the proposed multiaxial fatigue criterion, which differs from the traditional critical plane based on the MSS amplitude. The proposed criterion is extended to a fatigue life prediction model applicable to both ductile and brittle materials. The fatigue life prediction model based on the proposed high-cycle multiaxial fatigue criterion was validated against experimental results obtained from tests of 7075-T651 aluminum alloy and from the literature.
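Although the abstract gives no formulas, the maximum-shear-stress plane used as the critical plane follows directly from the principal stresses: tau_max = (sigma1 - sigma3)/2, acting on the plane that bisects the major and minor principal directions. A minimal sketch (the stress tensor below is an invented example, not data from the paper):

```python
import numpy as np

def max_shear(stress):
    """Principal stresses and maximum shear stress for a symmetric
    3x3 Cauchy stress tensor (units: MPa)."""
    # Eigenvalues of a symmetric tensor are the principal stresses.
    s3, s2, s1 = np.linalg.eigvalsh(stress)  # ascending order
    tau_max = (s1 - s3) / 2.0  # on the plane bisecting the s1/s3 axes
    return s1, s2, s3, tau_max

# Pure shear of 100 MPa: principal stresses are +100, 0, -100 MPa.
sigma = np.array([[0.0, 100.0, 0.0],
                  [100.0, 0.0, 0.0],
                  [0.0, 0.0, 0.0]])
s1, s2, s3, tau = max_shear(sigma)
print(tau)  # maximum shear stress for the pure-shear example
```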
Buckling of Low Arches or Curved Beams of Small Curvature
NASA Technical Reports Server (NTRS)
Fung, Y C; Kaplan, A
1952-01-01
A general solution, based on the classical buckling criterion, is given for the problem of buckling of low arches under a lateral loading acting toward the center of curvature. For a sinusoidal arch under sinusoidal loading, the critical load can be expressed exactly as a simple function of the beam dimension parameters. For other arch shapes and load distributions, approximate values of the critical load can be obtained by summing a few terms of a rapidly converging Fourier series. The effects of initial end thrust and axial and lateral elastic support are discussed. The buckling load based on the energy criterion of Kármán and Tsien is also calculated. Results for both the classical and the energy criteria are compared with experimental results.
A new self-report inventory of dyslexia for students: criterion and construct validity.
Tamboer, Peter; Vorst, Harrie C M
2015-02-01
The validity of a Dutch self-report inventory of dyslexia was ascertained in two samples of students. Six biographical questions, 20 general language statements and 56 specific language statements were based on dyslexia as a multi-dimensional deficit. Dyslexia and non-dyslexia were assessed with two criteria: identification with test results (Sample 1) and classification using biographical information (both samples). Using discriminant analyses, these criteria were predicted with various groups of statements. Altogether, 11 discriminant functions were used to estimate the classification accuracy of the inventory. In Sample 1, 15 statements predicted the test criterion with a classification accuracy of 98%, and 18 statements predicted the biographical criterion with a classification accuracy of 97%. In Sample 2, 16 statements predicted the biographical criterion with a classification accuracy of 94%. Estimates of positive and negative predictive value were 89% and 99%. Items of the various discriminant functions were factor analysed to find characteristic difficulties of students with dyslexia, resulting in a five-factor structure in Sample 1 and a four-factor structure in Sample 2. Answer bias was investigated with measures of internal consistency reliability. Fewer than 20 self-report items are sufficient to accurately classify students with and without dyslexia. This supports the usefulness of self-assessment of dyslexia as a valid alternative to diagnostic test batteries. Copyright © 2015 John Wiley & Sons, Ltd.
Cook, Karon F; Kallen, Michael A; Bombardier, Charles; Bamer, Alyssa M; Choi, Seung W; Kim, Jiseon; Salem, Rana; Amtmann, Dagmar
2017-01-01
To evaluate whether items of three measures of depressive symptoms function differently in persons with spinal cord injury (SCI) than in persons from a primary care sample. This study was a retrospective analysis of responses to the Patient Health Questionnaire depression scale, the Center for Epidemiological Studies Depression scale, and the National Institutes of Health Patient-Reported Outcomes Measurement Information System (PROMIS®) version 1.0 eight-item depression short form 8b (PROMIS-D). The presence of differential item functioning (DIF) was evaluated using ordinal logistic regression. No items of any of the three target measures were flagged for DIF based on standard criteria. In follow-up sensitivity analyses, the criterion was changed to make the analysis more sensitive to potential DIF. Scores were corrected for DIF flagged under this criterion. Minimal differences were found between the original scores and those corrected for DIF under the sensitivity criterion. The three depression screening measures evaluated in this study did not perform differently in samples of individuals with SCI compared to general and community samples. Transdiagnostic symptoms did not appear to spuriously inflate depression severity estimates when administered to people with SCI.
Natural learning in NLDA networks.
González, Ana; Dorronsoro, José R
2007-07-01
Non Linear Discriminant Analysis (NLDA) networks combine a standard Multilayer Perceptron (MLP) transfer function with the minimization of a Fisher analysis criterion. In this work we define natural-like gradients for NLDA network training. Instead of a more principled approach, which would require the definition of an appropriate Riemannian structure on the NLDA weight space, we follow a simpler procedure, based on the observation that the gradient of the NLDA criterion function J can be written as the expectation ∇J(W) = E[Z(X,W)] of a certain random vector Z; we then define I = E[Z(X,W)Z(X,W)^T] as the Fisher information matrix in this case. This definition of I formally coincides with that of the information matrix for the MLP or other square error functions; the NLDA criterion J, however, does not have this structure. Although very simple, the proposed approach shows much faster convergence than standard gradient descent, even when its higher per-iteration cost is taken into account. While the faster convergence of natural MLP batch training can also be explained in terms of its relationship with the Gauss-Newton minimization method, this is not the case for NLDA training, as we show analytically and numerically that the Hessian and information matrices are different.
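The natural-like gradient step can be sketched directly from the definitions quoted in the abstract, ∇J(W) = E[Z(X,W)] and I = E[Z(X,W)Z(X,W)^T], with both expectations estimated from samples. The learning rate and ridge regularization below are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def natural_gradient_step(W, Z, eta=0.5, ridge=1e-8):
    """One natural-like gradient step in the sense of the abstract.
    Z: (n_samples, n_params) array of per-sample gradient vectors Z(X, W).
    W: flattened weight vector of length n_params."""
    g = Z.mean(axis=0)                      # sample estimate of E[Z] = grad J
    I = (Z.T @ Z) / Z.shape[0]              # sample estimate of E[Z Z^T]
    I += ridge * np.eye(I.shape[0])         # regularize before solving
    return W - eta * np.linalg.solve(I, g)  # W - eta * I^{-1} grad J
```

Solving the linear system rather than inverting I explicitly is the usual numerical choice; the ridge term guards against a singular sample estimate.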
NASA Astrophysics Data System (ADS)
Chen, H. C.; Lai, S. K.
1992-03-01
The role of the Percus-Yevick hard-sphere bridge function in the modified hypernetted-chain integral equation is examined within the context of Lado's criterion [F. Lado, S. M. Foiles, and N. W. Ashcroft, Phys. Rev. A 28, 2374 (1983)]. It is found that the commonly used Lado's criterion, which takes advantage of the analytical simplicity of the Percus-Yevick hard-sphere bridge function, is inadequate for determining an accurate static pair-correlation function. Following Rosenfeld [Y. Rosenfeld, Phys. Rev. A 29, 2877 (1984)], we reconsider Lado's criterion in the so-called variational modified hypernetted-chain theory. The main idea is to construct a free-energy functional satisfying the virial-energy thermodynamic self-consistency. It turns out that the widely used Gibbs-Bogoliubov inequality is equivalent to this integral approach of Lado's criterion. Detailed comparison between the presently obtained structural and thermodynamic quantities for liquid alkali metals and those calculated also in the modified hypernetted-chain theory but with the one-component-plasma reference system leads us to a better understanding of the universality property of the bridge function.
Discrete-Time Deterministic $Q$ -Learning: A Novel Convergence Analysis.
Wei, Qinglai; Lewis, Frank L; Sun, Qiuye; Yan, Pengfei; Song, Ruizhuo
2017-05-01
In this paper, a novel discrete-time deterministic Q-learning algorithm is developed. In each iteration of the developed Q-learning algorithm, the iterative Q function is updated over the entire state and control spaces, instead of for a single state and a single control as in the traditional Q-learning algorithm. A new convergence criterion is established to guarantee that the iterative Q function converges to the optimum, and the convergence criterion on the learning rates for traditional Q-learning algorithms is simplified. During the convergence analysis, the upper and lower bounds of the iterative Q function are analyzed to obtain the convergence criterion, instead of analyzing the iterative Q function itself. For convenience of analysis, the convergence properties for the undiscounted case of the deterministic Q-learning algorithm are first developed. Then, considering the discount factor, the convergence criterion for the discounted case is established. Neural networks are used to approximate the iterative Q function and to compute the iterative control law, respectively, facilitating the implementation of the deterministic Q-learning algorithm. Finally, simulation results and comparisons are given to illustrate the performance of the developed algorithm.
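As a hedged illustration of the full-space update (the three-state system, costs, and discount factor below are invented, not taken from the paper), the iterative Q function can be swept over all state-control pairs at once for a known deterministic system:

```python
import numpy as np

# Hypothetical 3-state, 2-control deterministic system (illustrative only):
# next_state[s, a] and cost[s, a] define the dynamics; gamma is the discount.
next_state = np.array([[1, 2], [2, 0], [0, 1]])
cost = np.array([[1.0, 4.0], [2.0, 0.5], [3.0, 1.0]])
gamma = 0.9

Q = np.zeros((3, 2))
for _ in range(200):
    # Synchronous sweep: the iterative Q function is updated for ALL
    # state-control pairs in each iteration, not for a single pair.
    Q = cost + gamma * Q[next_state].min(axis=2)

policy = Q.argmin(axis=1)  # greedy (cost-minimizing) control law
```

With gamma < 1 the sweep is a contraction, so Q converges to the fixed point of the Bellman equation.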
Prediction of Hot Tearing Using a Dimensionless Niyama Criterion
NASA Astrophysics Data System (ADS)
Monroe, Charles; Beckermann, Christoph
2014-08-01
The dimensionless form of the well-known Niyama criterion is extended to include the effect of applied strain. Under applied tensile strain, the pressure drop in the mushy zone is enhanced and pores grow beyond typical shrinkage porosity without deformation. This porosity growth can be expected to align perpendicular to the applied strain and to contribute to hot tearing. A model to capture this coupled effect of solidification shrinkage and applied strain on the mushy zone is derived. The dimensionless Niyama criterion can be used to determine the critical liquid fraction value below which porosity forms. This critical value is a function of alloy properties, solidification conditions, and strain rate. Once a dimensionless Niyama criterion value is obtained from thermal and mechanical simulation results, the corresponding shrinkage and deformation pore volume fractions can be calculated. The novelty of the proposed method lies in using the critical liquid fraction at the critical pressure drop within the mushy zone to determine the onset of hot tearing. The magnitude of pore growth due to shrinkage and deformation is plotted as a function of the dimensionless Niyama criterion for an Al-Cu alloy as an example. Furthermore, a typical hot tear "lambda"-shaped curve showing deformation pore volume as a function of alloy content is produced for two Niyama criterion values.
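For context, the classical Niyama value on which the dimensionless form builds is Ny = G/√Ṫ, with G the local thermal gradient and Ṫ the local cooling rate near the end of solidification. A minimal sketch with illustrative numbers (the dimensionless extension itself also requires the alloy properties and strain rate discussed in the abstract):

```python
import math

def niyama(G, cooling_rate):
    """Classical Niyama value Ny = G / sqrt(Tdot).
    G: thermal gradient near the end of solidification (K/mm),
    cooling_rate: local cooling rate Tdot (K/s)."""
    return G / math.sqrt(cooling_rate)

# Example: G = 1.0 K/mm, Tdot = 4.0 K/s  ->  Ny = 0.5 (K s)^(1/2) / mm
print(niyama(1.0, 4.0))  # 0.5
```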
Inference of gene regulatory networks from time series by Tsallis entropy
2011-01-01
Background The inference of gene regulatory networks (GRNs) from large-scale expression profiles is one of the most challenging problems in Systems Biology today. Many techniques and models have been proposed for this task. However, it is not generally possible to recover the original topology with great accuracy, mainly due to the short time series data in the face of the high complexity of the networks and the intrinsic noise of the expression measurements. In order to improve the accuracy of GRN inference methods based on entropy (mutual information), a new criterion function is here proposed. Results In this paper we introduce the use of the generalized entropy proposed by Tsallis for the inference of GRNs from time series expression profiles. The inference process is based on a feature selection approach, and the conditional entropy is applied as the criterion function. In order to assess the proposed methodology, the algorithm is applied to recover the network topology from temporal expressions generated by an artificial gene network (AGN) model as well as from the DREAM challenge. The adopted AGN is based on theoretical models of complex networks, and its gene transfer function is obtained by random drawing from the set of possible Boolean functions, thus creating its dynamics. The DREAM time series data, on the other hand, present variation in network size, and their topologies are based on real networks. The dynamics are generated by continuous differential equations with noise and perturbation. By adopting both data sources, it is possible to estimate the average quality of the inference with respect to different network topologies, transfer functions and network sizes. Conclusions A remarkable improvement in accuracy was observed in the experimental results: the non-Shannon entropy reduced the number of false connections in the inferred topology.
The best value of the free Tsallis entropy parameter was on average in the range 2.5 ≤ q ≤ 3.5 (hence, subextensive entropy), which opens new perspectives for GRN inference methods based on information theory and for investigation of the nonextensivity of such networks. The inference algorithm and criterion function proposed here were implemented and included in the DimReduction software, which is freely available at http://sourceforge.net/projects/dimreduction and http://code.google.com/p/dimreduction/. PMID:21545720
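The generalized entropy used as the criterion function is the Tsallis entropy S_q = (1 − Σᵢ pᵢ^q)/(q − 1), which recovers the Shannon entropy in the limit q → 1. A small sketch (the distribution is illustrative):

```python
import numpy as np

def tsallis_entropy(p, q):
    """Tsallis entropy S_q = (1 - sum_i p_i^q) / (q - 1); reduces to
    the Shannon entropy (in nats) in the limit q -> 1."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                               # drop zero-probability states
    if abs(q - 1.0) < 1e-12:
        return float(-(p * np.log(p)).sum())   # Shannon limit
    return float((1.0 - (p ** q).sum()) / (q - 1.0))

# Uniform distribution over 4 states, with q = 2.5 (within the
# subextensive range 2.5 <= q <= 3.5 reported in the abstract):
print(tsallis_entropy([0.25] * 4, 2.5))
```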
Analysis of augmented aircraft flying qualities through application of the Neal-Smith criterion
NASA Technical Reports Server (NTRS)
Bailey, R. E.; Smith, R. E.
1981-01-01
The Neal-Smith criterion is examined for possible applications in the evaluation of augmented fighter aircraft flying qualities. Longitudinal and lateral flying qualities are addressed. Based on the application of several longitudinal flying qualities data bases, revisions are proposed to the original criterion. Examples are given which show the revised criterion to be a good discriminator of pitch flying qualities. Initial results of lateral flying qualities evaluation through application of the Neal-Smith criterion are poor. Lateral aircraft configurations whose flying qualities are degraded by roll ratcheting effects map into the Level 1 region of the criterion. A third dimension of the criterion for flying qualities specification is evident. Additional criteria are proposed to incorporate this dimension into the criterion structure for flying qualities analysis.
Continual Response Measurement: Design and Validation.
ERIC Educational Resources Information Center
Baggaley, Jon
1987-01-01
Discusses reliability and validity of continual response measurement (CRM), a computer-based measurement technique, and its use in social science research. Highlights include the importance of criterion-referencing the data, guidelines for designing studies using CRM, examples typifying their deductive and inductive functions, and a discussion of…
A reliability and mass perspective of SP-100 Stirling cycle lunar-base powerplant designs
NASA Technical Reports Server (NTRS)
Bloomfield, Harvey S.
1991-01-01
The purpose was to obtain reliability and mass perspectives on selection of space power system conceptual designs based on SP-100 reactor and Stirling cycle power-generation subsystems. The approach taken was to: (1) develop a criterion for an acceptable overall reliability risk as a function of the expected range of emerging technology subsystem unit reliabilities; (2) conduct reliability and mass analyses for a diverse matrix of 800-kWe lunar-base design configurations employing single and multiple powerplants with both full and partial subsystem redundancy combinations; and (3) derive reliability and mass perspectives on selection of conceptual design configurations that meet an acceptable reliability criterion with the minimum system mass increase relative to reference powerplant design. The developed perspectives provided valuable insight into the considerations required to identify and characterize high-reliability and low-mass lunar-base powerplant conceptual design.
A Sparse Bayesian Approach for Forward-Looking Superresolution Radar Imaging
Zhang, Yin; Zhang, Yongchao; Huang, Yulin; Yang, Jianyu
2017-01-01
This paper presents a sparse superresolution approach for high cross-range resolution imaging of forward-looking scanning radar based on the Bayesian criterion. First, a novel forward-looking signal model is established as the product of the measurement matrix and the cross-range target distribution, which is more accurate than the conventional convolution model. Then, based on the Bayesian criterion, the widely used sparse regularization is adopted as the penalty term to recover the target distribution. The derivation of the cost function is described, and finally, an iterative expression for minimizing this function is presented. In addition, this paper discusses how to estimate the single parameter of the Gaussian noise. With the advantage of a more accurate model, the proposed sparse Bayesian approach enjoys a lower model error. Meanwhile, when compared with conventional superresolution methods, the proposed approach shows high cross-range resolution and small location error. Superresolution results for a simulated point target, scene data, and real measured data are presented to demonstrate the superior performance of the proposed approach. PMID:28604583
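The abstract does not spell out the iterative minimization, so as a generic stand-in for sparse-regularized recovery under the linear model y = Ax, here is an iterative soft-thresholding (ISTA) sketch with an l1 penalty; the algorithmic details are assumptions for illustration, not the paper's derivation:

```python
import numpy as np

def ista(A, y, lam=0.1, step=None, iters=500):
    """Iterative soft-thresholding for the sparse cost
    ||y - A x||^2 / 2 + lam * ||x||_1, a generic stand-in for the
    sparse-regularized recovery described in the abstract."""
    if step is None:
        step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1 / Lipschitz constant
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = A.T @ (A @ x - y)                   # gradient of the data term
        z = x - step * g                        # gradient step
        # Soft-thresholding: proximal operator of the l1 penalty.
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
    return x
```

In the radar setting, A would play the role of the measurement matrix and x the cross-range target distribution.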
New true-triaxial rock strength criteria considering intrinsic material characteristics
NASA Astrophysics Data System (ADS)
Zhang, Qiang; Li, Cheng; Quan, Xiaowei; Wang, Yanning; Yu, Liyuan; Jiang, Binsong
2018-02-01
A reasonable strength criterion should reflect the hydrostatic pressure effect, the minimum principal stress effect, and the intermediate principal stress effect. The former two effects can be described by the meridian curves, and the last mainly depends on the Lode angle dependence function. Among the three conventional strength criteria, i.e. the Mohr-Coulomb (MC), Hoek-Brown (HB), and Exponent (EP) criteria, the difference between the generalized compression and extension strengths of the EP criterion first increases and then decreases, tending to zero when the hydrostatic pressure is large enough. This is in accordance with intrinsic rock strength characteristics. Moreover, the critical hydrostatic pressure I_c corresponding to the maximum difference between the generalized compression and extension strengths can be easily adjusted by the minimum principal stress influence parameter K. The exponent function is therefore a more reasonable meridian curve, which well reflects the hydrostatic pressure effect and is employed to describe the generalized compression and extension strengths. Meanwhile, three Lode angle dependence functions, L_{{MN}}, L_{{WW}}, and L_{{YMH}}, which unconditionally satisfy the convexity and differentiability requirements, are employed to represent the intermediate principal stress effect. Since the actual strength surface should be located between the generalized compression and extension surfaces, new true-triaxial criteria are proposed by combining the two states of the EP criterion through a Lode angle dependence function at the same Lode angle. The proposed new true-triaxial criteria have the same strength parameters as the EP criterion. Finally, 14 groups of triaxial test data are employed to validate the proposed criteria.
The results show that the three new true-triaxial exponent criteria, especially the Exponent Willam-Warnke (EPWW) criterion, give much lower misfits, which illustrates that the EP criterion and L_{{WW}} have more reasonable meridian and deviatoric function forms, respectively. The proposed new true-triaxial strength criteria can provide a theoretical foundation for stability analysis and optimization of support design in rock engineering.
Zimmermann, Johannes; Böhnke, Jan R; Eschstruth, Rhea; Mathews, Alessa; Wenzel, Kristin; Leising, Daniel
2015-08-01
The alternative model for the classification of personality disorders (PD) in the Diagnostic and Statistical Manual of Mental Disorders (5th ed.; DSM-5) Section III comprises 2 major components: impairments in personality functioning (Criterion A) and maladaptive personality traits (Criterion B). In this study, we investigated the latent structure of Criterion A (a) within subdomains, (b) across subdomains, and (c) in conjunction with the Criterion B trait facets. Data were gathered as part of an online study that collected other-ratings by 515 laypersons and 145 therapists. Laypersons were asked to assess 1 of their personal acquaintances, whereas therapists were asked to assess 1 of their patients, using 135 items that captured features of Criteria A and B. We were able to show that (a) the structure within the Criterion A subdomains can be appropriately modeled using generalized graded unfolding models, with results suggesting that the items are indeed related to common underlying constructs but often deviate from their theoretically expected severity level; (b) the structure across subdomains is broadly in line with a model comprising 2 strongly correlated factors of self- and interpersonal functioning, with some notable deviations from the theoretical model; and (c) the joint structure of the Criterion A subdomains and the Criterion B facets broadly resembles the expected model of 2 plus 5 factors, albeit the loading pattern suggests that the distinction between Criteria A and B is somewhat blurry. Our findings provide support for several major assumptions of the alternative DSM-5 model for PD but also highlight aspects of the model that need to be further refined. (c) 2015 APA, all rights reserved.
Closed-loop carrier phase synchronization techniques motivated by likelihood functions
NASA Technical Reports Server (NTRS)
Tsou, H.; Hinedi, S.; Simon, M.
1994-01-01
This article reexamines the notion of closed-loop carrier phase synchronization motivated by the theory of maximum a posteriori phase estimation with emphasis on the development of new structures based on both maximum-likelihood and average-likelihood functions. The criterion of performance used for comparison of all the closed-loop structures discussed is the mean-squared phase error for a fixed-loop bandwidth.
Older Adults' Online Dating Profiles and Successful Aging.
Wada, Mineko; Mortenson, William Bennett; Hurd Clarke, Laura
2016-12-01
This study examined how relevant Rowe and Kahn's three criteria of successful aging were to older adults' self-portrayals in online dating profiles: low probability of disease and disability, high functioning, and active life engagement. In this cross-sectional study, 320 online dating profiles of older adults were randomly selected and coded based on the criteria. Logistic regression analyses determined whether age, gender, and race/ethnicity predicted self-presentation. Few profiles were indicative of successful aging due to the low prevalence of the first two criteria; the third criterion, however, was identified in many profiles. Native Americans were significantly less likely than other ethnic groups to highlight the first two criteria. Younger age predicted presenting the first criterion. Women's presentation of the third criterion remained significantly high with age. The findings suggest that the criteria may be unimportant to older adults when seeking partners, or they may reflect the exclusivity of this construct.
ERIC Educational Resources Information Center
Ding, Cody S.; Davison, Mark L.
2010-01-01
Akaike's information criterion is suggested as a tool for evaluating fit and dimensionality in metric multidimensional scaling that uses least squares methods of estimation. This criterion combines the least squares loss function with the number of estimated parameters. Numerical examples are presented. The results from analyses of both simulation…
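Akaike's criterion in the least-squares setting combines the loss with the number of estimated parameters, commonly as AIC = n·ln(SSE/n) + 2k. A small sketch (the exact form used in the article may differ, and the numbers are illustrative, not from the article):

```python
import math

def aic_least_squares(sse, n, k):
    """AIC for a least-squares fit: n * ln(SSE/n) + 2k, where k is the
    number of estimated parameters (e.g., scaled coordinates in MDS)."""
    return n * math.log(sse / n) + 2 * k

# Compare a 1-D and a 2-D scaling solution of the same data:
# the 2-D solution fits better but spends more parameters.
print(aic_least_squares(sse=40.0, n=100, k=10))  # 1-D solution
print(aic_least_squares(sse=30.0, n=100, k=20))  # 2-D solution
```

The solution with the lower AIC is preferred, balancing fit against dimensionality.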
Saha, Tulshi D; Chou, S Patricia; Grant, Bridget F
2006-07-01
Item response theory (IRT) was used to determine whether the DSM-IV diagnostic criteria for alcohol abuse and dependence are arrayed along a continuum of severity. Data came from a large nationally representative sample of the US population, 18 years and older. A two-parameter logistic IRT model was used to determine the severity and discrimination of each DSM-IV criterion. Differential criterion functioning (DCF) was also assessed across subgroups of the population defined by sex, age and race-ethnicity. All DSM-IV alcohol abuse and dependence criteria, except alcohol-related legal problems, formed a continuum of alcohol use disorder severity. Abuse and dependence criteria did not consistently tap the mildest or more severe ends of the continuum, respectively, and several criteria were identified as potentially redundant. The drinking in larger amounts or for longer than intended dependence criterion had the greatest discrimination and the lowest severity of any criterion. Although several criteria were found to function differentially between subgroups defined in terms of sex and age, there was evidence that the generalizability and validity of the criteria forming the continuum remained intact at the test score level. DSM-IV diagnostic criteria for alcohol abuse and dependence form a continuum of severity, calling into question the abuse-dependence distinction in the DSM-IV and the interpretation of abuse as a milder disorder than dependence. The criteria tapped the more severe end of the alcohol use disorder continuum, highlighting the need to identify other criteria capturing the mild to intermediate range of severity.
The drinking in larger amounts or for longer than intended dependence criterion may be a bridging criterion between drinking patterns that incur risk of alcohol use disorder at the milder end of the continuum, and tolerance, withdrawal, impaired control and serious social and occupational dysfunction at the more severe end of the alcohol use disorder continuum. Future IRT and other dimensional analyses hold great promise for informing revisions to categorical classifications and constructing new dimensional classifications of alcohol use disorders based on the DSM and the ICD.
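The two-parameter logistic model used in the study gives the probability of endorsing a criterion as P(θ) = 1/(1 + exp(−a(θ − b))), with discrimination a and severity b. A minimal sketch with illustrative parameter values (not the study's estimates):

```python
import math

def two_pl(theta, a, b):
    """Two-parameter logistic IRT model: probability of endorsing a
    criterion given latent severity theta, discrimination a, severity b."""
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

# A high-discrimination, low-severity criterion, qualitatively like the
# "larger/longer than intended" criterion described in the abstract:
p = two_pl(theta=0.0, a=2.5, b=-1.0)
print(p)  # endorsed with high probability at average severity
```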
Development of a Methodology for the Derivation of Aquatic Plant Water Quality Criteria
Aquatic plants form the base of most aquatic food chains, comprise biodiversity-building habitats and are functionally important in carbon assimilation and oxygen evolution. The USEPA, as stated in the Clean Water Act, establishes criterion values for various pollutants found in ...
Dilatancy Criteria for Salt Cavern Design: A Comparison Between Stress- and Strain-Based Approaches
NASA Astrophysics Data System (ADS)
Labaune, P.; Rouabhi, A.; Tijani, M.; Blanco-Martín, L.; You, T.
2018-02-01
This paper presents a new approach for salt cavern design, based on the use of the onset of dilatancy as a design threshold. In the proposed approach, a rheological model that includes dilatancy at the constitutive level is developed, and a strain-based dilatancy criterion is defined. As compared to classical design methods that consist in simulating cavern behavior through creep laws (fitted on long-term tests) and then using a criterion (derived from short-terms tests or experience) to determine the stability of the excavation, the proposed approach is consistent both with short- and long-term conditions. The new strain-based dilatancy criterion is compared to a stress-based dilatancy criterion through numerical simulations of salt caverns under cyclic loading conditions. The dilatancy zones predicted by the strain-based criterion are larger than the ones predicted by the stress-based criteria, which is conservative yet constructive for design purposes.
Earing Prediction in Cup Drawing using the BBC2008 Yield Criterion
NASA Astrophysics Data System (ADS)
Vrh, Marko; Halilovič, Miroslav; Starman, Bojan; Štok, Boris; Comsa, Dan-Sorin; Banabic, Dorel
2011-08-01
The paper deals with constitutive modelling of highly anisotropic sheet metals. It presents FEM-based earing predictions in cup drawing simulations of highly anisotropic aluminium alloys where more than four ears occur. For that purpose the BBC2008 yield criterion, a plane-stress yield criterion formulated as a finite series, is used. The criterion thus defined can be expanded to retain more or fewer terms, depending on the amount of available experimental data. In order to use the model in sheet metal forming simulations, we have implemented it in the general purpose finite element code ABAQUS/Explicit via a VUMAT subroutine, considering alternatively eight or sixteen parameters (8p and 16p versions). For the integration of the constitutive model the explicit NICE (Next Increment Corrects Error) integration scheme has been used. Owing to the scheme's effectiveness, the CPU time consumption of a simulation is comparable to that of the built-in constitutive models. Two aluminium alloys, namely AA5042-H2 and AA2090-T3, have been used to validate the model. For both alloys the parameters of the BBC2008 model have been identified with a developed numerical procedure based on the minimization of a cost function. For both materials, the predictions of the BBC2008 model prove to be in very good agreement with the experimental results. The flexibility and accuracy of the model, together with the identification and integration procedures, support the applicability of the BBC2008 yield criterion in industrial applications.
Chen, Poyu; Lin, Keh-Chung; Liing, Rong-Jiuan; Wu, Ching-Yi; Chen, Chia-Ling; Chang, Ku-Chou
2016-06-01
To examine the criterion validity, responsiveness, and minimal clinically important difference (MCID) of the EuroQoL 5-Dimensions Questionnaire (EQ-5D-5L) and visual analog scale (EQ-VAS) in people receiving rehabilitation after stroke. The EQ-5D-5L, along with four criterion measures (the Medical Research Council scales for muscle strength, the Fugl-Meyer assessment, the functional independence measure, and the Stroke Impact Scale), was administered to 65 patients with stroke before and after 3- to 4-week therapy. Criterion validity was estimated using the Spearman correlation coefficient. Responsiveness was analyzed by the effect size, standardized response mean (SRM), and criterion responsiveness. The MCID was determined by anchor-based and distribution-based approaches. The percentage of patients exceeding the MCID was also reported. Concurrent validity of the EQ-Index was better compared with the EQ-VAS. The EQ-Index has better power for predicting the rehabilitation outcome in the activities of daily living than other motor-related outcome measures. The EQ-Index was moderately responsive to change (SRM = 0.63), whereas the EQ-VAS was only mildly responsive to change. The MCID estimates of the EQ-Index (with the percentage of patients exceeding the MCID) were 0.10 (33.8%) and 0.10 (33.8%) based on the anchor-based and distribution-based approaches, respectively, and the estimates for the EQ-VAS were 8.61 (41.5%) and 10.82 (32.3%). The EQ-Index has shown reasonable concurrent validity, limited predictive validity, and acceptable responsiveness for detecting health-related quality of life in stroke patients undergoing rehabilitation, whereas the EQ-VAS has not. Future research considering different recovery stages after stroke is warranted to validate these estimates.
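The standardized response mean used above to judge responsiveness is the mean of the pre-post change scores divided by their standard deviation. A minimal sketch (the data are illustrative, not the study's):

```python
import statistics

def standardized_response_mean(change_scores):
    """SRM = mean change / SD of change scores; the responsiveness
    index reported for the EQ-Index (SRM = 0.63) in the abstract."""
    return statistics.mean(change_scores) / statistics.stdev(change_scores)

# Illustrative pre-post change scores for a small sample:
print(standardized_response_mean([1.0, 2.0, 3.0]))  # 2.0
```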
Using the brain criterion in organ donation after the circulatory determination of death.
Dalle Ave, Anne L; Bernat, James L
2016-06-01
The UK, France, and Switzerland determine death using the brain criterion even in organ donation after the circulatory determination of death (DCDD), in which the United States and Canada use the circulatory-respiratory criterion. In our analysis of the scientific validity of the brain criterion in DCDD, we concluded that although it may be attractive in theory because it conceptualizes death as a unitary phenomenon, its use in practice is invalid. The preconditions (ie, the absence of reversible causes, such as toxic or metabolic disorders) for determining brain death cannot be met in DCDD. Thus, although brain death tests prove the cessation of tested brain functions, they do not prove that their cessation is irreversible. A stand-off period of 5 to 10 minutes is insufficient to achieve the irreversibility requirement of brain death. Because circulatory cessation inevitably leads to cessation of brain functions, first permanently and then irreversibly, the use of brain criterion is unnecessary to determine death in DCDD. Expanding brain death to permit it to be satisfied by permanent cessation of brain functions is controversial but has been considered as a possible means to declare death in uncontrolled DCDD. Copyright © 2016 Elsevier Inc. All rights reserved.
Information hidden in the velocity distribution of ions and the exact kinetic Bohm criterion
NASA Astrophysics Data System (ADS)
Tsankov, Tsanko V.; Czarnetzki, Uwe
2017-05-01
Non-equilibrium distribution functions of electrons and ions play an important role in plasma physics. A prominent example is the kinetic Bohm criterion. Since its first introduction it has been controversial, both for theoretical reasons and due to the lack of experimental data, in particular on the ion distribution function. Here we resolve the theoretical as well as the experimental difficulties by an exact solution of the kinetic Boltzmann equation including charge exchange collisions and ionization. This also allows, for the first time, non-invasive measurement of spatially resolved ion velocity distributions, absolute values of the ion and electron densities, temperatures, and mean energies, as well as the electric field and the plasma potential in the entire plasma. The non-invasive access to the spatially resolved distribution functions of electrons and ions is applied to the problem of the kinetic Bohm criterion. Theoretically, a so-far-missing term in the criterion is derived and shown to be of key importance. With the new term, the validity of the kinetic criterion at high collisionality and its agreement with the fluid picture are restored. All findings are supported by experimental data, theory, and a numerical model, with excellent agreement throughout.
ERIC Educational Resources Information Center
Lee, HyeSun; Geisinger, Kurt F.
2016-01-01
The current study investigated the impact of matching criterion purification on the accuracy of differential item functioning (DIF) detection in large-scale assessments. The three matching approaches for DIF analyses (block-level matching, pooled booklet matching, and equated pooled booklet matching) were employed with the Mantel-Haenszel…
A Guide for Respiratory Therapy Curriculum Design.
ERIC Educational Resources Information Center
American Association for Respiratory Therapy, Dallas, TX.
The document presents educational criteria upon which curriculum builders can create a competency-based program of respiratory therapy education. The 11 modules presented supplement and complement the document Delineation of Roles and Functions of Respiratory Therapy Personnel (CE 005 945), which is listed as appendix D but not included as such.…
On Measuring Quantitative Interpretations of Reasonable Doubt
ERIC Educational Resources Information Center
Dhami, Mandeep K.
2008-01-01
Beyond reasonable doubt represents a probability value that acts as the criterion for conviction in criminal trials. I introduce the membership function (MF) method as a new tool for measuring quantitative interpretations of reasonable doubt. Experiment 1 demonstrated that three different methods (i.e., direct rating, decision theory based, and…
NASA Astrophysics Data System (ADS)
Rosenfeld, Yaakov
1984-05-01
Featuring the modified hypernetted-chain (MHNC) scheme as a variational fitting procedure, we demonstrate that the accuracy of the variational perturbation theory (VPT) and of the method based on additivity of equations of state is determined by the excess entropy dependence of the bridge-function parameters [i.e., η(s) when the Percus-Yevick hard-sphere bridge functions are employed]. It is found that η(s) is nearly universal for all soft (i.e., "physical") potentials while it is distinctly different for the hard spheres, providing a graphical display of the "jump" in pair-potential space (with respect to accuracy of VPT) from "hard" to "soft" behavior. The universality of η(s) provides a local criterion for the MHNC scheme that should be useful for inverting structure-factor data in order to obtain the potential. An alternative local MHNC criterion due to Lado is rederived and extended, and it is also analyzed in light of the plot of η(s).
Cui, Wenchao; Wang, Yi; Lei, Tao; Fan, Yangyu; Feng, Yan
2013-01-01
This paper presents a variational level set method for simultaneous segmentation and bias field estimation of medical images with intensity inhomogeneity. In our model, the statistics of image intensities belonging to each different tissue in local regions are characterized by Gaussian distributions with different means and variances. According to maximum a posteriori probability (MAP) and Bayes' rule, we first derive a local objective function for image intensities in a neighborhood around each pixel. Then this local objective function is integrated with respect to the neighborhood center over the entire image domain to give a global criterion. In level set framework, this global criterion defines an energy in terms of the level set functions that represent a partition of the image domain and a bias field that accounts for the intensity inhomogeneity of the image. Therefore, image segmentation and bias field estimation are simultaneously achieved via a level set evolution process. Experimental results for synthetic and real images show desirable performances of our method.
2015-01-01
Background Cellular processes are known to be modular and are realized by groups of proteins implicated in common biological functions. Such groups of proteins are called functional modules, and many community detection methods have been devised for their discovery from protein interaction network (PIN) data. In current agglomerative clustering approaches, vertices with just a very few neighbors are often classified as separate clusters, which does not make sense biologically. Also, a major limitation of agglomerative techniques is that their computational efficiency does not scale well to large PINs. Finally, PIN data obtained from large-scale experiments generally contain many false positives, and this makes it hard for agglomerative clustering methods to find the correct clusters, since they are known to be sensitive to noisy data. Results We propose a local similarity premetric, the relative vertex clustering value, as a new criterion for deciding when a node can be added to a given node's cluster, which addresses the above three issues. Based on this criterion, we introduce a novel and very fast agglomerative clustering technique, FAC-PIN, for discovering functional modules and protein complexes from PIN data. Conclusions Our proposed FAC-PIN algorithm is applied to nine PIN datasets from eight different species including the yeast PIN, and the identified functional modules are validated using Gene Ontology (GO) annotations from DAVID Bioinformatics Resources. Identified protein complexes are also validated using experimentally verified complexes. Computational results show that FAC-PIN can discover functional modules or protein complexes from PINs more accurately and more efficiently than HC-PIN and CNM, the current state-of-the-art approaches for clustering PINs in an agglomerative manner. PMID:25734691
ERIC Educational Resources Information Center
Smith, Ronald E.; And Others
1976-01-01
Subjects (N=80) made expectancy of success statements in a dart throwing task under two conditions. Significant differences between criterion groups were obtained, with success statements remaining constant across difficulty levels in the relative criterion condition while declining rapidly as a function of task difficulty in the absolute…
Rough Set Based Splitting Criterion for Binary Decision Tree Classifiers
2006-09-26
…instinctual and learned responses in the brain, causing it to make decisions based on patterns in the stimuli. Using this deceptively simple process…
A multiple maximum scatter difference discriminant criterion for facial feature extraction.
Song, Fengxi; Zhang, David; Mei, Dayong; Guo, Zhongwei
2007-12-01
Maximum scatter difference (MSD) discriminant criterion is a recently presented binary discriminant criterion for pattern classification that utilizes the generalized scatter difference rather than the generalized Rayleigh quotient as a class separability measure, thereby avoiding the singularity problem when addressing small-sample-size problems. MSD classifiers based on this criterion have been quite effective on face-recognition tasks, but as they are binary classifiers, they are not as efficient on large-scale classification tasks. To address this problem, this paper generalizes the classification-oriented binary criterion to its multiple counterpart, the multiple MSD (MMSD) discriminant criterion, for facial feature extraction. The MMSD feature-extraction method, which is based on this novel discriminant criterion, is a new subspace-based feature-extraction method. Unlike most other subspace-based feature-extraction methods, the MMSD computes its discriminant vectors from both the range of the between-class scatter matrix and the null space of the within-class scatter matrix. The MMSD is theoretically elegant and easy to calculate. Extensive experimental studies conducted on the benchmark database FERET show that the MMSD outperforms state-of-the-art facial feature-extraction methods such as the null space method, direct linear discriminant analysis (LDA), eigenface, Fisherface, and complete LDA.
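A simplified two-class sketch of the scatter difference idea, assuming the common form J(w) = w'(Sb - c·Sw)w: because no inverse of the within-class scatter Sw is needed, the criterion stays defined when Sw is singular. The toy data and the balance constant c are assumptions, not the paper's full MMSD procedure.

```python
import numpy as np

# Hypothetical two-class toy data, separated along the first axis
rng = np.random.default_rng(1)
X1 = rng.standard_normal((20, 3)) + np.array([2.0, 0.0, 0.0])
X2 = rng.standard_normal((20, 3)) - np.array([2.0, 0.0, 0.0])

mu1, mu2 = X1.mean(0), X2.mean(0)
mu = np.vstack([X1, X2]).mean(0)

# Between- and within-class scatter matrices
Sb = 20 * (np.outer(mu1 - mu, mu1 - mu) + np.outer(mu2 - mu, mu2 - mu))
Sw = (X1 - mu1).T @ (X1 - mu1) + (X2 - mu2).T @ (X2 - mu2)

# Scatter difference criterion J(w) = w'(Sb - c*Sw)w: maximized by the
# top eigenvector of (Sb - c*Sw); no inversion of Sw is required
c = 1.0
vals, vecs = np.linalg.eigh(Sb - c * Sw)  # eigenvalues ascending
w = vecs[:, -1]                           # discriminant vector
```

Projecting both classes onto w should place their means on opposite sides of zero, confirming that the direction discriminates the classes.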
NASA Astrophysics Data System (ADS)
Uilhoorn, F. E.
2016-10-01
In this article, the stochastic modelling approach proposed by Box and Jenkins is treated as a mixed-integer nonlinear programming (MINLP) problem solved with a mesh adaptive direct search and a real-coded genetic class of algorithms. The aim is to estimate the real-valued parameters and non-negative integer, correlated structure of stationary autoregressive moving average (ARMA) processes. The maximum likelihood function of the stationary ARMA process is embedded in Akaike's information criterion and the Bayesian information criterion, whereas the estimation procedure is based on Kalman filter recursions. The constraints imposed on the objective function enforce stability and invertibility. The best ARMA model is regarded as the global minimum of the non-convex MINLP problem. The robustness and computational performance of the MINLP solvers are compared with brute-force enumeration. Numerical experiments are done for existing time series and one new data set.
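A minimal sketch of the model-selection step described above: AIC and BIC are computed from each candidate ARMA(p, q) model's maximized log-likelihood, and the order minimizing the criterion is kept. The log-likelihood values below are hypothetical stand-ins for the Kalman-filter likelihoods the article actually computes.

```python
import numpy as np

def aic(loglik, k):
    # Akaike's information criterion: 2k - 2 ln L
    return 2 * k - 2 * loglik

def bic(loglik, k, n):
    # Bayesian information criterion: k ln n - 2 ln L
    return k * np.log(n) - 2 * loglik

# Hypothetical maximized log-likelihoods for ARMA(p, q) candidates
# fitted to n = 200 observations; k counts p + q + 1 (noise variance)
n = 200
candidates = {(1, 0): -310.2, (1, 1): -305.8, (2, 1): -305.1}

best_aic = min(candidates, key=lambda pq: aic(candidates[pq], sum(pq) + 1))
best_bic = min(candidates, key=lambda pq: bic(candidates[pq], sum(pq) + 1, n))
```

BIC's k·ln(n) penalty grows with the sample size, so it tends to pick sparser orders than AIC when the likelihood gain of extra parameters is small, as with the (2, 1) candidate here.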
Lawson criterion in cyclotron heating
DOE Office of Scientific and Technical Information (OSTI.GOV)
Demutskii, V.P.; Polovin, R.V.
1975-07-01
Stochastic heating of plasma particles is of great interest for controlled thermonuclear reactions. The ion velocity distribution function is described for the case of cyclotron heating. The Lawson criterion applied to this distribution is described. (MOW)
Variations in Criterion A and PTSD Rates in a Community Sample of Women
Anders, Samantha; Frazier, Patricia; Frankfurt, Sheila
2010-01-01
We assessed PTSD prevalence and symptoms as a function of whether participants’ worst lifetime event met Criterion A1 for PTSD (DSM-IV-TR; APA, 2000) and whether the event was directly or indirectly experienced in a community sample of adult women (N = 884). Exposure to both non-Criterion A1 and Criterion A1 events was systematically assessed. PTSD was assessed with regard to participants’ self-nominated worst event using the PTSD module of the SCID-I/NP (First, Spitzer, Gibbon, & Williams, 1997). There were no differences in PTSD prevalence rates between Criterion A1 and non-A1 events; however, directly-experienced worst events were significantly more likely to meet PTSD criteria than were indirectly-experienced worst events. Non-Criterion A1 and directly-experienced worst events were associated with significantly more PTSD symptoms than were Criterion A1 or indirectly-experienced events, respectively. Criterion A2 (experiencing fear, helplessness, or horror) had little effect on PTSD rates. PMID:20888184
Criterion-Referenced and Norm-Referenced Assessments: Compatibility and Complementarity
ERIC Educational Resources Information Center
Lok, Beatrice; McNaught, Carmel; Young, Kenneth
2016-01-01
The tension between criterion-referenced and norm-referenced assessment is examined in the context of curriculum planning and assessment in outcomes-based approaches to higher education. This paper argues the importance of a criterion-referenced assessment approach once an outcomes-based approach has been adopted. It further discusses the…
Application of Single Crystal Failure Criteria: Theory and Turbine Blade Case Study
NASA Technical Reports Server (NTRS)
Sayyah, Tarek; Swanson, Gregory R.; Schonberg, W. P.
1999-01-01
The orientation of the single crystal material within a structural component is known to affect the strength and life of the part. The first stage blade of the High Pressure Fuel Turbopump (HPFTP)/Alternative Turbopump Development (ATD) of the Space Shuttle Main Engine (SSME) was used to study the effects of secondary axis orientation angles on the failure rate of the blade. A new failure criterion was developed based on normal and shear strains on the primary crystallographic planes. The criterion was verified using low cycle fatigue (LCF) specimen data and a finite element model of the test specimens. The criterion was then used to study ATD/HPFTP first stage blade failure events. A detailed ANSYS finite element model of the blade was used to calculate the failure parameter for the different crystallographic orientations. A total of 297 cases were run to cover a wide range of acceptable orientations within the blade. Those orientations are related to the base crystallographic coordinate system that was created in the ANSYS finite element model. Contour plots of the criterion as a function of orientation for the blade tip and attachment were obtained. Results of the analysis revealed a 40% increase in the failure parameter due to changing of the primary and secondary axes of material orientations. A comparison between failure criterion predictions and actual engine test data was then conducted. The engine test data come from two ATD/HPFTP builds (units F3-4B and F6-5D), which were ground tested on the SSME at the Stennis Space Center in Mississippi. Both units experienced cracking of the airfoil tips in multiple blades, but only a few cracks grew all the way across the wall of the hollow core airfoil.
NASA Astrophysics Data System (ADS)
Maitra, Subrata; Banerjee, Debamalya
2010-10-01
This article concerns product quality and design improvement in relation to machinery failure modes and plant operational problems at an industrial blower fan company. The project aims at developing the product on the basis of standardized production parameters for selling its products in the market. Special attention is also paid to blower fans ordered directly by customers on the basis of the installed capacity of air to be delivered by the fan. Quality function deployment (QFD) is primarily a customer-oriented approach. The proposed model integrates QFD with the analytic hierarchy process (AHP) to select and rank the decision criteria on commercial and technical factors and to measure the decision parameters for selecting the best product in a competitive environment. The present AHP-QFD model justifies the selection of a blower fan with the help of a group of experts' opinions through pairwise comparison of the customer's and ergonomics-based technical design requirements. The steps involved in implementing QFD-AHP and selecting weighted criteria may be helpful for all similar industries balancing cost and utility for a competitive product.
Energy Criterion for the Spectral Stability of Discrete Breathers.
Kevrekidis, Panayotis G; Cuevas-Maraver, Jesús; Pelinovsky, Dmitry E
2016-08-26
Discrete breathers are ubiquitous structures in nonlinear anharmonic models ranging from the prototypical example of the Fermi-Pasta-Ulam model to Klein-Gordon nonlinear lattices, among many others. We propose a general criterion for the emergence of instabilities of discrete breathers analogous to the well-established Vakhitov-Kolokolov criterion for solitary waves. The criterion involves the change of monotonicity of the discrete breather's energy as a function of the breather frequency. Our analysis suggests and numerical results corroborate that breathers with increasing (decreasing) energy-frequency dependence are generically unstable in soft (hard) nonlinear potentials.
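A hedged numerical sketch of the criterion's logic: given a breather family's energy-frequency curve E(ω), instability is signaled by increasing E(ω) in a soft potential (and by decreasing E(ω) in a hard one). The sampled curve below is invented for illustration.

```python
import numpy as np

# Hypothetical energy-frequency curve E(omega) for a breather family
omega = np.linspace(0.8, 1.0, 5)
energy = np.array([2.0, 1.7, 1.5, 1.4, 1.35])  # monotonically decreasing

# Numerical slope dE/domega along the family
dE_domega = np.gradient(energy, omega)

# In a soft potential the criterion flags increasing E(omega) as unstable,
# so a monotonically decreasing branch is predicted spectrally stable here
stable_in_soft = bool(np.all(dE_domega < 0))
```

The sign test mirrors the Vakhitov-Kolokolov criterion for solitary waves, where the slope of the power-frequency curve plays the analogous role.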
ERIC Educational Resources Information Center
Duggleby, Sandra J.; Tang, Wei; Kuo-Newhouse, Amy
2016-01-01
This study examined the relationship between ninth-grade students' use of connectives (temporal, causal, adversative, and additive) in functional writing and performance on standards-based/criterion-referenced measures of reading and writing. Specifically, structural equation modeling (SEM) techniques were used to examine the relationship between…
Symptoms versus Impairment: The Case for Respecting "DSM-IV"'s Criterion D
ERIC Educational Resources Information Center
Gordon, Michael; Antshel, Kevin; Faraone, Stephen; Barkley, Russell; Lewandowski, Larry; Hudziak, James J.; Biederman, Joseph; Cunningham, Charles
2006-01-01
Diagnosing ADHD based primarily on symptom reports assumes that the number/frequency of symptoms is tied closely to the impairment imposed on an individual's functioning. That presumed linkage encourages diagnosis more by "Diagnostic and Statistical Manual of Mental Disorders" (4th ed.) style symptom lists than well-defined,…
Varying the valuating function and the presentable bank in computerized adaptive testing.
Barrada, Juan Ramón; Abad, Francisco José; Olea, Julio
2011-05-01
In computerized adaptive testing, the most commonly used valuating function is the Fisher information function. When the goal is to keep item bank security at a maximum, the valuating function that seems most convenient is the matching criterion, valuating the distance between the estimated trait level and the point where the maximum of the information function is located. Recently, it has been proposed not to keep the same valuating function constant for all the items in the test. In this study we expand the idea of combining the matching criterion with the Fisher information function. We also manipulate the number of strata into which the bank is divided. We find that the manipulation of the number of items administered with each function makes it possible to move from the pole of high accuracy and low security to the opposite pole. It is possible to greatly improve item bank security with much fewer losses in accuracy by selecting several items with the matching criterion. In general, it seems more appropriate not to stratify the bank.
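A minimal sketch contrasting the two valuating functions, assuming 2PL items: Fisher information a²P(1-P) favors highly discriminating items, while the matching criterion favors the item whose maximum-information point (the difficulty b for the 2PL) is closest to the current trait estimate. The item bank and trait estimate are hypothetical.

```python
import math

def p2pl(theta, a, b):
    # 2PL item response function
    return 1.0 / (1.0 + math.exp(-a * (theta - b)))

def fisher_info(theta, a, b):
    # Fisher information of a 2PL item: a^2 * P * (1 - P)
    p = p2pl(theta, a, b)
    return a * a * p * (1.0 - p)

def matching_value(theta, b):
    # Matching criterion: closeness of the trait estimate to the item's
    # maximum-information point (b for the 2PL); larger is better
    return -abs(theta - b)

theta_hat = 0.3
bank = [(1.2, -0.5), (0.8, 0.2), (1.5, 1.4)]  # hypothetical (a, b) pairs

best_by_info = max(bank, key=lambda ab: fisher_info(theta_hat, *ab))
best_by_match = max(bank, key=lambda ab: matching_value(theta_hat, ab[1]))
```

The two criteria can disagree, which is the point: pure information selection overexposes the most discriminating items, while the matching criterion spreads exposure at some cost in accuracy.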
NASA Technical Reports Server (NTRS)
Hopkins, W. D.; Washburn, D. A.; Hyatt, C. W.; Rumbaugh, D. M. (Principal Investigator)
1996-01-01
This study describes video-task acquisition in two nonhuman primate species. The subjects were seven rhesus monkeys (Macaca mulatta) and seven chimpanzees (Pan troglodytes). All subjects were trained to manipulate a joystick which controlled a cursor displayed on a computer monitor. Two criterion levels were used: one based on conceptual knowledge of the task and one based on motor performance. Chimpanzees and rhesus monkeys attained criterion in a comparable number of trials using a conceptually based criterion. However, using a criterion based on motor performance, chimpanzees reached criterion significantly faster than rhesus monkeys. Analysis of error patterns and latency indicated that the rhesus monkeys had a larger asymmetry in response bias and were significantly slower in responding than the chimpanzees. The results are discussed in terms of the relation between object manipulation skills and video-task acquisition.
Criterion I: Soil and water conservation on rangelands [Chapter 2
Michael G. (Sherm) Karl; Paul T. Tueller; Gerald E. Schuman; Mark R. Vinson; James L. Fogg; Ronald W. Shafer; David A. Pyke; D. Terrance Booth; Steven J. Borchard; William G. Ypsilantis; Richard H. Barrett
2010-01-01
The Sustainable Rangelands Roundtable (SRR) has explicitly included conservation and maintenance of soil and water resources as a criterion of rangeland sustainability. Within the soil/water criterion, 10 indicators, five soil-based and five water-based, were developed through the expert opinions of rangeland scientists, rangeland management agency personnel, non-…
Water-sediment controversy in setting environmental standards for selenium
Hamilton, Steven J.; Lemly, A. Dennis
1999-01-01
A substantial amount of laboratory and field research on selenium effects on biota has been accomplished since the national water quality criterion was published for selenium in 1987. Many articles have documented adverse effects on biota at concentrations below the current chronic criterion of 5 μg/L. This commentary will present information to support a national water quality criterion for selenium of 2 μg/L, based on a wide array of support from federal, state, university, and international sources. Recently, two articles have argued for a sediment-based criterion and presented a model for deriving site-specific criteria. In one example, they calculate a criterion of 31 μg/L for a stream with a low sediment selenium toxicity threshold and low site-specific sediment total organic carbon content, which is substantially higher than the national criterion of 5 μg/L. Their basic premise for proposing a sediment-based method has been critically reviewed and problems in their approach are discussed.
Human striatal activation during adjustment of the response criterion in visual word recognition.
Kuchinke, Lars; Hofmann, Markus J; Jacobs, Arthur M; Frühholz, Sascha; Tamm, Sascha; Herrmann, Manfred
2011-02-01
Results of recent computational modelling studies suggest that a general function of the striatum in human cognition is related to shifting decision criteria in selection processes. We used functional magnetic resonance imaging (fMRI) in 21 healthy subjects to examine the hemodynamic responses when subjects shift their response criterion on a trial-by-trial basis in the lexical decision paradigm. Trial-by-trial criterion setting is obtained when subjects respond faster in trials following a word trial than in trials following nonword trials - irrespective of the lexicality of the current trial. Since selection demands are equally high in the current trials, we expected to observe neural activations that are related to response criterion shifting. The behavioural data show sequential effects with faster responses in trials following word trials compared to trials following nonword trials, suggesting that subjects shifted their response criterion on a trial-by-trial basis. The neural responses revealed a signal increase in the striatum only in trials following word trials. This striatal activation is therefore likely to be related to response criterion setting. It demonstrates a role of the striatum in shifting decision criteria in visual word recognition, which cannot be attributed to pure error-related processing or the selection of a preferred response. Copyright © 2010 Elsevier Inc. All rights reserved.
Optimal sensor placement for spatial lattice structure based on genetic algorithms
NASA Astrophysics Data System (ADS)
Liu, Wei; Gao, Wei-cheng; Sun, Yi; Xu, Min-jian
2008-10-01
Optimal sensor placement plays a key role in structural health monitoring of spatial lattice structures. This paper considers the problem of locating sensors on a spatial lattice structure with the aim of maximizing the data information so that structural dynamic behavior can be fully characterized. Based on the criterion of optimal sensor placement for modal tests, an improved genetic algorithm is introduced to find the optimal placement of sensors. The modal strain energy (MSE) and the modal assurance criterion (MAC) were taken as fitness functions, respectively, so that three placement designs were produced. A decimal two-dimensional array coding method, instead of binary coding, is proposed to code the solutions. A forced mutation operator is introduced when identical genes appear during the crossover procedure. A computational simulation of a 12-bay plain truss model was implemented to demonstrate the feasibility of the three optimal algorithms above. The optimal sensor placements obtained using the improved genetic algorithm are compared with those gained by the existing genetic algorithm using the binary coding method. Further, a comparison criterion based on the mean square error between the finite element method (FEM) mode shapes and the Guyan expansion mode shapes identified by the data-driven stochastic subspace identification (SSI-DATA) method is employed to demonstrate the advantages of the different fitness functions. The results showed that the innovations in the genetic algorithm proposed in this paper enlarge the gene storage and improve the convergence of the algorithm. More importantly, all three optimal sensor placement methods provide reliable results and accurately identify the vibration characteristics of the 12-bay plain truss model.
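A minimal sketch of the MAC term that such a fitness function evaluates: MAC(i, j) = (φᵢᵀφⱼ)² / ((φᵢᵀφᵢ)(φⱼᵀφⱼ)) for mode shapes restricted to a candidate sensor set. The mode shape vectors below are invented; a GA fitness would typically penalize large off-diagonal MAC values, since those indicate modes the sensor set cannot distinguish.

```python
import numpy as np

def mac(phi_i, phi_j):
    # Modal assurance criterion between two mode shape vectors
    num = np.dot(phi_i, phi_j) ** 2
    den = np.dot(phi_i, phi_i) * np.dot(phi_j, phi_j)
    return num / den

# Hypothetical mode shapes sampled at a 3-sensor candidate placement
phi1 = np.array([1.0, 0.5, -0.3])
phi2 = np.array([-0.4, 1.0, 0.6])

# Off-diagonal MAC near 0 means the placement distinguishes the modes
m = mac(phi1, phi2)
```

MAC is bounded in [0, 1]; a small value here would mark this candidate placement as favorable under a MAC-based fitness.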
ERIC Educational Resources Information Center
Willoughby, Michael T.; Blair, Clancy B.; Wirth, R. J.; Greenberg, Mark
2010-01-01
In this study, the authors examined the psychometric properties and criterion validity of a newly developed battery of tasks that were designed to assess executive function (EF) abilities in early childhood. The battery was included in the 36-month assessment of the Family Life Project (FLP), a prospective longitudinal study of 1,292 children…
Spectra of empirical autocorrelation matrices: A random-matrix-theory-inspired perspective
NASA Astrophysics Data System (ADS)
Jamali, Tayeb; Jafari, G. R.
2015-07-01
We construct an autocorrelation matrix of a time series and analyze it based on the random-matrix theory (RMT) approach. The autocorrelation matrix is capable of extracting information that is not easily accessible by direct analysis of the autocorrelation function. In order to draw precise conclusions from the information extracted from the autocorrelation matrix, the results must first be evaluated, that is, compared with some criterion that provides a basis for the most suitable and applicable conclusions. In the context of the present study, the criterion is chosen to be the well-known fractional Gaussian noise (fGn). We illustrate the applicability of our method in the context of stock markets: despite the non-Gaussianity in stock-market returns, a remarkable agreement with the fGn is achieved.
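A minimal sketch of the construction step, assuming the common Toeplitz form built from the sample autocorrelation function; white noise stands in for fGn with H = 0.5, since the paper's exact matrix definition is not given in the abstract.

```python
import numpy as np

def autocorr(x, max_lag):
    # Sample autocorrelation function up to max_lag
    x = x - x.mean()
    c0 = np.dot(x, x) / len(x)
    return np.array([np.dot(x[: len(x) - k], x[k:]) / (len(x) * c0)
                     for k in range(max_lag + 1)])

rng = np.random.default_rng(0)
x = rng.standard_normal(512)  # white noise: a stand-in for fGn, H = 0.5
rho = autocorr(x, 4)

# Toeplitz autocorrelation matrix; its eigenvalue spectrum is what the
# RMT-style analysis compares against the fGn benchmark
A = np.array([[rho[abs(i - j)] for j in range(5)] for i in range(5)])
```

The matrix is symmetric with unit diagonal by construction, so its spectrum is real and can be compared directly to that of the benchmark process.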
Energy-based fatigue model for shape memory alloys including thermomechanical coupling
NASA Astrophysics Data System (ADS)
Zhang, Yahui; Zhu, Jihong; Moumni, Ziad; Van Herpen, Alain; Zhang, Weihong
2016-03-01
This paper is aimed at developing a low cycle fatigue criterion for pseudoelastic shape memory alloys to take into account thermomechanical coupling. To this end, fatigue tests are carried out at different loading rates under strain control at room temperature using NiTi wires. Temperature distribution on the specimen is measured using a high speed thermal camera. Specimens are tested to failure and fatigue lifetimes of specimens are measured. Test results show that the fatigue lifetime is greatly influenced by the loading rate: as the strain rate increases, the fatigue lifetime decreases. Furthermore, it is shown that the fatigue cracks initiate when the stored energy inside the material reaches a critical value. An energy-based fatigue criterion is thus proposed as a function of the irreversible hysteresis energy of the stabilized cycle and the loading rate. Fatigue life is calculated using the proposed model. The experimental and computational results compare well.
Industry Software Trustworthiness Criterion Research Based on Business Trustworthiness
NASA Astrophysics Data System (ADS)
Zhang, Jin; Liu, Jun-fei; Jiao, Hai-xing; Shen, Yi; Liu, Shu-yuan
To address the trustworthiness problem of industry software, a business-oriented approach to constructing an industry software trustworthiness criterion is proposed. Based on the triangle model of "trustworthy grade definition-trustworthy evidence model-trustworthy evaluating", the idea of business trustworthiness is embodied in different aspects of the trustworthy triangle model for a specific industry software system, the power producing management system (PPMS). Business trustworthiness is central to the constructed industry trustworthy software criterion. By fusing international standards and industry rules, the constructed trustworthy criterion strengthens operability and reliability. A quantitative evaluation method makes the evaluation results intuitive and comparable.
Yang, Wen; Zhu, Jin-Yong; Lu, Kai-Hong; Wan, Li; Mao, Xiao-Hua
2014-06-01
Appropriate schemes for classification of freshwater phytoplankton are prerequisites and important tools for revealing phytoplankton succession and studying freshwater ecosystems. An alternative approach, the functional group of freshwater phytoplankton, has been proposed and developed because of the deficiencies of Linnaean and molecular identification in ecological applications. The functional group of phytoplankton is a classification scheme based on autecology. In this study, the theoretical basis and classification criteria of the functional group (FG), morpho-functional group (MFG) and morphology-based functional group (MBFG) approaches are summarized, along with their merits and demerits. FG is considered the optimal classification approach for aquatic ecology research and aquatic environment evaluation. The application status of FG is introduced, and the evaluation standards and problems of two FG-based water quality assessment approaches, the Q and QR index methods, are briefly discussed.
Rational Approximations with Hankel-Norm Criterion
1980-01-01
…the problem is proved to be reducible to obtaining a two-variable all-pass rational function, interpolating a set of parametric values at specified points inside… (Y. Genin, Philips Research Lab.)
A level set method for cupping artifact correction in cone-beam CT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Xie, Shipeng; Li, Haibo; Ge, Qi
2015-08-15
Purpose: To reduce cupping artifacts and improve the contrast-to-noise ratio in cone-beam computed tomography (CBCT). Methods: A level set method is proposed to reduce cupping artifacts in the reconstructed image of CBCT. The authors derive a local intensity clustering property of the CBCT image and define a local clustering criterion function of the image intensities in a neighborhood of each point. This criterion function defines an energy in terms of the level set functions, which represent a segmentation result and the cupping artifacts. The cupping artifacts are estimated as a result of minimizing this energy. Results: The cupping artifacts in CBCT are reduced by an average of 90%. The results indicate that the level set-based algorithm is practical and effective for reducing the cupping artifacts and preserving the quality of the reconstructed image. Conclusions: The proposed method focuses on the reconstructed image without requiring any additional physical equipment, is easily implemented, and provides cupping correction through a single-scan acquisition. The experimental results demonstrate that the proposed method successfully reduces the cupping artifacts.
NASA Astrophysics Data System (ADS)
Kistenev, Yu. V.; Kuzmin, D. A.; Sandykova, E. A.; Shapovalov, A. V.
2015-11-01
An approach to reducing the space of absorption spectra, based on an original criterion for profile analysis of the spectra, is proposed. This criterion traces back to Pearson's well-known chi-square test. The introduced criterion makes it possible to quantify the differences between spectral curves.
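A curve-comparison criterion of the Pearson chi-square type can be sketched as follows. This is the standard symmetric chi-square distance between two nonnegative profiles, not necessarily the exact criterion of the paper:

```python
import numpy as np

def chi_square_distance(s1, s2, eps=1e-12):
    """Symmetric Pearson-style chi-square distance between two spectral
    profiles: 0.5 * sum((s1-s2)^2 / (s1+s2)). eps guards zero bins."""
    s1, s2 = np.asarray(s1, float), np.asarray(s2, float)
    return 0.5 * np.sum((s1 - s2) ** 2 / (s1 + s2 + eps))

a = np.array([1.0, 2.0, 3.0])   # two hypothetical absorption profiles
b = np.array([1.0, 2.5, 2.5])
print(chi_square_distance(a, b))
```

Identical curves score 0; the distance grows with pointwise disagreement, which is what makes it usable as a dissimilarity criterion for clustering or pruning a spectral library.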
Matrix cracking in laminated composites under monotonic and cyclic loadings
NASA Technical Reports Server (NTRS)
Allen, David H.; Lee, Jong-Won
1991-01-01
An analytical model based on the internal state variable (ISV) concept and the strain energy method is proposed for characterizing the monotonic and cyclic response of laminated composites containing matrix cracks. A modified constitutive model is formulated for angle-ply laminates under general in-plane mechanical loading and constant temperature change. A monotonic matrix cracking criterion is developed for predicting the crack density in cross-ply laminates as a function of the applied laminate axial stress. An initial formulation for a cyclic matrix cracking criterion for cross-ply laminates is also discussed. For the monotonic loading case, experimental data and well-known models are compared with the present study to validate the practical applicability of the ISV approach.
A Model-Free Diagnostic for Single-Peakedness of Item Responses Using Ordered Conditional Means.
Polak, Marike; de Rooij, Mark; Heiser, Willem J
2012-09-01
In this article we propose a model-free diagnostic for single-peakedness (unimodality) of item responses. Presuming a unidimensional unfolding scale and a given item ordering, we approximate item response functions of all items based on ordered conditional means (OCM). The proposed OCM methodology is based on Thurstone & Chave's (1929) criterion of irrelevance, which is a graphical, exploratory method for evaluating the "relevance" of dichotomous attitude items. We generalized this criterion to graded response items and quantified the relevance by fitting a unimodal smoother. The resulting goodness-of-fit was used to determine item fit and aggregated scale fit. Based on a simulation procedure, cutoff values were proposed for the measures of item fit. These cutoff values showed high power rates and acceptable Type I error rates. We present 2 applications of the OCM method. First, we apply the OCM method to personality data from the Developmental Profile; second, we analyze attitude data collected by Roberts and Laughlin (1996) concerning opinions of capital punishment.
Development and Validation of Triarchic Construct Scales from the Psychopathic Personality Inventory
Hall, Jason R.; Drislane, Laura E.; Patrick, Christopher J.; Morano, Mario; Lilienfeld, Scott O.; Poythress, Norman G.
2014-01-01
The Triarchic model of psychopathy describes this complex condition in terms of distinct phenotypic components of boldness, meanness, and disinhibition. Brief self-report scales designed specifically to index these psychopathy facets have thus far demonstrated promising construct validity. The present study sought to develop and validate scales for assessing facets of the Triarchic model using items from a well-validated existing measure of psychopathy—the Psychopathic Personality Inventory (PPI). A consensus rating approach was used to identify PPI items relevant to each Triarchic facet, and the convergent and discriminant validity of the resulting PPI-based Triarchic scales were evaluated in relation to multiple criterion variables (i.e., other psychopathy inventories, antisocial personality disorder features, personality traits, psychosocial functioning) in offender and non-offender samples. The PPI-based Triarchic scales showed good internal consistency and related to criterion variables in ways consistent with predictions based on the Triarchic model. Findings are discussed in terms of implications for conceptualization and assessment of psychopathy. PMID:24447280
Evaluation of Hierarchical Clustering Algorithms for Document Datasets
2002-06-03
... single-link, complete-link, and group average (UPGMA) schemes, and a new set of merging criteria derived from the six partitional criterion functions. Overall, we ... used the single-link, complete-link, and UPGMA schemes, as well as the various partitional criterion functions described in Section 3.1. The single-link ... other (complete-link approach). The UPGMA scheme [16] (also known as group average) overcomes these problems by measuring the similarity of two clusters ...
1978-09-01
ARI Technical Report TR-78-A31: Criterion-Referenced Measurement in the Army: Development of a Research-Based, Practical Test Construction Manual. ... conducted to develop a Criterion-Referenced Tests (CRTs) Construction Manual. Major accomplishments were the preparation of a written review of the ... survey of the literature on Criterion-Referenced Testing, conducted in order to provide an information base for development of the CRT Construction Manual.
NASA Astrophysics Data System (ADS)
Wu, Xiaojian; Zhou, Bing; Wen, Guilin; Long, Lefei; Cui, Qingjia
2018-04-01
A multi-objective active front steering (AFS) control system considering the road adhesion constraint on vehicle stability is developed using the sliding mode control (SMC) method. First, an identification function combined with the relationship between the yaw rate and the steering angle is developed to determine whether the tyre state is linear or nonlinear. On this basis, an intervention criterion for the AFS system is proposed to improve vehicle handling and stability in emergency conditions. A sideslip angle stability domain enveloped by the upper, lower, left, and right boundaries, as well as the constraint of the road adhesion coefficient, is constructed based on the β-β̇ phase-plane method. A dynamic weighting coefficient to coordinate the control of yaw rate and sideslip angle, and a control strategy that considers changing control objectives based on the desired yaw rate, the desired sideslip angle, and their proportional weights, are proposed for the SMC controller. Because road adhesion has a significant effect on vehicle stability and to meet the control algorithm's requirement of real-time access to vehicle states, an unscented Kalman filter-based state observer is proposed to estimate the adhesion coefficient and the required states. Finally, simulations are performed using high and low road adhesion conditions in a Matlab/Simulink environment, and the results show that the proposed AFS control system promptly intervenes according to the intervention criterion, effectively improving vehicle handling and stability.
An Exploratory Analysis of Functional Staging Using an Item Response Theory Approach
Tao, Wei; Haley, Stephen M.; Coster, Wendy J.; Ni, Pengsheng; Jette, Alan M.
2009-01-01
Objectives: To develop and explore the feasibility of a functional staging system (defined as the process of assigning subjects, according to predetermined standards, into a set of hierarchical levels with regard to their functioning performance in mobility, daily activities, and cognitive skills) based on item response theory (IRT) methods using short-forms of the Activity Measure for Post-Acute Care (AM-PAC); and to compare the criterion validity and sensitivity of the IRT-based staging system to a non-IRT-based staging system developed for the FIM instrument. Design: Prospective, longitudinal cohort study of patients interviewed at hospital discharge and 1, 6, and 12 months after inpatient rehabilitation. Setting: Follow-up interviews conducted in patients’ homes. Participants: Convenience sample of 516 patients (47% men; sample mean age, 68.3y) at baseline (retention at the final follow-up, 65%) with neurologic, lower-extremity orthopedic, or complex medical conditions. Interventions: Not applicable. Main Outcome Measures: AM-PAC basic mobility, daily activity, and applied cognitive activity stages; FIM executive control, mobility, activities of daily living, and sphincter stages. Stages refer to the hierarchical levels assigned to patient’s functioning performance. Results: We were able to define IRT-based staging definitions and create meaningful cut scores based on the 3 AM-PAC short-forms. The IRT stages correlated as well or better to the criterion items than the FIM stages. Both the IRT-based stages and the FIM stages were sensitive to changes throughout the 6-month follow-up period. The FIM stages were more sensitive in detecting changes between baseline and 1-month follow-up visit. The AM-PAC stages were more discriminant in the follow-up visits. Conclusions: An IRT-based staging approach appeared feasible and effective in classifying patients throughout long-term follow-up.
Although these stages were developed from short-forms, this staging methodology could also be applied to improve the meaning of scores generated from IRT-based computerized adaptive testing in future work. PMID:18503798
NASA Astrophysics Data System (ADS)
Jia, Chen; Chen, Yong
2015-05-01
In the work of Amann, Schmiedl and Seifert (2010 J. Chem. Phys. 132 041102), the authors derived a sufficient criterion to identify a non-equilibrium steady state (NESS) in a three-state Markov system based on the coarse-grained information of two-state trajectories. In this paper, we present a mathematical derivation and provide a probabilistic interpretation of the Amann-Schmiedl-Seifert (ASS) criterion. Moreover, the ASS criterion is compared with other criteria for a NESS.
An Independent and Coordinated Criterion for Kinematic Aircraft Maneuvers
NASA Technical Reports Server (NTRS)
Narkawicz, Anthony J.; Munoz, Cesar A.; Hagen, George
2014-01-01
This paper proposes a mathematical definition of an aircraft-separation criterion for kinematic-based horizontal maneuvers. It has been formally proved that kinematic maneuvers that satisfy the new criterion are independent and coordinated for repulsiveness, i.e., the distance at the closest point of approach increases whether one or both aircraft maneuver according to the criterion. The proposed criterion is currently used in NASA's Airborne Coordinated Conflict Resolution and Detection (ACCoRD) set of tools for the design and analysis of separation assurance systems.
Zhong, Shangping; Chen, Tianshun; He, Fengying; Niu, Yuzhen
2014-09-01
For a practical pattern classification task solved by kernel methods, the computing time is mainly spent on kernel learning (or training). However, the current kernel learning approaches are based on local optimization techniques, and it is hard for them to achieve good time performance, especially on large datasets. Thus the existing algorithms cannot be easily extended to large-scale tasks. In this paper, we present a fast Gaussian kernel learning method by solving a specially structured global optimization (SSGO) problem. We optimize the Gaussian kernel function by using the formulated kernel target alignment criterion, which is a difference of increasing (d.i.) functions. By using a power-transformation-based convexification method, the objective criterion can be represented as a difference of convex (d.c.) functions with a fixed power-transformation parameter. The objective programming problem can then be converted to a SSGO problem: globally minimizing a concave function over a convex set. The SSGO problem is classical and has good solvability. Thus, to find the global optimal solution efficiently, we can adopt the improved Hoffman's outer approximation method, which does not need to repeat the search procedure with different starting points to locate the best local minimum. Also, the proposed method can be proven to converge to the global solution for any classification task. We evaluate the proposed method on twenty benchmark datasets, and compare it with four other Gaussian kernel learning methods. Experimental results show that the proposed method stably achieves both good time-efficiency performance and good classification performance. Copyright © 2014 Elsevier Ltd. All rights reserved.
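The kernel target alignment criterion referred to above is, in its standard form, the normalized Frobenius inner product between the kernel matrix and the label outer product. A minimal sketch of that criterion follows; the paper's d.c. decomposition and global optimization steps are not reproduced here:

```python
import numpy as np

def gaussian_kernel(X, sigma):
    """Gaussian (RBF) kernel matrix K_ij = exp(-||x_i - x_j||^2 / (2 sigma^2))."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    return np.exp(-d2 / (2 * sigma ** 2))

def kernel_target_alignment(K, y):
    """Alignment <K, y y^T>_F / (||K||_F * ||y y^T||_F) for labels y in {-1,+1}."""
    yyT = np.outer(y, y)
    return np.sum(K * yyT) / (np.linalg.norm(K) * np.linalg.norm(yyT))

# Two well-separated classes: alignment should be high at a moderate sigma.
X = np.array([[0.0], [0.1], [5.0], [5.1]])
y = np.array([1, 1, -1, -1])
print(kernel_target_alignment(gaussian_kernel(X, 1.0), y))
```

In practice one scans (or optimizes over) sigma and keeps the kernel with the largest alignment; since a Gaussian kernel is nonnegative, the attainable alignment for balanced two-class labels tops out near 1/sqrt(2).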
Discrete-time BAM neural networks with variable delays
NASA Astrophysics Data System (ADS)
Liu, Xin-Ge; Tang, Mei-Lan; Martin, Ralph; Liu, Xin-Bi
2007-07-01
This Letter deals with the global exponential stability of discrete-time bidirectional associative memory (BAM) neural networks with variable delays. Using a Lyapunov functional, and linear matrix inequality techniques (LMI), we derive a new delay-dependent exponential stability criterion for BAM neural networks with variable delays. As this criterion has no extra constraints on the variable delay functions, it can be applied to quite general BAM neural networks with a broad range of time delay functions. It is also easy to use in practice. An example is provided to illustrate the theoretical development.
Functional Quality Criterion of Rock Handling Mechanization at Open-pit Mines
NASA Astrophysics Data System (ADS)
Voronov, Yuri; Voronov, Artyoni
2017-11-01
Overburden and mining operations at open-pit mines are performed mainly by powerful shovel-truck systems (STSs). One of the main problems of the STSs is a rather low level of their operating quality, mainly due to unjustified over-trucking. In this article, a functional criterion for assessing the quality of the STS operation at open-pit mines is formulated, derived and analyzed. We introduce the rationale and general principles for the functional criterion formation, its general form, as well as variations for various STS structures: a mixed truck fleet and a homogeneous shovel fleet, a mixed shovel fleet and a homogeneous truck fleet, and mixed truck and shovel fleets. The possibility of assessing the quality of the STS operation is of great importance for identifying the main directions for improving their operational performance and operating quality, optimizing the main performance indicators by the quality criterion, and, as a result, for possible saving of material and technical resources for open-pit mining. Improving the quality of the STS operation also increases mining safety and decreases atmospheric pollution by reducing the number of operating trucks.
Soft Clustering Criterion Functions for Partitional Document Clustering
2004-05-26
... in the cluster that it already belongs to. The refinement phase ends as soon as we perform an iteration in which no documents moved between ... it with the one obtained by the hard criterion functions. We present a comprehensive experimental evaluation involving twelve different datasets ...
NASA Astrophysics Data System (ADS)
Watanabe, Yukio
2018-05-01
In the calculations of tetragonal BaTiO3, some exchange-correlation (XC) energy functionals such as the local density approximation (LDA) have shown good agreement with experiments at room temperature (RT), e.g., spontaneous polarization (PS), and superiority compared with other XC functionals. This is due to the error compensation of the RT effect and, hence, will be ineffective in heavily strained cases such as domain boundaries. Here, ferroelectrics under large strain at RT are approximated as those at 0 K because the strain effect surpasses the RT effects. To find effective XC energy functionals for strained BaTiO3, we propose a new comparison, i.e., a criterion. This criterion is the properties at 0 K given by the Ginzburg-Landau (GL) theory because GL theory is a thermodynamic description of experiments working under the same symmetry constraints as ab initio calculations. With this criterion, we examine LDA, generalized gradient approximations (GGA), meta-GGA, meta-GGA + local correlation potential (U), and hybrid functionals, revealing that some XC functionals are more accurate than those previously regarded as accurate. This result is examined directly by the calculations of homogeneously strained tetragonal BaTiO3, confirming the validity of the new criterion. In addition, the data points of theoretical PS vs. certain crystallographic parameters calculated with different XC functionals are found to lie on a single curve, despite their wide variations. Regarding these theoretical data points as corresponding to the experimental results, analytical expressions of the local PS using crystallographic parameters are uncovered. These expressions show the primary origin of BaTiO3 ferroelectricity as oxygen displacements. Elastic compliance and electrostrictive coefficients are estimated. 
For the comparison of strained results, we show that the effective critical temperature TC under strain <-0.01 is >1000 K from an approximate method combining ab initio results with GL theory. In addition, in a definite manner, the present results show much more enhanced ferroelectricity at large strain than the previous reports.
Bayesian model checking: A comparison of tests
NASA Astrophysics Data System (ADS)
Lucy, L. B.
2018-06-01
Two procedures for checking Bayesian models are compared using a simple test problem based on the local Hubble expansion. Over four orders of magnitude, p-values derived from a global goodness-of-fit criterion for posterior probability density functions agree closely with posterior predictive p-values. The former can therefore serve as an effective proxy for the difficult-to-calculate posterior predictive p-values.
Gomez-Lazaro, Emilio; Bueso, Maria C.; Kessler, Mathieu; ...
2016-02-02
Here, the Weibull probability distribution has been widely applied to characterize wind speeds for wind energy resources. Wind power generation modeling is different, however, due in particular to power curve limitations, wind turbine control methods, and transmission system operation requirements. These differences are even greater for aggregated wind power generation in power systems with high wind penetration. Consequently, models based on a single Weibull component can provide poor characterizations of aggregated wind power generation. To this end, the present paper focuses on discussing Weibull mixtures to characterize the probability density function (PDF) for aggregated wind power generation. PDFs of wind power data are first classified according to hourly and seasonal patterns. The selection of the number of components in the mixture is analyzed through two well-known criteria: the Akaike information criterion (AIC) and the Bayesian information criterion (BIC). Finally, the optimal number of Weibull components for maximum likelihood is explored for the defined patterns, including the estimated weight, scale, and shape parameters. Results show that multi-Weibull models are more suitable to characterize aggregated wind power data due to the impact of distributed generation, the variety of wind speed values, and wind power curtailment.
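Selecting the number of mixture components with AIC/BIC reduces to a penalized log-likelihood comparison. A minimal sketch follows, with made-up log-likelihoods for illustration and an assumed parameter count of 3m - 1 for an m-component Weibull mixture (m shape + m scale parameters plus m - 1 free weights):

```python
import math

def aic(log_lik, k):
    """Akaike information criterion: 2k - 2 ln L (smaller is better)."""
    return 2 * k - 2 * log_lik

def bic(log_lik, k, n):
    """Bayesian information criterion: k ln n - 2 ln L (smaller is better)."""
    return k * math.log(n) - 2 * log_lik

# Hypothetical fits of 1-, 2-, 3-component Weibull mixtures to n = 8760
# hourly wind power values; (log-likelihood, parameter count) pairs are
# illustrative, not results from the paper.
n = 8760
fits = {1: (-12500.0, 2), 2: (-12300.0, 5), 3: (-12290.0, 8)}
best_bic = min(fits, key=lambda m: bic(fits[m][0], fits[m][1], n))
print(best_bic)
```

With these numbers BIC prefers the 2-component model: the third component buys only 10 nats of log-likelihood, which does not repay the heavier ln(n) penalty on three extra parameters.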
Tendency for interlaboratory precision in the GMO analysis method based on real-time PCR.
Kodama, Takashi; Kurosawa, Yasunori; Kitta, Kazumi; Naito, Shigehiro
2010-01-01
The Horwitz curve estimates interlaboratory precision as a function only of concentration, and is frequently used as a method performance criterion in food analysis with chemical methods. Quantitative biochemical methods based on real-time PCR require an analogous criterion to progressively promote method validation. We analyzed the tendency of precision using a simplex real-time PCR technique in 53 collaborative studies of seven genetically modified (GM) crops. The reproducibility standard deviation (S_R) and repeatability standard deviation (S_r) of the genetically modified organism (GMO) amount (%) were more or less independent of the GM crop (i.e., maize, soybean, cotton, oilseed rape, potato, sugar beet, and rice) and of the evaluation procedure steps. Some studies evaluated the whole procedure, consisting of DNA extraction and PCR quantitation, whereas others focused only on the PCR quantitation step by using DNA extraction solutions. Therefore, S_R and S_r for the GMO amount (%) are functions only of concentration, similar to the Horwitz curve. We proposed S_R = 0.1971C^0.8685 and S_r = 0.1478C^0.8424, where C is the GMO amount (%). We also proposed a method performance index for GMO quantitative methods that is analogous to the Horwitz Ratio.
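The two proposed precision functions are direct power laws in the concentration and can be evaluated as written. A minimal sketch, including a Horwitz-Ratio-style performance index (observed over predicted precision, an assumed construction shown only for illustration):

```python
def s_R(c):
    """Proposed reproducibility standard deviation at GMO amount c (%)."""
    return 0.1971 * c ** 0.8685

def s_r(c):
    """Proposed repeatability standard deviation at GMO amount c (%)."""
    return 0.1478 * c ** 0.8424

def performance_index(observed_sR, c):
    """Horwitz-Ratio-style index: observed / predicted reproducibility.
    Values near 1 mean the study matches the proposed precision curve."""
    return observed_sR / s_R(c)

c = 5.0  # a 5% GMO level, for illustration
print(s_R(c), s_r(c), performance_index(0.80, c))
```

As with the Horwitz curve itself, the exponents below 1 mean relative precision improves slowly as the GMO amount rises.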
Sullivan, Tami P.; Titus, Jennifer A.; Holt, Laura J.; Swan, Suzanne C.; Fisher, Bonnie S.; Snow, David L.
2010-01-01
This study is among the first attempts to address a frequently articulated, yet unsubstantiated claim that sample inclusion criterion based on women’s physical aggression or victimization will yield different distributions of severity and type of partner violence and injury. Independent samples of African-American women participated in separate studies based on either inclusion criterion of women’s physical aggression or victimization. Between-groups comparisons showed that samples did not differ in physical, sexual, or psychological aggression; physical, sexual, or psychological victimization; inflicted or sustained injury. Therefore, inclusion criterion based on physical aggression or victimization did not yield unique samples of “aggressors” and “victims.” PMID:19949230
Entanglement criterion for tripartite systems based on local sum uncertainty relations
NASA Astrophysics Data System (ADS)
Akbari-Kourbolagh, Y.; Azhdargalam, M.
2018-04-01
We propose a sufficient criterion for the entanglement of tripartite systems based on local sum uncertainty relations for arbitrarily chosen observables of subsystems. This criterion generalizes the tighter criterion for bipartite systems introduced by Zhang et al. [C.-J. Zhang, H. Nha, Y.-S. Zhang, and G.-C. Guo, Phys. Rev. A 81, 012324 (2010), 10.1103/PhysRevA.81.012324] and can be used for both discrete- and continuous-variable systems. It enables us to detect the entanglement of quantum states without having a complete knowledge of them. Its utility is illustrated by some examples of three-qubit, qutrit-qutrit-qubit, and three-mode Gaussian states. It is found that, in comparison with other criteria, this criterion is able to detect some three-qubit bound entangled states more efficiently.
Research of facial feature extraction based on MMC
NASA Astrophysics Data System (ADS)
Xue, Donglin; Zhao, Jiufen; Tang, Qinhong; Shi, Shaokun
2017-07-01
Based on the maximum margin criterion (MMC), a new algorithm for statistically uncorrelated optimal discriminant vectors and a new algorithm for orthogonal optimal discriminant vectors for feature extraction were proposed. The purpose of the maximum margin criterion is to maximize the inter-class scatter while simultaneously minimizing the intra-class scatter after the projection. Compared with the original MMC method and the principal component analysis (PCA) method, the proposed methods are better at reducing or eliminating the statistical correlation between features and improving the recognition rate. The experimental results on the Olivetti Research Laboratory (ORL) face database show that the new feature extraction method based on the statistically uncorrelated maximum margin criterion (SUMMC) is better in terms of recognition rate and stability. In addition, the relations between the maximum margin criterion and the Fisher criterion for feature extraction were revealed.
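The plain maximum margin criterion maximizes tr(W^T (S_b - S_w) W), which is solved by the leading eigenvectors of S_b - S_w. A minimal sketch of that baseline follows; the statistically uncorrelated and orthogonal variants proposed in the paper are not implemented here:

```python
import numpy as np

def mmc_projections(X, y, d):
    """Top-d discriminant vectors maximizing tr(W^T (Sb - Sw) W), i.e. the
    leading eigenvectors of the between-class minus within-class scatter."""
    classes = np.unique(y)
    mean = X.mean(axis=0)
    Sb = np.zeros((X.shape[1], X.shape[1]))
    Sw = np.zeros_like(Sb)
    for c in classes:
        Xc = X[y == c]
        mc = Xc.mean(axis=0)
        Sb += len(Xc) * np.outer(mc - mean, mc - mean)
        Sw += (Xc - mc).T @ (Xc - mc)
    vals, vecs = np.linalg.eigh(Sb - Sw)          # symmetric eigenproblem
    return vecs[:, np.argsort(vals)[::-1][:d]]    # leading d eigenvectors

# Toy data: classes separated along axis 0, noise along axis 1.
X = np.array([[0.0, 0.0], [0.0, 1.0], [4.0, 0.0], [4.0, 1.0]])
y = np.array([0, 0, 1, 1])
W = mmc_projections(X, y, 1)
print(W.ravel())
```

On this toy set the recovered direction is the class-separating axis (1, 0) up to sign, since S_b - S_w is already diagonal with its largest eigenvalue on that axis.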
NASA Technical Reports Server (NTRS)
Kalayeh, H. M.; Landgrebe, D. A.
1983-01-01
A criterion which measures the quality of the estimate of the covariance matrix of a multivariate normal distribution is developed. Based on this criterion, the necessary number of training samples is predicted. Experimental results which are used as a guide for determining the number of training samples are included. Previously announced in STAR as N82-28109
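The dependence of covariance-estimate quality on the number of training samples can be illustrated empirically. The sketch below uses Frobenius error against a known covariance as a stand-in quality measure; the paper's specific criterion is not reproduced:

```python
import numpy as np

rng = np.random.default_rng(0)
p = 10
true_cov = np.eye(p)  # known ground-truth covariance for a p-variate normal

def cov_error(n):
    """Frobenius-norm error of the sample covariance from n training samples."""
    X = rng.multivariate_normal(np.zeros(p), true_cov, size=n)
    return np.linalg.norm(np.cov(X, rowvar=False) - true_cov)

# Average over repetitions: more training samples -> smaller estimation error.
errs = {n: np.mean([cov_error(n) for _ in range(20)]) for n in (20, 200, 2000)}
print(errs)
```

The error shrinks roughly like 1/sqrt(n), which is why a criterion of estimate quality translates directly into a required number of training samples.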
When is hub gene selection better than standard meta-analysis?
Langfelder, Peter; Mischel, Paul S; Horvath, Steve
2013-01-01
Since hub nodes have been found to play important roles in many networks, highly connected hub genes are expected to play an important role in biology as well. However, the empirical evidence remains ambiguous. An open question is whether (or when) hub gene selection leads to more meaningful gene lists than a standard statistical analysis based on significance testing when analyzing genomic data sets (e.g., gene expression or DNA methylation data). Here we address this question for the special case when multiple genomic data sets are available. This is of great practical importance since for many research questions multiple data sets are publicly available. In this case, the data analyst can decide between a standard statistical approach (e.g., based on meta-analysis) and a co-expression network analysis approach that selects intramodular hubs in consensus modules. We assess the performance of these two types of approaches according to two criteria. The first criterion evaluates the biological insights gained and is relevant in basic research. The second criterion evaluates the validation success (reproducibility) in independent data sets and often applies in clinical diagnostic or prognostic applications. We compare meta-analysis with consensus network analysis based on weighted correlation network analysis (WGCNA) in three comprehensive and unbiased empirical studies: (1) Finding genes predictive of lung cancer survival, (2) finding methylation markers related to age, and (3) finding mouse genes related to total cholesterol. The results demonstrate that intramodular hub gene status with respect to consensus modules is more useful than a meta-analysis p-value when identifying biologically meaningful gene lists (reflecting criterion 1). However, standard meta-analysis methods perform as good as (if not better than) a consensus network approach in terms of validation success (criterion 2). 
The article also reports a comparison of meta-analysis techniques applied to gene expression data and presents novel R functions for carrying out consensus network analysis, network-based screening, and meta-analysis.
Serel Arslan, S; Demir, N; Karaduman, A A
2017-02-01
This study aimed to develop a scale called the Tongue Thrust Rating Scale (TTRS), which categorises tongue thrust in children in terms of its severity during swallowing, and to investigate its validity and reliability. The study describes the developmental phase of the TTRS and presents its content and criterion-based validity and interobserver and intra-observer reliability. For content validation, seven experts assessed the steps in the scale over two Delphi rounds. Two physical therapists evaluated videos of 50 children with cerebral palsy (mean age, 57·9 ± 16·8 months), using the TTRS to test criterion-based validity, interobserver and intra-observer reliability. The Karaduman Chewing Performance Scale (KCPS) and Drooling Severity and Frequency Scale (DSFS) were used for criterion-based validity. All the TTRS steps were deemed necessary. The content validity index was 0·857. A very strong positive correlation was found between two examinations by one physical therapist, which indicated intra-observer reliability (r = 0·938, P < 0·001). A very strong positive correlation was also found between the TTRS scores of two physical therapists, indicating interobserver reliability (r = 0·892, P < 0·001). There was also a strong positive correlation between the TTRS and KCPS (r = 0·724, P < 0·001) and a very strong positive correlation between the TTRS scores and DSFS (r = 0·822 and r = 0·755; P < 0·001). These results demonstrated the criterion-based validity of the TTRS. The TTRS is a valid, reliable and clinically easy-to-use functional instrument to document the severity of tongue thrust in children. © 2016 John Wiley & Sons Ltd.
2004-03-01
[Figure-list residue: 2-10, Pitch Tracking Closed Loop System for Gap Criterion; 2-11, Four Resulting Gap ...; Level 1 Minimize Resonance Closed Loop Bode Diagram of θ(s)/θ_Command(s), with bandwidth frequency ω_BW.] In modern fly-by-wire aircraft, feedback is an integral part of obtaining more desirable closed loop flying qualities.
Stochastic simulation by image quilting of process-based geological models
NASA Astrophysics Data System (ADS)
Hoffimann, Júlio; Scheidt, Céline; Barfod, Adrian; Caers, Jef
2017-09-01
Process-based modeling offers a way to represent realistic geological heterogeneity in subsurface models. The main limitation lies in conditioning such models to data. Multiple-point geostatistics can use these process-based models as training images and address the data conditioning problem. In this work, we further develop image quilting as a method for 3D stochastic simulation capable of mimicking the realism of process-based geological models with minimal modeling effort (i.e. parameter tuning) and at the same time condition them to a variety of data. In particular, we develop a new probabilistic data aggregation method for image quilting that bypasses traditional ad-hoc weighting of auxiliary variables. In addition, we propose a novel criterion for template design in image quilting that generalizes the entropy plot for continuous training images. The criterion is based on the new concept of voxel reuse, a stochastic and quilting-aware function of the training image. We compare our proposed method with other established simulation methods on a set of process-based training images of varying complexity, including a real-case example of stochastic simulation of the buried-valley groundwater system in Denmark.
Parametric optimal control of uncertain systems under an optimistic value criterion
NASA Astrophysics Data System (ADS)
Li, Bo; Zhu, Yuanguo
2018-01-01
It is well known that the optimal control of a linear quadratic model is characterized by the solution of a Riccati differential equation. In many cases, the corresponding Riccati differential equation cannot be solved exactly such that the optimal feedback control may be a complex time-oriented function. In this article, a parametric optimal control problem of an uncertain linear quadratic model under an optimistic value criterion is considered for simplifying the expression of optimal control. Based on the equation of optimality for the uncertain optimal control problem, an approximation method is presented to solve it. As an application, a two-spool turbofan engine optimal control problem is given to show the utility of the proposed model and the efficiency of the presented approximation method.
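For the certain (non-uncertain) linear quadratic case referenced in the opening sentence, the optimal feedback follows from the algebraic Riccati equation. A minimal SciPy sketch of that baseline; the uncertain optimistic-value setting and the paper's approximation method are not modeled, and the system matrices are arbitrary illustrative values:

```python
import numpy as np
from scipy.linalg import solve_continuous_are

# Infinite-horizon LQR: minimize the integral of x^T Q x + u^T R u for
# dx/dt = A x + B u. The optimal feedback u = -K x comes from the algebraic
# Riccati equation A^T P + P A - P B R^{-1} B^T P + Q = 0, with K = R^{-1} B^T P.
A = np.array([[0.0, 1.0],
              [0.0, -0.5]])
B = np.array([[0.0],
              [1.0]])
Q = np.eye(2)
R = np.array([[1.0]])

P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)
print(K)
```

The resulting closed loop A - BK is guaranteed stable, which is the deterministic counterpart of the feedback expression the article simplifies in the uncertain setting.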
Crack propagation of brittle rock under high geostress
NASA Astrophysics Data System (ADS)
Liu, Ning; Chu, Weijiang; Chen, Pingzhi
2018-03-01
Based on fracture mechanics and numerical methods, the initiation, propagation, and coalescence characteristics and failure criteria of wall-rock cracks are analyzed systematically under different conditions. To account for the interaction among cracks, a multi-crack sliding model is adopted to simulate the splitting failure of rock under axial compression. Bolt and shotcrete reinforcement of the rock mass can effectively control crack propagation; both theoretical analysis and numerical simulation are used to study the controlling mechanism, and the optimal bolt installation angle is calculated. ANSYS is then used to simulate the crack-arrest effect of bolts and to analyze the influence of different factors on the stress intensity factor. The method offers a more scientific and rational criterion for evaluating the splitting failure of underground structures under high geostress.
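The fracture-mechanics background can be illustrated with the textbook Mode I stress intensity factor and the Irwin propagation condition. This is a generic sketch, not the paper's multi-crack sliding model; the toughness value is a rough granite-like number invented for the example.

```python
import math

def mode_i_sif(sigma, a):
    """Mode I stress intensity factor K_I = sigma * sqrt(pi * a) for a
    through crack of half-length a under remote tension sigma."""
    return sigma * math.sqrt(math.pi * a)

def propagates(sigma, a, K_ic):
    """Irwin criterion: the crack extends once K_I reaches the
    fracture toughness K_Ic."""
    return mode_i_sif(sigma, a) >= K_ic

# 10 mm half-crack, 10 MPa tension, granite-like K_Ic ~ 1.5 MPa*sqrt(m).
K = mode_i_sif(sigma=10.0, a=0.01)          # in MPa*sqrt(m)
```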
A generic bio-economic farm model for environmental and economic assessment of agricultural systems.
Janssen, Sander; Louhichi, Kamel; Kanellopoulos, Argyris; Zander, Peter; Flichman, Guillermo; Hengsdijk, Huib; Meuter, Eelco; Andersen, Erling; Belhouchette, Hatem; Blanco, Maria; Borkowski, Nina; Heckelei, Thomas; Hecker, Martin; Li, Hongtao; Oude Lansink, Alfons; Stokstad, Grete; Thorne, Peter; van Keulen, Herman; van Ittersum, Martin K
2010-12-01
Bio-economic farm models are tools to evaluate ex-post or to assess ex-ante the impact of policy and technology change on agriculture, economics and environment. Recently, various BEFMs have been developed, often for one purpose or location, but hardly any of these models are re-used later for other purposes or locations. The Farm System Simulator (FSSIM) provides a generic framework enabling the application of BEFMs under various situations and for different purposes (generating supply response functions and detailed regional or farm type assessments). FSSIM is set up as a component-based framework with components representing farmer objectives, risk, calibration, policies, current activities, alternative activities and different types of activities (e.g., annual and perennial cropping and livestock). The generic nature of FSSIM is evaluated using five criteria by examining its applications. FSSIM has been applied for different climate zones and soil types (criterion 1) and to a range of different farm types (criterion 2) with different specializations, intensities and sizes. In most applications FSSIM has been used to assess the effects of policy changes and in two applications to assess the impact of technological innovations (criterion 3). In the various applications, different data sources, level of detail (e.g., criterion 4) and model configurations have been used. FSSIM has been linked to an economic and several biophysical models (criterion 5). The model is available for applications to other conditions and research issues, and it is open to be further tested and to be extended with new components, indicators or linkages to other models.
Latent Class Analysis of Incomplete Data via an Entropy-Based Criterion
Larose, Chantal; Harel, Ofer; Kordas, Katarzyna; Dey, Dipak K.
2016-01-01
Latent class analysis is used to group categorical data into classes via a probability model. Model selection criteria then judge how well the model fits the data. When addressing incomplete data, the current methodology restricts the imputation to a single, pre-specified number of classes. We seek to develop an entropy-based model selection criterion that does not restrict the imputation to one number of clusters. Simulations show the new criterion performing well against the current standards of AIC and BIC, while a family studies application demonstrates how the criterion provides more detailed and useful results than AIC and BIC. PMID:27695391
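One common entropy-based quantity for judging how cleanly a latent class solution separates observations can be sketched as follows. This is a generic relative-entropy index computed from posterior class memberships, not necessarily the exact criterion the authors propose.

```python
import math

def relative_entropy_criterion(post):
    """Entropy-based separation index for a latent class solution.
    post[i][k]: posterior probability that observation i is in class k.
    Returns 1 for perfectly separated classes, 0 for maximal uncertainty."""
    n, K = len(post), len(post[0])
    ent = -sum(p * math.log(p) for row in post for p in row if p > 0)
    return 1.0 - ent / (n * math.log(K))

crisp = [[0.99, 0.01], [0.02, 0.98], [0.97, 0.03]]   # well separated
vague = [[0.5, 0.5], [0.5, 0.5], [0.5, 0.5]]         # uninformative
```

A solution with crisp memberships scores near 1, while a solution that cannot tell the classes apart scores 0, which is the sense in which entropy can drive model selection.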
Fuzzy approaches to supplier selection problem
NASA Astrophysics Data System (ADS)
Ozkok, Beyza Ahlatcioglu; Kocken, Hale Gonce
2013-09-01
The supplier selection problem is a multi-criteria decision-making problem that involves both qualitative and quantitative factors. Because many criteria may conflict with one another, the decision-making process becomes complicated. In this study, we treat the supplier selection problem under uncertainty. We use the minimum criterion, the arithmetic mean criterion, the regret criterion, the optimistic criterion, the geometric mean and the harmonic mean. Membership functions are created from the characteristics of these criteria, and alternative suppliers are evaluated with these memberships to provide consistent selection decisions. A strong aspect of the methodology is that no expert opinion is needed during the analysis.
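Most of the aggregation rules named above can be sketched directly. The scores are invented for illustration, and the regret criterion, which needs the full decision matrix rather than one supplier's scores, is omitted from this sketch.

```python
import math

def aggregate(scores):
    """Aggregate one supplier's criterion scores (values in (0, 1])
    under several of the decision rules named in the abstract."""
    n = len(scores)
    return {
        "minimum":    min(scores),            # pessimistic rule
        "optimistic": max(scores),            # best-case rule
        "arithmetic": sum(scores) / n,
        "geometric":  math.prod(scores) ** (1.0 / n),
        "harmonic":   n / sum(1.0 / s for s in scores),
    }

ranks = aggregate([0.9, 0.6, 0.75])
```

The classical mean inequality (harmonic <= geometric <= arithmetic) shows how the rules order from conservative to lenient.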
An approximate spin design criterion for monoplanes, 1 May 1939
NASA Technical Reports Server (NTRS)
Seidman, O.; Donlan, C. J.
1976-01-01
An approximate empirical criterion, based on the projected side area and the mass distribution of the airplane, was formulated. The British results were analyzed and applied to American designs. A simpler design criterion, based solely on the type and the dimensions of the tail, was developed; it is useful in a rapid estimation of whether a new design is likely to comply with the minimum requirements for safety in spinning.
Individual differences in metacontrast masking regarding sensitivity and response bias.
Albrecht, Thorsten; Mattler, Uwe
2012-09-01
In metacontrast masking target visibility is modulated by the time until a masking stimulus appears. The effect of this temporal delay differs across participants in such a way that individual human observers' performance shows distinguishable types of masking functions which remain largely unchanged for months. Here we examined whether individual differences in masking functions depend on different response criteria in addition to differences in discrimination sensitivity. To this end we reanalyzed previously published data and conducted a new experiment for further data analyses. Our analyses demonstrate that a distinction of masking functions based on the type of masking stimulus is superior to a distinction based on the target-mask congruency. Individually different masking functions are based on individual differences in discrimination sensitivities and in response criteria. Results suggest that individual differences in metacontrast masking result from individually different criterion contents. Copyright © 2012 Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Albeverio, Sergio; Chen Kai; Fei Shaoming
A necessary separability criterion that relates the structures of the total density matrix and its reductions is given. The method used is based on the realignment method [K. Chen and L. A. Wu, Quant. Inf. Comput. 3, 193 (2003)]. The separability criterion naturally generalizes the reduction separability criterion introduced independently in previous work [M. Horodecki and P. Horodecki, Phys. Rev. A 59, 4206 (1999); N. J. Cerf, C. Adami, and R. M. Gingrich, Phys. Rev. A 60, 898 (1999)]. In special cases, it recovers the previous reduction criterion and the recent generalized partial transposition criterion [K. Chen and L. A. Wu, Phys. Lett. A 306, 14 (2002)]. The criterion involves only simple matrix manipulations and can therefore be easily applied.
Putting the Biological Species Concept to the Test: Using Mating Networks to Delimit Species
Lagache, Lélia; Leger, Jean-Benoist; Daudin, Jean-Jacques; Petit, Rémy J.; Vacher, Corinne
2013-01-01
Although interfertility is the key criterion upon which Mayr’s biological species concept is based, it has never been applied directly to delimit species under natural conditions. Our study fills this gap. We used the interfertility criterion to delimit two closely related oak species in a forest stand by analyzing the network of natural mating events between individuals. The results reveal two groups of interfertile individuals connected by only a few mating events. These two groups were largely congruent with those determined using other criteria (morphological similarity, genotypic similarity and individual relatedness). Our study, therefore, shows that the analysis of mating networks is an effective method to delimit species based on the interfertility criterion, provided that adequate network data can be assembled. Our study also shows that although species boundaries are highly congruent across methods of species delimitation, they are not exactly the same. Most of the differences stem from assignment of individuals to an intermediate category. The discrepancies between methods may reflect a biological reality. Indeed, the interfertility criterion is an environment-dependent criterion, as species abundances typically affect rates of hybridization under natural conditions. Thus, the methods of species delimitation based on the interfertility criterion are expected to give results slightly different from those based on environment-independent criteria (such as the genotypic similarity criteria). However, whatever the criterion chosen, the challenge we face when delimiting species is to summarize continuous but non-uniform variations in biological diversity. The grade of membership model that we use in this study appears to be an appropriate tool. PMID:23818990
Effgen, Susan K; McCoy, Sarah Westcott; Chiarello, Lisa A; Jeffries, Lynn M; Starnes, Catherine; Bush, Heather M
2016-01-01
To describe School Function Assessment (SFA) outcomes after 6 months of school-based physical therapy and the effects of age and gross motor function on outcomes. Within 28 states, 109 physical therapists and 296 of their students with disabilities, ages 5 to 12 years, participated. After training, therapists completed 10 SFA scales on students near the beginning and end of the school year. Criterion scores for many students remained stable (46%-59%) or improved (37%-51%) with the most students improving in Participation and Maintaining/Changing Positions. Students aged 5 to 7 years showed greater change than 8- to 12-year-olds on 5 scales. Students with higher gross motor function (Gross Motor Function Classification System levels I vs IV/V and II/III vs IV/V) showed greater change on 9 scales. Positive SFA change was recorded in students receiving school-based physical therapy; however, the SFA is less sensitive for older students and those with lower functional movement.
Signal detection with criterion noise: applications to recognition memory.
Benjamin, Aaron S; Diaz, Michael; Wee, Serena
2009-01-01
A tacit but fundamental assumption of the theory of signal detection is that criterion placement is a noise-free process. This article challenges that assumption on theoretical and empirical grounds and presents the noisy decision theory of signal detection (ND-TSD). Generalized equations for the isosensitivity function and for measures of discrimination incorporating criterion variability are derived, and the model's relationship with extant models of decision making in discrimination tasks is examined. An experiment evaluating recognition memory for ensembles of word stimuli revealed that criterion noise is not trivial in magnitude and contributes substantially to variance in the slope of the isosensitivity function. The authors discuss how ND-TSD can help explain a number of current and historical puzzles in recognition memory, including the inconsistent relationship between manipulations of learning and the isosensitivity function's slope, the lack of invariance of the slope with manipulations of bias or payoffs, the effects of aging on the decision-making process in recognition, and the nature of responding in remember-know decision tasks. ND-TSD poses novel, theoretically meaningful constraints on theories of recognition and decision making more generally, and provides a mechanism for rapprochement between theories of decision making that employ deterministic response rules and those that postulate probabilistic response rules.
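The central claim, that trial-to-trial criterion noise shrinks the sensitivity recovered under the usual noise-free assumption, can be checked with a small simulation. This is a minimal equal-variance sketch with invented parameters, not the full ND-TSD model.

```python
import random
from statistics import NormalDist

def estimated_dprime(d, c_sd, trials=200_000, seed=1):
    """Yes/no detection with criterion noise: signal ~ N(d, 1), noise ~
    N(0, 1), and the criterion is drawn each trial from N(d/2, c_sd).
    Returns d' recovered from hit and false-alarm rates as if the
    criterion were fixed."""
    rng = random.Random(seed)
    hits = fas = 0
    for _ in range(trials):
        c = rng.gauss(d / 2.0, c_sd)
        hits += rng.gauss(d, 1.0) > c
        fas += rng.gauss(0.0, 1.0) > c
    z = NormalDist().inv_cdf
    return z(hits / trials) - z(fas / trials)

clean = estimated_dprime(d=1.5, c_sd=0.0)   # recovers ~1.5
noisy = estimated_dprime(d=1.5, c_sd=1.0)   # shrinks toward d/sqrt(1+sd^2)
```

Analytically the recovered value is d/sqrt(1 + c_sd^2), so criterion variance masquerades as lower sensitivity, which is one reason ND-TSD matters for interpreting isosensitivity slopes.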
On making laboratory report work more meaningful through criterion-based evaluation.
Naeraa, N
1987-05-01
The purpose of this work was to encourage students to base their laboratory report work on guidelines reflecting a quality criterion set, previously derived from the functional role of the various sections in scientific papers. The materials were developed by a trial-and-error approach and comprise learning objectives, a parallel structure of manual and reports, general and specific report guidelines and a new common starting experiment. The principal contents are presented, followed by an account of the author's experience with them. Most of the author's students now follow the guidelines. Their conclusions are affected by difficulties in adjusting expected results with due regard to the specific conditions of the experimental subject or to their own deviations from the experimental or analytical procedures prescribed in the manual. Also, problems in interpreting data unbiased by explicit expectations are evident, although a clear distinction between expected and actual results has been helpful for them in seeing the relationship between experiments and textbook contents more clearly, and thus in understanding the hypothetico-deductive approach.
Qin, Zong; Ji, Chuangang; Wang, Kai; Liu, Sheng
2012-10-08
In this paper, the conditions for uniform lighting generated by a light-emitting diode (LED) array were systematically studied. To take the human visual system into consideration, the contrast sensitivity function (CSF) was adopted as the criterion for uniform lighting in place of the conventionally used Sparrow's criterion (SC). Through the CSF method, design parameters including system thickness, LED pitch, the LEDs' spatial radiation distribution and the viewing condition can be combined analytically. For a specific LED array lighting system (LALS) with a foursquare LED arrangement, different types of LEDs (Lambertian and batwing) and a given viewing condition, optimum system thicknesses and LED pitches were calculated and compared with those obtained through the SC method. Results show that the CSF method achieves more appropriate optimum parameters than the SC method. Additionally, an abnormal phenomenon, in which uniformity varies non-monotonically with the structural parameters of an LALS with non-Lambertian LEDs, was found and analyzed. Based on the analysis, a design method for LALS that yields better practicability, lower cost and a more attractive appearance is summarized.
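The Sparrow-type flatness condition against which the CSF method is compared can be reproduced numerically for the simplest case of two Lambertian LEDs. This sketch includes the obliquity factor, under which setting the central curvature of the illuminance to zero gives the spacing d = 2h/sqrt(m+4); published variants that omit the obliquity factor give d = 2h/sqrt(m+3), so the exact constant depends on the illuminance model assumed.

```python
def illuminance(x, d, h, m):
    """Illuminance at lateral position x from two Lambertian-type LEDs
    of order m placed at x = -d/2 and x = +d/2, mounted at height h:
    E ~ h**(m+1) / (h^2 + (x - xi)^2)**((m+3)/2)."""
    E = 0.0
    for xi in (-d / 2.0, d / 2.0):
        r2 = h * h + (x - xi) ** 2
        E += h ** (m + 1) / r2 ** ((m + 3) / 2.0)
    return E

def sparrow_spacing(h, m, lo=0.01, hi=5.0, tol=1e-6, s=0.01):
    """Bisect for the spacing at which the central curvature of the
    illuminance vanishes (the classical flatness condition)."""
    def curv(d):
        return (illuminance(s, d, h, m) - 2.0 * illuminance(0.0, d, h, m)
                + illuminance(-s, d, h, m))
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if curv(mid) < 0:            # still dome-shaped: widen the pitch
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

# With the obliquity factor, the zero-curvature spacing is 2h/sqrt(m+4).
d_num = sparrow_spacing(h=1.0, m=1)
d_ref = 2.0 / (1 + 4) ** 0.5          # ~0.894 for m = 1, h = 1
```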
Complex motion measurement using genetic algorithm
NASA Astrophysics Data System (ADS)
Shen, Jianjun; Tu, Dan; Shen, Zhenkang
1997-12-01
Genetic algorithms (GAs) are an optimization technique that provides an untraditional approach to many nonlinear, complicated problems. The idea of motion measurement using a genetic algorithm arises from the fact that motion measurement is essentially an optimization process driven by some criterion. In this paper, we propose a complex motion measurement method using a genetic algorithm with a block-matching criterion. Three problems are discussed and solved: (1) an adaptive method is applied to modify the GA's control parameters, which are critical to its performance, together with an elitism strategy; (2) an evaluation function for motion measurement is derived based on the block-matching technique; (3) a hill-climbing (HC) method is employed in a hybrid fashion to assist the GA's search for the global optimum. Some related problems are also discussed. Experiments using six motion parameters show that the GA performs well, finding the object motion accurately and rapidly.
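A toy version of the GA-plus-hill-climbing search can be sketched for pure translation with a sum-of-absolute-differences matching criterion. The synthetic frame, population size and mutation rate are invented, and the adaptive parameter control described in the paper is omitted; only elitism and the hybrid hill-climb survive the simplification.

```python
import random

random.seed(7)

# Synthetic 64x64 frame; a 16x16 block is taken at displacement (5, -3).
W = 64
frame = [[(x * 7 + y * 13 + (x * y) % 11) % 256 for x in range(W)]
         for y in range(W)]
true_dx, true_dy = 5, -3
bx, by, B, R = 24, 30, 16, 8          # block origin, size, search range

block = [[frame[by + true_dy + j][bx + true_dx + i] for i in range(B)]
         for j in range(B)]

def sad(dx, dy):
    """Sum of absolute differences: the block-matching criterion."""
    return sum(abs(block[j][i] - frame[by + dy + j][bx + dx + i])
               for j in range(B) for i in range(B))

def fitness(ind):
    return -sad(*ind)

# Tiny GA: truncation selection, uniform crossover, mutation, elitism.
pop = [(random.randint(-R, R), random.randint(-R, R)) for _ in range(20)]
for _ in range(30):
    pop.sort(key=fitness, reverse=True)
    nxt = pop[:2]                     # elitism: keep the two best
    while len(nxt) < 20:
        p1, p2 = random.sample(pop[:10], 2)
        child = [random.choice(g) for g in zip(p1, p2)]
        if random.random() < 0.3:     # small mutation, clamped to range
            k = random.randrange(2)
            child[k] = max(-R, min(R, child[k] + random.choice((-1, 1))))
        nxt.append(tuple(child))
    pop = nxt

# Hybrid step: hill-climb from the best individual found by the GA.
best = max(pop, key=fitness)
improved = True
while improved:
    improved = False
    for nb in [(best[0] + u, best[1] + v)
               for u in (-1, 0, 1) for v in (-1, 0, 1)]:
        if abs(nb[0]) <= R and abs(nb[1]) <= R and fitness(nb) > fitness(best):
            best, improved = nb, True
```

The true displacement has SAD zero, so the closer the hybrid search gets, the better; the hill-climb can only improve on the GA's best individual.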
Ayral, Thomas; Vučičević, Jaksa; Parcollet, Olivier
2017-10-20
We present an embedded-cluster method, based on the triply irreducible local expansion formalism. It turns the Fierz ambiguity, inherent to approaches based on a bosonic decoupling of local fermionic interactions, into a convergence criterion. It is based on the approximation of the three-leg vertex by a coarse-grained vertex computed from a self-consistently determined cluster impurity model. The computed self-energies are, by construction, continuous functions of momentum. We show that, in three interaction and doping regimes of the two-dimensional Hubbard model, self-energies obtained with clusters of size four only are very close to numerically exact benchmark results. We show that the Fierz parameter, which parametrizes the freedom in the Hubbard-Stratonovich decoupling, can be used as a quality control parameter. By contrast, the GW+extended dynamical mean field theory approximation with four cluster sites is shown to yield good results only in the weak-coupling regime and for a particular decoupling. Finally, we show that the vertex has spatially nonlocal components only at low Matsubara frequencies.
Meeting the criteria of a nursing diagnosis classification: Evaluation of ICNP, ICF, NANDA and ZEFP.
Müller-Staub, Maria; Lavin, Mary Ann; Needham, Ian; van Achterberg, Theo
2007-07-01
Few studies have described nursing diagnosis classification criteria and how classifications meet these criteria. The purpose was to identify criteria for nursing diagnosis classifications and to assess how these criteria are met by different classifications. First, a literature review was conducted (N=50) to identify criteria for nursing diagnoses classifications and to evaluate how these criteria are met by the International Classification of Nursing Practice (ICNP), the International Classification of Functioning, Disability and Health (ICF), the International Nursing Diagnoses Classification (NANDA), and the Nursing Diagnostic System of the Centre for Nursing Development and Research (ZEFP). Using general and specific criteria from the literature review, the principal investigator evaluated each classification, applying a matrix. Second, a convenience sample of 20 nursing experts from different Swiss care institutions answered standardized interview forms, querying the current national and international state and use of classifications. The first general criterion is that a diagnosis classification should describe the knowledge base and subject matter for which the nursing profession is responsible. ICNP and NANDA meet this goal. The second general criterion is that each class fits within a central concept. The ICF and NANDA are the only two classifications built on conceptually driven classes. The third general classification criterion is that each diagnosis possesses a description, diagnostic criteria, and related etiologies. Although ICF and ICNP describe diagnostic terms, only NANDA fulfils this criterion. The analysis indicated that NANDA fulfilled most of the specific classification criteria in the matrix. The nursing experts considered NANDA to be the best-researched and most widely implemented classification in Switzerland and internationally.
The international literature and the opinion of Swiss expert nurses indicate that, from the perspective of classifying comprehensive nursing diagnoses, NANDA should be recommended for nursing practice and electronic nursing documentation. Study limitations and future research needs are discussed.
Estimation of a Stopping Criterion for Geophysical Granular Flows Based on Numerical Experimentation
NASA Astrophysics Data System (ADS)
Yu, B.; Dalbey, K.; Bursik, M.; Patra, A.; Pitman, E. B.
2004-12-01
Inundation area may be the most important factor for mitigation of natural hazards related to avalanches, debris flows, landslides and pyroclastic flows. Run-out distance is the key parameter for inundation because the front deposits define the leading edge of inundation. To define the run-out distance, it is necessary to know when a flow stops. Numerical experiments are presented for determining a stopping criterion and exploring the suitability of a Savage-Hutter granular model for computing inundation areas of granular flows. The TITAN2D model was employed to run numerical experiments based on the Savage-Hutter theory. A potentially reasonable stopping criterion was found as a function of dimensionless average velocity, aspect ratio of pile, internal friction angle, bed friction angle and bed slope in the flow direction. Slumping piles on a horizontal surface and geophysical flows over complex topography were simulated. Several mountainous areas, including Colima volcano (MX), Casita (Nic.), Little Tahoma Peak (WA, USA) and the San Bernardino Mountains (CA, USA) were used to simulate geophysical flows. Volcanic block and ash flows, debris avalanches and debris flows occurred in these areas and caused varying degrees of damage. The areas have complex topography, including locally steep open slopes, sinuous channels, and combinations of these. With different topography and physical scaling, slumping piles and geophysical flows have a somewhat different dependence of dimensionless stopping velocity on power-law constants associated with aspect ratio of pile, internal friction angle, bed friction angle and bed slope in the flow direction. Visual comparison of the details of the inundation area obtained from the TITAN2D model with models that contain some form of viscous dissipation point out weaknesses in the model that are not evident by investigation of the stopping criterion alone.
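A one-dimensional analogue of a velocity-based stopping criterion can be sketched with a Coulomb sliding block. This is a toy model with invented parameters, not the Savage-Hutter equations or TITAN2D, but it shows how a velocity threshold determines runout.

```python
import math

def runout(v0, slope_deg, mu, dt=1e-4, v_stop=1e-3, g=9.81):
    """Slide a Coulomb-friction block on a uniform slope until its
    velocity falls below the stopping threshold v_stop; returns the
    runout distance."""
    th = math.radians(slope_deg)
    a = g * (math.sin(th) - mu * math.cos(th))   # net acceleration (< 0 here)
    v, x = v0, 0.0
    while v > v_stop:
        v += a * dt
        x += v * dt
    return x

# Friction dominates on this slope, so the block stops; the runout
# should match the analytic x = v0^2 / (2 g (mu*cos(th) - sin(th))).
x_num = runout(v0=10.0, slope_deg=10.0, mu=0.45)
```

Because the bed friction angle exceeds the slope angle, the net acceleration is negative and the flow front decelerates to rest, exactly the regime a stopping criterion must detect.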
Tseng, Yi-Ju; Wu, Jung-Hsuan; Ping, Xiao-Ou; Lin, Hui-Chi; Chen, Ying-Yu; Shang, Rung-Ji; Chen, Ming-Yuan; Lai, Feipei
2012-01-01
Background The emergence and spread of multidrug-resistant organisms (MDROs) are causing a global crisis. Combating antimicrobial resistance requires prevention of transmission of resistant organisms and improved use of antimicrobials. Objectives To develop a Web-based information system for automatic integration, analysis, and interpretation of the antimicrobial susceptibility of all clinical isolates that incorporates rule-based classification and cluster analysis of MDROs and implements control chart analysis to facilitate outbreak detection. Methods Electronic microbiological data from a 2200-bed teaching hospital in Taiwan were classified according to predefined criteria of MDROs. The numbers of organisms, patients, and incident patients in each MDRO pattern were presented graphically to describe spatial and time information in a Web-based user interface. Hierarchical clustering with 7 upper control limits (UCL) was used to detect suspicious outbreaks. The system’s performance in outbreak detection was evaluated based on vancomycin-resistant enterococcal outbreaks determined by a hospital-wide prospective active surveillance database compiled by infection control personnel. Results The optimal UCL for MDRO outbreak detection was the upper 90% confidence interval (CI) using germ criterion with clustering (area under ROC curve (AUC) 0.93, 95% CI 0.91 to 0.95), upper 85% CI using patient criterion (AUC 0.87, 95% CI 0.80 to 0.93), and one standard deviation using incident patient criterion (AUC 0.84, 95% CI 0.75 to 0.92). The performance indicators of each UCL were statistically significantly higher with clustering than those without clustering in germ criterion (P < .001), patient criterion (P = .04), and incident patient criterion (P < .001). Conclusion This system automatically identifies MDROs and accurately detects suspicious outbreaks of MDROs based on the antimicrobial susceptibility of all clinical isolates. PMID:23195868
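The control chart idea, flagging a week whose isolate count exceeds an upper control limit derived from a baseline period, can be sketched as follows. The counts and the mean-plus-z-sd limit are illustrative; the study compares seven UCL definitions, of which this is only one.

```python
from statistics import mean, stdev

def outbreak_weeks(counts, baseline_weeks=8, z=1.645):
    """Flag weeks whose count exceeds an upper control limit computed
    from a baseline period as mean + z * sd (z = 1.645 is a one-sided
    ~95% limit; one of several possible UCL choices)."""
    base = counts[:baseline_weeks]
    ucl = mean(base) + z * stdev(base)
    return [week for week, c in enumerate(counts)
            if week >= baseline_weeks and c > ucl]

weekly = [2, 3, 1, 2, 4, 2, 3, 2, 3, 9, 10, 2]   # invented weekly isolates
flagged = outbreak_weeks(weekly)
```

The two elevated weeks exceed the limit and would trigger review by infection control personnel, while ordinary fluctuation does not.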
On computing Gröbner bases in rings of differential operators
NASA Astrophysics Data System (ADS)
Ma, Xiaodong; Sun, Yao; Wang, Dingkang
2011-05-01
Insa and Pauer presented a basic theory of Gröbner bases for differential operators with coefficients in a commutative ring in 1998, and proposed a criterion to determine whether a set of differential operators is a Gröbner basis. In this paper, we give a new criterion from which Insa and Pauer's criterion follows as a special case, and with which Gröbner bases can be computed more efficiently.
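The role of a Gröbner basis as an ideal-membership test can be illustrated with SymPy in the ordinary commutative polynomial setting. This only illustrates the general idea; the paper concerns rings of differential operators, which SymPy's `groebner` does not handle, and SymPy availability is assumed.

```python
from sympy import symbols, groebner, reduced

x, y = symbols('x y')
F = [x**2 + y, x*y - 1]

# Groebner basis of the ideal <F> under lexicographic order.
gb = groebner(F, x, y, order='lex')

# Ideal membership becomes decidable: any element of <F> leaves a zero
# remainder on division by the basis.
f = y * F[0] + x * F[1]              # an explicit element of the ideal
_, remainder = reduced(f, gb.exprs, x, y, order='lex')
```

Division by an arbitrary generating set does not have this property; the zero-remainder test is exactly what a Gröbner basis buys, and criteria like Insa and Pauer's decide when a given set already is one.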
Jaman, Ajmery; Latif, Mahbub A H M; Bari, Wasimul; Wahed, Abdus S
2016-05-20
In generalized estimating equations (GEE), the correlation between the repeated observations on a subject is specified with a working correlation matrix. Correct specification of the working correlation structure ensures efficient estimators of the regression coefficients. Among the criteria used in practice for selecting a working correlation structure, the Rotnitzky-Jewell criterion, the Quasi Information Criterion (QIC) and the Correlation Information Criterion (CIC) are based on the fact that if the assumed working correlation structure is correct then the model-based (naive) and the sandwich (robust) covariance estimators of the regression coefficient estimators should be close to each other. The sandwich covariance estimator, used in defining the Rotnitzky-Jewell, QIC and CIC criteria, is biased downward and has a larger variability than the corresponding model-based covariance estimator. Motivated by this fact, a new criterion is proposed in this paper based on the bias-corrected sandwich covariance estimator for selecting an appropriate working correlation structure in GEE. A comparison of the proposed and the competing criteria is shown using simulation studies with correlated binary responses. The results revealed that the proposed criterion generally performs better than the competing criteria. An example of selecting the appropriate working correlation structure has also been shown using data from the Madras Schizophrenia Study. Copyright © 2015 John Wiley & Sons, Ltd.
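The idea behind these criteria, that the model-based and sandwich covariance estimators agree when the working structure is right, can be sketched with a trace discrepancy. The matrices below are invented, and this is a schematic of the principle rather than the exact CIC or the proposed bias-corrected criterion.

```python
import numpy as np

def cov_agreement(naive, robust):
    """trace(naive^{-1} robust) equals p (the number of coefficients)
    when the model-based and sandwich covariance estimators agree; the
    deviation from p is the kind of discrepancy CIC-type criteria
    penalize."""
    p = naive.shape[0]
    return abs(np.trace(np.linalg.solve(naive, robust)) - p)

robust = np.array([[0.25, 0.05], [0.05, 0.16]])   # sandwich estimate
good   = np.array([[0.24, 0.05], [0.05, 0.17]])   # nearly matching naive
bad    = np.array([[0.10, 0.00], [0.00, 0.40]])   # badly mismatched naive
choice = min([("good", good), ("bad", bad)],
             key=lambda s: cov_agreement(s[1], robust))[0]
```

The structure whose naive covariance tracks the robust one wins, which is the selection logic the paper refines by first bias-correcting the sandwich estimator.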
Criterion-Referenced Testing and Measurement: A Review of Technical Issues and Developments.
ERIC Educational Resources Information Center
Hambleton, Ronald K.; And Others
The success of objectives-based programs depends to a considerable extent on how effectively students and teachers assess mastery of objectives and make decisions for future instruction. While educators disagree on the usefulness of criterion-referenced tests the position taken in this monograph is that criterion-referenced tests are useful, and…
Jou, Jerwen; Escamilla, Eric E; Arredondo, Mario L; Pena, Liann; Zuniga, Richard; Perez, Martin; Garcia, Clarissa
2018-02-01
How much of the Deese-Roediger-McDermott (DRM) false memory is attributable to decision criterion is so far a controversial issue. Previous studies typically used explicit warnings against accepting the critical lure to investigate this issue. The assumption is that if the false memory results from using a liberally biased criterion, it should be greatly reduced or eliminated by an explicit warning against accepting the critical lure. Results showed that warning was generally ineffective. We asked the question of whether subjects can substantially reduce false recognition without being warned when the test forces them to make a distinction between true and false memories. Using a two-alternative forced choice in which criterion plays a relatively smaller role, we showed that subjects could indeed greatly reduce the rate of false recognition. However, when the forced-choice restriction was removed from the two-item choice test, the rate of false recognition rebounded to that of the hit for studied list words, indicating the role of criterion in false recognition.
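The criterion-free character of two-alternative forced choice can be demonstrated with a strength-theory simulation. The parameters are illustrative; lures are given elevated familiarity, as DRM critical lures are, and the yes/no criterion is deliberately liberal.

```python
import random

rng = random.Random(3)
N = 20_000

old  = [rng.gauss(1.0, 1) for _ in range(N)]   # studied list words
lure = [rng.gauss(0.6, 1) for _ in range(N)]   # critical lures: elevated familiarity
new  = [rng.gauss(0.0, 1) for _ in range(N)]   # unrelated new words

# Yes/no with a liberal criterion: lures are frequently accepted.
c = 0.3
false_yes_lure = sum(s > c for s in lure) / N
false_yes_new  = sum(s > c for s in new) / N

# Forced choice between the studied word and the lure: the criterion
# drops out; only the comparison of the two strengths matters.
correct_2afc = sum(o > l for o, l in zip(old, lure)) / N
```

In the yes/no test the criterion placement drives the lure false-alarm rate, whereas the forced-choice decision depends only on which strength is larger, mirroring why the forced-choice manipulation reduces false recognition where warnings do not.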
Nash equilibrium and multi criterion aerodynamic optimization
NASA Astrophysics Data System (ADS)
Tang, Zhili; Zhang, Lianhe
2016-06-01
Game theory, and in particular its Nash Equilibrium (NE) concept, has gained importance in solving Multi Criterion Optimization (MCO) problems in engineering over the past decade. The solution of an MCO problem can be viewed as an NE under the concept of competitive games. This paper surveys and proposes four efficient algorithms for calculating an NE of an MCO problem. Existence and equivalence of the solution are analyzed and proved based on a fixed point theorem. A specific virtual symmetric Nash game is also presented to set up an optimization strategy for single-objective optimization problems. Two numerical examples are presented to verify the proposed algorithms: the optimization of mathematical test functions, to illustrate the detailed numerical procedures, and aerodynamic drag reduction of a civil transport wing-fuselage configuration using the virtual game. The successful application validates the efficiency of the algorithms in solving complex aerodynamic optimization problems.
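Iterated best response, the simplest fixed-point scheme for computing an NE, can be sketched for a two-player quadratic game with a known equilibrium. The game itself is invented for illustration and is far simpler than an aerodynamic MCO problem.

```python
def nash_by_best_response(a1, a2, c, iters=100):
    """Iterated best response for a two-player game in which player i
    minimizes (x_i - a_i)**2 + c*x1*x2 over its own variable x_i.  The
    best response is x_i = a_i - c*x_j/2; the iteration contracts
    whenever |c| < 2."""
    x1 = x2 = 0.0
    for _ in range(iters):
        x1 = a1 - c * x2 / 2.0
        x2 = a2 - c * x1 / 2.0
    return x1, x2

# For a1=1, a2=2, c=1 the unique Nash equilibrium is (0, 2).
x1, x2 = nash_by_best_response(1.0, 2.0, 1.0)
```

At the fixed point neither player can improve unilaterally, which is precisely the NE condition; the contraction argument is the elementary cousin of the fixed-point theorem the paper invokes.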
Evaluation of Regression Models of Balance Calibration Data Using an Empirical Criterion
NASA Technical Reports Server (NTRS)
Ulbrich, Norbert; Volden, Thomas R.
2012-01-01
An empirical criterion for assessing the significance of individual terms of regression models of wind tunnel strain gage balance outputs is evaluated. The criterion is based on the percent contribution of a regression model term. It considers a term to be significant if its percent contribution exceeds the empirical threshold of 0.05%. The criterion has the advantage that it can easily be computed using the regression coefficients of the gage outputs and the load capacities of the balance. First, a definition of the empirical criterion is provided. Then, it is compared with an alternate statistical criterion that is widely used in regression analysis. Finally, calibration data sets from a variety of balances are used to illustrate the connection between the empirical and the statistical criterion. A review of these results indicated that the empirical criterion seems to be suitable for a crude assessment of the significance of a regression model term as the boundary between a significant and an insignificant term cannot be defined very well. Therefore, regression model term reduction should only be performed by using the more universally applicable statistical criterion.
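The percent-contribution screen can be sketched directly. The normalization below, each term's share of the summed absolute contributions at load capacity, is one plausible reading for illustration, not necessarily the paper's exact definition.

```python
def significant_terms(coeffs, term_values_at_capacity, threshold=0.05):
    """Screen regression-model terms by percent contribution: each
    term's coefficient times its value at load capacity, expressed as a
    share of the summed absolute contributions.  Terms below
    `threshold` percent are candidates for removal."""
    contrib = [c * v for c, v in zip(coeffs, term_values_at_capacity)]
    total = sum(abs(t) for t in contrib)
    pct = [100.0 * abs(t) / total for t in contrib]
    keep = [i for i, p in enumerate(pct) if p > threshold]
    return keep, pct

# Toy gage-output model: dominant linear term, negligible cross term,
# modest quadratic term.
keep, pct = significant_terms([2.0, 1e-5, 0.5], [1.0, 1.0, 1.0])
```

The negligible cross term falls under the 0.05% threshold and is dropped, illustrating why the abstract calls this only a crude screen compared with a statistical test.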
A fast and efficient segmentation scheme for cell microscopic image.
Lebrun, G; Charrier, C; Lezoray, O; Meurie, C; Cardot, H
2007-04-27
Microscopic cellular image segmentation schemes must be efficient for reliable analysis and fast enough to process huge quantities of images. Recent studies have focused on improving segmentation quality. Several segmentation schemes achieve good quality, but their processing time is too expensive to deal with a great number of images per day. For segmentation schemes based on pixel classification, the classifier design is crucial, since it accounts for most of the processing time necessary to segment an image. The main contribution of this work concerns how to reduce the complexity of decision functions produced by support vector machines (SVM) while preserving the recognition rate. Vector quantization is used to reduce the inherent redundancy present in huge pixel databases (i.e. images with expert pixel segmentation). Hybrid color space design is also used to improve both the data set size reduction rate and the recognition rate. A new decision function quality criterion is defined to select a good trade-off between the recognition rate and the processing time of the pixel decision function. The first results of this study show that fast and efficient pixel classification with SVM is possible. Moreover, posterior class pixel probabilities are easy to estimate with Platt's method. A new segmentation scheme using probabilistic pixel classification has therefore been developed. This scheme has several free parameters whose automatic selection must be dealt with, but existing criteria for evaluating segmentation quality are not well adapted to cell segmentation, especially when comparison with an expert pixel segmentation must be achieved. Another important contribution of this paper is therefore the definition of a new quality criterion for the evaluation of cell segmentation. The results presented here show that selecting the free parameters of the segmentation scheme by optimisation of this new cell segmentation quality criterion produces efficient cell segmentation.
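Platt's method mentioned above fits a sigmoid to the classifier's decision scores to obtain posterior probabilities. A minimal sketch, assuming plain gradient descent on the negative log-likelihood and omitting Platt's target regularization; the scores and labels are hypothetical:

```python
import math

# Hedged sketch of Platt-style probability calibration: fit a sigmoid
# P(y=1 | s) = 1 / (1 + exp(A*s + B)) to decision scores s by gradient
# descent on the negative log-likelihood. Platt's original method also
# regularizes the 0/1 targets; that refinement is omitted here.

def fit_platt(scores, labels, lr=0.01, iters=5000):
    A, B = 0.0, 0.0
    for _ in range(iters):
        gA = gB = 0.0
        for s, y in zip(scores, labels):
            p = 1.0 / (1.0 + math.exp(A * s + B))
            # gradients of the negative log-likelihood for y in {0, 1}
            gA += (y - p) * s
            gB += (y - p)
        A -= lr * gA
        B -= lr * gB
    return A, B

def platt_prob(s, A, B):
    return 1.0 / (1.0 + math.exp(A * s + B))

# Hypothetical SVM decision scores: positive scores mostly class 1.
scores = [-2.0, -1.0, -0.5, 0.5, 1.0, 2.0]
labels = [0, 0, 0, 1, 1, 1]
A, B = fit_platt(scores, labels)
```

After fitting, `platt_prob` maps raw SVM outputs to calibrated posterior probabilities, which is what the probabilistic segmentation scheme consumes.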
Aggen, S. H.; Neale, M. C.; Røysamb, E.; Reichborn-Kjennerud, T.; Kendler, K. S.
2009-01-01
Background: Despite its importance as a paradigmatic personality disorder, little is known about the measurement invariance of the DSM-IV borderline personality disorder (BPD) criteria; that is, whether the criteria assess the disorder equivalently across different groups. Method: BPD criteria were evaluated at interview in 2794 young adult Norwegian twins. Analyses, based on item-response modeling, were conducted to test for differential age and sex moderation of the individual BPD criteria characteristics given factor-level covariate effects. Results: Confirmatory factor analytic results supported a unidimensional structure for the nine BPD criteria. Compared to males, females had a higher BPD factor mean and a larger factor variance, and there was a significant age by sex interaction on the factor mean. Strong differential sex and age by sex interaction effects were found for the ‘impulsivity’ criterion factor loading and threshold. Impulsivity related to the BPD factor poorly in young females but improved significantly in older females. Males reported more impulsivity compared to females and this difference increased with age. The ‘affective instability’ threshold was also moderated, with males reporting less than expected. Conclusions: The results suggest the DSM-IV BPD ‘impulsivity’ and ‘affective instability’ criteria function differentially with respect to age and sex, with impulsivity being especially problematic. If verified, these findings have important implications for the interpretation of prior research with these criteria. These non-invariant age and sex effects may be identifying criteria-level expression features relevant to BPD nosology and etiology. Criterion functioning assessed using modern psychometric methods should be considered in the development of DSM-V. PMID:19400977
Dueñas, María; Mendonça, Liliane; Sampaio, Rute; Gouvinhas, Cláudia; Oliveira, Daniela; Castro-Lopes, José Manuel; Azevedo, Luís Filipe
2017-03-01
The Bowel Function Index (BFI) is a simple and sound bowel function and opioid-induced constipation (OIC) screening tool. We aimed to develop the translation and cultural adaptation of this measure (BFI-P) and to assess its reliability and validity for the Portuguese language and a chronic pain population. The BFI-P was created after a process including translation, back translation and cultural adaptation. Participants (n = 226) were recruited in a chronic pain clinic and were assessed at baseline and after one week. Internal consistency, test-retest reliability, responsiveness, construct (convergent and known groups) and factorial validity were assessed. Test-retest reliability had an intra-class correlation of 0.605 for BFI mean score. Internal consistency of BFI had Cronbach's alpha of 0.865. The construct validity of BFI-P was shown to be excellent and the exploratory factor analysis confirmed its unidimensional structure. The responsiveness of BFI-P was excellent, with a suggested 17-19 point and 8-12 point change in score constituting a clinically relevant change in constipation for patients with and without previous constipation, respectively. This study had some limitations, namely, the criterion validity of BFI-P was not directly assessed; and the absence of a direct criterion for OIC precluded the assessment of the criterion based responsiveness of BFI-P. Nevertheless, BFI may importantly contribute to better OIC screening and its Portuguese version (BFI-P) has been shown to have excellent reliability, internal consistency, validity and responsiveness. Further suggestions regarding statistically and clinically important change cut-offs for this instrument are presented.
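The internal consistency reported above (Cronbach's alpha of 0.865) follows the standard formula alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). A minimal sketch on hypothetical item scores, not the BFI-P data:

```python
# Hedged sketch of Cronbach's alpha, the internal-consistency statistic
# reported for the BFI-P. Rows are respondents, columns are items; the
# scores below are hypothetical.

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)  # sample variance

def cronbach_alpha(rows):
    k = len(rows[0])                       # number of items
    items = list(zip(*rows))               # column-wise item scores
    item_var = sum(variance(list(col)) for col in items)
    total_var = variance([sum(r) for r in rows])  # variance of total scores
    return (k / (k - 1)) * (1 - item_var / total_var)

data = [
    [2, 3, 2],
    [4, 4, 5],
    [3, 3, 3],
    [5, 4, 4],
    [1, 2, 1],
]
alpha = cronbach_alpha(data)  # high alpha: the items covary strongly
```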
Interference coupling analysis based on a hybrid method: application to a radio telescope system
NASA Astrophysics Data System (ADS)
Xu, Qing-Lin; Qiu, Yang; Tian, Jin; Liu, Qi
2018-02-01
Working in a way that passively receives electromagnetic radiation from a celestial body, a radio telescope can be easily disturbed by external radio frequency interference as well as electromagnetic interference generated by electric and electronic components operating at the telescope site. A quantitative analysis of these interferences must be taken into account carefully for further electromagnetic protection of the radio telescope. In this paper, based on electromagnetic topology theory, a hybrid method that combines the Baum-Liu-Tesche (BLT) equation and transfer function is proposed. In this method, the coupling path of the radio telescope is divided into strong coupling and weak coupling sub-paths, and the coupling intensity criterion is proposed by analyzing the conditions in which the BLT equation simplifies to a transfer function. According to the coupling intensity criterion, the topological model of a typical radio telescope system is established. The proposed method is used to solve the interference response of the radio telescope system by analyzing subsystems with different coupling modes separately and then integrating the responses of the subsystems as the response of the entire system. The validity of the proposed method is verified numerically. The results indicate that the proposed method, compared with the direct solving method, reduces the difficulty and improves the efficiency of interference prediction.
Batch Mode Reinforcement Learning based on the Synthesis of Artificial Trajectories
Fonteneau, Raphael; Murphy, Susan A.; Wehenkel, Louis; Ernst, Damien
2013-01-01
In this paper, we consider the batch mode reinforcement learning setting, where the central problem is to learn from a sample of trajectories a policy that satisfies or optimizes a performance criterion. We focus on the continuous state space case for which usual resolution schemes rely on function approximators either to represent the underlying control problem or to represent its value function. As an alternative to the use of function approximators, we rely on the synthesis of “artificial trajectories” from the given sample of trajectories, and show that this idea opens new avenues for designing and analyzing algorithms for batch mode reinforcement learning. PMID:24049244
Stochastic isotropic hyperelastic materials: constitutive calibration and model selection
NASA Astrophysics Data System (ADS)
Mihai, L. Angela; Woolley, Thomas E.; Goriely, Alain
2018-03-01
Biological and synthetic materials often exhibit intrinsic variability in their elastic responses under large strains, owing to microstructural inhomogeneity or when elastic data are extracted from viscoelastic mechanical tests. For these materials, although hyperelastic models calibrated to mean data are useful, stochastic representations accounting also for data dispersion carry extra information about the variability of material properties found in practical applications. We combine finite elasticity and information theories to construct homogeneous isotropic hyperelastic models with random field parameters calibrated to discrete mean values and standard deviations of either the stress-strain function or the nonlinear shear modulus, which is a function of the deformation, estimated from experimental tests. These quantities can take on different values, corresponding to possible outcomes of the experiments. As multiple models can be derived that adequately represent the observed phenomena, we apply Occam's razor by providing an explicit criterion for model selection based on Bayesian statistics. We then employ this criterion to select a model among competing models calibrated to experimental data for rubber and brain tissue under single or multiaxial loads.
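The paper derives an explicit Bayesian criterion for model selection; as a hedged stand-in, the same Occam's-razor logic can be illustrated with the familiar BIC, here comparing a one-parameter and a two-parameter least-squares fit on synthetic data (this is not the authors' criterion):

```python
import math
import random

# Stand-in for Bayesian model selection: compare a line through the
# origin (1 parameter) against a general line (2 parameters) using
# BIC = n*ln(RSS/n) + k*ln(n). The extra parameter must "pay for itself"
# through a sufficiently lower residual sum of squares.

def fit_line_through_origin(xs, ys):
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def bic(rss_value, n, k):
    return n * math.log(rss_value / n) + k * math.log(n)

random.seed(0)
xs = [0.1 * i for i in range(1, 31)]
ys = [2.0 + 3.0 * x + random.gauss(0, 0.1) for x in xs]  # true intercept != 0

a1 = fit_line_through_origin(xs, ys)
rss1 = sum((y - a1 * x) ** 2 for x, y in zip(xs, ys))
a2, b2 = fit_line(xs, ys)
rss2 = sum((y - (a2 + b2 * x)) ** 2 for x, y in zip(xs, ys))

n = len(xs)
bic1, bic2 = bic(rss1, n, 1), bic(rss2, n, 2)
# The two-parameter model wins because the data truly have an intercept.
```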
Design of vibration isolation systems using multiobjective optimization techniques
NASA Technical Reports Server (NTRS)
Rao, S. S.
1984-01-01
The design of vibration isolation systems is considered using multicriteria optimization techniques. The integrated values of the square of the force transmitted to the main mass and the square of the relative displacement between the main mass and the base are taken as the performance indices. The design of a three degrees-of-freedom isolation system with an exponentially decaying type of base disturbance is considered for illustration. Numerical results are obtained using the global criterion, utility function, bounded objective, lexicographic, goal programming, goal attainment and game theory methods. It is found that the game theory approach is superior in finding a better optimum solution with proper balance of the various objective functions.
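Of the methods listed, the global criterion method is the simplest to sketch: minimize the sum of squared relative deviations of each objective from its individual optimum. The toy objectives below are hypothetical stand-ins for the transmitted-force and relative-displacement indices, solved by grid search to stay dependency-free:

```python
# Hedged sketch of the global criterion method on a two-objective toy
# problem. Both objectives and their individual minima are hypothetical.

def f1(x):  # stand-in for the transmitted-force index
    return (x - 1.0) ** 2 + 1.0

def f2(x):  # stand-in for the relative-displacement index
    return (x - 3.0) ** 2 + 2.0

f1_star, f2_star = 1.0, 2.0  # individual minima of f1 and f2

def global_criterion(x):
    # sum of squared relative deviations from the individual optima
    return ((f1(x) - f1_star) / f1_star) ** 2 + \
           ((f2(x) - f2_star) / f2_star) ** 2

xs = [i * 0.001 for i in range(4001)]  # grid on [0, 4]
x_best = min(xs, key=global_criterion)  # compromise between x=1 and x=3
```

The compromise solution lies between the two individual optima, weighted toward the objective whose relative degradation is penalized more.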
Lin, Chi-Ying; Hsu, Bing-Cheng
2018-01-01
Waxing is an important aspect of automobile detailing, aimed at protecting the finish of the car and preventing rust. At present, this delicate work is conducted manually due to the need for iterative adjustments to achieve acceptable quality. This paper presents a robotic waxing system in which surface images are used to evaluate the quality of the finish. An RGB-D camera is used to build a point cloud that details the sheet metal components to enable path planning for a robot manipulator. The robot is equipped with a multi-axis force sensor to measure and control the forces involved in the application and buffing of wax. Images of sheet metal components that were waxed by experienced car detailers were analyzed using image processing algorithms. A Gaussian distribution function and its parameterized values were obtained from the images for use as a performance criterion in evaluating the quality of surfaces prepared by the robotic waxing system. Waxing force and dwell time were optimized using a mathematical model based on the image-based criterion used to measure waxing performance. Experimental results demonstrate the feasibility of the proposed robotic waxing system and image-based performance evaluation scheme. PMID:29757940
Optimization of Multi-Fidelity Computer Experiments via the EQIE Criterion
DOE Office of Scientific and Technical Information (OSTI.GOV)
He, Xu; Tuo, Rui; Jeff Wu, C. F.
2017-01-31
Computer experiments based on mathematical models are powerful tools for understanding physical processes. This article addresses the problem of kriging-based optimization for deterministic computer experiments with tunable accuracy. Our approach is to use multi-fidelity computer experiments with increasing accuracy levels and a nonstationary Gaussian process model. We propose an optimization scheme that sequentially adds new computer runs by following two criteria. The first criterion, called EQI, scores candidate inputs at a given level of accuracy, and the second criterion, called EQIE, scores candidate combinations of inputs and accuracy. From simulation results and a real example using finite element analysis, our method outperforms the expected improvement (EI) criterion, which works for single-accuracy experiments.
Rhodes, Matthew G; Jacoby, Larry L
2007-03-01
The authors examined whether participants can shift their criterion for recognition decisions in response to the probability that an item was previously studied. Participants in 3 experiments were given recognition tests in which the probability that an item was studied was correlated with its location during the test. Results from all 3 experiments indicated that participants' response criteria were sensitive to the probability that an item was previously studied and that shifts in criterion were robust. In addition, awareness of the bases for criterion shifts and feedback on performance were key factors contributing to the observed shifts in decision criteria. These data suggest that decision processes can operate in a dynamic fashion, shifting from item to item.
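Criterion placement in recognition memory is conventionally quantified from hit and false-alarm rates as c = -(z(H) + z(FA))/2, with z the inverse normal CDF; a liberal criterion gives c < 0 and a strict one c > 0. A minimal sketch with hypothetical rates, not the experiments' data:

```python
from statistics import NormalDist

# Hedged sketch of how a signal-detection response criterion is usually
# quantified. A criterion shift between test conditions shows up as a
# change in c; sensitivity d' is included for completeness.

z = NormalDist().inv_cdf

def criterion(hit_rate, fa_rate):
    return -(z(hit_rate) + z(fa_rate)) / 2.0

def d_prime(hit_rate, fa_rate):
    return z(hit_rate) - z(fa_rate)

# Hypothetical rates for a "mostly studied" vs a "mostly new" test block:
c_liberal = criterion(0.90, 0.40)  # high studied probability -> liberal
c_strict = criterion(0.70, 0.10)   # low studied probability -> stricter
```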
Experiment design for pilot identification in compensatory tracking tasks
NASA Technical Reports Server (NTRS)
Wells, W. R.
1976-01-01
A design criterion for input functions in laboratory tracking tasks resulting in efficient parameter estimation is formulated. The criterion is that the statistical correlations between pairs of parameters be reduced in order to minimize the problem of nonuniqueness in the extraction process. The effectiveness of the method is demonstrated for a lower order dynamic system.
ERIC Educational Resources Information Center
Simpson, Mary; Arnold, Brian
1983-01-01
Suggests that failure to learn is often the result of inappropriateness of level of instruction and deficiencies in instructional procedures and educational strategies, and differentiates between the functions of criterion referenced tests and diagnostic tests. Results are reported from two studies of the teaching of osmosis and photosynthesis.…
Linking Health Concepts in the Assessment and Evaluation of Water Distribution Systems
ERIC Educational Resources Information Center
Karney, Bryan W.; Filion, Yves R.
2005-01-01
The concept of health is not only a specific criterion for evaluation of water quality delivered by a distribution system but also a suitable paradigm for overall functioning of the hydraulic and structural components of the system. This article views health, despite its complexities, as the only criterion with suitable depth and breadth to allow…
Vortex identification from local properties of the vorticity field
NASA Astrophysics Data System (ADS)
Elsas, J. H.; Moriconi, L.
2017-01-01
A number of systematic procedures for the identification of vortices/coherent structures have been developed as a way to address their possible kinematical and dynamical roles in structural formulations of turbulence. It has been broadly acknowledged, however, that vortex detection algorithms, usually based on linear-algebraic properties of the velocity gradient tensor, can be plagued with severe shortcomings and may become, in practical terms, dependent on the choice of subjective threshold parameters in their implementations. In two dimensions, a large class of standard vortex identification prescriptions turn out to be equivalent to the "swirling strength criterion" (λci-criterion), which is critically revisited in this work. We classify the instances where the accuracy of the λci-criterion is affected by nonlinear superposition effects and propose an alternative vortex detection scheme based on the local curvature properties of the vorticity graph (x, y, ω)—the "vorticity curvature criterion" (λω-criterion)—which improves over the results obtained with the λci-criterion in controlled Monte Carlo tests. A particularly problematic issue, given its importance in wall-bounded flows, is the eventual inadequacy of the λci-criterion for many-vortex configurations in the presence of strong background shear. We show that the λω-criterion is able to cope with these cases as well, if a subtraction of the mean velocity field background is performed, in the spirit of the Reynolds decomposition procedure. A realistic comparative study for vortex identification is then carried out for a direct numerical simulation of a turbulent channel flow, including a three-dimensional extension of the λω-criterion. In contrast to the λci-criterion, the λω-criterion indicates in a consistent way the existence of small-scale isotropic turbulent fluctuations in the logarithmic layer, in consonance with long-standing assumptions commonly taken in turbulent boundary layer phenomenology.
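In two dimensions the swirling strength is the imaginary part of the complex eigenvalue pair of the velocity gradient tensor, which exists exactly when the tensor's discriminant is negative. A minimal sketch of this λci computation (the λω-criterion itself is not reproduced here):

```python
import math

# Hedged sketch of the 2-D swirling-strength (lambda_ci) criterion: a
# point is inside a vortex when the velocity gradient tensor has complex
# eigenvalues. For a 2x2 tensor [[a, b], [c, d]] the eigenvalues are
# complex exactly when (a + d)**2 - 4*(a*d - b*c) < 0, and lambda_ci is
# the imaginary part of the complex pair.

def swirling_strength(a, b, c, d):
    trace = a + d
    det = a * d - b * c
    disc = trace * trace - 4.0 * det
    if disc >= 0.0:
        return 0.0                   # real eigenvalues: no local swirl
    return math.sqrt(-disc) / 2.0    # imaginary part of the eigenvalue pair

# Solid-body rotation u = (-omega*y, omega*x): gradient [[0, -omega], [omega, 0]]
lam_rot = swirling_strength(0.0, -2.0, 2.0, 0.0)
# Pure shear u = (gamma*y, 0): gradient [[0, gamma], [0, 0]]
lam_shear = swirling_strength(0.0, 1.5, 0.0, 0.0)
```

The rotation case registers a nonzero swirling strength while pure shear registers none, which is precisely the discrimination the criterion is built for; the nonlinear-superposition failure modes discussed in the abstract arise when several such fields overlap.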
Frank, Matthias; Bockholdt, Britta; Peters, Dieter; Lange, Joern; Grossjohann, Rico; Ekkernkamp, Axel; Hinz, Peter
2011-05-20
Blunt ballistic impact trauma is a current research topic due to the widespread use of kinetic energy munitions in law enforcement. In the civilian setting, an automatic dummy launcher has recently been identified as a source of blunt impact trauma. However, there are no data on the injury risk of conventional dummy launchers. The aim of this investigation is to predict potential impact injury to the human head and chest on the basis of the Blunt Criterion, an energy-based blunt trauma model used to assess vulnerability to blunt weapons, projectile impacts and behind-armor exposures. Based on experimentally investigated kinetic parameters, the injury risk of two commercially available gundog retrieval devices (Waidwerk Telebock, Germany; Turner Richards, United Kingdom) was assessed using the Blunt Criterion trauma model for blunt ballistic impact trauma to the head and chest. For chest impact, the Blunt Criterion values for both shooting devices were higher than the critical value of 0.37, which represents a 50% risk of sustaining a thoracic skeletal injury of AIS 2 (moderate injury) or AIS 3 (serious injury). The maximum Blunt Criterion value (1.106) was higher than the value corresponding to AIS 4 (severe injury). With regard to the impact injury risk to the head, both devices surpass by far the critical value of 1.61, which represents a 50% risk of skull fracture. The highest Blunt Criterion value was measured for the Turner Richards launcher (2.884), corresponding to a risk of skull fracture higher than 80%. Even though their classification as non-guns by legal authorities might suggest harmlessness, the Blunt Criterion trauma model illustrates the hazardous potential of these shooting devices and links the laboratory findings to the impact injury patterns of the head and chest that might be expected. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
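As an illustration only: the Blunt Criterion is commonly written in Sturdivan's form BC = ln(E / (W^(1/3) T D)). That form, its units, and every number below are assumptions for this sketch rather than values from the study; only the 0.37 chest threshold is quoted from the abstract.

```python
import math

# Hedged sketch of an energy-based Blunt Criterion, assuming the commonly
# cited Sturdivan form BC = ln(E / (W**(1/3) * T * D)), with E the impact
# kinetic energy (J), W the body mass (kg), T the body-wall thickness (cm)
# and D the projectile diameter (cm). All parameter values are
# hypothetical.

def blunt_criterion(proj_mass_kg, velocity_ms, body_mass_kg, wall_cm, diam_cm):
    energy = 0.5 * proj_mass_kg * velocity_ms ** 2  # kinetic energy in J
    return math.log(energy / (body_mass_kg ** (1.0 / 3.0) * wall_cm * diam_cm))

# Hypothetical launcher projectile against an adult chest:
bc = blunt_criterion(0.05, 60.0, 75.0, 3.0, 2.5)
risky = bc > 0.37  # the abstract's 50% threshold for AIS 2-3 chest injury
```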
Swedish PE Teachers Struggle with Assessment in a Criterion-Referenced Grading System
ERIC Educational Resources Information Center
Svennberg, Lena; Meckbach, Jane; Redelius, Karin
2018-01-01
In the field of education, the international trend is to turn to criterion-referenced grading in the hope of achieving accountable and consistent grades. Despite a national criterion-referenced grading system emphasising knowledge as the only base for grading, Swedish physical education (PE) grades have been shown to value non-knowledge factors,…
Precoded spatial multiplexing MIMO system with spatial component interleaver.
Gao, Xiang; Wu, Zhanji
In this paper, the performance of precoded bit-interleaved coded modulation (BICM) spatial multiplexing multiple-input multiple-output (MIMO) system with spatial component interleaver is investigated. For the ideal precoded spatial multiplexing MIMO system with spatial component interleaver based on singular value decomposition (SVD) of the MIMO channel, the average pairwise error probability (PEP) of coded bits is derived. Based on the PEP analysis, the optimum spatial Q-component interleaver design criterion is provided to achieve the minimum error probability. For the limited feedback precoded proposed scheme with linear zero forcing (ZF) receiver, in order to minimize a bound on the average probability of a symbol vector error, a novel effective signal-to-noise ratio (SNR)-based precoding matrix selection criterion and a simplified criterion are proposed. Based on the average mutual information (AMI)-maximization criterion, the optimal constellation rotation angles are investigated. Simulation results indicate that the optimized spatial multiplexing MIMO system with spatial component interleaver can achieve significant performance advantages compared to the conventional spatial multiplexing MIMO system.
NASA Astrophysics Data System (ADS)
Bai, Yan-Kui; Li, Shu-Shen; Zheng, Hou-Zhi
2005-11-01
We present a method for checking the Peres separability criterion in an arbitrary bipartite quantum state ρAB within the local operations and classical communication scenario. The method does not require the noise operation that is otherwise needed to make the partial transposition map physically implementable. The main task for the two observers, Alice and Bob, is to measure some specific functions of the partially transposed matrix. With these functions, they can determine the eigenvalues of ρAB^(TB), among which the minimum serves as an entanglement witness.
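The Peres test itself is easy to state: a state is entangled whenever its partial transpose has a negative eigenvalue. A minimal sketch on a two-qubit Werner state, whose partially transposed matrix is structured enough that the eigenvalues can be read off without a general eigensolver (the LOCC measurement scheme of the paper is not reproduced):

```python
# Hedged sketch of the Peres (PPT) separability test on a two-qubit
# Werner state, rho = p*|psi-><psi-| + (1-p)*I/4. The partial transpose
# over Bob swaps his matrix indices; for this state the result splits
# into two decoupled diagonal entries plus one symmetric 2x2 block, so
# its eigenvalues can be read off directly.

def werner(p):
    # |psi-> = (|01> - |10>)/sqrt(2); basis order |00>, |01>, |10>, |11>
    rho = [[(1 - p) / 4.0 if i == j else 0.0 for j in range(4)] for i in range(4)]
    rho[1][1] += p / 2.0
    rho[2][2] += p / 2.0
    rho[1][2] -= p / 2.0
    rho[2][1] -= p / 2.0
    return rho

def partial_transpose_B(rho):
    # row index 2*a + b (a = Alice bit, b = Bob bit); transpose Bob's index
    pt = [[0.0] * 4 for _ in range(4)]
    for a in range(2):
        for b in range(2):
            for a2 in range(2):
                for b2 in range(2):
                    pt[2 * a + b][2 * a2 + b2] = rho[2 * a + b2][2 * a2 + b]
    return pt

def min_eigenvalue_werner_pt(pt):
    # Structure here: indices 1 and 2 are decoupled diagonal entries;
    # indices 0 and 3 form a symmetric block [[d, e], [e, d]] whose
    # eigenvalues are d + e and d - e.
    d, e = pt[0][0], pt[0][3]
    return min(pt[1][1], pt[2][2], d + e, d - e)

# Werner states are NPT (hence entangled) exactly for p > 1/3:
lam_sep = min_eigenvalue_werner_pt(partial_transpose_B(werner(0.2)))
lam_ent = min_eigenvalue_werner_pt(partial_transpose_B(werner(0.8)))
```

The minimum eigenvalue, positive at p = 0.2 and negative at p = 0.8, is exactly the entanglement witness role the abstract assigns to the smallest eigenvalue of the partially transposed state.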
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zheng, Yuesheng, E-mail: yueshengzheng@fzu.edu.cn; Zhang, Bo, E-mail: shizbcn@tsinghua.edu.cn; He, Jinliang, E-mail: hejl@tsinghua.edu.cn
The positive dc corona plasmas between coaxial cylinders in air under the application of a self-sustained criterion with photoionization are investigated in this paper. A photon absorption function suitable for a cylindrical electrode, which can characterize the total photons within the ionization region, is proposed on the basis of the classic corona onset criteria. Based on the general fluid model with the self-sustained criterion, the role of photoionization in the ionization region is clarified. It is found that the surface electric field remains constant under a relatively low corona current, while it is slightly weakened as the corona current increases. Similar tendencies are found for different conductor radii and relative air densities. The small change of the surface electric field becomes more significant for the electron density distribution, as well as for the ionization activity, under a high corona current, compared with the results under the assumption of a constant surface field. The assumption that the surface electric field remains constant should therefore be corrected as the corona current increases when the energetic electrons at a distance from the conductor surface are concerned.
Brenninkmeijer, V; VanYperen, N
2003-01-01
When conducting research on burnout, it may be difficult to decide whether one should report results separately for each burnout dimension or whether one should combine the dimensions. Although the multidimensionality of the burnout concept is widely acknowledged, for research purposes it is sometimes convenient to regard burnout as a unidimensional construct. This article deals with the question of whether and when it may be appropriate to treat burnout as a unidimensional variable, and presents a decision rule to distinguish between people high and low in burnout. To develop a guideline for obtaining a dichotomous measure of burnout, the scores on the Utrecht Burnout Scale (UBOS) of 44 well functioning individuals were compared with the scores of 29 individuals diagnosed as suffering from burnout. Based on these data, the authors recommend the "exhaustion + 1" criterion for research in non-clinical populations. Following this criterion, individuals can be considered as burnt out when they report, compared to a norm group, high emotional exhaustion, in combination with high depersonalisation or low personal accomplishment. The criterion may be used to estimate the percentage in a sample of individuals in a state of burnout. PMID:12782742
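The "exhaustion + 1" rule reduces to a simple Boolean check: high exhaustion combined with either high depersonalisation or low personal accomplishment. A minimal sketch in which the cut-off values are hypothetical rather than the UBOS norm-group cut-offs:

```python
# Hedged sketch of the "exhaustion + 1" decision rule. The cut-offs
# below are hypothetical stand-ins for norm-group values, and the sample
# records are invented for illustration.

EXH_HIGH, DEP_HIGH, ACC_LOW = 3.2, 2.2, 3.0  # hypothetical cut-offs

def exhaustion_plus_one(exhaustion, depersonalisation, accomplishment):
    # burnt out: high exhaustion AND at least one other elevated dimension
    return exhaustion >= EXH_HIGH and (
        depersonalisation >= DEP_HIGH or accomplishment <= ACC_LOW
    )

sample = [
    {"exh": 4.1, "dep": 2.8, "acc": 3.5},  # burnt out via depersonalisation
    {"exh": 4.0, "dep": 1.0, "acc": 2.5},  # burnt out via low accomplishment
    {"exh": 2.0, "dep": 3.0, "acc": 2.0},  # not: exhaustion not high
    {"exh": 3.5, "dep": 1.5, "acc": 4.0},  # not: no second elevated dimension
]
flags = [exhaustion_plus_one(p["exh"], p["dep"], p["acc"]) for p in sample]
rate = sum(flags) / len(flags)  # estimated burnout proportion in the sample
```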
High throughput nonparametric probability density estimation.
Farmer, Jenny; Jacobs, Donald
2018-01-01
In high throughput applications, such as those found in bioinformatics and finance, it is important to determine accurate probability distribution functions despite only minimal information about data characteristics, and without using human subjectivity. Such an automated process for univariate data is implemented to achieve this goal by merging the maximum entropy method with single order statistics and maximum likelihood. The only required properties of the random variables are that they are continuous and that they are, or can be approximated as, independent and identically distributed. A quasi-log-likelihood function based on single order statistics for sampled uniform random data is used to empirically construct a sample size invariant universal scoring function. Then a probability density estimate is determined by iteratively improving trial cumulative distribution functions, where better estimates are quantified by the scoring function that identifies atypical fluctuations. This criterion resists under and over fitting data as an alternative to employing the Bayesian or Akaike information criterion. Multiple estimates for the probability density reflect uncertainties due to statistical fluctuations in random samples. Scaled quantile residual plots are also introduced as an effective diagnostic to visualize the quality of the estimated probability densities. Benchmark tests show that estimates for the probability density function (PDF) converge to the true PDF as sample size increases on particularly difficult test probability densities that include cases with discontinuities, multi-resolution scales, heavy tails, and singularities. These results indicate the method has general applicability for high throughput statistical inference. PMID:29750803
Choi, Ji Yeh; Hwang, Heungsun; Yamamoto, Michio; Jung, Kwanghee; Woodward, Todd S
2017-06-01
Functional principal component analysis (FPCA) and functional multiple-set canonical correlation analysis (FMCCA) are data reduction techniques for functional data that are collected in the form of smooth curves or functions over a continuum such as time or space. In FPCA, low-dimensional components are extracted from a single functional dataset such that they explain the most variance of the dataset, whereas in FMCCA, low-dimensional components are obtained from each of multiple functional datasets in such a way that the associations among the components are maximized across the different sets. In this paper, we propose a unified approach to FPCA and FMCCA. The proposed approach subsumes both techniques as special cases. Furthermore, it permits a compromise between the techniques, such that components are obtained from each set of functional data to maximize their associations across different datasets, while accounting for the variance of the data well. We propose a single optimization criterion for the proposed approach, and develop an alternating regularized least squares algorithm to minimize the criterion in combination with basis function approximations to functions. We conduct a simulation study to investigate the performance of the proposed approach based on synthetic data. We also apply the approach for the analysis of multiple-subject functional magnetic resonance imaging data to obtain low-dimensional components of blood-oxygen level-dependent signal changes of the brain over time, which are highly correlated across the subjects as well as representative of the data. The extracted components are used to identify networks of neural activity that are commonly activated across the subjects while carrying out a working memory task.
Scarneciu, Camelia C; Sangeorzan, Livia; Rus, Horatiu; Scarneciu, Vlad D; Varciu, Mihai S; Andreescu, Oana; Scarneciu, Ioan
2017-01-01
This study aimed at assessing the incidence of pulmonary hypertension (PH) in newly diagnosed hyperthyroid patients and at finding a simple model of the complex functional relation between pulmonary hypertension in hyperthyroidism and the factors causing it. The 53 hyperthyroid patients (H-group) were evaluated mainly by echocardiography and compared with 35 euthyroid (E-group) and 25 healthy people (C-group). To identify the factors causing pulmonary hypertension, the statistical method of comparing arithmetic means was used. The functional relation between the two random variables (PAPs and each of the factors determining it within our research study) can be expressed by a linear or non-linear function. By applying the linear regression method described by a first-degree equation, the line of regression (linear model) was determined; by applying the non-linear regression method described by a second-degree equation, a parabola-type curve of regression (non-linear or polynomial model) was determined. We compared and validated these two models by calculating the coefficient of determination (criterion 1), comparing residuals (criterion 2), applying the AIC criterion (criterion 3) and using the F-test (criterion 4). Of the H-group, 47% had pulmonary hypertension that was completely reversible upon reaching euthyroidism. The factors causing pulmonary hypertension were identified: previously known factors were the level of free thyroxine, pulmonary vascular resistance and cardiac output; new factors identified in this study were pretreatment period, age and systolic blood pressure. According to the four criteria and to clinical judgment, we consider the polynomial model (graphically, a parabola) better than the linear one.
The better model of the functional relation between pulmonary hypertension in hyperthyroidism and the factors identified in this study is given by a second-degree polynomial equation, whose graphical representation is a parabola.
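The model-comparison step described above (criterion 3, the AIC) can be sketched as follows. The synthetic data, the Gaussian-error AIC form, and the parameter count are illustrative assumptions, not the study's clinical data.

```python
import numpy as np

def fit_and_aic(x, y, degree):
    """Least-squares polynomial fit plus an AIC computed from the residual
    sum of squares under a Gaussian error assumption (illustrative form)."""
    coeffs = np.polyfit(x, y, degree)
    residuals = y - np.polyval(coeffs, x)
    n, k = len(y), degree + 2          # fitted coefficients + error variance
    rss = float(np.sum(residuals ** 2))
    return coeffs, n * np.log(rss / n) + 2 * k

rng = np.random.default_rng(1)
x = np.linspace(0, 10, 80)
y = 0.5 * x ** 2 - 2.0 * x + 3.0 + rng.normal(0, 1.0, x.size)  # truly quadratic data

_, aic_linear = fit_and_aic(x, y, 1)
_, aic_quadratic = fit_and_aic(x, y, 2)
assert aic_quadratic < aic_linear  # the AIC favours the parabola, as in the study
```

A lower AIC marks the preferred model; the same comparison can be run alongside the residual and F-test criteria.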
Possible Observational Criteria for Distinguishing Brown Dwarfs From Planets
NASA Technical Reports Server (NTRS)
Black, David C.
1997-01-01
The difference in formation process between binary stars and planetary systems is reflected in their composition, as well as orbital architecture, particularly in their orbital eccentricity as a function of orbital period. It is suggested here that this difference can be used as an observational criterion to distinguish between brown dwarfs and planets. Application of the orbital criterion suggests that, with three possible exceptions, all of the recently discovered substellar companions may be brown dwarfs and not planets. This criterion may be used as a guide for interpretation of the nature of substellar-mass companions to stars in the future.
3-D Mixed Mode Delamination Fracture Criteria - An Experimentalist's Perspective
NASA Technical Reports Server (NTRS)
Reeder, James R.
2006-01-01
Many delamination failure criteria based on fracture toughness have been suggested over the past few decades, but most only covered the region containing mode I and mode II components of loading because that is where toughness data existed. With new analysis tools, more 3D analyses are being conducted that capture a mode III component of loading. This has increased the need for a fracture criterion that incorporates mode III loading. The introduction of a pure mode III fracture toughness test has also produced data on which to base a full 3D fracture criterion. In this paper, a new framework for visualizing 3D fracture criteria is introduced. The common 2D power law fracture criterion was evaluated to produce unexpected predictions with the introduction of mode III and did not perform well in the critical high mode I region. Another 2D criterion that has been shown to model a wide range of materials well was used as the basis for a new 3D criterion. The new criterion is based on assumptions that the relationship between mode I and mode III toughness is similar to the relation between mode I and mode II and that a linear interpolation can be used between mode II and mode III. Until mixed-mode data exists with a mode III component of loading, 3D fracture criteria cannot be properly evaluated, but these assumptions seem reasonable.
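The common 2-D power-law criterion the paper evaluates is often written as (G_I/G_Ic)^α + (G_II/G_IIc)^β = 1. A minimal sketch of the failure index, with illustrative toughness values and exponents (not the paper's data):

```python
def power_law_failure_index(g1, g2, g1c, g2c, alpha=1.0, beta=1.0):
    """2-D power-law mixed-mode criterion: delamination growth is predicted
    when (G_I/G_Ic)**alpha + (G_II/G_IIc)**beta >= 1.
    Exponents and toughness values are illustrative assumptions."""
    return (g1 / g1c) ** alpha + (g2 / g2c) ** beta

# Illustrative mode I / mode II toughness values (kJ/m^2), not from the paper
G1C, G2C = 0.2, 0.8

assert power_law_failure_index(0.2, 0.0, G1C, G2C) >= 1.0   # pure mode I at toughness
assert power_law_failure_index(0.05, 0.2, G1C, G2C) < 1.0   # mixed loading below envelope
```

Extending such an index with a mode III term is exactly where the paper finds the power-law form gives unexpected predictions, motivating its new 3-D framework.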
NASA Astrophysics Data System (ADS)
Fedi, M.; Florio, G.; Cascone, L.
2012-01-01
We use a multiscale approach as a semi-automated tool for interpreting potential fields. The depth to the source and the structural index are estimated in two steps: first the depth to the source, as the intersection of the field ridges (lines built by joining the extrema of the field at various altitudes), and second the structural index, by the scale function. We introduce a new criterion, called 'ridge consistency', into this strategy. The criterion is based on the principle that the structural index estimations on all the ridges converging towards the same source should be consistent. If these estimates are significantly different, field differentiation is used to lessen the interference effects from nearby sources or regional fields, to obtain a consistent set of estimates. In our multiscale framework, vertical differentiation is naturally joined to the low-pass filtering properties of upward continuation, and is therefore a stable process. Before applying our criterion, we carefully studied the errors in upward continuation caused by the finite size of the survey area. To this end, we analysed the complex magnetic synthetic case known as the Bishop model, and evaluated the best extrapolation algorithm and the optimal width of the area extension needed to obtain accurate upward continuation. Afterwards, we applied the method to depth estimation of the whole Bishop basement bathymetry. The result is a good reconstruction of the complex basement and of the shape properties of the source at the estimated points.
Study on the criterion to determine the bottom deployment modes of a coilable mast
NASA Astrophysics Data System (ADS)
Ma, Haibo; Huang, Hai; Han, Jianbin; Zhang, Wei; Wang, Xinsheng
2017-12-01
A practical design criterion that allows the coilable mast bottom to deploy in local coil mode was proposed. The criterion was defined with the initial bottom helical angle and obtained by bottom deformation analyses. Discretizing the longerons into short rods, analyses were conducted based on the cylinder assumption and Kirchhoff's kinetic analogy theory. Then, iterative calculations for the bottom four rods were carried out. A critical bottom helical angle was obtained when the rate of change of the angle equaled zero. The critical value was defined as a criterion for judging the bottom deployment mode. Subsequently, micro-gravity deployment tests were carried out and bottom deployment simulations based on the finite element method were developed. Through comparisons of bottom helical angles in the critical state, the proposed criterion was evaluated and modified: an initial bottom helical angle less than the critical value, with a design margin of -13.7%, could ensure the mast bottom deploying in local coil mode, and further determine a successful local coil deployment of the entire coilable mast.
A Criterion-Referenced Viewpoint on Standards/Cutscores in Language Testing.
ERIC Educational Resources Information Center
Davidson, Fred; Lynch, Brian K.
"Standard" is distinguished from "criterion" as it is used in criterion-referenced testing. The former is argued to refer to the real-world cutpoint at which a decision is made based on a test's result (e.g., exemption from a special training program). The latter is a skill or set of skills to which a test is referenced.…
Criterion-Related Validity of the TOEFL iBT Listening Section. TOEFL iBT Research Report. RR-09-02
ERIC Educational Resources Information Center
Sawaki, Yasuyo; Nissan, Susan
2009-01-01
The study investigated the criterion-related validity of the "Test of English as a Foreign Language"[TM] Internet-based test (TOEFL[R] iBT) Listening section by examining its relationship to a criterion measure designed to reflect language-use tasks that university students encounter in everyday academic life: listening to academic…
ERIC Educational Resources Information Center
Bödeker, Malte; Bucksch, Jens; Wallmann-Sperlich, Birgit
2018-01-01
The Neighborhood Physical Activity Questionnaire allows assessment of physical activity within and outside the neighborhood. Study objectives were to examine the criterion-related validity and health/functioning associations of Neighborhood Physical Activity Questionnaire-derived physical activity in German older adults. A total of 107 adults aged…
The use of LANDSAT digital data to detect and monitor vegetation water deficiencies. [South Dakota
NASA Technical Reports Server (NTRS)
Thompson, D. R.; Wehmanen, O. A.
1977-01-01
A technique devised using a vector transformation of LANDSAT digital data to indicate when vegetation is undergoing moisture stress is described. A relation established between the remote sensing-based criterion (the Green Index Number) and a ground-based criterion (Crop Moisture Index) is discussed.
NASA Astrophysics Data System (ADS)
Hirata, Hiroshi; Itoh, Toshiharu; Hosokawa, Kouichi; Deng, Yuanmu; Susaki, Hitoshi
2005-08-01
This article describes a systematic method for determining the cutoff frequency of the low-pass window function that is used for deconvolution in two-dimensional continuous-wave electron paramagnetic resonance (EPR) imaging. An evaluation function for the criterion used to select the cutoff frequency is proposed, and is the product of the effective width of the point spread function for a localized point signal and the noise amplitude of a resultant EPR image. The present method was applied to EPR imaging for a phantom, and the result of cutoff frequency selection was compared with that based on a previously reported method for the same projection data set. The evaluation function has a global minimum point that gives the appropriate cutoff frequency. Images with reasonably good resolution and noise suppression can be obtained from projections with an automatically selected cutoff frequency based on the present method.
Schiffman, Eric L.; Truelove, Edmond L.; Ohrbach, Richard; Anderson, Gary C.; John, Mike T.; List, Thomas; Look, John O.
2011-01-01
AIMS The purpose of the Research Diagnostic Criteria for Temporomandibular Disorders (RDC/TMD) Validation Project was to assess the diagnostic validity of this examination protocol. An overview is presented, including Axis I and II methodology and descriptive statistics for the study participant sample. This paper details the development of reliable methods to establish the reference standards for assessing criterion validity of the Axis I RDC/TMD diagnoses. Validity testing for the Axis II biobehavioral instruments was based on previously validated reference standards. METHODS The Axis I reference standards were based on the consensus of 2 criterion examiners independently performing a comprehensive history, clinical examination, and evaluation of imaging. Intersite reliability was assessed annually for criterion examiners and radiologists. Criterion exam reliability was also assessed within study sites. RESULTS Study participant demographics were comparable to those of participants in previous studies using the RDC/TMD. Diagnostic agreement of the criterion examiners with each other and with the consensus-based reference standards was excellent with all kappas ≥ 0.81, except for osteoarthrosis (moderate agreement, k = 0.53). Intrasite criterion exam agreement with reference standards was excellent (k ≥ 0.95). Intersite reliability of the radiologists for detecting computed tomography-disclosed osteoarthrosis and magnetic resonance imaging-disclosed disc displacement was good to excellent (k = 0.71 and 0.84, respectively). CONCLUSION The Validation Project study population was appropriate for assessing the reliability and validity of the RDC/TMD Axis I and II. The reference standards used to assess the validity of Axis I TMD were based on reliable and clinically credible methods. PMID:20213028
Bayesian image reconstruction - The pixon and optimal image modeling
NASA Technical Reports Server (NTRS)
Pina, R. K.; Puetter, R. C.
1993-01-01
In this paper we describe the optimal image model, maximum residual likelihood method (OptMRL) for image reconstruction. OptMRL is a Bayesian image reconstruction technique for removing point-spread function blurring. OptMRL uses both a goodness-of-fit criterion (GOF) and an 'image prior', i.e., a function which quantifies the a priori probability of the image. Unlike standard maximum entropy methods, which typically reconstruct the image on the data pixel grid, OptMRL varies the image model in order to find the optimal functional basis with which to represent the image. We show how an optimal basis for image representation can be selected and in doing so, develop the concept of the 'pixon' which is a generalized image cell from which this basis is constructed. By allowing both the image and the image representation to be variable, the OptMRL method greatly increases the volume of solution space over which the image is optimized. Hence the likelihood of the final reconstructed image is greatly increased. For the goodness-of-fit criterion, OptMRL uses the maximum residual likelihood probability distribution introduced previously by Pina and Puetter (1992). This GOF probability distribution, which is based on the spatial autocorrelation of the residuals, has the advantage that it ensures spatially uncorrelated image reconstruction residuals.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Brown, James, E-mail: 9jhb3@queensu.ca; Carrington, Tucker, E-mail: Tucker.Carrington@queensu.ca
In this paper we show that it is possible to use an iterative eigensolver in conjunction with Halverson and Poirier's symmetrized Gaussian (SG) basis [T. Halverson and B. Poirier, J. Chem. Phys. 137, 224101 (2012)] to compute accurate vibrational energy levels of molecules with as many as five atoms. This is done, without storing and manipulating large matrices, by solving a regular eigenvalue problem that makes it possible to exploit direct-product structure. These ideas are combined with a new procedure for selecting which basis functions to use. The SG basis we work with is orders of magnitude smaller than the basis made by using a classical energy criterion. We find significant convergence errors in previous calculations with SG bases. For sum-of-product Hamiltonians, SG bases large enough to compute accurate levels are orders of magnitude larger than even simple pruned bases composed of products of harmonic oscillator functions.
Criterion learning in rule-based categorization: Simulation of neural mechanism and new data
Helie, Sebastien; Ell, Shawn W.; Filoteo, J. Vincent; Maddox, W. Todd
2015-01-01
In perceptual categorization, rule selection consists of selecting one or several stimulus-dimensions to be used to categorize the stimuli (e.g., categorize lines according to their length). Once a rule has been selected, criterion learning consists of defining how stimuli will be grouped using the selected dimension(s) (e.g., if the selected rule is line length, define ‘long’ and ‘short’). Very little is known about the neuroscience of criterion learning, and most existing computational models do not provide a biological mechanism for this process. In this article, we introduce a new model of rule learning called Heterosynaptic Inhibitory Criterion Learning (HICL). HICL includes a biologically-based explanation of criterion learning, and we use new category-learning data to test key aspects of the model. In HICL, rule selective cells in prefrontal cortex modulate stimulus-response associations using pre-synaptic inhibition. Criterion learning is implemented by a new type of heterosynaptic error-driven Hebbian learning at inhibitory synapses that uses feedback to drive cell activation above/below thresholds representing ionic gating mechanisms. The model is used to account for new human categorization data from two experiments showing that: (1) changing rule criterion on a given dimension is easier if irrelevant dimensions are also changing (Experiment 1), and (2) showing that changing the relevant rule dimension and learning a new criterion is more difficult, but also facilitated by a change in the irrelevant dimension (Experiment 2). We conclude with a discussion of some of HICL’s implications for future research on rule learning. PMID:25682349
CA-125 AUC as a predictor for epithelial ovarian cancer relapse.
Mano, António; Falcão, Amílcar; Godinho, Isabel; Santos, Jorge; Leitão, Fátima; de Oliveira, Carlos; Caramona, Margarida
2008-01-01
The aim of the present work was to evaluate the usefulness of the time-normalized area under the CA-125 curve (CA-125 AUC) to signal epithelial ovarian cancer relapse. Data from 111 patients were submitted to two different approaches based on CA-125 AUC increase values to predict patient relapse. In Criterion A, the total time-normalized CA-125 AUC value (AUC(i)) was compared with the immediately previous one (AUC(i-1)) using the formula AUC(i) ≥ F * AUC(i-1) (several F values were tested) to find the increment most closely associated with patient relapse. In Criterion B, the total time-normalized CA-125 AUC was calculated and several cut-off values were correlated with patient relapse prediction capacity. In Criterion A the best accuracy was achieved with a factor (F) of 1.25 (an increment of 25% over the previous status), while in Criterion B the best accuracies were achieved with cut-offs of 25, 50, 75 and 100 IU/mL. The mean lead time to relapse achieved with Criterion A was 181 days, while with Criterion B the lead times were, respectively, 131, 111, 63 and 11 days. Based on our results we believe that conjugated, sequential application of both criteria in patient relapse detection is highly advisable. A rapid burst in CA-125 AUC in asymptomatic patients should first be evaluated using Criterion A, with its high accuracy (0.85) and substantial mean lead time to relapse (181 days). If a negative answer is obtained, then Criterion B should be performed to confirm the absence of relapse.
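The two relapse rules above can be sketched as follows. The function names and the patient trajectory are hypothetical; only the F = 1.25 factor and the 25 IU/mL cutoff come from the abstract.

```python
def criterion_a(auc_values, factor=1.25):
    """Criterion A: flag relapse when AUC_i >= factor * AUC_(i-1).
    factor=1.25 is the 25% increment the study found most accurate."""
    return any(curr >= factor * prev
               for prev, curr in zip(auc_values, auc_values[1:]))

def criterion_b(auc_values, cutoff=25.0):
    """Criterion B: flag relapse when the time-normalized AUC exceeds a cutoff
    (25, 50, 75 and 100 IU/mL all performed well in the study)."""
    return any(v >= cutoff for v in auc_values)

series = [12.0, 13.0, 18.0, 21.0]       # hypothetical patient trajectory (IU/mL)
assert criterion_a(series) is True      # 18 >= 1.25 * 13: a rapid burst
assert criterion_b(series) is False     # never reaches the 25 IU/mL cutoff
```

The sequential use the authors recommend would run `criterion_a` first and fall back to `criterion_b` when it returns `False`.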
[GSH fermentation process modeling using entropy-criterion based RBF neural network model].
Tan, Zuoping; Wang, Shitong; Deng, Zhaohong; Du, Guocheng
2008-05-01
The prediction accuracy and generalization of GSH fermentation process modeling are often deteriorated by noise in the corresponding experimental data. To avoid this problem, we present a novel RBF neural network modeling approach based on an entropy criterion. Compared with traditional MSE-criterion based parameter learning, it considers the whole distribution structure of the training data set in the parameter learning process, and thus effectively avoids weak generalization and over-learning. The proposed approach is then applied to GSH fermentation process modeling. Our results demonstrate that the proposed method has better prediction accuracy, generalization and robustness, and thus offers potential merit for GSH fermentation process modeling.
Four-Dimensional Golden Search
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fenimore, Edward E.
2015-02-25
The Golden search technique is a method to search a multiple-dimension space to find the minimum. It basically subdivides the possible ranges of parameters until it brackets, to within an arbitrarily small distance, the minimum. It has the advantages that (1) the function to be minimized can be non-linear, (2) it does not require derivatives of the function, and (3) the convergence criterion does not depend on the magnitude of the function. Thus, if the function is a goodness-of-fit parameter such as chi-square, the convergence does not depend on the noise being correctly estimated or the function correctly following the chi-square statistic. And (4) the convergence criterion does not depend on the shape of the function. Thus, long shallow surfaces can be searched without the problem of premature convergence. As with many methods, the Golden search technique can be confused by surfaces with multiple minima.
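The core of the technique — bracketing a minimum without derivatives or a noise model — is easiest to see in one dimension; the four-dimensional version applies the same interval shrinking along each parameter range. A minimal 1-D sketch:

```python
import math

def golden_section_min(f, a, b, tol=1e-8):
    """One-dimensional golden-section search: repeatedly shrink [a, b]
    around the minimum.  Needs no derivatives, and the stopping rule
    depends only on the bracket width, not on f's magnitude or shape."""
    inv_phi = (math.sqrt(5) - 1) / 2          # 1/phi ~ 0.618
    c, d = b - inv_phi * (b - a), a + inv_phi * (b - a)
    while abs(b - a) > tol:
        if f(c) < f(d):                       # minimum lies in [a, d]
            b, d = d, c
            c = b - inv_phi * (b - a)
        else:                                 # minimum lies in [c, b]
            a, c = c, d
            d = a + inv_phi * (b - a)
    return (a + b) / 2

x_min = golden_section_min(lambda x: (x - 2.0) ** 2 + 1.0, 0.0, 5.0)
assert abs(x_min - 2.0) < 1e-6
```

As the abstract notes, a unimodal bracket is assumed; a surface with multiple minima can trap the search in the wrong basin.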
Lin, Keh-chung; Chen, Hui-fang; Chen, Chia-ling; Wang, Tien-ni; Wu, Ching-yi; Hsieh, Yu-wei; Wu, Li-ling
2012-01-01
This study examined criterion-related validity and clinimetric properties of the Pediatric Motor Activity Log (PMAL) in children with cerebral palsy. Study participants were 41 children (age range: 28-113 months) and their parents. Criterion-related validity was evaluated by the associations between the PMAL and criterion measures at baseline and posttreatment, including the self-care, mobility, and cognition subscale, the total performance of the Functional Independence Measure in children (WeeFIM), and the grasping and visual-motor integration of the Peabody Developmental Motor Scales. Pearson correlation coefficients were calculated. Responsiveness was examined using the paired t test and the standardized response mean, the minimal detectable change was captured at the 90% confidence level, and the minimal clinically important change was estimated using anchor-based and distribution-based approaches. The PMAL-QOM showed fair concurrent validity at pretreatment and posttreatment and predictive validity, whereas the PMAL-AOU had fair concurrent validity at posttreatment only. The PMAL-AOU and PMAL-QOM were both markedly responsive to change after treatment. Improvement of at least 0.67 points on the PMAL-AOU and 0.66 points on the PMAL-QOM can be considered as a true change, not measurement error. A mean change has to exceed the range of 0.39-0.94 on the PMAL-AOU and the range of 0.38-0.74 on the PMAL-QOM to be regarded as clinically important change.
Sampling in the light of Wigner distribution.
Stern, Adrian; Javidi, Bahram
2004-03-01
We propose a new method for analysis of the sampling and reconstruction conditions of real and complex signals by use of the Wigner domain. It is shown that the Wigner domain may provide a better understanding of the sampling process than the traditional Fourier domain. For example, it explains how certain non-bandlimited complex functions can be sampled and perfectly reconstructed. On the basis of observations in the Wigner domain, we derive a generalization to the Nyquist sampling criterion. By using this criterion, we demonstrate simple preprocessing operations that can adapt a signal that does not fulfill the Nyquist sampling criterion. The preprocessing operations demonstrated can be easily implemented by optical means.
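The classical Nyquist setting that the abstract generalizes can be sketched numerically: sample a tone above its Nyquist rate and reconstruct intermediate values with (truncated) Whittaker-Shannon sinc interpolation. The signal parameters and error tolerance are illustrative, and this sketch stays in the Fourier setting rather than the paper's Wigner-domain analysis.

```python
import numpy as np

f_sig, fs = 5.0, 50.0                 # 5 Hz tone sampled well above Nyquist (10 Hz)
T = 1.0 / fs
n = np.arange(200)                    # 4 s of samples
samples = np.sin(2 * np.pi * f_sig * n * T)

def sinc_reconstruct(t):
    """Truncated Whittaker-Shannon reconstruction from the stored samples."""
    return np.sum(samples * np.sinc((t - n * T) / T))

# Evaluate mid-interval points away from the truncation edges of the window
t_test = (n[80:120] + 0.5) * T
errors = [abs(sinc_reconstruct(t) - np.sin(2 * np.pi * f_sig * t)) for t in t_test]
assert max(errors) < 0.05             # interior reconstruction error stays small
```

A signal violating the criterion would alias under this scheme; the preprocessing operations the paper proposes adapt such signals before sampling.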
Evaluation of purchase intention of customers in two wheeler automobile segment: AHP and TOPSIS
NASA Astrophysics Data System (ADS)
Sri Yogi, Kottala
2018-03-01
Winning the hearts of customers is a preeminent goal of any business organization in the global business environment. This paper explored customers' priorities while purchasing in the two-wheeler automobile segment, using the Analytical Hierarchy Process (AHP) and the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS) as multi-criteria decision-making tools to accomplish the research objectives. A study was conducted to analyze the different criteria respondents consider when purchasing two-wheeler automobiles, using a structured questionnaire based on the SAATY scale. Based on our previous work on an empirical and fuzzy-logic approach to product quality and purchase intention of customers in two-wheelers, operational, performance, economic, brand value and maintenance aspects are considered as customers' decision criteria while purchasing a two-wheeler. The study suggests high pick-up during overtaking, petrol saving, reasonable spare-parts prices, uniqueness of design and identity, and ease of changing gear as the main criteria in the purchasing process. Using an objective-function criterion over important characteristics such as price, cylinder capacity, brake horsepower and weight, we also identified some leading two-wheeler models available in the Indian market.
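A TOPSIS ranking of the kind described above can be sketched as follows. The decision matrix, weights, and criteria are hypothetical, not the survey's data; AHP would normally supply the weights.

```python
import numpy as np

def topsis(matrix, weights, benefit):
    """Minimal TOPSIS sketch: rank alternatives (rows) on criteria (columns).
    benefit[j] is True when larger values of criterion j are better."""
    m = np.asarray(matrix, dtype=float)
    m = m / np.sqrt((m ** 2).sum(axis=0))          # vector normalisation
    m = m * np.asarray(weights, dtype=float)       # weighted normalised matrix
    ideal = np.where(benefit, m.max(axis=0), m.min(axis=0))
    anti = np.where(benefit, m.min(axis=0), m.max(axis=0))
    d_pos = np.sqrt(((m - ideal) ** 2).sum(axis=1))
    d_neg = np.sqrt(((m - anti) ** 2).sum(axis=1))
    return d_neg / (d_pos + d_neg)                 # closeness to ideal; higher = better

# Hypothetical two-wheeler data: price (cost), bhp and mileage (both benefit)
scores = topsis([[60000, 12, 60],
                 [80000, 20, 45],
                 [70000, 15, 55]],
                weights=[0.4, 0.3, 0.3],
                benefit=[False, True, True])
best = int(np.argmax(scores))          # index of the top-ranked alternative
```

The closeness scores order the alternatives; the alternative nearest the ideal (and farthest from the anti-ideal) ranks first.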
NASA Astrophysics Data System (ADS)
Cuevas-Maraver, Jesús; Kevrekidis, Panayotis G.; Vainchtein, Anna; Xu, Haitao
2017-09-01
In this work, we provide two complementary perspectives for the (spectral) stability of solitary traveling waves in Hamiltonian nonlinear dynamical lattices, of which the Fermi-Pasta-Ulam and the Toda lattice are prototypical examples. One is as an eigenvalue problem for a stationary solution in a cotraveling frame, while the other is as a periodic orbit modulo shifts. We connect the eigenvalues of the former with the Floquet multipliers of the latter and using this formulation derive an energy-based spectral stability criterion. It states that a sufficient (but not necessary) condition for a change in the wave stability occurs when the functional dependence of the energy (Hamiltonian) H of the model on the wave velocity c changes its monotonicity. Moreover, near the critical velocity where the change of stability occurs, we provide an explicit leading-order computation of the unstable eigenvalues, based on the second derivative of the Hamiltonian H''(c0) evaluated at the critical velocity c0. We corroborate this conclusion with a series of analytically and numerically tractable examples and discuss its parallels with a recent energy-based criterion for the stability of discrete breathers.
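The criterion itself — a possible stability change where the monotonicity of H(c) changes, i.e. where dH/dc crosses zero — can be sketched numerically. The energy-velocity curve below is illustrative, not taken from a specific lattice model.

```python
import numpy as np

def critical_velocity(H, c_grid):
    """Locate where dH/dc changes sign on a grid, i.e. where the monotonicity
    of the energy-velocity curve changes and the criterion flags a possible
    change in wave stability.  Returns None when no sign change occurs."""
    dH = np.gradient(H(c_grid), c_grid)            # finite-difference dH/dc
    flips = np.where(np.diff(np.sign(dH)) != 0)[0]
    return float(c_grid[flips[0]]) if flips.size else None

# Illustrative energy curve with dH/dc = 0 at c0 = 1.2 (not a lattice model)
H = lambda c: (c - 1.2) ** 2 + 3.0
c0 = critical_velocity(H, np.linspace(0.5, 2.0, 301))
assert abs(c0 - 1.2) < 0.01
```

Near such a c0, the paper's leading-order expansion in H''(c0) then quantifies the unstable eigenvalues.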
Yeh-Stratton Criterion for Stress Concentrations on Fiber-Reinforced Composite Materials
NASA Technical Reports Server (NTRS)
Yeh, Hsien-Yang; Richards, W. Lance
1996-01-01
This study investigated the Yeh-Stratton Failure Criterion with stress concentrations on fiber-reinforced composite materials under tensile stresses. The Yeh-Stratton Failure Criterion was developed from the initial yielding of materials based on macromechanics. To investigate this criterion, the influence of the material's anisotropic properties and far-field loading on composite materials with a central hole and a normal crack was studied. Special emphasis was placed on defining the crack tip stress fields and their applications. The study of the Yeh-Stratton criterion for damage zone stress fields on fiber-reinforced composites under tensile loading was compared with several fracture criteria: Tsai-Wu Theory, Hoffman Theory, Fischer Theory, and Cowin Theory. Theoretical predictions from these criteria are examined using experimental results.
The use of Landsat digital data to detect and monitor vegetation water deficiencies
NASA Technical Reports Server (NTRS)
Thompson, D. R.; Wehmanen, O. A.
1977-01-01
In the Large Area Crop Inventory Experiment a technique was devised using a vector transformation of Landsat digital data to indicate when vegetation is undergoing moisture stress. A relation was established between the remote-sensing-based criterion (the Green Index Number) and a ground-based criterion (Crop Moisture Index).
Meta-Analysis of Criterion Validity for Curriculum-Based Measurement in Written Language
ERIC Educational Resources Information Center
Romig, John Elwood; Therrien, William J.; Lloyd, John W.
2017-01-01
We used meta-analysis to examine the criterion validity of four scoring procedures used in curriculum-based measurement of written language. A total of 22 articles representing 21 studies (N = 21) met the inclusion criteria. Results indicated that two scoring procedures, correct word sequences and correct minus incorrect sequences, have acceptable…
The free growth criterion for grain initiation in TiB2 inoculated γ-titanium aluminide based alloys
NASA Astrophysics Data System (ADS)
Gosslar, D.; Günther, R.
2014-02-01
γ-titanium aluminide (γ-TiAl) based alloys enable the design of light-weight, high-temperature-resistant engine components. This work centers on a numerical study of the condition for grain initiation during solidification of TiB2 inoculated γ-TiAl based alloys. Grain initiation is treated according to the so-called free growth criterion. This means that the free growth barrier for grain initiation is determined by the maximum interfacial mean curvature between a nucleus and the melt. The strategy presented in this paper relies on iteratively increasing the volume of a nucleus, which partially wets a hexagonal TiB2 crystal, minimizing the interfacial energy and calculating the corresponding interfacial curvature. The maximum curvature obtained in this way yields a scaling relation between the size of TiB2 crystals and the free growth barrier. Comparison to a prototypical TiB2 crystal in an as-cast γ-TiAl based alloy then allowed prediction of the free growth barrier prevailing under experimental conditions. The validity of the free growth criterion is discussed by an interfacial energy criterion.
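For the simpler flat-face case, the free growth criterion (after Greer and co-workers) gives the undercooling barrier ΔT_fg = 4σ/(ΔS_v d) in terms of the inoculant particle diameter d; the hexagonal-crystal geometry studied above modifies this scaling. A sketch with illustrative parameter values, not alloy data:

```python
def free_growth_undercooling(d, sigma=0.1, delta_sv=1.0e6):
    """Flat-face free-growth criterion: the barrier is set by the maximum
    interfacial curvature, giving Delta_T_fg = 4*sigma/(delta_sv*d).
    sigma (J/m^2) and delta_sv (J/(K m^3)) are illustrative values."""
    return 4.0 * sigma / (delta_sv * d)

# Larger inoculant crystals initiate grains at smaller undercooling
assert free_growth_undercooling(2e-6) < free_growth_undercooling(1e-6)
```

The inverse dependence on d is the scaling relation between crystal size and free growth barrier that the paper's curvature calculation generalizes to hexagonal TiB2 crystals.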
Physical mechanism and numerical simulation of the inception of the lightning upward leader
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li Qingmin; Lu Xinchang; Shi Wei
2012-12-15
The upward leader is a key physical process of the leader progression model of lightning shielding. The inception mechanism and criterion of the upward leader need further understanding and clarification. Based on leader discharge theory, this paper proposes the critical electric field intensity of the stable upward leader (CEFISUL) and characterizes it by the valve electric field intensity on the conductor surface, E_L, which is the basis of a new inception criterion for the upward leader. Through numerical simulation under various physical conditions, we verified that E_L is mainly related to the conductor radius, and data fitting yields the mathematical expression of E_L. We further establish a computational model for the lightning shielding performance of transmission lines based on the proposed CEFISUL criterion, which reproduces the shielding failure rate of typical UHV transmission lines. The model-based calculation results agree well with statistical data from on-site operations, which shows the effectiveness and validity of the CEFISUL criterion.
Multi-object detection and tracking technology based on hexagonal opto-electronic detector
NASA Astrophysics Data System (ADS)
Song, Yong; Hao, Qun; Li, Xiang
2008-02-01
A novel multi-object detection and tracking technology based on a hexagonal opto-electronic detector is proposed, in which (1) a new hexagonal detector composed of six linear CCDs has been developed to achieve a 360-degree field of view, and (2) to achieve high-speed detection and tracking of multiple objects, the object recognition criteria of the Object Signal Width Criterion (OSWC) and the Horizontal Scale Ratio Criterion (HSRC) are proposed. Simulated experiments have been carried out to verify the validity of the proposed technology. They show that high-speed detection and tracking of multiple objects can be achieved using the proposed hexagonal detector together with the OSWC and HSRC criteria, indicating that the technology offers significant advantages in photo-electric detection, computer vision, virtual reality, augmented reality, etc.
Schiffman, Eric L; Truelove, Edmond L; Ohrbach, Richard; Anderson, Gary C; John, Mike T; List, Thomas; Look, John O
2010-01-01
The purpose of the Research Diagnostic Criteria for Temporomandibular Disorders (RDC/TMD) Validation Project was to assess the diagnostic validity of this examination protocol. The aim of this article is to provide an overview of the project's methodology, descriptive statistics, and data for the study participant sample. This article also details the development of reliable methods to establish the reference standards for assessing criterion validity of the Axis I RDC/TMD diagnoses. The Axis I reference standards were based on the consensus of two criterion examiners independently performing a comprehensive history, clinical examination, and evaluation of imaging. Intersite reliability was assessed annually for criterion examiners and radiologists. Criterion examination reliability was also assessed within study sites. Study participant demographics were comparable to those of participants in previous studies using the RDC/TMD. Diagnostic agreement of the criterion examiners with each other and with the consensus-based reference standards was excellent, with all kappas ≥ 0.81, except for osteoarthrosis (moderate agreement, k = 0.53). Intrasite criterion examiner agreement with reference standards was excellent (k ≥ 0.95). Intersite reliability of the radiologists for detecting computed tomography-disclosed osteoarthrosis and magnetic resonance imaging-disclosed disc displacement was good to excellent (k = 0.71 and 0.84, respectively). The Validation Project study population was appropriate for assessing the reliability and validity of the RDC/TMD Axis I and II. The reference standards used to assess the validity of Axis I TMD were based on reliable and clinically credible methods.
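The agreement figures quoted above are Cohen's kappa values. As a minimal illustration of how such a statistic is computed from paired ratings (the labels and data here are invented, not RDC/TMD data):

```python
# Minimal sketch of Cohen's kappa for two raters: chance-corrected
# agreement, kappa = (observed - expected) / (1 - expected).
# The ratings below are invented for illustration.

from collections import Counter

def cohens_kappa(rater_a, rater_b):
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    pa = Counter(rater_a)
    pb = Counter(rater_b)
    # expected chance agreement from each rater's marginal frequencies
    expected = sum(pa[c] * pb[c] for c in set(pa) | set(pb)) / (n * n)
    return (observed - expected) / (1.0 - expected)

a = ["TMD", "TMD", "none", "none", "TMD", "none"]
b = ["TMD", "TMD", "none", "TMD", "TMD", "none"]
print(f"kappa = {cohens_kappa(a, b):.2f}")
```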
Optimizing phonon space in the phonon-coupling model
NASA Astrophysics Data System (ADS)
Tselyaev, V.; Lyutorovich, N.; Speth, J.; Reinhard, P.-G.
2017-08-01
We present a new scheme to select the most relevant phonons in the phonon-coupling model, named here the time-blocking approximation (TBA). The new criterion, based on the phonon-nucleon coupling strengths rather than on B(EL) values, is more selective and thus produces much smaller phonon spaces in the TBA. This is beneficial in two respects: first, it curbs the computational cost, and second, it reduces the danger of double counting in the expansion basis of the TBA. We use the TBA in a form where the coupling strength is regularized to keep the given Hartree-Fock ground state stable. The scheme is implemented in a random-phase approximation and TBA code based on the Skyrme energy functional. We first explore carefully the cutoff dependence of the new criterion and work out a natural (optimal) cutoff parameter. We then use the newly developed and tested scheme for a survey of giant resonances and low-lying collective states in six doubly magic nuclei, also examining how the results depend on the chosen Skyrme parametrization.
Lutchen, K R
1990-08-01
A sensitivity analysis based on weighted least-squares regression is presented to evaluate alternative methods for fitting lumped-parameter models to respiratory impedance data. The goal is to maintain parameter accuracy simultaneously with practical experiment design. The analysis focuses on predicting parameter uncertainties using a linearized approximation for joint confidence regions. Applications involve four-element parallel and viscoelastic models for 0.125- to 4-Hz data and a six-element model with separate tissue and airway properties for input and transfer impedance data from 2 to 64 Hz. The form of the criterion function was evaluated by comparing parameter uncertainties when data are fit as magnitude and phase, dynamic resistance and compliance, or real and imaginary parts of input impedance. With the proper choice of weighting, all three criterion variables are comparable. For the six-element model, parameter uncertainties were predicted when both input impedance and transfer impedance are acquired and fit simultaneously. A fit to both data sets from 4 to 64 Hz could reduce parameter estimate uncertainties considerably below those achievable by fitting either alone. For the four-element models, use of an independent, but noisy, measure of static compliance was assessed as a constraint on model parameters. This may allow acceptable parameter uncertainties for a minimum frequency of 0.275-0.375 Hz rather than 0.125 Hz, reducing the required breath-holding period from 16 s to 5.33-8 s. These results are approximations, and the impact of using the linearized approximation for the confidence regions is discussed.
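The linearized approximation for joint confidence regions amounts to the standard weighted least-squares covariance estimate, cov(θ) ≈ σ²(JᵀWJ)⁻¹, with J the model Jacobian at the fit and W the weights. A minimal sketch, using a hypothetical two-parameter resistance-compliance model in place of the paper's four- and six-element respiratory models:

```python
# Sketch of the linearized parameter-uncertainty estimate behind a
# weighted least-squares sensitivity analysis: cov(theta) ~ sigma^2 (J^T W J)^-1.
# The single R-C impedance model below is a hypothetical stand-in.

import numpy as np

def linearized_covariance(J, weights, sigma2=1.0):
    """Approximate parameter covariance from Jacobian J (m x p)."""
    W = np.diag(weights)
    return sigma2 * np.linalg.inv(J.T @ W @ J)

# Hypothetical model: Z(f) = R - i/(2*pi*f*C), fit as real and imaginary parts
f = np.linspace(0.125, 4.0, 30)   # frequency grid (Hz), as in the 0.125-4 Hz fits
R, C = 2.0, 0.05                  # assumed "true" parameters
# Jacobian rows: dRe/d(R,C) = (1, 0); dIm/d(R,C) = (0, 1/(2*pi*f*C^2))
J = np.vstack([
    np.column_stack([np.ones_like(f), np.zeros_like(f)]),
    np.column_stack([np.zeros_like(f), 1.0 / (2 * np.pi * f * C**2)]),
])
w = np.ones(J.shape[0])           # uniform weights for the sketch
cov = linearized_covariance(J, w, sigma2=0.01)
se = np.sqrt(np.diag(cov))
print("standard errors (R, C):", se)
```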
NASA Astrophysics Data System (ADS)
Bandte, Oliver
It has always been the intention of systems engineering to invent or produce the best product possible. Many design techniques have been introduced over the course of decades that try to fulfill this intention. Unfortunately, no technique has succeeded in combining multi-criteria decision making with probabilistic design. The design technique developed in this thesis, the Joint Probabilistic Decision Making (JPDM) technique, successfully overcomes this deficiency by generating a multivariate probability distribution that serves, in conjunction with a criterion value range of interest, as a universally applicable objective function for multi-criteria optimization and product selection. This new objective function constitutes a meaningful metric, called Probability of Success (POS), that allows the customer or designer to make a decision based on the chance of satisfying the customer's goals. In order to incorporate a joint probabilistic formulation into the systems design process, two algorithms are created that allow for an easy implementation into a numerical design framework: the (multivariate) Empirical Distribution Function and the Joint Probability Model. The Empirical Distribution Function estimates the probability that an event occurred by counting how many times it occurred in a given sample. The Joint Probability Model, on the other hand, is an analytical parametric model for the multivariate joint probability. It is comprised of the product of the univariate criterion distributions, generated by the traditional probabilistic design process, multiplied by a correlation function that is based on available correlation information between pairs of random variables. JPDM is an excellent tool for multi-objective optimization and product selection because of its ability to transform disparate objectives into a single figure of merit, the likelihood of successfully meeting all goals, or POS.
The advantage of JPDM over other multi-criteria decision making techniques is that POS constitutes a single optimizable function or metric that enables a comparison of all alternative solutions on an equal basis. Hence, POS allows for the use of any standard single-objective optimization technique available and simplifies a complex multi-criteria selection problem into a simple ordering problem, where the solution with the highest POS is best. By distinguishing between controllable and uncontrollable variables in the design process, JPDM can account for the uncertain values of the uncontrollable variables that are inherent to the design problem, while facilitating an easy adjustment of the controllable ones to achieve the highest possible POS. Finally, JPDM's superiority over current multi-criteria decision making techniques is demonstrated with an optimization of a supersonic transport concept and ten contrived equations as well as a product selection example, determining an airline's best choice among Boeing's B-747, B-777, Airbus' A340, and a Supersonic Transport. The optimization examples demonstrate JPDM's ability to produce a better solution with a higher POS than an Overall Evaluation Criterion or Goal Programming approach. Similarly, the product selection example demonstrates JPDM's ability to produce a better solution with a higher POS and different ranking than the Overall Evaluation Criterion or Technique for Order Preferences by Similarity to the Ideal Solution (TOPSIS) approach.
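The core of the Empirical Distribution Function approach, counting the fraction of sampled designs that satisfy all criterion ranges simultaneously, can be sketched as follows; the criteria and the sample are invented for illustration:

```python
# Sketch of the (multivariate) Empirical Distribution Function idea behind
# JPDM: Probability of Success (POS) estimated by counting how many sampled
# designs satisfy every criterion range simultaneously.
# The criterion ranges and samples below are invented for illustration.

import random

random.seed(1)

def empirical_pos(samples, criteria):
    """Fraction of samples meeting all (low, high) criterion ranges."""
    def ok(point):
        return all(lo <= x <= hi for x, (lo, hi) in zip(point, criteria))
    return sum(ok(p) for p in samples) / len(samples)

# Two hypothetical normalized criteria: success if both fall in range
criteria = [(0.0, 0.7), (0.0, 0.5)]
samples = [(random.random(), random.random()) for _ in range(10_000)]
print(f"POS = {empirical_pos(samples, criteria):.3f}")
```

Because POS is a single scalar, any standard single-objective optimizer can then be applied to it, which is exactly the simplification the thesis claims.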
Equilibrium properties of simple metal thin films in the self-compressed stabilized jellium model.
Mahmoodi, T; Payami, M
2009-07-01
In this work, we have applied the self-compressed stabilized jellium model to predict the equilibrium properties of isolated thin Al, Na and Cs slabs. To make a direct correspondence to atomic slabs, we have considered only those L values that correspond to n-layered atomic slabs with 2 ≤ n ≤ 20, for surface indices (100), (110), and (111). The calculations are based on density functional theory and the self-consistent solution of the Kohn-Sham equations in the local density approximation. Our results show, firstly, that quantum size effects are significant for slabs with sizes smaller than or comparable to the Fermi wavelength of the valence electrons, λ_F, and secondly, that some slabs expand while others contract with respect to the bulk spacings. Based on the results, we propose a criterion for the realization of significant quantum size effects that lead to expansion of some thin slabs. To further justify the criterion, we have tested it on Li slabs for 2 ≤ n ≤ 6. We have compared our Al results with those obtained using all-electron or pseudo-potential first-principles calculations. This comparison shows excellent agreement for the Al(100) work functions, and qualitatively good agreement for the other work functions and surface energies. These agreements justify the way we have used the self-compressed stabilized jellium model for the correct description of the properties of simple metal slab systems. On the other hand, our results for the work functions and surface energies of large-n slabs are in good agreement with those obtained by applying the stabilized jellium model to semi-infinite systems. In addition, we have performed the slab calculations in the presence of surface corrugation for selected Al slabs and have shown that the results are worsened.
Optimum design of bolted composite lap joints under mechanical and thermal loading
NASA Astrophysics Data System (ADS)
Kradinov, Vladimir Yurievich
A new approach is developed for the analysis and design of mechanically fastened composite lap joints under mechanical and thermal loading. Based on the combined complex potential and variational formulation, the solution method satisfies the equilibrium equations exactly, while the boundary conditions are satisfied by minimizing the total potential. This approach is capable of modeling finite laminate planform dimensions, uniform and variable laminate thickness, laminate lay-up, interaction among bolts, bolt torque, bolt flexibility, bolt size, bolt-hole clearance and interference, insert dimensions and insert material properties. Compared with finite element analysis, the method's robustness does not decrease when modeling the interaction of many bolts, and the method is better suited to parametric studies and design optimization. The Genetic Algorithm (GA), a powerful optimization technique for functions with multiple extrema in multidimensional search spaces, is applied in conjunction with the complex potential and variational formulation to achieve optimum designs of bolted composite lap joints. The objective of the optimization is to find a design that ensures the highest strength of the joint. The fitness function for the GA optimization is based on the average stress failure criterion, which predicts net-section, shear-out, and bearing failure modes in bolted lap joints. The criterion accounts for the stress distribution in the thickness direction at the bolt location by applying an approach utilizing a beam-on-elastic-foundation formulation.
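A minimal real-coded genetic algorithm of the kind described, with tournament selection, blend crossover and Gaussian mutation, might look like the sketch below. The fitness function is a hypothetical smooth surrogate used only to make the sketch runnable; it is not the average stress failure criterion.

```python
# Minimal genetic-algorithm sketch: real-coded individuals, tournament
# selection, blend crossover, Gaussian mutation. The fitness below is a
# hypothetical surrogate (maximum at x = (1, -2)), not a joint-strength model.

import random

random.seed(0)

def fitness(x):
    # hypothetical function to maximize
    return -(x[0] - 1.0) ** 2 - (x[1] + 2.0) ** 2

def evolve(pop_size=40, generations=60, bounds=(-5.0, 5.0)):
    lo, hi = bounds
    pop = [[random.uniform(lo, hi) for _ in range(2)] for _ in range(pop_size)]
    for _ in range(generations):
        nxt = []
        for _ in range(pop_size):
            p1 = max(random.sample(pop, 2), key=fitness)   # tournament selection
            p2 = max(random.sample(pop, 2), key=fitness)
            w = random.random()                             # blend crossover
            child = [w * u + (1 - w) * v for u, v in zip(p1, p2)]
            # Gaussian mutation, clipped to the design bounds
            child = [min(hi, max(lo, g + random.gauss(0, 0.1))) for g in child]
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)

best = evolve()
print("best design:", best, "fitness:", fitness(best))
```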
Yu, Fang; Chen, Ming-Hui; Kuo, Lynn; Talbott, Heather; Davis, John S
2015-08-07
Recently, Bayesian methods have become more popular for analyzing high-dimensional gene expression data, as they allow us to borrow information across different genes and provide powerful estimators for evaluating gene expression levels. It is crucial to develop a simple but efficient gene selection algorithm for detecting differentially expressed (DE) genes based on the Bayesian estimators. In this paper, by extending the two-criterion idea of Chen et al. (Chen M-H, Ibrahim JG, Chi Y-Y. A new class of mixture models for differential gene expression in DNA microarray data. J Stat Plan Inference. 2008;138:387-404), we propose two new gene selection algorithms for general Bayesian models and name these new methods the confident difference criterion methods. One is based on the standardized differences between two mean expression values among genes; the other adds the differences between two variances to it. The proposed confident difference criterion methods first evaluate the posterior probability of a gene having different gene expressions between competitive samples and then declare a gene to be DE if the posterior probability is large. The theoretical connection between the first proposed method, based on the means, and the Bayes factor approach proposed by Yu et al. (Yu F, Chen M-H, Kuo L. Detecting differentially expressed genes using calibrated Bayes factors. Statistica Sinica. 2008;18:783-802) is established under the normal-normal model with equal variances between two samples. The empirical performance of the proposed methods is examined and compared to those of several existing methods via several simulations. The results from these simulation studies show that the proposed confident difference criterion methods outperform the existing methods when comparing gene expressions across different conditions, for both microarray studies and sequence-based high-throughput studies. A real dataset is used to further demonstrate the proposed methodology.
In the real data application, the confident difference criterion methods successfully identified more clinically important DE genes than the other methods. The confident difference criterion method proposed in this paper provides a new efficient approach for both microarray studies and sequence-based high-throughput studies to identify differentially expressed genes.
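The decision rule of the means-based confident difference criterion, declaring a gene DE when the posterior probability of a large standardized difference is high, can be sketched as follows. The posterior draws here are simulated Gaussians standing in for real MCMC output, and the thresholds are illustrative.

```python
# Sketch of a confident-difference-style decision rule: a gene is declared
# differentially expressed (DE) when the posterior probability that its
# standardized mean difference exceeds a threshold is large. Posterior
# draws below are simulated, standing in for real MCMC output.

import random

random.seed(42)

def confident_difference(post_d, threshold=1.0, prob_cut=0.95):
    """post_d: posterior draws of the standardized difference for one gene."""
    p = sum(abs(d) > threshold for d in post_d) / len(post_d)
    return p >= prob_cut, p

# Hypothetical genes: one with a clear expression shift, one centered at zero
de_gene = [random.gauss(2.0, 0.3) for _ in range(4000)]
null_gene = [random.gauss(0.0, 0.3) for _ in range(4000)]
for name, draws in (("gene A", de_gene), ("gene B", null_gene)):
    flag, p = confident_difference(draws)
    print(f"{name}: P(|d| > 1) = {p:.3f} -> DE = {flag}")
```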
Meisters, Julia; Diedenhofen, Birk; Musch, Jochen
2018-04-20
For decades, sequential lineups have been considered superior to simultaneous lineups in the context of eyewitness identification. However, most of the research leading to this conclusion was based on the analysis of diagnosticity ratios that do not control for the respondent's response criterion. Recent research based on the analysis of ROC curves has found either equal discriminability for sequential and simultaneous lineups, or higher discriminability for simultaneous lineups. Some evidence for potential position effects and for criterion shifts in sequential lineups has also been reported. Using ROC curve analysis, we investigated the effects of the suspect's position on discriminability and response criteria in both simultaneous and sequential lineups. We found that sequential lineups suffered from an unwanted position effect. Respondents employed a strict criterion for the earliest lineup positions, and shifted to a more liberal criterion for later positions. No position effects and no criterion shifts were observed in simultaneous lineups. This result suggests that sequential lineups are not superior to simultaneous lineups, and may give rise to unwanted position effects that have to be considered when conducting police lineups.
Meissner, Christian A; Tredoux, Colin G; Parker, Janat F; MacLin, Otto H
2005-07-01
Many eyewitness researchers have argued for the application of a sequential alternative to the traditional simultaneous lineup, given its role in decreasing false identifications of innocent suspects (sequential superiority effect). However, Ebbesen and Flowe (2002) have recently noted that sequential lineups may merely bring about a shift in response criterion, having no effect on discrimination accuracy. We explored this claim, using a method that allows signal detection theory measures to be collected from eyewitnesses. In three experiments, lineup type was factorially combined with conditions expected to influence response criterion and/or discrimination accuracy. Results were consistent with signal detection theory predictions, including that of a conservative criterion shift with the sequential presentation of lineups. In a fourth experiment, we explored the phenomenological basis for the criterion shift, using the remember-know-guess procedure. In accord with previous research, the criterion shift in sequential lineups was associated with a reduction in familiarity-based responding. It is proposed that the relative similarity between lineup members may create a context in which fluency-based processing is facilitated to a greater extent when lineup members are presented simultaneously.
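The signal detection theory measures underlying both lineup abstracts, discriminability d' and response criterion c, are computed from hit and false-alarm rates; a conservative criterion shift raises c while leaving d' essentially unchanged. A minimal sketch with invented rates:

```python
# Sketch of the signal detection measures behind the lineup analyses:
# d' = z(H) - z(FA), c = -(z(H) + z(FA)) / 2, where z is the inverse
# standard-normal CDF. The rates below are invented, not experimental data.

from statistics import NormalDist

def sdt_measures(hit_rate, fa_rate):
    z = NormalDist().inv_cdf
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

# A conservative criterion shift: similar d', lower hit AND false-alarm rates
d1, c1 = sdt_measures(0.69, 0.31)   # simultaneous-like, neutral criterion
d2, c2 = sdt_measures(0.50, 0.16)   # sequential-like, more conservative
print(f"d'={d1:.2f}, c={c1:.2f}  vs  d'={d2:.2f}, c={c2:.2f}")
```

With these rates the discriminability is nearly identical in the two conditions while c moves upward, which is the pattern described as a criterion shift without a change in discrimination accuracy.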
Irreducible Green's functions method for a quantum dot coupled to metallic and superconducting leads
NASA Astrophysics Data System (ADS)
Górski, Grzegorz; Kucab, Krzysztof
2017-05-01
Using the irreducible Green's functions (IGF) method, we analyse the Coulomb interaction dependence of the spectral functions and transport properties of a quantum dot coupled to an isotropic superconductor and a metallic lead (SC-QD-N). The IGF method is a modification of the classical equation-of-motion technique. The IGF scheme is based on differentiation of double-time Green's functions with respect to both the primary and secondary times. The IGF method allows one to obtain the spectral functions for the equilibrium and non-equilibrium impurity Anderson model used for the SC-QD-N system. Through numerical computations, we show how the spectral and anomalous densities change under the influence of the Coulomb interactions. The observed sign change of the anomalous spectral density can be used as a criterion for the SC singlet-Kondo singlet transition.
Salloum, Alison; Scheeringa, Michael S.; Cohen, Judith A.; Storch, Eric A.
2014-01-01
Background In order to develop Stepped Care Trauma-Focused Cognitive Behavioral Therapy (TF-CBT), a definition of early response/non-response is needed to guide decisions about the need for subsequent treatment. Objective The purpose of this article is (1) to establish a criterion defining an early indicator of response/non-response to the first step within Stepped Care TF-CBT, and (2) to explore the preliminary clinical utility of the early response/non-response criterion. Method Data from two studies were used: (1) treatment outcome data from a clinical trial in which 17 young children (ages 3 to 6 years) received therapist-directed CBT for posttraumatic stress symptoms (PTSS) were examined to empirically establish the number of PTSS defining early treatment response/non-response; and (2) three case examples with young children in Stepped Care TF-CBT were used to explore the utility of the treatment response criterion. Results For defining responder status, an algorithm of either 3 or fewer PTSS on a clinician-rated measure or being below the clinical cutoff score on a parent-rated measure of childhood PTSS, together with being rated as improved, much improved or free of symptoms, functioned well for determining whether or not to step up to more intensive treatment. Case examples demonstrated how the criterion was used to guide subsequent treatment, and that responder status after Step One may or may not be aligned with parent preference. Conclusion Although further investigation is needed, the responder status criterion for young children used after Step One of Stepped Care TF-CBT appears promising. PMID:25663796
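The responder-status algorithm described in the Results can be expressed as a small decision function. The cutoff score and the rating labels below are placeholders, not the study's actual instrument values:

```python
# Sketch of the early responder-status rule described above: a child is an
# early responder if (3 or fewer clinician-rated PTSS, OR below the parent-
# report clinical cutoff) AND rated improved or better. The cutoff value
# and rating labels are hypothetical placeholders.

PARENT_CUTOFF = 36  # hypothetical clinical cutoff score on the parent measure
IMPROVED = {"improved", "much improved", "free of symptoms"}

def is_early_responder(clinician_ptss, parent_score, clinician_rating):
    symptom_ok = clinician_ptss <= 3 or parent_score < PARENT_CUTOFF
    return symptom_ok and clinician_rating in IMPROVED

print(is_early_responder(2, 40, "much improved"))   # responder
print(is_early_responder(5, 38, "improved"))        # step up to Step Two
```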
Validation of radiocarpal joint contact models based on images from a clinical MRI scanner.
Johnson, Joshua E; McIff, Terence E; Lee, Phil; Toby, E Bruce; Fischer, Kenneth J
2014-01-01
This study was undertaken to assess magnetic resonance imaging (MRI)-based radiocarpal surface contact models of functional loading in a clinical MRI scanner for future in vivo studies, by comparison with experimental measures from three cadaver forearm specimens. Experimental data were acquired using a Tekscan sensor during simulated light grasp. Magnetic resonance (MR) images were used to obtain model geometry and kinematics (image registration). Peak contact pressures (PPs), average contact pressures (APs), contact forces and contact areas were determined in the radiolunate and radioscaphoid joints. Contact area was also measured directly from MR images acquired under load and compared with the model data. Based on the validation criterion (within 25% of experimental data), of the six articulations (three specimens with two articulations each), two met the criterion for AP (0%, 14%); one for peak pressure (20%); one for contact force (5%); four for contact area with respect to experiment (8%, 13%, 19% and 23%); and three contact areas met the criterion with respect to direct measurements (14%, 21% and 21%). Absolute differences between model and experimental PPs were reasonably low (within 2.5 MPa). Overall, the results indicate that MRI-based models generated from a 3T clinical MR scanner appear sufficient to obtain clinically relevant data.
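The validation rule applied above, agreement within 25% of the experimental value, is a simple relative-difference check; a sketch with invented pressure values:

```python
# Sketch of the 25% validation rule: a model quantity meets the criterion
# when it lies within 25% of the experimental measurement.
# The pressure pairs below are invented, not the study's data.

def meets_criterion(model, experiment, tol=0.25):
    return abs(model - experiment) / abs(experiment) <= tol

# Hypothetical (model, experiment) contact pressures in MPa
pairs = [(2.4, 2.0), (1.0, 1.0), (3.0, 2.0)]
print([meets_criterion(m, e) for m, e in pairs])
```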
Variable selection with stepwise and best subset approaches
2016-01-01
While purposeful selection is performed partly by software and partly by hand, the stepwise and best subset approaches are performed automatically by software. Two R functions, stepAIC() and bestglm(), are well designed for stepwise and best subset regression, respectively. The stepAIC() function begins with a full or null model, and the method for stepwise regression can be specified in the direction argument with character values “forward”, “backward” and “both”. The bestglm() function begins with a data frame containing explanatory variables and response variables; the response variable should be in the last column. A variety of goodness-of-fit criteria can be specified in the IC argument. The Bayesian information criterion (BIC) usually results in a more parsimonious model than the Akaike information criterion (AIC). PMID:27162786
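The forward search performed by stepAIC() can be sketched directly in Python: start from the null model and greedily add the predictor that most lowers AIC, stopping when no addition improves it. The data here are synthetic, and OLS/AIC are computed by hand rather than by the R functions:

```python
# Sketch of forward stepwise selection by AIC (the idea behind R's stepAIC
# with direction = "forward"). OLS fits and the Gaussian AIC are computed
# directly with numpy on synthetic data.

import numpy as np

rng = np.random.default_rng(0)

def aic_ols(X, y):
    n = len(y)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = np.sum((y - X @ beta) ** 2)
    k = X.shape[1] + 1                      # coefficients + error variance
    return n * np.log(rss / n) + 2 * k

n = 200
X = rng.normal(size=(n, 4))                 # candidate predictors x0..x3
y = 2.0 * X[:, 0] - 1.5 * X[:, 2] + rng.normal(scale=0.5, size=n)

selected, remaining = [], list(range(4))
intercept = np.ones((n, 1))
best_aic = aic_ols(intercept, y)            # AIC of the null model
improved = True
while improved and remaining:
    improved = False
    scores = {j: aic_ols(np.column_stack([intercept] +
                                         [X[:, [i]] for i in selected + [j]]), y)
              for j in remaining}
    j_best = min(scores, key=scores.get)
    if scores[j_best] < best_aic:           # keep the addition only if AIC drops
        best_aic = scores[j_best]
        selected.append(j_best)
        remaining.remove(j_best)
        improved = True
print("selected predictors:", sorted(selected))
```

As the abstract notes, swapping the AIC penalty 2k for the BIC penalty k·log(n) would typically select a more parsimonious model.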
ERIC Educational Resources Information Center
Livingstone, Holly A.; Day, Arla L.
2005-01-01
Despite the popularity of the concept of emotional intelligence(EI), there is much controversy around its definition, measurement, and validity. Therefore, the authors examined the construct and criterion-related validity of an ability-based EI measure (Mayer Salovey Caruso Emotional Intelligence Test [MSCEIT]) and a mixed-model EI measure…
ERIC Educational Resources Information Center
Tibbetts, Katherine A.; And Others
This paper describes the development of a criterion-referenced, performance-based measure of third grade reading comprehension. The primary purpose of the assessment is to contribute unique and valid information for use in the formative evaluation of a whole literacy program. A secondary purpose is to supplement other program efforts to…
Neuropathological diagnostic criteria for Alzheimer's disease.
Murayama, Shigeo; Saito, Yuko
2004-09-01
Neuropathological diagnostic criteria for Alzheimer's disease (AD) are based on tau-related pathology: neurofibrillary tangles (NFT) or neuritic plaques (NP). The Consortium to Establish a Registry for Alzheimer's Disease (CERAD) criterion evaluates the highest density of neocortical NP from 0 (none) to C (abundant). Clinical documentation of dementia together with NP stage A in younger cases, B in young-old cases and C in older cases fulfils the CERAD criterion for AD. The CERAD criterion is most frequently used in clinical outcome studies because of its inclusion of clinical information. Braak and Braak's criterion evaluates the density and distribution of NFT and classifies them into stages: I/II, entorhinal; III/IV, limbic; and V/VI, neocortical. These three stages correspond to normal cognition, cognitive impairment and dementia, respectively. As Braak's criterion is based on morphological evaluation of the brain alone, it is usually adopted in the research setting. The criterion of the National Institute on Aging and the Ronald and Nancy Reagan Institute of the Alzheimer's Association combines these two criteria and categorizes cases into NFT V/VI with NP C, NFT III/IV with NP B, and NFT I/II with NP A, corresponding to high, middle and low probability of AD, respectively. As most AD cases in the aged population are categorized into Braak tangle stage IV and CERAD stage C, the usefulness of this criterion has not yet been determined. The combination of Braak's NFT stage equal to or above IV and Braak's senile plaque stage C arguably provides the highest sensitivity and specificity. In the future, the criteria should include in vivo dynamic neuropathological data, including 3D MRI, PET scans and CSF biomarkers, as well as more sensitive and specific immunohistochemical and immunochemical grading of AD.
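The combined NIA-Reagan categorization described above maps the pair (Braak NFT stage, CERAD NP score) to a likelihood level; a sketch, with stage encodings simplified for illustration:

```python
# Sketch of the combined NIA-Reagan-style categorization described above:
# (Braak NFT stage, CERAD NP score) -> probability that dementia is due to
# AD. Stage encodings (integers for I-VI) are simplified for illustration.

def ad_likelihood(braak_stage, cerad_score):
    """braak_stage in 1..6 (I-VI), cerad_score in {'A', 'B', 'C'}."""
    if braak_stage >= 5 and cerad_score == "C":
        return "high"
    if braak_stage in (3, 4) and cerad_score == "B":
        return "middle"
    if braak_stage in (1, 2) and cerad_score == "A":
        return "low"
    return "unclassified"   # combinations outside the three canonical bins

print(ad_likelihood(6, "C"))
print(ad_likelihood(3, "B"))
```

The fall-through "unclassified" branch reflects the abstract's point that many real cases (e.g. Braak IV with CERAD C) do not fit the three canonical bins, which is why the criterion's usefulness remains debated.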
DOE Office of Scientific and Technical Information (OSTI.GOV)
Baumgartner, S.; Bieli, R.; Bergmann, U. C.
2012-07-01
An overview is given of existing CPR design criteria and the methods used in BWR reload analysis to evaluate the impact of channel bow on CPR margins. Potential weaknesses in today's methodologies are discussed. Westinghouse, in collaboration with KKL and Axpo (operator and owner of the Leibstadt NPP), has developed an optimized CPR methodology based on a new criterion to protect against dryout during normal operation and with a more rigorous treatment of channel bow. The new steady-state criterion is expressed in terms of an upper limit of 0.01 for the dryout failure probability per year. This is considered a meaningful and appropriate criterion that can be directly related to the probabilistic criteria set up for the analyses of Anticipated Operational Occurrences (AOOs) and accidents. In the Monte Carlo approach, a statistical modeling of channel bow and an accurate evaluation of CPR response functions allow the associated CPR penalties to be included directly in the plant SLMCPR and OLMCPR in a best-estimate manner. In this way, the treatment of channel bow is equivalent to that of all other uncertainties affecting CPR. Emphasis is put on quantifying the statistical distribution of channel bow throughout the core using measurement data. The optimized CPR methodology has been implemented in the Westinghouse Monte Carlo code, McSLAP. The methodology improves the quality of dryout safety assessments by supplying more valuable information and better control of conservatisms in establishing operational limits for CPR. The methodology is demonstrated with application examples from its introduction at KKL. (authors)
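The Monte Carlo logic, sampling the uncertain inputs (including a channel-bow distribution), evaluating a CPR response for each trial and estimating the dryout probability as the failing fraction, can be sketched as follows. The response model, distributions and limit below are invented placeholders, not the McSLAP models or plant values.

```python
# Sketch of a Monte Carlo estimate of a dryout failure probability: sample
# the uncertain inputs, evaluate a CPR response per trial, count trials
# below the limit. All models and numbers are invented placeholders.

import random

random.seed(7)

def trial_cpr(base_cpr=1.06):
    bow = random.gauss(0.0, 1.0)            # channel bow (mm), assumed normal
    other = random.gauss(0.0, 0.02)         # remaining CPR uncertainties, combined
    return base_cpr - 0.03 * abs(bow) + other

N = 100_000
failures = sum(trial_cpr() < 1.0 for _ in range(N))
print(f"estimated dryout probability: {failures / N:.5f}")
```

The estimated fraction would then be compared against a probabilistic limit such as the 0.01-per-year criterion described above.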
Criterion-based laparoscopic training reduces total training time.
Brinkman, Willem M; Buzink, Sonja N; Alevizos, Leonidas; de Hingh, Ignace H J T; Jakimowicz, Jack J
2012-04-01
The benefits of criterion-based laparoscopic training over time-oriented training are unclear. The purpose of this study is to compare these two types of training in terms of training outcome and time efficiency. During four training sessions within 1 week (one session per day), 34 medical interns (with no laparoscopic experience) practiced two basic tasks on the Simbionix LAP Mentor virtual-reality (VR) simulator: 'clipping and grasping' and 'cutting'. Group C (criterion-based, N = 17) trained to reach predefined criteria and stopped training in each session when these criteria were met, with a maximum training time of 1 h. Group T (time-based, N = 17) trained for a fixed time of 1 h each session. Retention of skills was assessed 1 week after training. In addition, transferability of skills was established using the Haptica ProMIS augmented-reality simulator. Both groups improved their performance significantly over the course of the training sessions (Wilcoxon signed ranks, P < 0.05). Both groups showed skill transferability and skill retention. When comparing the performance parameters of group C and group T, their performances in the first, last and retention sessions did not differ significantly (Mann-Whitney U test, P > 0.05). The average number of repetitions needed to meet the criteria also did not differ between the groups. Overall, group C spent less time training on the simulator than group T (74:48 versus 120:10 min; P < 0.001). Group C performed significantly fewer repetitions of each task, overall and in sessions 2, 3 and 4. Criterion-based training of basic laparoscopic skills can reduce the overall training time with no impact on training outcome, transferability or retention of skills. Criterion-based training should therefore be the method of choice in laparoscopic skills curricula.
Calibration of DEM parameters on shear test experiments using Kriging method
NASA Astrophysics Data System (ADS)
Bednarek, Xavier; Martin, Sylvain; Ndiaye, Abibatou; Peres, Véronique; Bonnefoy, Olivier
2017-06-01
Calibration of powder-mixing simulations using the Discrete Element Method (DEM) is still an issue. Achieving good agreement with experimental results is difficult because time-efficient use of DEM involves strong assumptions. This work presents a methodology to calibrate DEM parameters using the Efficient Global Optimization (EGO) algorithm, which is based on the Kriging interpolation method. Classical shear test experiments are used as calibration experiments. The calibration is performed on two parameters: the Young modulus and the friction coefficient. Determining the minimal number of grains that has to be used is a critical step: simulating too few grains would not represent the realistic behavior of the powder, while simulating a very large number of grains would be strongly time-consuming. The optimization goal is the minimization of the objective function, defined as the distance between the simulated and measured behaviors. The EGO algorithm maximizes the Expected Improvement criterion to find the next point to be simulated. This stochastic criterion exploits the two quantities provided by the Kriging method (the prediction of the objective function and the estimate of its error) and is thus able to quantify the improvement in the minimization that new simulations at specified DEM parameters would yield.
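The Expected Improvement criterion maximized by EGO has the closed form EI = (f_min - μ)Φ(z) + σφ(z) with z = (f_min - μ)/σ, where μ and σ are the Kriging prediction and its error estimate at a candidate point and f_min is the best objective value simulated so far. A minimal sketch for minimization, with illustrative numbers:

```python
# Sketch of the Expected Improvement criterion used by EGO (minimization):
# EI = (f_min - mu) * Phi(z) + sigma * phi(z), z = (f_min - mu) / sigma,
# where mu, sigma come from the Kriging model. Inputs below are invented.

import math

def expected_improvement(mu, sigma, f_min):
    if sigma <= 0.0:
        return max(f_min - mu, 0.0)         # no model uncertainty at this point
    z = (f_min - mu) / sigma
    Phi = 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))   # standard normal CDF
    phi = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)  # standard normal PDF
    return (f_min - mu) * Phi + sigma * phi

f_min = 1.0   # hypothetical best simulated objective value so far
# A candidate predicted slightly worse but very uncertain can score higher
# EI than one predicted equal to the best but with little uncertainty:
print(expected_improvement(mu=1.2, sigma=0.5, f_min=f_min))
print(expected_improvement(mu=1.0, sigma=0.05, f_min=f_min))
```

This is exactly the exploration/exploitation balance the abstract describes: large predicted improvement or large model error both raise EI.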
Prince, Martin J; de Rodriguez, Juan Llibre; Noriega, L; Lopez, A; Acosta, Daisy; Albanese, Emiliano; Arizaga, Raul; Copeland, John RM; Dewey, Michael; Ferri, Cleusa P; Guerra, Mariella; Huang, Yueqin; Jacob, KS; Krishnamoorthy, ES; McKeigue, Paul; Sousa, Renata; Stewart, Robert J; Salas, Aquiles; Sosa, Ana Luisa; Uwakwa, Richard
2008-01-01
Background The criterion for dementia implicit in DSM-IV is widely used in research but not fully operationalised. The 10/66 Dementia Research Group sought to do this using assessments from their one-phase dementia diagnostic research interview, and to validate the resulting algorithm in a population-based study in Cuba. Methods The criterion was operationalised as a computerised algorithm, applying clinical principles, based upon the 10/66 cognitive tests, clinical interview and informant reports: the Community Screening Instrument for Dementia, the CERAD 10-word list learning and animal naming tests, the Geriatric Mental State, and the History and Aetiology Schedule – Dementia Diagnosis and Subtype. This was validated in Cuba against a local clinician DSM-IV diagnosis and the 10/66 dementia diagnosis (originally calibrated probabilistically against clinician DSM-IV diagnoses in the 10/66 pilot study). Results The DSM-IV sub-criteria were plausibly distributed among clinically diagnosed dementia cases and controls. The clinician diagnoses agreed better with the 10/66 dementia diagnosis than with the more conservative computerised DSM-IV algorithm. The DSM-IV algorithm was particularly likely to miss less severe dementia cases. Those with a 10/66 dementia diagnosis who did not meet the DSM-IV criterion were less cognitively and functionally impaired than the DSM-IV-confirmed cases, but still grossly impaired compared with those free of dementia. Conclusion The DSM-IV criterion, strictly applied, defines a narrow category of unambiguous dementia characterised by marked impairment. It may be specific but incompletely sensitive to clinically relevant cases. The 10/66 dementia diagnosis defines a broader category that may be more sensitive, identifying genuine cases beyond those defined by our DSM-IV algorithm, with relevance to the estimation of the population burden of this disorder. PMID:18577205
A. Dennis Lemly; Joseph P. Skorupa
2007-01-01
The US Environmental Protection Agency is developing a national water quality criterion for selenium that is based on concentrations of the element in fish tissue. Although this approach offers advantages over the current water-based regulations, it also presents new challenges with respect to implementation. A comprehensive protocol that answers the "what, where, and...
DOE Office of Scientific and Technical Information (OSTI.GOV)
Minjarez-Sosa, J. Adolfo, E-mail: aminjare@gauss.mat.uson.mx; Luque-Vasquez, Fernando
This paper deals with two person zero-sum semi-Markov games with a possibly unbounded payoff function, under a discounted payoff criterion. Assuming that the distribution of the holding times H is unknown for one of the players, we combine suitable methods of statistical estimation of H with control procedures to construct an asymptotically discount optimal pair of strategies.
Critical mass flux for flaming ignition of dead, dry wood as a function of external radiant heat flux
Sara McAllister; Mark Finney; Jack Cohen
2010-01-01
Extreme weather often contributes to crown fires, where the fire spreads from one tree crown to the next as a series of piloted ignitions. An important aspect in predicting crown fires is understanding the ignition of fuel particles. The ignition criterion considered in this work is the critical mass flux criterion - that a sufficient amount of pyrolysis gases must be...
Model selection for multi-component frailty models.
Ha, Il Do; Lee, Youngjo; MacKenzie, Gilbert
2007-11-20
Various frailty models have been developed and are now widely used for analysing multivariate survival data. It is therefore important to develop an information criterion for model selection. However, in frailty models there are several alternative ways of forming a criterion and the particular criterion chosen may not be uniformly best. In this paper, we study an Akaike information criterion (AIC) on selecting a frailty structure from a set of (possibly) non-nested frailty models. We propose two new AIC criteria, based on a conditional likelihood and an extended restricted likelihood (ERL) given by Lee and Nelder (J. R. Statist. Soc. B 1996; 58:619-678). We compare their performance using well-known practical examples and demonstrate that the two criteria may yield rather different results. A simulation study shows that the AIC based on the ERL is recommended, when attention is focussed on selecting the frailty structure rather than the fixed effects.
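The penalized-likelihood trade-off behind such criteria can be sketched as follows. This is a minimal illustration of AIC-based selection among candidate model structures, with hypothetical log-likelihoods and parameter counts chosen for the example, not values or structures from the paper:

```python
def aic(log_likelihood, n_params):
    """Akaike information criterion: lower values indicate a better
    trade-off between goodness of fit and model complexity."""
    return -2.0 * log_likelihood + 2.0 * n_params

# Hypothetical maximized log-likelihoods for three candidate structures
candidates = {
    "shared":      {"loglik": -512.3, "k": 4},
    "nested":      {"loglik": -508.9, "k": 6},
    "independent": {"loglik": -515.1, "k": 3},
}

scores = {name: aic(m["loglik"], m["k"]) for name, m in candidates.items()}
best = min(scores, key=scores.get)  # structure with the lowest AIC
```

Under these illustrative numbers the better-fitting but more complex "nested" structure wins, because its likelihood gain outweighs the 2-per-parameter penalty; which likelihood (conditional or extended restricted) to plug in is exactly the question the paper studies.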
NASA Astrophysics Data System (ADS)
Ning, Fangkun; Jia, Weitao; Hou, Jian; Chen, Xingrui; Le, Qichi
2018-05-01
Various fracture criteria, in particular the Johnson and Cook (J-C) model and the (normalized) Cockcroft and Latham (C-L) criterion, were contrasted and discussed. Based on the normalized C-L criterion adopted in this paper, FE simulation was carried out, and hot rolling experiments were conducted over a temperature range of 200 °C–350 °C, rolling reduction rates of 25%–40% and rolling speeds of 7–21 r/min. The microstructure was observed by optical microscopy, and the damage values from the simulation results were compared with the crack lengths obtained under the various parameter combinations. The results show that the plate rolled at 350 °C, 40% reduction and 14 r/min generated fewer edge cracks, and its microstructure exhibited slight shear bands and fine dynamically recrystallized grains. An edge-crack prediction model was obtained by combining the criterion with the Zener-Hollomon equation and the deformation activation energy.
Analyses of S-Box in Image Encryption Applications Based on Fuzzy Decision Making Criterion
NASA Astrophysics Data System (ADS)
Rehman, Inayatur; Shah, Tariq; Hussain, Iqtadar
2014-06-01
In this manuscript, we put forward a standard based on a fuzzy decision-making criterion to examine current substitution boxes and study their strengths and weaknesses in order to decide their appropriateness for image encryption applications. The proposed standard utilizes the results of correlation analysis, entropy analysis, contrast analysis, homogeneity analysis, energy analysis, and mean of absolute deviation analysis. These analyses are applied to well-known substitution boxes. The outcomes of these analyses are then further examined, and a fuzzy soft set decision-making criterion is used to decide the suitability of an S-box for image encryption applications.
The evolution of the intergalactic medium and the origin of the galaxy luminosity function
NASA Technical Reports Server (NTRS)
Valls-Gabaud, David; Blanchard, Alain; Mamon, Gary
1993-01-01
The coupling of the Press and Schechter prescription with the CDM scenario and the Hoyle-Rees-Ostriker cooling criterion leads to a galaxy formation scenario in which galaxies are overproduced by a large factor. Although star formation might be suppressed in the smaller halos, a large amount of energy per galactic mass is needed to account for the present number density of galaxies. The evolution of the intergalactic medium (IGM) provides a simple criterion to prevent galaxy formation without requiring feedback, since halos with small virial temperatures are not able to retain the infalling hot gas of the IGM. If the ionizing background has decreased since z is approximately 1 - 2, then this criterion explains the slope of the luminosity function at the faint end. In addition, this scenario predicts two populations of dwarf galaxies, well differentiated in age, gas content, stellar populations, and clustering properties, which can be identified with dE and dIm galaxies.
Energy Approach-Based Simulation of Structural Materials High-Cycle Fatigue
NASA Astrophysics Data System (ADS)
Balayev, A. F.; Korolev, A. V.; Kochetkov, A. V.; Sklyarova, A. I.; Zakharov, O. V.
2016-02-01
The paper describes the mechanism of micro-crack development in solid structural materials based on the theory of brittle fracture. A probability function of the energy distribution of material cracks is obtained using a probabilistic approach. The paper states the energy conditions for crack growth under high-cycle loading of the material. A formula for calculating the amount of energy absorbed during crack growth is given. The paper proposes a high-cycle fatigue evaluation criterion for determining the maximum permissible number of loading cycles of a solid body, beyond which micro-cracks start growing rapidly up to destruction.
Li, Yun
2017-01-01
We address the fusion estimation problem for nonlinear multisensor systems. Based on the Gauss–Hermite approximation and the weighted least squares criterion, an augmented high-dimension measurement from all sensors is compressed into a lower dimension. By combining the low-dimension measurement function with the particle filter (PF), a weighted measurement fusion PF (WMF-PF) is presented. WMF-PF shows good accuracy at a lower computational cost than centralized fusion PF (CF-PF). An example is given to show the effectiveness of the proposed algorithms. PMID:28956862
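The core compression step rests on the weighted least squares criterion. Below is a minimal linear sketch of that idea only (the paper's method handles nonlinear measurement functions via the Gauss–Hermite approximation, which is not reproduced here); the sensor geometry and noise values are invented for illustration:

```python
import numpy as np

def compress_measurements(H, z, R):
    """Compress a stacked measurement z = H x + v, v ~ N(0, R), from all
    sensors into a lower-dimension pseudo-measurement via the weighted
    least squares criterion. Returns the fused value and its covariance."""
    W = np.linalg.inv(R)          # weight matrix = inverse noise covariance
    info = H.T @ W @ H            # information matrix
    P = np.linalg.inv(info)       # covariance of the compressed measurement
    z_fused = P @ H.T @ W @ z     # WLS estimate of x from all sensors
    return z_fused, P

# Toy example: three sensors each directly observing one scalar state
H = np.ones((3, 1))
R = np.diag([1.0, 4.0, 0.25])     # per-sensor noise variances
z = np.array([2.1, 1.5, 2.05])    # the three raw readings
x_fused, P = compress_measurements(H, z, R)
```

The three readings collapse to a single scalar weighted toward the most precise sensor, which is what lets the subsequent particle filter work in the compressed measurement space.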
New Stopping Criteria for Segmenting DNA Sequences
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Wentian
2001-06-18
We propose a solution to the problem of choosing a stopping criterion when segmenting inhomogeneous DNA sequences with complex statistical patterns. The new stopping criterion is based on the Bayesian information criterion within the model selection framework. When this criterion is applied to the telomere of S. cerevisiae and the complete sequence of E. coli, borders of biologically meaningful units were identified, and a more reasonable number of domains was obtained. We also introduce a measure called segmentation strength, which can be used to control the delineation of large domains. The relationship between the average domain size and the threshold of segmentation strength is determined for several genome sequences.
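A minimal sketch of BIC-gated binary segmentation of this kind is given below. The penalty form (half the extra parameter count times log n) and the toy sequence are illustrative assumptions, not the paper's exact formulation:

```python
import math
from collections import Counter

def log_likelihood(seq):
    """Multinomial log-likelihood of a sequence under its own
    empirical symbol frequencies."""
    n = len(seq)
    return sum(c * math.log(c / n) for c in Counter(seq).values())

def best_split(seq, alphabet_size):
    """Find the best 2-segment split point and its BIC gain over keeping
    the sequence whole; a split would be accepted only if gain > 0."""
    n = len(seq)
    whole = log_likelihood(seq)
    # assumed penalty: extra frequency parameters plus the border position
    penalty = 0.5 * alphabet_size * math.log(n)
    gain, pos = float("-inf"), None
    for i in range(1, n):
        delta = log_likelihood(seq[:i]) + log_likelihood(seq[i:]) - whole - penalty
        if delta > gain:
            gain, pos = delta, i
    return gain, pos

# Toy "DNA": a GC-rich domain followed by an AT-rich domain
seq = "GCGCGGCGCG" * 5 + "ATATTATATA" * 5
gain, pos = best_split(seq, alphabet_size=4)
```

On this toy sequence the criterion places the border exactly at the domain boundary with a large positive gain; applied recursively until no split clears the penalty, this is the stopping behavior the abstract describes.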
Why is auditory frequency weighting so important in regulation of underwater noise?
Tougaard, Jakob; Dähne, Michael
2017-10-01
A key question related to regulating noise from pile driving, air guns, and sonars is how to take into account the hearing abilities of different animals by means of auditory frequency weighting. Recordings of pile driving sounds, both in the presence and absence of a bubble curtain, were evaluated against recent thresholds for temporary threshold shift (TTS) for harbor porpoises by means of four different weighting functions. The assessed effectiveness, expressed as time until TTS, depended strongly on the choice of weighting function: 2 orders of magnitude larger for an audiogram-weighted TTS criterion than for an unweighted criterion, highlighting the importance of selecting the right frequency weighting.
Garcia, Darren J.; Skadberg, Rebecca M.; Schmidt, Megan; ...
2018-03-05
The Diagnostic and Statistical Manual of Mental Disorders (5th ed. [DSM–5]; American Psychiatric Association, 2013) Section III Alternative Model for Personality Disorders (AMPD) represents a novel approach to the diagnosis of personality disorder (PD). In this model, PD diagnosis requires evaluation of level of impairment in personality functioning (Criterion A) and characterization by pathological traits (Criterion B). Questions about clinical utility, complexity, and difficulty in learning and using the AMPD have been expressed in recent scholarly literature. We examined the learnability, interrater reliability, and clinical utility of the AMPD using a vignette methodology and graduate student raters. Results showed that student clinicians can learn Criterion A of the AMPD to a high level of interrater reliability and agreement with expert ratings. Interrater reliability of the 25 trait facets of the AMPD varied but showed overall acceptable levels of agreement. Examination of severity indexes of PD impairment showed the level of personality functioning (LPF) added information beyond that of global assessment of functioning (GAF). Clinical utility ratings were generally strong. Lastly, the satisfactory interrater reliability of components of the AMPD indicates the model, including the LPF, is very learnable.
Personality Subtypes of Suicidal Adults
Westen, Drew; Bradley, Rebekah
2009-01-01
Research into personality factors related to suicidality suggests substantial variability among suicide attempters. A potentially useful approach that accounts for this complexity is personality subtyping. As part of a large sample looking at personality pathology, this study used Q-factor analysis to identify subtypes of 311 adult suicide attempters using SWAP-II personality profiles. Identified subtypes included Internalizing, Emotionally Dysregulated, Dependent, Hostile-Isolated, Psychopathic, and Anxious-Somatizing. Subtypes differed in hypothesized ways on criterion variables that address their construct validity, including adaptive functioning, Axis I and II comorbidity, and etiology-related variables (e.g., history of abuse). Furthermore, dimensional ratings of the subtypes predicted adaptive functioning above DSM-based diagnoses and symptoms. PMID:19752649
McBride, Orla; Adamson, Gary; Bunting, Brendan P; McCann, Siobhan
2009-01-01
Research has demonstrated that diagnostic orphans (i.e. individuals who experience only one to two criteria of DSM-IV alcohol dependence) can encounter significant health problems. Using the SF-12v2, this study examined the general health functioning of alcohol users, and in particular, diagnostic orphans. Current drinkers (n = 26,913) in the National Epidemiologic Survey on Alcohol and Related Conditions were categorized into five diagnosis groups: no alcohol use disorder (no-AUD), one-criterion orphans, two-criterion orphans, alcohol abuse and alcohol dependence. Latent variable modelling was used to assess the associations between the physical and mental health factors of the SF-12v2 and the diagnosis groups and a variety of background variables. In terms of mental health, one-criterion orphans had significantly better health than two-criterion orphans and the dependence group, but poorer health than the no-AUD group. No significant differences were evident between the one-criterion orphan group and the alcohol abuse group. One-criterion orphans had significantly poorer physical health when compared to the no-AUD group. One- and two-criterion orphans did not differ in relation to physical health. Consistent with previous research, diagnostic orphans in the current study appear to have experienced clinically relevant symptoms of alcohol dependence. The current findings suggest that diagnostic orphans may form part of an alcohol use disorders spectrum severity.
Disagreement between Parent and Adolescent Reports of Functional Impairment
ERIC Educational Resources Information Center
Kramer, Teresa L.; Phillips, Susan D.; Hargis, Michael B.; Miller, Terri L.; Burns, Barbara J.; Robbins, James M.
2004-01-01
Objective: Adolescents' functional impairment has become increasingly important as a criterion for diagnosis and service eligibility as well as a target of therapeutic intervention in mental health settings. This study examines three critical issues in measuring functioning: 1) agreement between parent and adolescent reports of functioning, 2)…
ERIC Educational Resources Information Center
Anselmo, Giancarlo A.; Yarbrough, Jamie L.; Kovaleski, Joseph F.; Tran, Vi N.
2017-01-01
This study analyzed the relationship between benchmark scores from two curriculum-based measurement probes in mathematics (M-CBM) and student performance on a state-mandated high-stakes test. Participants were 298 students enrolled in grades 7 and 8 in a rural southeastern school. Specifically, we calculated the criterion-related and predictive…
Hao, Xu; Yujun, Sun; Xinjie, Wang; Jin, Wang; Yao, Fu
2015-01-01
A multiple linear model was developed for the individual tree crown width of Cunninghamia lanceolata (Lamb.) Hook in Fujian province, southeast China. Data were obtained from 55 sample plots of pure China-fir plantation stands. Ordinary least squares (OLS) regression was used to establish the crown width model. To adjust for correlations between observations from the same sample plots, we developed one-level linear mixed-effects (LME) models based on the multiple linear model, which take into account the random effects of plots. The best random-effects combinations for the LME models were determined by the Akaike information criterion, the Bayesian information criterion and the −2 log-likelihood. Heteroscedasticity was reduced by three residual variance functions: the power function, the exponential function and the constant-plus-power function. The spatial correlation was modeled by three correlation structures: the first-order autoregressive structure [AR(1)], a combination of first-order autoregressive and moving average structures [ARMA(1,1)], and the compound symmetry structure (CS). The LME model was then compared to the multiple linear model using the absolute mean residual (AMR), the root mean square error (RMSE), and the adjusted coefficient of determination (adj-R²). For individual tree crown width models, the one-level LME model showed the best performance. An independent dataset was used to test the performance of the models and to demonstrate the advantage of calibrating LME models.
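The three comparison statistics named above have standard definitions, sketched below with invented crown-width numbers (the study's data are not reproduced); the adj-R² form shown is the usual sample-size- and parameter-adjusted one:

```python
import numpy as np

def fit_statistics(y_obs, y_pred, n_params):
    """Model-comparison statistics: absolute mean residual (AMR),
    root mean square error (RMSE), and adjusted R-squared."""
    y_obs = np.asarray(y_obs, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    n = len(y_obs)
    resid = y_obs - y_pred
    amr = np.mean(np.abs(resid))
    rmse = np.sqrt(np.mean(resid ** 2))
    ss_res = np.sum(resid ** 2)
    ss_tot = np.sum((y_obs - y_obs.mean()) ** 2)
    # penalize R^2 for the number of fitted parameters
    adj_r2 = 1.0 - (ss_res / (n - n_params - 1)) / (ss_tot / (n - 1))
    return amr, rmse, adj_r2

# Hypothetical crown widths (m): observed vs. predicted by a fitted model
y_obs  = [2.1, 2.8, 3.5, 4.0, 4.6, 5.2]
y_pred = [2.0, 2.9, 3.4, 4.2, 4.5, 5.3]
amr, rmse, adj_r2 = fit_statistics(y_obs, y_pred, n_params=2)
```

Lower AMR and RMSE and higher adj-R² indicate the better model, which is the basis on which the one-level LME model outperformed the OLS model.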
Comparison of case note review methods for evaluating quality and safety in health care.
Hutchinson, A; Coster, J E; Cooper, K L; McIntosh, A; Walters, S J; Bath, P A; Pearson, M; Young, T A; Rantell, K; Campbell, M J; Ratcliffe, J
2010-02-01
To determine which of two methods of case note review--holistic (implicit) and criterion-based (explicit)--provides the most useful and reliable information for quality and safety of care, and the level of agreement within and between groups of health-care professionals when they use the two methods to review the same record. To explore the process-outcome relationship between holistic and criterion-based quality-of-care measures and hospital-level outcome indicators. Case notes of patients at randomly selected hospitals in England. In the first part of the study, retrospective multiple reviews of 684 case notes were undertaken at nine acute hospitals using both holistic and criterion-based review methods. Quality-of-care measures included evidence-based review criteria and a quality-of-care rating scale. Textual commentary on the quality of care was provided as a component of holistic review. Review teams comprised combinations of: doctors (n = 16), specialist nurses (n = 10) and clinically trained audit staff (n = 3) and non-clinical audit staff (n = 9). In the second part of the study, process (quality and safety) of care data were collected from the case notes of 1565 people with either chronic obstructive pulmonary disease (COPD) or heart failure in 20 hospitals. Doctors collected criterion-based data from case notes and used implicit review methods to derive textual comments on the quality of care provided and score the care overall. Data were analysed for intrarater consistency, inter-rater reliability between pairs of staff using intraclass correlation coefficients (ICCs) and completeness of criterion data capture, and comparisons were made within and between staff groups and between review methods. To explore the process-outcome relationship, a range of publicly available health-care indicator data were used as proxy outcomes in a multilevel analysis. Overall, 1473 holistic and 1389 criterion-based reviews were undertaken in the first part of the study. 
When same staff-type reviewer pairs/groups reviewed the same record, holistic scale score inter-rater reliability was moderate within each of the three staff groups [intraclass correlation coefficient (ICC) 0.46-0.52], and inter-rater reliability for criterion-based scores was moderate to good (ICC 0.61-0.88). When different staff-type pairs/groups reviewed the same record, agreement between the reviewer pairs/groups was weak to moderate for overall care (ICC 0.24-0.43). Comparison of holistic review score and criterion-based score of case notes reviewed by doctors and by non-clinical audit staff showed a reasonable level of agreement (p-values for difference 0.406 and 0.223, respectively), although results from all three staff types showed no overall level of agreement (p-value for difference 0.057). Detailed qualitative analysis of the textual data indicated that the three staff types tended to provide different forms of commentary on quality of care, although there was some overlap between some groups. In the process-outcome study there generally were high criterion-based scores for all hospitals, whereas there was more interhospital variation between the holistic review overall scale scores. Textual commentary on the quality of care verified the holistic scale scores. Differences among hospitals with regard to the relationship between mortality and quality of care were not statistically significant. Using the holistic approach, the three groups of staff appeared to interpret the recorded care differently when they each reviewed the same record. When the same clinical record was reviewed by doctors and non-clinical audit staff, there was no significant difference between the assessments of quality of care generated by the two groups. All three staff groups performed reasonably well when using criterion-based review, although the quality and type of information provided by doctors was of greater value. 
Therefore, when measuring quality of care from case notes, consideration needs to be given to the method of review, the type of staff undertaking the review, and the methods of analysis available to the review team. Review can be enhanced using a combination of both criterion-based and structured holistic methods with textual commentary, and variation in quality of care can best be identified from a combination of holistic scale scores and textual data review.
20 CFR 404.1569 - Listing of Medical-Vocational Guidelines in appendix 2.
Code of Federal Regulations, 2011 CFR
2011-04-01
... factors and residual functional capacity is not the same as the corresponding criterion of a rule. In... national economy. Appendix 2 provides rules using this data reflecting major functional and vocational...
A Thomistic defense of whole-brain death.
Eberl, Jason T
2015-08-01
Michel Accad critiques the currently accepted whole-brain criterion for determining the death of a human being from a Thomistic metaphysical perspective and, in so doing, raises objections to a particular argument defending the whole-brain criterion by Patrick Lee and Germain Grisez. In this paper, I will respond to Accad's critique of the whole-brain criterion and defend its continued validity as a criterion for determining when a human being's death has occurred in accord with Thomistic metaphysical principles. I will, however, join Accad in criticizing Lee and Grisez's proposed defense of the whole-brain criterion as potentially leading to erroneous conclusions regarding the determination of human death. Lay summary: Catholic physicians and bioethicists currently debate the legally accepted clinical standard for determining when a human being has died (known as the "whole-brain criterion"), which has also been morally affirmed by the Magisterium. This paper responds to physician Michel Accad's critique of the whole-brain criterion based upon St. Thomas Aquinas's metaphysical account of human nature as a union of a rational soul and a material body. I defend the whole-brain criterion from the same Thomistic philosophical perspective, while agreeing with Accad's objection to an alternative Thomistic defense of whole-brain death by philosophers Patrick Lee and Germain Grisez.
Carballeira, C; Ramos-Gómez, J; Martín-Díaz, L; DelValls, T A
2012-06-01
Standard toxicity screening tests are useful tools in the management of impacted coastal ecosystems. To our knowledge, this is the first time that the sea urchin embryo development test has been used to evaluate the potential impact of effluents from land-based aquaculture farms in coastal areas. The toxicity of effluents from 8 land-based turbot farms was determined by calculating the percentage of abnormal larvae, according to two criteria: (a) standard, considering as normal pyramid-shaped larvae with differentiated components, and (b) skeletal, a new criterion that considers detailed skeletal characteristics. The skeletal criterion appeared to be more sensitive and enabled calculation of effective concentrations EC(5), EC(10), EC(20) and EC(50), unlike the classical criterion. Inclusion of the skeleton criterion in the sea urchin embryo development test may be useful for categorizing the relatively low toxicity of discharges from land-based marine fish farms. Further studies are encouraged to establish any causative relationships between pollutants and specific larval deformities.
Chen, Liang-Hsuan; Hsueh, Chan-Ching
2007-06-01
Fuzzy regression models are useful to investigate the relationship between explanatory and response variables with fuzzy observations. Different from previous studies, this correspondence proposes a mathematical programming method to construct a fuzzy regression model based on a distance criterion. The objective of the mathematical programming is to minimize the sum of distances between the estimated and observed responses on the X axis, such that the fuzzy regression model constructed has the minimal total estimation error in distance. Only several alpha-cuts of fuzzy observations are needed as inputs to the mathematical programming model; therefore, the applications are not restricted to triangular fuzzy numbers. Three examples, adopted in the previous studies, and a larger example, modified from the crisp case, are used to illustrate the performance of the proposed approach. The results indicate that the proposed model has better performance than those in the previous studies based on either distance criterion or Kim and Bishu's criterion. In addition, the efficiency and effectiveness for solving the larger example by the proposed model are also satisfactory.
Cox Regression Models with Functional Covariates for Survival Data.
Gellar, Jonathan E; Colantuoni, Elizabeth; Needham, Dale M; Crainiceanu, Ciprian M
2015-06-01
We extend the Cox proportional hazards model to cases when the exposure is a densely sampled functional process, measured at baseline. The fundamental idea is to combine penalized signal regression with methods developed for mixed effects proportional hazards models. The model is fit by maximizing the penalized partial likelihood, with smoothing parameters estimated by a likelihood-based criterion such as AIC or EPIC. The model may be extended to allow for multiple functional predictors, time varying coefficients, and missing or unequally-spaced data. Methods were inspired by and applied to a study of the association between time to death after hospital discharge and daily measures of disease severity collected in the intensive care unit, among survivors of acute respiratory distress syndrome.
Stratiform and Convective Rain Discrimination from Microwave Radiometer Observations
NASA Technical Reports Server (NTRS)
Prabhakara, C.; Cadeddu, M.; Short, D. A.; Weinman, J. A.; Schols, J. L.; Haferman, J.
1997-01-01
A criterion based on SSM/I observations is developed to discriminate rain into convective and stratiform types. This criterion depends on the microwave polarization properties of the flat melting snow particles that fall slowly in stratiform clouds. Utilizing this criterion and some spatial and temporal characteristics of hydrometeors in the TOGA-COARE area revealed by shipborne radars, we have developed an algorithm to retrieve convective and stratiform rain rates from SSM/I data.
34 CFR 489.5 - What definitions apply?
Code of Federal Regulations, 2010 CFR
2010-07-01
..., DEPARTMENT OF EDUCATION FUNCTIONAL LITERACY FOR STATE AND LOCAL PRISONERS PROGRAM General § 489.5 What...— Functional literacy means at least an eighth grade equivalence, or a functional criterion score, on a nationally recognized literacy assessment. Local correctional agency means any agency of local government...
NASA Technical Reports Server (NTRS)
Ricks, Wendell R.; Abbott, Kathy H.
1987-01-01
A traditional programming technique for controlling the display of optional flight information in a civil transport cockpit is compared with a rule-based technique for the same function. This application required complex decision logic and a frequently modified rule base. The techniques are evaluated for execution efficiency and ease of implementation: execution efficiency is measured as the total number of steps required to isolate the hypotheses that were true, and implementability is judged by ease of modification and verification and by explanation capability. The traditional program is more efficient than the rule-based program; however, the rule-based programming technique is more suitable for improving programmer productivity.
How much is enough? Examining frequency criteria for NSSI disorder in adolescent inpatients.
Muehlenkamp, Jennifer J; Brausch, Amy M; Washburn, Jason J
2017-06-01
To empirically evaluate the diagnostic relevance of the proposed Diagnostic and Statistical Manual of Mental Disorders (5th ed.; DSM-5; APA, 2013) Criterion-A frequency threshold for nonsuicidal self-injury (NSSI) disorder. Archival, de-identified, self-reported clinical assessment data from 746 adolescent psychiatric patients (mean age = 14.97; 88% female; 76% White) were used. The sample was randomly split into 2 unique samples for data analyses. Measures included assessments of NSSI, proposed DSM-5 NSSI-disorder criteria, psychopathology, dysfunction, distress, functional impairment, and suicidality. Discriminant-function analyses run with Sample A identified a significant differentiation of groups based on a frequency of NSSI at 25 or more days in the past year, Λ = .814, χ²(54) = 72.59, p < .05, canonical R² = .36. This cutoff was replicated in the second sample. All patients were coded into 1 of 3 empirically derived NSSI-frequency cutoff groups: high (≥25 days), moderate (5-24 days), and low (1-4 days), and the groups were compared. The high-NSSI group scored higher on most NSSI features, including the proposed DSM-5 Criterion-B and Criterion-C symptoms, depression, psychotic symptoms, substance abuse, borderline personality-disorder features, suicidal ideation, and suicide plans, than the moderate- and low-NSSI groups, who did not differ from each other on many of the variables. The currently proposed DSM-5 Criterion-A frequency threshold for NSSI disorder lacks validity and clinical utility. The field needs to consider raising the frequency threshold to ensure that a meaningful and valid set of diagnostic criteria is established, and to avoid overpathologizing individuals who infrequently engage in NSSI.
Urey prize lecture: On the diversity of plausible planetary systems
NASA Technical Reports Server (NTRS)
Lissauer, J. J.
1995-01-01
Models of planet formation and of the orbital stability of planetary systems are used to predict the variety of planetary and satellite systems that may be present within our galaxy. A new approximate global criterion for the orbital stability of planetary systems, based on an extension of the local resonance overlap criterion, is proposed. This criterion implies that at least some of Uranus' small inner moons are significantly less massive than predicted by estimates based on Voyager volumes and densities assumed to equal that of Miranda. Simple calculations (neglecting planetary gravity) suggest that giant planets which accrete substantial amounts of gas while their envelopes are extremely distended ultimately rotate rapidly in the prograde direction.
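Orbital stability criteria of this family are typically phrased in terms of orbital separation measured in mutual Hill radii. The sketch below shows the simplest such test, the analytic two-planet Hill stability limit of about 2√3 mutual Hill radii, not the paper's extended resonance-overlap criterion; the planet masses and spacings are invented for illustration:

```python
def mutual_hill_radius(a1, a2, m1, m2, m_star):
    """Mutual Hill radius of an adjacent planet pair (semimajor axes
    a1 < a2; planet and stellar masses in the same units)."""
    return 0.5 * (a1 + a2) * ((m1 + m2) / (3.0 * m_star)) ** (1.0 / 3.0)

def hill_stable_pair(a1, a2, m1, m2, m_star, k=2.0 * 3.0 ** 0.5):
    """Approximate stability test: the pair is Hill stable when its
    separation exceeds k mutual Hill radii. k = 2*sqrt(3) is the analytic
    two-planet limit; multi-planet systems generally need larger k."""
    return (a2 - a1) > k * mutual_hill_radius(a1, a2, m1, m2, m_star)

# Two Jupiter-mass planets around a solar-mass star (a in AU, m in M_sun)
mj = 9.54e-4
stable_wide = hill_stable_pair(1.0, 1.4, mj, mj, 1.0)    # widely spaced
stable_tight = hill_stable_pair(1.0, 1.05, mj, mj, 1.0)  # tightly packed
```

The widely spaced pair passes the test while the tightly packed one fails, illustrating how such a criterion bounds the mass of closely spaced bodies like Uranus' inner moons once their spacing is known.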
Does Extended Preoperative Rehabilitation Influence Outcomes 2 Years After ACL Reconstruction?
Failla, Mathew J.; Logerstedt, David S.; Grindem, Hege; Axe, Michael J.; Risberg, May Arna; Engebretsen, Lars; Huston, Laura J.; Spindler, Kurt P.; Snyder-Mackler, Lynn
2017-01-01
Background Rehabilitation before anterior cruciate ligament (ACL) reconstruction (ACLR) is effective at improving postoperative outcomes, at least in the short term. Less is known about the effects of preoperative rehabilitation on functional outcomes and return-to-sport (RTS) rates 2 years after reconstruction. Purpose/Hypothesis The purpose of this study was to compare functional outcomes 2 years after ACLR between a cohort that underwent extended preoperative rehabilitation, including progressive strengthening and neuromuscular training after impairments were resolved, and a cohort that did not. We hypothesized that the cohort treated with extended preoperative rehabilitation would have superior functional outcomes 2 years after ACLR. Study Design Cohort study; Level of evidence, 3. Methods This study compared outcomes after an ACL rupture in an international cohort (Delaware-Oslo ACL Cohort [DOC]) treated with extended preoperative rehabilitation, including neuromuscular training, to data from the Multicenter Orthopaedic Outcomes Network (MOON) cohort, which did not undergo extended preoperative rehabilitation. Inclusion and exclusion criteria from the DOC were applied to the MOON database to extract a homogeneous sample for comparison. Patients achieved knee impairment resolution before ACLR, and postoperative rehabilitation followed each cohort's respective criterion-based protocol. Patients completed the International Knee Documentation Committee (IKDC) subjective knee form and Knee injury and Osteoarthritis Outcome Score (KOOS) at enrollment and again 2 years after ACLR. RTS rates were calculated for each cohort at 2 years. Results After adjusting for baseline IKDC and KOOS scores, the DOC patients showed significant and clinically meaningful differences in IKDC and KOOS scores 2 years after ACLR. A significantly higher (P < .001) percentage of DOC patients returned to preinjury sports (72%) compared with those in the MOON cohort (63%).
Conclusion The cohort treated with extended preoperative rehabilitation consisting of progressive strengthening and neuromuscular training, followed by a criterion-based postoperative rehabilitation program, had better functional outcomes and RTS rates 2 years after ACLR. Preoperative rehabilitation should be considered as an addition to the standard of care to maximize functional outcomes after ACLR. PMID:27416993
Brittle failure of rock: A review and general linear criterion
NASA Astrophysics Data System (ADS)
Labuz, Joseph F.; Zeng, Feitao; Makhnenko, Roman; Li, Yuan
2018-07-01
A failure criterion is typically phenomenological, since few models exist from which the mathematical function can be derived theoretically. Indeed, a successful failure criterion is a generalization of experimental data obtained from strength tests on specimens subjected to known stress states. For isotropic rock that exhibits a pressure dependence of strength, a popular failure criterion is a linear equation in the major and minor principal stresses, independent of the intermediate principal stress. A general linear failure criterion called Paul-Mohr-Coulomb (PMC) contains all three principal stresses with three material constants: friction angles for axisymmetric compression ϕc and extension ϕe and the isotropic tensile strength V0. PMC provides a framework to describe a nonlinear failure surface by a set of planes "hugging" the curved surface. Brittle failure of rock is reviewed and multiaxial test methods are summarized. Equations are presented to implement PMC for fitting strength data and determining the three material parameters. A piecewise linear approximation to a nonlinear failure surface is illustrated by fitting two planes with six material parameters to form either a 6- to 12-sided pyramid or a 6- to 12- to 6-sided pyramid. The particular form of the failure surface is dictated by the experimental data.
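As a purely illustrative sketch, a PMC-style plane in principal-stress space and the piecewise-linear "hugging" construction can be evaluated as below. The mapping from (ϕc, ϕe, V0) to the plane constants A, B, C is omitted, and all numeric constants used are hypothetical placeholders, not fitted rock parameters:

```python
def pmc_plane_value(sigma1, sigma2, sigma3, A, B, C):
    """Evaluate a linear PMC-style failure plane A*s1 + B*s2 + C*s3 - 1.

    A value >= 0 means the stress state lies at or beyond this plane.
    A, B, C encode the two friction angles and the isotropic tensile
    strength; that parameter mapping is omitted in this sketch.
    """
    return A * sigma1 + B * sigma2 + C * sigma3 - 1.0

def piecewise_failure(stresses, planes):
    """Piecewise-linear surface: failure is governed by the most critical
    plane, i.e. the maximum plane value over the set of fitted planes."""
    return max(pmc_plane_value(*stresses, *p) for p in planes)
```

With two or more planes, the max over planes traces the multi-sided pyramid described in the abstract.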
NASA Astrophysics Data System (ADS)
Park, Ju H.; Kwon, O. M.
In this letter, the global asymptotic stability of bidirectional associative memory (BAM) neural networks with delays is investigated. The delay is assumed to be time-varying and to belong to a given interval. A novel stability criterion is presented based on the Lyapunov method. The criterion is expressed in terms of a linear matrix inequality (LMI), which can be solved easily by various optimization algorithms. Two numerical examples are given to show the effectiveness of the new result.
A Multi-Objective Decision Making Approach for Solving the Image Segmentation Fusion Problem.
Khelifi, Lazhar; Mignotte, Max
2017-08-01
Image segmentation fusion is defined as the set of methods which aim at merging several image segmentations in a manner that takes full advantage of the complementarity of each one. Previous research in this field has been impeded by the difficulty of identifying an appropriate single segmentation fusion criterion providing the best possible, i.e., the most informative, result of fusion. In this paper, we propose a new model of image segmentation fusion based on multi-objective optimization which can mitigate this problem, to obtain a final improved result of segmentation. Our fusion framework incorporates the dominance concept in order to efficiently combine and optimize two complementary segmentation criteria, namely, the global consistency error and the F-measure (precision-recall) criterion. To this end, we present a hierarchical and efficient way to optimize the multi-objective consensus energy function related to this fusion model, which exploits a simple and deterministic iterative relaxation strategy combining the different image segments. This step is followed by a decision-making task based on the so-called "technique for order preference by similarity to ideal solution" (TOPSIS). Results obtained on two publicly available databases with manual ground-truth segmentations clearly show that our multi-objective energy-based model gives better results than the classical mono-objective one.
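The TOPSIS decision step can be illustrated with a minimal sketch: candidate solutions are scored on each criterion, and the one closest to the ideal point (and farthest from the anti-ideal) is preferred. This is not the authors' implementation; the vector normalization and equal criterion weighting below are illustrative assumptions:

```python
import math

def topsis_rank(scores, benefit=(True, True)):
    """Rank alternatives by TOPSIS closeness to the ideal solution.

    scores: list of per-alternative criterion tuples.
    benefit: per-criterion flag; True means larger values are better.
    Returns (indices sorted best-first, closeness coefficients).
    """
    n_crit = len(scores[0])
    # Vector-normalize each criterion column.
    norms = [math.sqrt(sum(row[j] ** 2 for row in scores)) or 1.0
             for j in range(n_crit)]
    normed = [[row[j] / norms[j] for j in range(n_crit)] for row in scores]
    # Ideal (best) and anti-ideal (worst) points per criterion.
    ideal = [max(col) if benefit[j] else min(col)
             for j, col in enumerate(zip(*normed))]
    anti = [min(col) if benefit[j] else max(col)
            for j, col in enumerate(zip(*normed))]
    closeness = []
    for row in normed:
        d_pos = math.dist(row, ideal)   # distance to ideal point
        d_neg = math.dist(row, anti)    # distance to anti-ideal point
        closeness.append(d_neg / (d_pos + d_neg))
    # Higher closeness = nearer the ideal solution.
    return sorted(range(len(scores)), key=lambda i: -closeness[i]), closeness
```

In the fusion setting, each alternative would be a candidate consensus segmentation scored on the two complementary criteria.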
Mayorga-Vega, Daniel; Bocanegra-Parrilla, Raúl; Ornelas, Martha; Viciana, Jesús
2016-01-01
The main purpose of the present meta-analysis was to examine the criterion-related validity of the distance- and time-based walk/run tests for estimating cardiorespiratory fitness among apparently healthy children and adults. Relevant studies were searched from seven electronic bibliographic databases up to August 2015 and through other sources. The Hunter-Schmidt psychometric meta-analysis approach was conducted to estimate the population criterion-related validity of the following walk/run tests: 5,000 m, 3 miles, 2 miles, 3,000 m, 1.5 miles, 1 mile, 1,000 m, ½ mile, 600 m, 600 yd, ¼ mile, 15 min, 12 min, 9 min, and 6 min. From the 123 included studies, a total of 200 correlation values were analyzed. The overall results showed that the criterion-related validity of the walk/run tests for estimating maximum oxygen uptake ranged from low to moderate (rp = 0.42-0.79), with the 1.5 mile (rp = 0.79, 0.73-0.85) and 12 min walk/run tests (rp = 0.78, 0.72-0.83) showing the highest criterion-related validity among the distance- and time-based field tests, respectively. The present meta-analysis also showed that sex, age and maximum oxygen uptake level do not seem to affect the criterion-related validity of the walk/run tests. When the evaluation of an individual's maximum oxygen uptake attained during a laboratory test is not feasible, the 1.5 mile and 12 min walk/run tests represent useful alternatives for estimating cardiorespiratory fitness. As in the assessment with any physical fitness field test, evaluators must be aware that the performance score of the walk/run field tests is simply an estimation and not a direct measure of cardiorespiratory fitness.
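The core aggregation step of the Hunter-Schmidt approach, a sample-size-weighted mean of the validity coefficients, can be sketched as follows. The full method also corrects for statistical artifacts such as measurement error and range restriction, which this minimal sketch omits:

```python
def hunter_schmidt_mean_r(studies):
    """Bare-bones Hunter-Schmidt aggregation step.

    studies: list of (n, r) pairs, one validity coefficient per study,
    where n is the study sample size and r the observed correlation.
    Returns (sample-size-weighted mean r, weighted observed variance).
    """
    total_n = sum(n for n, _ in studies)
    mean_r = sum(n * r for n, r in studies) / total_n
    var_r = sum(n * (r - mean_r) ** 2 for n, r in studies) / total_n
    return mean_r, var_r
```

Larger studies thus pull the pooled estimate toward their coefficients, which is the rationale for weighting by n rather than averaging the r values directly.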
An error criterion for determining sampling rates in closed-loop control systems
NASA Technical Reports Server (NTRS)
Brecher, S. M.
1972-01-01
The determination of an error criterion which will give a sampling rate for adequate performance of linear, time-invariant closed-loop, discrete-data control systems was studied. The proper modelling of the closed-loop control system for characterization of the error behavior, and the determination of an absolute error definition for performance of the two commonly used holding devices are discussed. The definition of an adequate relative error criterion as a function of the sampling rate and the parameters characterizing the system is established along with the determination of sampling rates. The validity of the expressions for the sampling interval was confirmed by computer simulations. Their application solves the problem of making a first choice in the selection of sampling rates.
Avoidance symptoms and assessment of posttraumatic stress disorder in Arab immigrant women.
Norris, Anne E; Aroian, Karen J
2008-10-01
This study investigates whether the avoidance symptom criterion required for a Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition (DSM-IV; American Psychiatric Association, 1994) diagnosis of posttraumatic stress disorder (PTSD) is overly conservative. Arab immigrant women (N = 453), many of whom reported experiencing multiple traumatic events, completed the Posttraumatic Diagnostic Scale in Arabic as part of a face-to-face interview. Analyses indicated that all but one avoidance symptom was reported less frequently than reexperiencing and arousal symptoms. However, those who fully met the reexperiencing, avoidance, and arousal symptom criteria had worse symptom severity and functioning than those who fully met the reexperiencing and arousal symptom criteria but only partially met the avoidance symptom criterion. Study findings support the importance of the PTSD avoidance symptom criterion.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Antonova, A. O., E-mail: aoantonova@mail.ru; Savyolova, T. I.
2016-05-15
A two-dimensional mathematical model of a polycrystalline sample and an experiment on electron backscattering diffraction (EBSD) is considered. The measurement parameters are taken to be the scanning step and the threshold grain-boundary angle. Discrete pole figures for materials with hexagonal symmetry have been calculated based on the results of the model experiment. Discrete and smoothed (by the kernel method) pole figures of the model sample and the samples in the model experiment are compared using the χ2 homogeneity criterion, an estimate of the pole figure maximum and its coordinate, the deviation of the pole figures of the model experiment from those of the sample in the space L1 of measurable functions, and the RP criterion for estimating the pole figure errors. It is shown that the problem of calculating pole figures is ill-posed and that their determination is not reliable with respect to the measurement parameters.
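The abstract does not give the exact form of the χ2 homogeneity criterion used; a standard two-sample χ2 homogeneity statistic over binned pole-figure intensities might look like the sketch below, where the binning and pooling scheme are assumptions:

```python
def chi2_homogeneity(counts_a, counts_b):
    """Two-sample chi-square homogeneity statistic over matched bins.

    counts_a, counts_b: integer counts per bin for the two distributions
    (e.g. two discrete pole figures on the same angular grid). Expected
    counts come from the pooled proportions under the homogeneity null.
    """
    total_a, total_b = sum(counts_a), sum(counts_b)
    grand = total_a + total_b
    chi2 = 0.0
    for a, b in zip(counts_a, counts_b):
        col = a + b
        if col == 0:
            continue  # empty bin contributes nothing
        exp_a = total_a * col / grand
        exp_b = total_b * col / grand
        chi2 += (a - exp_a) ** 2 / exp_a + (b - exp_b) ** 2 / exp_b
    return chi2
```

A value near zero indicates the two pole figures are statistically indistinguishable on that grid.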
A game theoretic approach to a finite-time disturbance attenuation problem
NASA Technical Reports Server (NTRS)
Rhee, Ihnseok; Speyer, Jason L.
1991-01-01
A disturbance attenuation problem over a finite-time interval is considered by a game theoretic approach where the control, restricted to a function of the measurement history, plays against adversaries composed of the process and measurement disturbances, and the initial state. A zero-sum game, formulated as a quadratic cost criterion subject to linear time-varying dynamics and measurements, is solved by a calculus of variations technique. By first maximizing the quadratic cost criterion with respect to the process disturbance and initial state, a full information game between the control and the measurement residual subject to the estimator dynamics results. The resulting solution produces an n-dimensional compensator which expresses the controller as a linear combination of the measurement history. A disturbance attenuation problem is solved based on the results of the game problem. For time-invariant systems it is shown that under certain conditions the time-varying controller becomes time-invariant on the infinite-time interval. The resulting controller satisfies an H∞ norm bound.
NASA Astrophysics Data System (ADS)
Thiébaut, E.; Goupil, C.; Pesty, F.; D'Angelo, Y.; Guegan, G.; Lecoeur, P.
2017-12-01
Increasing the maximum cooling effect of a Peltier cooler can be achieved through material and device design. The use of inhomogeneous, functionally graded materials may be adopted in order to increase maximum cooling without improvement of the ZT (figure of merit); however, such systems are usually based on the assumption that local optimization of the ZT is the suitable criterion to increase thermoelectric performance. We solve the heat equation in a graded material and perform both analytical and numerical analysis of a graded Peltier cooler. We find a local criterion that we use to assess the possible improvement of graded materials for thermoelectric cooling. A fair improvement of the cooling effect (up to 36%) is predicted for semiconductor materials, and the best graded system for cooling is described. The influence of the equation of state of the electronic gas of the material is discussed, and the difference in terms of entropy production between the graded and the classical system is also described.
Allergic constitution theory of Chinese medicine and its assessment criterion and related studies.
Wang, Ji; Wang, Ting; Li, Ying-shuai; Li, Ling-ru; Zheng, Yan-fei; Wang, Qi
2015-09-01
Constitution plays an important role in the occurrence, development, and transformation of diseases. The occurrence of allergic diseases is mainly caused by patients' disordered physiological function and regulation, in addition to their exposure to external allergens, and it reflects susceptibility and hypersensitivity to allergens. The current study expresses the concept of the allergic constitution from the perspective of Chinese medicine (CM) and presents an assessment criterion for the allergic constitution. In addition, the distribution of the allergic constitution in the population, its contributing factors, and its relation to health-related quality of life (HRQOL) were investigated. The HRQOL scores of the allergic constitution were found to be lower than those of the Pinghe (balanced) constitution. A study of the gene expression profile of the allergic constitution identified characteristic up-regulated and down-regulated genes. Finally, a CM drug was researched and developed to improve the allergic constitution. Based on clinical trials and animal experiments, CM is found to have good regulatory effects on the allergic constitution.
Beehler, Sarah; Ahern, Jennifer; Balmer, Brandi; Kuhlman, Jennifer
2017-01-01
This pilot study evaluated the validity and reliability of an Experience of Neighborhood (EON) measure developed to assess neighborhood characteristics that shape reintegration opportunities for returning service members and their families. A total of 91 post-9/11 veterans and spouses completed a survey administered at the Minnesota State Fair. Participants self-reported on their reintegration status (veterans), social functioning (spouses), social support, and mental health. EON factor structure, internal consistency reliability, and validity (discriminant, content, criterion) were analyzed. The EON measure showed adequate reliability, discriminant validity, and content validity. More work is needed to assess criterion validity because EON scores were not correlated with scores on a Census-based index used to measure quality of military neighborhoods. The EON may be useful in assessing broad local factors influencing health among returning veterans and spouses. More research is needed to understand geographic variation in neighborhood conditions and how those affect reintegration and mental health for military families.
A critical evaluation of theories for predicting microcracking in composite laminates
NASA Technical Reports Server (NTRS)
Nairn, John A.; Hu, Shoufeng; Bark, Jong S.
1993-01-01
We present experimental results on 21 different layups of Hercules AS4 carbon fiber/3501-6 epoxy laminates. All laminates had 90 deg plies; some had them in the middle, while some had them on a free surface. During tensile loading, the first form of damage in all laminates was microcracking of the 90 deg plies. For each laminate, we recorded both the crack density and the complete distribution of crack spacings as a function of the applied load. By rearranging various microcracking theories, we developed a master-curve approach that permitted plotting the results from all laminates on a single plot. By comparing master-curve plots for different theories, it was possible to critically evaluate the quality of those theories. We found that a critical-energy-release-rate criterion calculated using a 2D variational stress analysis gave the best results. All microcracking theories based on strength-failure criteria gave poor results. All microcracking theories using 1D stress analyses, regardless of the failure criterion, also gave poor results.
Sara McAllister; Mark Finney; Jack Cohen
2010-01-01
Extreme weather often contributes to crown fires, where the fire spreads from one tree crown to the next as a series of piloted ignitions. An important aspect in predicting crown fires is understanding the ignition of fuel particles. The ignition criterion considered in this work is the critical mass flux criterion: that a sufficient amount of pyrolysis gases must be...
Blind equalization with criterion with memory nonlinearity
NASA Astrophysics Data System (ADS)
Chen, Yuanjie; Nikias, Chrysostomos L.; Proakis, John G.
1992-06-01
Blind equalization methods usually combat the linear distortion caused by a nonideal channel via a transversal filter, without resorting to the a priori known training sequences. We introduce a new criterion with memory nonlinearity (CRIMNO) for the blind equalization problem. The basic idea of this criterion is to augment the Godard [or constant modulus algorithm (CMA)] cost function with additional terms that penalize the autocorrelations of the equalizer outputs. Several variations of the CRIMNO algorithms are derived, with the variations dependent on (1) whether the empirical averages or the single point estimates are used to approximate the expectations, (2) whether the recent or the delayed equalizer coefficients are used, and (3) whether the weights applied to the autocorrelation terms are fixed or are allowed to adapt. Simulation experiments show that the CRIMNO algorithm, and especially its adaptive weight version, exhibits faster convergence speed than the Godard (or CMA) algorithm. Extensions of the CRIMNO criterion to accommodate the case of correlated inputs to the channel are also presented.
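The CRIMNO cost can be sketched empirically as the Godard/CMA dispersion term augmented with weighted penalties on the autocorrelations of the equalizer output. The dispersion constant, the number of lags, and the fixed weights below are illustrative choices, not the paper's exact formulation:

```python
def crimno_cost(y, dispersion, weights):
    """Empirical CRIMNO-style cost on a block of equalizer outputs.

    y: equalizer output samples (real or complex).
    dispersion: Godard constant R2 for the CMA term E[(|y|^2 - R2)^2].
    weights: w_k applied to |autocorrelation at lag k|^2, k = 1..len(weights);
    sample averages replace the expectations.
    """
    n = len(y)
    # Godard/CMA term: penalize deviation of |y|^2 from the dispersion constant.
    cma = sum((abs(v) ** 2 - dispersion) ** 2 for v in y) / n
    # Memory-nonlinearity terms: penalize output autocorrelations at each lag.
    penalty = 0.0
    for k, w in enumerate(weights, start=1):
        acf = sum(y[i] * y[i - k].conjugate() for i in range(k, n)) / (n - k)
        penalty += w * abs(acf) ** 2
    return cma + penalty
```

With all weights set to zero the cost reduces to the plain Godard (CMA) criterion; the adaptive-weight variant described in the abstract would update the w_k during equalization.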
Stinchfield, Randy; McCready, John; Turner, Nigel E; Jimenez-Murcia, Susana; Petry, Nancy M; Grant, Jon; Welte, John; Chapman, Heather; Winters, Ken C
2016-09-01
The DSM-5 was published in 2013 and it included two substantive revisions for gambling disorder (GD): the reduction in the diagnostic threshold from five to four criteria and the elimination of the illegal activities criterion. The purpose of this study was twofold: first, to assess the reliability, validity, and classification accuracy of the DSM-5 diagnostic criteria for GD; second, to compare the DSM-5 and DSM-IV on reliability, validity, and classification accuracy, including an examination of the effect of eliminating the illegal acts criterion on diagnostic accuracy. To compare the DSM-5 and DSM-IV, eight datasets from three different countries (Canada, USA, and Spain; total N = 3247) were used. All datasets were based on similar research methods. Participants were recruited from outpatient gambling treatment services to represent the group with a GD and from the community to represent the group without a GD. All participants were administered a standardized measure of diagnostic criteria. The DSM-5 yielded satisfactory reliability, validity, and classification accuracy. In comparing the DSM-5 to the DSM-IV, most comparisons of reliability, validity, and classification accuracy showed more similarities than differences. There was evidence of modest improvements in classification accuracy for the DSM-5 over the DSM-IV, particularly in the reduction of false negative errors. This reduction in false negative errors was largely a function of lowering the cut score from five to four, and this revision is an improvement over the DSM-IV. From a statistical standpoint, eliminating the illegal acts criterion did not make a significant impact on diagnostic accuracy. From a clinical standpoint, illegal acts can still be addressed in the context of the DSM-5 criterion of lying to others.
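The effect of lowering the cut score from five to four criteria on false negatives can be checked with a small sensitivity/specificity sketch. The toy data in the usage below are invented for illustration, not drawn from the study's datasets:

```python
def classification_stats(criterion_counts, has_disorder, cut):
    """Sensitivity and specificity of a criterion-count cut score.

    criterion_counts: number of diagnostic criteria met per person.
    has_disorder: reference-standard diagnosis (True/False) per person.
    cut: minimum count for a positive classification.
    """
    tp = fn = tn = fp = 0
    for count, disordered in zip(criterion_counts, has_disorder):
        positive = count >= cut
        if disordered:
            tp += positive      # correctly flagged
            fn += not positive  # false negative: missed case
        else:
            fp += positive      # false positive
            tn += not positive  # correctly cleared
    return {"sensitivity": tp / (tp + fn), "specificity": tn / (tn + fp)}
```

Lowering the cut from 5 to 4 converts some false negatives into true positives (higher sensitivity), at the possible cost of specificity, which mirrors the trade-off examined in the study.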
Decohesion models informed by first-principles calculations: The ab initio tensile test
NASA Astrophysics Data System (ADS)
Enrique, Raúl A.; Van der Ven, Anton
2017-10-01
Extreme deformation and homogeneous fracture can be readily studied via ab initio methods by subjecting crystals to numerical "tensile tests", where the energies of locally stable crystal configurations corresponding to elongated and fractured states are evaluated by means of density functional calculations. The information obtained can then be used to construct traction curves of cohesive zone models in order to address fracture at the macroscopic scale. In this work, we perform an in-depth analysis of traction curves and how ab initio calculations must be interpreted to rigorously parameterize an atomic-scale cohesive zone model, using crystalline Ag as an example. Our analysis of traction curves reveals the existence of two qualitatively distinct decohesion criteria: (i) an energy criterion whereby the released elastic energy equals the energy cost of creating two new surfaces and (ii) an instability criterion that occurs at a higher and size-independent stress than that of the energy criterion. We find that increasing the size of the simulation cell renders parts of the traction curve inaccessible to ab initio calculations involving the uniform decohesion of the crystal. We also find that the separation distance below which a crack heals is not a material parameter as has been proposed in the past. Finally, we show that a large energy barrier separates the uniformly stressed crystal from the decohered crystal, resolving a paradox predicted by a scaling law based on the energy criterion that implies that large crystals will decohere under vanishingly small stresses. This work clarifies confusion in the literature as to how a cohesive zone model is to be parameterized with ab initio "tensile tests" in the presence of internal relaxations.
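The scaling-law paradox follows from the energy criterion alone: equating the elastic energy released per unit area of a uniformly stressed slab, sigma^2 * L / (2E), with the cost of creating two new surfaces, 2*gamma, gives sigma_c = sqrt(4*E*gamma/L), which vanishes as the crystal size L grows. A worked sketch under these textbook assumptions (the symbols are generic, not the paper's notation):

```python
import math

def energy_criterion_stress(youngs_modulus, surface_energy, length):
    """Critical stress from the energy balance sigma^2 * L / (2E) = 2 * gamma.

    Solving for sigma gives sigma_c = sqrt(4 * E * gamma / L): the 1/sqrt(L)
    scaling that would let arbitrarily large crystals decohere at
    vanishingly small stress if the energy criterion alone governed failure.
    """
    return math.sqrt(4.0 * youngs_modulus * surface_energy / length)
```

The instability criterion discussed in the abstract, by contrast, is size-independent, which is why the barrier between the stressed and decohered states resolves the paradox.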
Evolution of canalizing Boolean networks
NASA Astrophysics Data System (ADS)
Szejka, A.; Drossel, B.
2007-04-01
Boolean networks with canalizing functions are used to model gene regulatory networks. In order to learn how such networks may behave under evolutionary forces, we simulate the evolution of a single Boolean network by means of an adaptive walk, which allows us to explore the fitness landscape. Mutations change the connections and the functions of the nodes. Our fitness criterion is the robustness of the dynamical attractors against small perturbations. We find that with this fitness criterion the global maximum is always reached and that there is a huge neutral space of 100% fitness. Furthermore, in spite of having such a high degree of robustness, the evolved networks still share many features with “chaotic” networks.
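The fitness evaluation can be sketched minimally: a synchronous Boolean network is run to its attractor, a single node state is flipped, and fitness is the fraction of such perturbations after which the dynamics return to the same attractor. The network encoding (truth tables as dicts keyed by input tuples) and the perturbation protocol are simplifying assumptions relative to the paper:

```python
from itertools import product

def step(state, functions, inputs):
    """Synchronous update: each node reads its inputs, applies its Boolean function."""
    return tuple(functions[i][tuple(state[j] for j in inputs[i])]
                 for i in range(len(state)))

def attractor(state, functions, inputs):
    """Iterate the deterministic map until a state repeats; return the cycle."""
    seen = {}
    while state not in seen:
        seen[state] = len(seen)
        state = step(state, functions, inputs)
    cycle_start = seen[state]
    return frozenset(s for s, idx in seen.items() if idx >= cycle_start)

def robustness(functions, inputs, n):
    """Fraction of one-bit flips after which dynamics return to the
    unperturbed attractor -- the fitness used in such adaptive walks."""
    hits = trials = 0
    for state in product((0, 1), repeat=n):
        target = attractor(state, functions, inputs)
        for i in range(n):
            flipped = list(state)
            flipped[i] ^= 1
            trials += 1
            hits += attractor(tuple(flipped), functions, inputs) == target
    return hits / trials
```

An adaptive walk would mutate a connection or truth table, recompute this robustness, and accept the mutation only if fitness does not decrease.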
Vortex Advisory System Safety Analysis : Volume 1. Analytical Model
DOT National Transportation Integrated Search
1978-09-01
The Vortex Advisory System (VAS) is based on wind criterion--when the wind near the runway end is outside of the criterion, all interarrival Instrument Flight Rules (IFR) aircraft separations can be set at 3 nautical miles. Five years of wind data ha...
Vortex Advisory System : Volume 1. Effectiveness for Selected Airports.
DOT National Transportation Integrated Search
1980-05-01
The Vortex Advisory System (VAS) is based on wind criterion--when the wind near the runway end is outside of the criterion, all interarrival Instrument Flight Rules (IFR) aircraft separations can be set at 3 nautical miles. Five years of wind data ha...
20 CFR 220.134 - Medical-vocational guidelines in appendix 2 of this part.
Code of Federal Regulations, 2011 CFR
2011-04-01
... vocational factors and residual functional capacity is not the same as the corresponding criterion of a rule... economy. Appendix 2 of this part provides rules using this data reflecting major functional and vocational...
Criterion-based clinical audit in obstetrics: bridging the quality gap?
Graham, W J
2009-06-01
The Millennium Development Goal 5 - reducing maternal mortality by 75% - is unlikely to be met globally and for the majority of low-income countries. At this time of heightened concern to scale-up services for mothers and babies, it is crucial that not only shortfalls in the quantity of care - in terms of location and financial access - are addressed, but also the quality. Reductions in maternal and perinatal mortality in the immediate term depend in large part on the timely delivery of effective practices in the management of life-threatening complications. Such practices require a functioning health system - including skilled and motivated providers engaged with the women and communities whom they serve. Assuring the quality of this system, the services and the care that women receive requires many inputs, including effective and efficient monitoring mechanisms. The purpose of this article is to summarise the practical steps involved in applying one such mechanism, criterion-based clinical audit (CBCA), and to highlight recent lessons from its application in developing countries. Like all audit tools, the ultimate worth of CBCA relates to the action it stimulates in the health system and among providers.
Multispectral image fusion for illumination-invariant palmprint recognition
Lu, Longbin; Zhang, Xinman; Xu, Xuebin; Shang, Dongpeng
2017-01-01
Multispectral palmprint recognition has shown broad prospects for personal identification due to its high accuracy and great stability. In this paper, we develop a novel illumination-invariant multispectral palmprint recognition method. To combine the information from multiple spectral bands, an image-level fusion framework is completed based on a fast and adaptive bidimensional empirical mode decomposition (FABEMD) and a weighted Fisher criterion. The FABEMD technique decomposes the multispectral images into their bidimensional intrinsic mode functions (BIMFs), on which an illumination compensation operation is performed. The weighted Fisher criterion is to construct the fusion coefficients at the decomposition level, making the images be separated correctly in the fusion space. The image fusion framework has shown strong robustness against illumination variation. In addition, a tensor-based extreme learning machine (TELM) mechanism is presented for feature extraction and classification of two-dimensional (2D) images. In general, this method has fast learning speed and satisfying recognition accuracy. Comprehensive experiments conducted on the PolyU multispectral palmprint database illustrate that the proposed method can achieve favorable results. For the testing under ideal illumination, the recognition accuracy is as high as 99.93%, and the result is 99.50% when the lighting condition is unsatisfied. PMID:28558064
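The weighted Fisher criterion used to set per-band fusion coefficients can be sketched generically as a ratio of between-class separation to within-class scatter, normalized across bands. The paper applies this at the level of the FABEMD decomposition (BIMFs); the simplified one-feature-per-band version below is an illustrative assumption:

```python
def fisher_weight(class_a, class_b):
    """Fisher criterion for one feature: between-class separation over
    within-class scatter. Larger values mean better class separability."""
    mean_a = sum(class_a) / len(class_a)
    mean_b = sum(class_b) / len(class_b)
    var_a = sum((x - mean_a) ** 2 for x in class_a) / len(class_a)
    var_b = sum((x - mean_b) ** 2 for x in class_b) / len(class_b)
    return (mean_a - mean_b) ** 2 / (var_a + var_b)

def fusion_weights(bands):
    """Normalized per-band fusion coefficients from Fisher scores.

    bands: list of (class_a_values, class_b_values) per spectral band;
    bands that separate the classes better receive larger weights.
    """
    scores = [fisher_weight(a, b) for a, b in bands]
    total = sum(scores)
    return [s / total for s in scores]
```

Fusing with these weights emphasizes the spectral bands in which the identities remain well separated despite illumination changes.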
Mixture Rasch model for guessing group identification
NASA Astrophysics Data System (ADS)
Siow, Hoo Leong; Mahdi, Rasidah; Siew, Eng Ling
2013-04-01
Several alternative dichotomous Item Response Theory (IRT) models have been introduced to account for the guessing effect in multiple-choice assessment. The guessing effect in these models has been considered to be item-related. In the most classic case, pseudo-guessing in the three-parameter logistic IRT model is modeled to be the same for all subjects but may vary across items. This is not realistic, because subjects can guess worse or better than the pseudo-guessing level. Derivations from the three-parameter logistic IRT model improve the situation by incorporating ability in guessing; however, they do not model non-monotone functions. This paper proposes to study guessing from a subject-related aspect, namely guessing test-taking behavior. A mixture Rasch model is employed to detect latent groups. A hybrid of the mixture Rasch and three-parameter logistic IRT models is proposed to model behavior-based guessing from the subjects' ways of responding to the items. The guessing subjects are assumed to simply choose a response at random. An information criterion is proposed to identify the behavior-based guessing group. Results show that the proposed model selection criterion provides a promising method to identify the guessing group modeled by the hybrid model.
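The contrast between item-related and subject-related guessing can be made concrete: in the three-parameter logistic model the pseudo-guessing parameter sets an ability-independent floor per item, whereas a pure-guessing subject responds at the chance level regardless of ability. A sketch with illustrative parameter values:

```python
import math

def p_correct_3pl(theta, a, b, c):
    """Three-parameter logistic IRT: probability of a correct response.

    theta: subject ability; a: item discrimination; b: item difficulty;
    c: pseudo-guessing parameter (the floor as ability -> -infinity).
    """
    return c + (1.0 - c) / (1.0 + math.exp(-a * (theta - b)))

def p_random_guess(n_options):
    """A pure-guessing subject picks uniformly among the options, so the
    response probability is flat in ability (the latent guessing class)."""
    return 1.0 / n_options
```

The hybrid model in the abstract would mix a Rasch-type curve for the measurement class with this flat chance probability for the guessing class.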
Kojima, Motohiro; Shimazaki, Hideyuki; Iwaya, Keiichi; Kage, Masayoshi; Akiba, Jun; Ohkura, Yasuo; Horiguchi, Shinichiro; Shomori, Kohei; Kushima, Ryoji; Ajioka, Yoichi; Nomura, Shogo; Ochiai, Atsushi
2013-07-01
The goal of this study is to create an objective pathological diagnostic system for blood and lymphatic vessel invasion (BLI). 1450 surgically resected colorectal cancer specimens from eight hospitals were reviewed. Our first step was to compare the current practice of pathology assessment among the eight hospitals. Then, H&E stained slides with or without histochemical/immunohistochemical staining were assessed by eight pathologists, and the concordance of BLI diagnosis was checked. In addition, histological findings associated with BLI having good concordance were reviewed. Based on these results, a framework for developing a diagnostic criterion was developed using the Delphi method. The new criterion was evaluated using 40 colorectal cancer specimens. The frequency of BLI diagnoses and the number of blocks obtained and stained for assessment of BLI varied among the eight hospitals. Concordance was low for BLI diagnosis and was no better when histochemical/immunohistochemical staining was provided. All histological findings associated with BLI from H&E staining showed poor agreement. However, observation of elastica-stained internal elastic membrane covering more than half of the circumference surrounding the tumour cluster, as well as the presence of D2-40-stained endothelial cells covering more than half of the circumference surrounding the tumour cluster, showed high concordance. Based on this observation, we developed a framework for a pathological diagnostic criterion using the Delphi method. This criterion was found to be useful in improving the concordance of BLI diagnosis. A framework for a pathological diagnostic criterion was developed by reviewing concordance and using the Delphi method. The criterion developed may serve as the basis for creating a standardised procedure for pathological diagnosis.
Kojima, Motohiro; Shimazaki, Hideyuki; Iwaya, Keiichi; Kage, Masayoshi; Akiba, Jun; Ohkura, Yasuo; Horiguchi, Shinichiro; Shomori, Kohei; Kushima, Ryoji; Ajioka, Yoichi; Nomura, Shogo; Ochiai, Atsushi
2013-01-01
Aims The goal of this study is to create an objective pathological diagnostic system for blood and lymphatic vessel invasion (BLI). Methods 1450 surgically resected colorectal cancer specimens from eight hospitals were reviewed. Our first step was to compare the current practice of pathology assessment among the eight hospitals. Then, H&E stained slides with or without histochemical/immunohistochemical staining were assessed by eight pathologists, and the concordance of BLI diagnosis was checked. In addition, histological findings associated with BLI having good concordance were reviewed. Based on these results, a framework for developing a diagnostic criterion was developed using the Delphi method. The new criterion was evaluated using 40 colorectal cancer specimens. Results The frequency of BLI diagnoses and the number of blocks obtained and stained for assessment of BLI varied among the eight hospitals. Concordance was low for BLI diagnosis and was no better when histochemical/immunohistochemical staining was provided. All histological findings associated with BLI from H&E staining showed poor agreement. However, observation of elastica-stained internal elastic membrane covering more than half of the circumference surrounding the tumour cluster, as well as the presence of D2-40-stained endothelial cells covering more than half of the circumference surrounding the tumour cluster, showed high concordance. Based on this observation, we developed a framework for a pathological diagnostic criterion using the Delphi method. This criterion was found to be useful in improving the concordance of BLI diagnosis. Conclusions A framework for a pathological diagnostic criterion was developed by reviewing concordance and using the Delphi method. The criterion developed may serve as the basis for creating a standardised procedure for pathological diagnosis. PMID:23592799
Baldi, F; Alencar, M M; Albuquerque, L G
2010-12-01
The objective of this work was to estimate covariance functions using random regression models on B-spline functions of animal age, for weights from birth to adult age in Canchim cattle. Data comprised 49,011 records on 2435 females. The model of analysis included fixed effects of contemporary groups, age of dam as a quadratic covariable and the population mean trend taken into account by a cubic regression on orthogonal polynomials of animal age. Residual variances were modelled through a step function with four classes. The direct and maternal additive genetic effects, and animal and maternal permanent environmental effects, were included as random effects in the model. A total of seventeen analyses, considering linear, quadratic and cubic B-spline functions and up to seven knots, were carried out. B-spline functions of the same order were considered for all random effects. Random regression models on B-spline functions were compared to a random regression model on Legendre polynomials and with a multitrait model. Results from the different models of analysis were compared using the REML form of the Akaike information criterion and Schwarz's Bayesian information criterion. In addition, the variance components and genetic parameters estimated for each random regression model were also used as criteria to choose the most adequate model to describe the covariance structure of the data. A model fitting quadratic B-splines, with four knots or three segments for the direct additive genetic effect and the animal permanent environmental effect and two knots for the maternal additive genetic effect and the maternal permanent environmental effect, was the most adequate to describe the covariance structure of the data. Random regression models using B-spline functions as base functions fitted the data better than Legendre polynomials, especially at mature ages, although a larger number of parameters needs to be estimated with B-spline functions. © 2010 Blackwell Verlag GmbH.
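The two information criteria used for the model comparison above have simple closed forms; the sketch below shows the generic (non-REML) definitions with illustrative log-likelihoods, not values from the study:

```python
import math

def aic(loglik, k):
    """Akaike information criterion: 2k - 2 ln L (lower is better)."""
    return 2 * k - 2 * loglik

def bic(loglik, k, n):
    """Schwarz's Bayesian information criterion: k ln n - 2 ln L (lower is better)."""
    return k * math.log(n) - 2 * loglik

# Hypothetical comparison: a small model vs a more flexible one.
n = 2435  # e.g. the number of females in the dataset
small = (aic(-1520.4, 12), bic(-1520.4, 12, n))
large = (aic(-1515.9, 20), bic(-1515.9, 20, n))
# Here the extra parameters do not pay for the modest likelihood gain:
print(small, large)
```

BIC penalizes the parameter count by ln(n) rather than 2, so it favors the smaller model even more strongly than AIC on large datasets like this one.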
Discriminant Validity Assessment: Use of Fornell & Larcker criterion versus HTMT Criterion
NASA Astrophysics Data System (ADS)
Hamid, M. R. Ab; Sami, W.; Mohmad Sidek, M. H.
2017-09-01
Assessment of discriminant validity is a must in any research that involves latent variables, for the prevention of multicollinearity issues. The Fornell and Larcker criterion is the most widely used method for this purpose. However, a new method has emerged for establishing discriminant validity: the heterotrait-monotrait (HTMT) ratio of correlations. Therefore, this article presents the results of discriminant validity assessment using both methods. Data from a previous study, involving 429 respondents, were used for empirical validation of a value-based excellence model in higher education institutions (HEI) in Malaysia. From the analysis, the convergent, divergent and discriminant validity were established and admissible using the Fornell and Larcker criterion. However, discriminant validity was an issue when employing the HTMT criterion. This shows that the latent variables under study faced the issue of multicollinearity and should be examined in further detail. This also implies that the HTMT criterion is a stringent measure that can detect a possible lack of discrimination among the latent variables. In conclusion, the instrument, which consisted of six latent variables, was still lacking in terms of discriminant validity and should be explored further.
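The HTMT statistic itself is straightforward to compute from an item correlation matrix. A minimal sketch with a made-up two-construct example (the matrix and the 0.85 cutoff are illustrative assumptions, not data from the article):

```python
import numpy as np

# Hypothetical item correlation matrix: constructs A (items 0-2) and B (items 3-5).
R = np.array([
    [1.00, 0.70, 0.65, 0.30, 0.28, 0.32],
    [0.70, 1.00, 0.68, 0.29, 0.31, 0.27],
    [0.65, 0.68, 1.00, 0.33, 0.30, 0.29],
    [0.30, 0.29, 0.33, 1.00, 0.72, 0.69],
    [0.28, 0.31, 0.30, 0.72, 1.00, 0.71],
    [0.32, 0.27, 0.29, 0.69, 0.71, 1.00],
])
A, B = [0, 1, 2], [3, 4, 5]

def mean_within(R, idx):
    """Mean of the off-diagonal (monotrait-heteromethod) correlations."""
    block = R[np.ix_(idx, idx)]
    return block[np.triu_indices(len(idx), k=1)].mean()

def htmt(R, A, B):
    """Heterotrait-monotrait ratio of correlations."""
    hetero = R[np.ix_(A, B)].mean()  # mean between-construct item correlation
    return hetero / np.sqrt(mean_within(R, A) * mean_within(R, B))

ratio = htmt(R, A, B)
print(round(ratio, 3))  # well below the commonly used 0.85 threshold here
```

Values above the chosen threshold (0.85 or 0.90 in common practice) flag a discriminant validity problem between the two constructs.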
NASA Astrophysics Data System (ADS)
Park, N.; Huh, H.; Yoon, J. W.
2017-09-01
This paper deals with the prediction of fracture initiation in square cup drawing of DP980 steel sheet with a thickness of 1.2 mm. In an attempt to consider the influence of material anisotropy on fracture initiation, an uncoupled anisotropic ductile fracture criterion is developed based on the Lou-Huh ductile fracture criterion. Tensile tests are carried out at loading directions of 0°, 45°, and 90° to the rolling direction of the sheet using various specimen geometries, including pure shear, dog-bone, and flat grooved specimens, so as to calibrate the parameters of the proposed fracture criterion. The equivalent plastic strain distribution on the specimen surface is computed using the Digital Image Correlation (DIC) method until a surface crack initiates. The proposed fracture criterion is implemented into the commercial finite element code ABAQUS/Explicit by developing a Vectorized User-defined MATerial (VUMAT) subroutine which features the non-associated flow rule. Simulation results of the square cup drawing test clearly show that the proposed fracture criterion is capable of predicting fracture initiation with sufficient accuracy while accounting for the material anisotropy.
Liu, Yan; Cheng, H D; Huang, Jianhua; Zhang, Yingtao; Tang, Xianglong
2012-10-01
In this paper, a novel lesion segmentation method for breast ultrasound (BUS) images, based on the cellular automata principle, is proposed. Its energy transition function is formulated from global and local image information differences using different energy transfer strategies. First, an energy decrease strategy is used for modeling the spatial relation information of pixels. For modeling the global image information difference, a seed information comparison function is developed using an energy preserve strategy. Then, a texture information comparison function is proposed for considering local image differences in different regions, which is helpful for handling blurry boundaries. Moreover, two neighborhood systems (the von Neumann and Moore neighborhood systems) are integrated as the evolution environment, and a similarity-based criterion is used for suppressing noise and reducing computational complexity. The proposed method was applied to 205 clinical BUS images for studying its characteristics and functionality, and several overlapping area error metrics and statistical evaluation methods were utilized for evaluating its performance. The experimental results demonstrate that the proposed method handles BUS images with blurry boundaries and low contrast well and can segment breast lesions accurately and effectively.
Classification of stellar populations in globular clusters
NASA Astrophysics Data System (ADS)
Wang, Yue; Zhao, Gang; Li, Hai-Ning
2017-04-01
Possessing multiple stellar populations has been accepted as a common feature of globular clusters (GCs). Different stellar populations manifest themselves with different chemical features, e.g. the well-known O-Na anti-correlation. Generally, the first (primordial) population has O and Na abundances consistent with those of field stars with similar metallicity, while the second (polluted) population is identified by its Na overabundance and O deficiency. The fraction of each population is an important constraint on GC formation scenarios. Several methods have been proposed for the classification of GC populations. Here we examine a criterion derived from the distribution of Galactic field stars, which relies on Na abundance as a function of [Fe/H], to distinguish the first and second stellar populations in GCs. By comparing the first-population fractions of 17 GCs estimated by the field star criterion with those in the literature derived by methods tailored to individual GCs, we find that the field star criterion tends to overestimate the first-population fractions. The population separation methods that are tuned to an individual GC sample are recommended, because the diversity of GCs can then be taken into consideration. For now, caution should be exercised when using field stars as a reference for the identification of GC populations; further study of the connection between field stars and GC populations is still needed.
A systematic approach to the Kansei factors of tactile sense regarding the surface roughness.
Choi, Kyungmee; Jun, Changrim
2007-01-01
Designing products to satisfy customers' emotions requires information gathered through the human senses: visual, auditory, olfactory, gustatory, and tactile. By controlling certain design factors, customers' emotions can be evaluated, designed for, and satisfied. In this study, a systematic approach is proposed to study the tactile sense with regard to surface roughness. Numerous pairs of antonymous tactile adjectives are collected and clustered. The optimal number of adjective clusters is estimated based on several criterion functions. The representative average preferences of the final clusters are obtained as estimates of engineering parameters to control the surface roughness of commercial polymer-based products.
Stochastic Games for Continuous-Time Jump Processes Under Finite-Horizon Payoff Criterion
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wei, Qingda, E-mail: weiqd@hqu.edu.cn; Chen, Xian, E-mail: chenxian@amss.ac.cn
In this paper we study two-person nonzero-sum games for continuous-time jump processes with randomized history-dependent strategies under the finite-horizon payoff criterion. The state space is countable, and the transition rates and payoff functions are allowed to be unbounded from above and from below. Under suitable conditions, we introduce a new topology for the set of all randomized Markov multi-strategies and establish its compactness and metrizability. Then, by constructing approximating sequences of the transition rates and payoff functions, we show that the optimal value function for each player is a unique solution to the corresponding optimality equation and obtain the existence of a randomized Markov Nash equilibrium. Furthermore, we illustrate the applications of our main results with a controlled birth and death system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kos, L.; Tskhakaya, D. D.; Jelić, N.
2015-09-15
Recent decades have seen research into the conditions necessary for the formation of the monotonic potential shape in the sheath, appearing at plasma boundaries such as walls, in the fluid and kinetic approximations separately. Although either of these approaches yields a formulation commonly known as the much-acclaimed Bohm criterion (BC), the respective results involve essentially different physical quantities that describe the ion gas behavior. In the fluid approach, such a quantity is clearly identified as the ion directional velocity. In the kinetic approach, the ion behavior is formulated via a quantity (the squared inverse velocity averaged over the ion distribution function) without any clear physical significance, which is, moreover, impractical. In the present paper, we try to explain this difference by deriving a condition called here the Unified Bohm Criterion, which combines an advanced fluid model with an upgraded explicit kinetic formula in a new form of the BC. By introducing a generalized polytropic coefficient function, the unified BC can be interpreted in a form that holds irrespective of whether the ions are described kinetically or in the fluid approximation.
Criterion-Referenced Test Items for Welding.
ERIC Educational Resources Information Center
Davis, Diane, Ed.
This test item bank on welding contains test questions based upon competencies found in the Missouri Welding Competency Profile. Some test items are keyed for multiple competencies. These criterion-referenced test items are designed to work with the Vocational Instructional Management System. Questions have been statistically sampled and validated…
Chen, Jinsong; Liu, Lei; Shih, Ya-Chen T; Zhang, Daowen; Severini, Thomas A
2016-03-15
We propose a flexible model for correlated medical cost data with several appealing features. First, the mean function is partially linear. Second, the distributional form for the response is not specified. Third, the covariance structure of the correlated medical costs has a semiparametric form. We use extended generalized estimating equations to simultaneously estimate all parameters of interest. B-splines are used to estimate unknown functions, and a modification to the Akaike information criterion is proposed for selecting the knots in the spline bases. We apply the model to correlated medical costs in the Medical Expenditure Panel Survey dataset. Simulation studies are conducted to assess the performance of our method. Copyright © 2015 John Wiley & Sons, Ltd.
Reconstruction of Sensory Stimuli Encoded with Integrate-and-Fire Neurons with Random Thresholds
Lazar, Aurel A.; Pnevmatikakis, Eftychios A.
2013-01-01
We present a general approach to the reconstruction of sensory stimuli encoded with leaky integrate-and-fire neurons with random thresholds. The stimuli are modeled as elements of a Reproducing Kernel Hilbert Space. The reconstruction is based on finding a stimulus that minimizes a regularized quadratic optimality criterion. We discuss in detail the reconstruction of sensory stimuli modeled as absolutely continuous functions as well as stimuli with absolutely continuous first-order derivatives. Reconstruction results are presented for stimuli encoded with single as well as a population of neurons. Examples are given that demonstrate the performance of the reconstruction algorithms as a function of threshold variability. PMID:24077610
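Minimizing a regularized quadratic criterion over a kernel space reduces, by the representer theorem, to solving a linear system in the kernel matrix. A toy sketch under assumed specifics (Gaussian kernel, sinusoidal "measurements"; none of these choices come from the paper):

```python
import numpy as np

def gaussian_kernel(s, t, width=0.1):
    """K(s, t) = exp(-(s - t)^2 / (2 * width^2))."""
    s, t = np.asarray(s), np.asarray(t)
    return np.exp(-((s[:, None] - t[None, :]) ** 2) / (2 * width ** 2))

# Toy measurement times and values standing in for spike-derived data.
times = np.linspace(0.0, 1.0, 8)
y = np.sin(2 * np.pi * times)

# With quadratic regularization, the optimal kernel-expansion coefficients
# satisfy the linear system (K + lam * I) c = y.
lam = 1e-3
K = gaussian_kernel(times, times)
c = np.linalg.solve(K + lam * np.eye(len(times)), y)

# Reconstruct the stimulus on a dense grid as a kernel expansion.
grid = np.linspace(0.0, 1.0, 50)
u_hat = gaussian_kernel(grid, times) @ c
```

Larger regularization weights trade fidelity at the measurement points for smoothness, which is how threshold variability would be absorbed in practice.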
Gromisch, Elizabeth S; Zemon, Vance; Holtzer, Roee; Chiaravalloti, Nancy D; DeLuca, John; Beier, Meghan; Farrell, Eileen; Snyder, Stacey; Schairer, Laura C; Glukhovsky, Lisa; Botvinick, Jason; Sloan, Jessica; Picone, Mary Ann; Kim, Sonya; Foley, Frederick W
2016-10-01
Cognitive dysfunction is prevalent in multiple sclerosis. As self-reported cognitive functioning is unreliable, brief objective screening measures are needed. Utilizing widely used full-length neuropsychological tests, this study aimed to establish the criterion validity of highly abbreviated versions of the Brief Visuospatial Memory Test - Revised (BVMT-R), Symbol Digit Modalities Test (SDMT), Delis-Kaplan Executive Function System (D-KEFS) Sorting Test, and Controlled Oral Word Association Test (COWAT) in order to begin developing an MS-specific screening battery. Participants from Holy Name Medical Center and the Kessler Foundation were administered one or more of these four measures. Using test-specific criteria to identify impairment at both -1.5 and -2.0 SD, receiver-operating-characteristic (ROC) analyses of BVMT-R Trial 1, Trial 2, and Trial 1 + 2 raw data (N = 286) were run to calculate the classification accuracy of the abbreviated versions, as well as their sensitivity and specificity. The same methods were used for the SDMT 30-s and 60-s versions (N = 321), D-KEFS Sorting Free Card Sort 1 (N = 120), and COWAT letters F and A (N = 298). Using these definitions of impairment, each analysis yielded high classification accuracy (89.3 to 94.3%). BVMT-R Trial 1, SDMT 30-s, D-KEFS Free Card Sort 1, and COWAT F possess good criterion validity in detecting impairment on their respective overall measures, capturing much of the same information as the full versions. Along with the first two trials of the California Verbal Learning Test - Second Edition (CVLT-II), these five highly abbreviated measures may be used to develop a brief screening battery.
Toro, Brigitte; Nester, Christopher J; Farren, Pauline C
2007-03-01
To develop the construct, content, and criterion validity of the Salford Gait Tool (SF-GT) and to evaluate agreement between gait observations using the SF-GT and kinematic gait data. Tool development and comparative evaluation. University in the United Kingdom. For establishing construct and content validity, convenience samples of 10 children with hemiplegic, diplegic, and quadriplegic cerebral palsy (CP), 152 physical therapy students, and 4 physical therapists were recruited. For developing criterion validity, kinematic gait data of 13 gait clusters containing 56 children with hemiplegic, diplegic, and quadriplegic CP and 11 neurologically intact children were used. For clinical evaluation, a convenience sample of 23 pediatric physical therapists participated. We developed a sagittal plane observational gait assessment tool through a series of design, test, and redesign iterations. The tool's grading system was calibrated using kinematic gait data of the 13 gait clusters and was evaluated by comparing the agreement of gait observations using the SF-GT with kinematic gait data. Criterion standard: kinematic gait data. There was 58% mean agreement based on grading categories and 80% mean agreement based on degree estimations evaluated with the least significant difference method. The new SF-GT has good concurrent criterion validity.
A Joint Optimization Criterion for Blind DS-CDMA Detection
NASA Astrophysics Data System (ADS)
Durán-Díaz, Iván; Cruces-Alvarez, Sergio A.
2006-12-01
This paper addresses the problem of the blind detection of a desired user in an asynchronous DS-CDMA communications system with multipath propagation channels. Starting from the inverse filter criterion introduced by Tugnait and Li in 2001, we propose to tackle the problem in the context of the blind signal extraction methods for ICA. In order to improve the performance of the detector, we present a criterion based on the joint optimization of several higher-order statistics of the outputs. An algorithm that optimizes the proposed criterion is described, and its improved performance and robustness with respect to the near-far problem are corroborated through simulations. Additionally, a simulation using measurements on a real software-radio platform at 5 GHz has also been performed.
Computation of Anisotropic Bi-Material Interfacial Fracture Parameters and Delamination Criteria
NASA Technical Reports Server (NTRS)
Chow, W-T.; Wang, L.; Atluri, S. N.
1998-01-01
This report documents the recent developments in methodologies for the evaluation of the integrity and durability of composite structures, including i) the establishment of a stress-intensity-factor based fracture criterion for bimaterial interfacial cracks in anisotropic materials (see Sec. 2); ii) the development of a virtual crack closure integral method for the evaluation of the mixed-mode stress intensity factors for a bimaterial interfacial crack (see Sec. 3). Analytical and numerical results show that the proposed fracture criterion is a better fracture criterion than the total energy release rate criterion in the characterization of the bimaterial interfacial cracks. The proposed virtual crack closure integral method is an efficient and accurate numerical method for the evaluation of mixed-mode stress intensity factors.
Revision of the criterion to avoid electron heating during laser aided plasma diagnostics (LAPD)
NASA Astrophysics Data System (ADS)
Carbone, E. A. D.; Palomares, J. M.; Hübner, S.; Iordanova, E.; van der Mullen, J. J. A. M.
2012-01-01
A criterion is given for the laser fluence (in J/m²) such that, when satisfied, disturbance of the plasma by the laser is avoided. This criterion accounts for laser heating of the electron gas mediated by electron-ion (ei) and electron-atom (ea) interactions. The first heating mechanism is well known and was extensively dealt with in the past. The second is often overlooked but is important for plasmas with a low degree of ionization. It is especially important for cold atmospheric plasmas, which currently stand in the focus of attention. The new criterion, based on the concerted action of both ei and ea interactions, is validated by Thomson scattering experiments performed on four different plasmas.
Robust Criterion for the Existence of Nonhyperbolic Ergodic Measures
NASA Astrophysics Data System (ADS)
Bochi, Jairo; Bonatti, Christian; Díaz, Lorenzo J.
2016-06-01
We give explicit C^1-open conditions that ensure that a diffeomorphism possesses a nonhyperbolic ergodic measure with positive entropy. Actually, our criterion provides the existence of a partially hyperbolic compact set with one-dimensional center and positive topological entropy on which the center Lyapunov exponent vanishes uniformly. The conditions of the criterion are met on a C^1-dense and open subset of the set of diffeomorphisms having a robust cycle. As a corollary, there exists a C^1-open and dense subset of the set of non-Anosov robustly transitive diffeomorphisms consisting of systems with nonhyperbolic ergodic measures with positive entropy. The criterion is based on a notion of a blender defined dynamically in terms of strict invariance of a family of discs.
Brinkman, Willem M; Luursema, Jan-Maarten; Kengen, Bas; Schout, Barbara M A; Witjes, J Alfred; Bekkers, Ruud L
2013-03-01
To answer 2 research questions: what are the learning curve patterns of novices on the da Vinci skills simulator parameters, and which parameters are appropriate for criterion-based robotic training? A total of 17 novices completed 2 simulator sessions within 3 days. Each training session consisted of a warming-up exercise, followed by 5 repetitions of the "ring and rail II" task. Expert participants (n = 3) performed a warming-up exercise and 3 repetitions of the "ring and rail II" task on 1 day. We analyzed all 9 parameters of the simulator. Significant learning occurred on 5 parameters: overall score, time to complete, instrument collision, instruments out of view, and critical errors within 1-10 repetitions (P <.05). Economy of motion and excessive instrument force only showed improvement within the first 5 repetitions. No significant learning was found on the drops and master workspace range parameters. Using the expert overall performance score (n = 3) as a criterion (overall score 90%), 9 of 17 novice participants met the criterion within 10 repetitions. Most parameters showed that basic robotic skills are learned relatively quickly using the da Vinci skills simulator, but 10 repetitions were not sufficient for most novices to reach an expert level. Some parameters seemed inappropriate for expert-based criterion training because either no learning occurred or the novice performance was equal to expert performance. Copyright © 2013 Elsevier Inc. All rights reserved.
Ye, Zhiwei; Wang, Mingwei; Hu, Zhengbing; Liu, Wei
2015-01-01
Image enhancement is an important procedure in image processing and analysis. This paper presents a new technique using a modified measure and a blend of cuckoo search and particle swarm optimization (CS-PSO) to enhance low contrast images adaptively. Contrast enhancement is obtained by a global transformation of the input intensities; it employs the incomplete Beta function as the transformation function and a novel criterion for measuring image quality that considers three factors: threshold, entropy value, and the gray-level probability density of the image. The enhancement process is a nonlinear optimization problem with several constraints. CS-PSO is utilized to maximize the objective fitness criterion in order to enhance the contrast and detail in an image by adapting the parameters of a novel extension to a local enhancement technique. The performance of the proposed method has been compared with other existing techniques such as linear contrast stretching, histogram equalization, and evolutionary-computing-based image enhancement methods like the backtracking search algorithm, differential search algorithm, genetic algorithm, and particle swarm optimization, in terms of processing time and image quality. Experimental results demonstrate that the proposed method is robust and adaptive and exhibits better performance than the other methods considered in the paper. PMID:25784928
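The transformation function named above, the regularized incomplete Beta function, maps normalized intensities in [0, 1] back to [0, 1]; its shape parameters are what an optimizer such as CS-PSO would tune. A minimal numpy-only sketch (the parameter values and the tiny "image" are made up for illustration):

```python
import numpy as np

def incomplete_beta_transform(u, a, b, steps=20000):
    """Regularized incomplete Beta function I_x(a, b), evaluated by
    numerically integrating the Beta(a, b) density and interpolating."""
    t = np.linspace(1e-9, 1.0 - 1e-9, steps)
    pdf = t ** (a - 1) * (1.0 - t) ** (b - 1)
    cdf = np.cumsum(pdf)
    cdf /= cdf[-1]  # normalize so the transform ends at 1
    return np.interp(u, t, cdf)

# Normalized pixel intensities; a = b = 2 gives an S-shaped contrast curve
# (analytically, I_x(2, 2) = 3x^2 - 2x^3).
img = np.array([[0.1, 0.4],
                [0.6, 0.9]])
enhanced = incomplete_beta_transform(img, a=2.0, b=2.0)
print(np.round(enhanced, 3))  # ≈ [[0.028 0.352] [0.648 0.972]]
```

Dark and bright extremes are pushed further apart while midtones keep their ordering, which is the contrast-stretching behavior the optimizer exploits.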
A Thomistic defense of whole-brain death
Eberl, Jason T.
2015-01-01
Michel Accad critiques the currently accepted whole-brain criterion for determining the death of a human being from a Thomistic metaphysical perspective and, in so doing, raises objections to a particular argument defending the whole-brain criterion by Patrick Lee and Germain Grisez. In this paper, I will respond to Accad's critique of the whole-brain criterion and defend its continued validity as a criterion for determining when a human being's death has occurred in accord with Thomistic metaphysical principles. I will, however, join Accad in criticizing Lee and Grisez's proposed defense of the whole-brain criterion as potentially leading to erroneous conclusions regarding the determination of human death. Lay summary: Catholic physicians and bioethicists currently debate the legally accepted clinical standard for determining when a human being has died—known as the “wholebrain criterion”—which has also been morally affirmed by the Magisterium. This paper responds to physician Michel Accad’s critique of the whole-brain criterion based upon St. Thomas Aquinas’s metaphysical account of human nature as a union of a rational soul and a material body. I defend the whole-brain criterion from the same Thomistic philosophical perspective, while agreeing with Accad’s objection to an alternative Thomistic defense of whole-brain death by philosophers Patrick Lee and Germain Grisez. PMID:26912933
The cross-validated AUC for MCP-logistic regression with high-dimensional data.
Jiang, Dingfeng; Huang, Jian; Zhang, Ying
2013-10-01
We propose a cross-validated area under the receiver operating characteristic (ROC) curve (CV-AUC) criterion for tuning parameter selection for penalized methods in sparse, high-dimensional logistic regression models. We use this criterion in combination with the minimax concave penalty (MCP) method for variable selection. The CV-AUC criterion is specifically designed for optimizing the classification performance for binary outcome data. To implement the proposed approach, we derive an efficient coordinate descent algorithm to compute the MCP-logistic regression solution surface. Simulation studies are conducted to evaluate the finite sample performance of the proposed method and to compare it with existing methods including the Akaike information criterion (AIC), Bayesian information criterion (BIC) and extended BIC (EBIC). The model selected based on the CV-AUC criterion tends to have a larger predictive AUC and smaller classification error than those with tuning parameters selected using the AIC, BIC or EBIC. We illustrate the application of the MCP-logistic regression with the CV-AUC criterion on three microarray datasets from studies that attempt to identify genes related to cancers. Our simulation studies and data examples demonstrate that the CV-AUC is an attractive method for tuning parameter selection for penalized methods in high-dimensional logistic regression models.
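At the core of the CV-AUC criterion is the AUC estimate itself, which equals the Mann-Whitney probability that a randomly chosen positive case outranks a randomly chosen negative one. A small self-contained sketch (toy labels and scores, not data from the paper):

```python
import numpy as np

def auc(y_true, scores):
    """Area under the ROC curve via the Mann-Whitney U statistic."""
    y_true = np.asarray(y_true)
    scores = np.asarray(scores, dtype=float)
    pos, neg = scores[y_true == 1], scores[y_true == 0]
    # Fraction of (positive, negative) pairs ranked correctly; ties count 1/2.
    diffs = pos[:, None] - neg[None, :]
    return ((diffs > 0).sum() + 0.5 * (diffs == 0).sum()) / (len(pos) * len(neg))

y = [1, 1, 1, 0, 0, 0]
s = [0.9, 0.8, 0.4, 0.5, 0.3, 0.2]
print(auc(y, s))  # 8 of 9 pairs correct: 8/9 ≈ 0.889
```

In the CV-AUC scheme, this statistic would be computed on each held-out fold for every candidate tuning parameter, and the parameter with the largest average held-out AUC would be selected.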
Bürger, W; Streibelt, M
2015-02-01
Stepwise Occupational Reintegration (SOR) measures are of growing importance for the German statutory pension insurance. There is moderate evidence that patients with a poor prognosis for a successful return to work benefit most from SOR measures. However, it is not clear to what extent this information is utilized when recommending SOR to a patient. A questionnaire was sent to 40406 persons (up to 59 years old, excluding rehabilitation after a hospital stay) before admission to a medical rehabilitation service. The survey data were matched with data from the discharge report and information on participation in a SOR measure. Initially, a single criterion was defined which describes the need for SOR measures. This criterion is based on 3 items: at least 12 weeks of sickness absence, (a) a SIBAR score >7, and/or (b) a perceived need for SOR. The main aim of our analyses was to describe the association between the SOR need-criterion and participation in SOR measures, as well as the predictors of SOR participation among patients fulfilling the need-criterion. The analyses were based on a multiple logistic regression model. Full data were available for 16408 patients. The formal prerequisites for SOR were given for 33% of the sample, of whom 32% received SOR after rehabilitation and 43% fulfilled the SOR need-criterion. A negative relationship between these 2 categories was observed (phi=-0.08, p<0.01). For patients who fulfilled the need-criterion, the probability of participating in SOR decreased by 22% (RR=0.78). The probability of SOR participation increased with a decreasing SIBAR score (OR=0.56) and in patients who showed more confidence in being able to return to work. Participation in SOR measures cannot be predicted by the empirically defined SOR need-criterion: the probability even decreased when the criterion was fulfilled.
Furthermore, the results of a multivariate analysis show a positive selection of the patients who participate in SOR measures. Our results point strongly to the need for an indication guideline for physicians in rehabilitation centres. Further research addressing the success of SOR measures has to show whether the information used here can serve as a basis for such a guideline. © Georg Thieme Verlag KG Stuttgart · New York.
Paz, Sylvia H; Spritzer, Karen L; Morales, Leo S; Hays, Ron D
2013-03-29
To evaluate the equivalence of the PROMIS® wave 1 physical functioning item bank by age (50 years or older versus 18-49). A total of 114 physical functioning items with 5 response choices were administered to English- (n=1504) and Spanish-language (n=640) adults. Item frequencies, means and standard deviations, item-scale correlations, and internal consistency reliability were estimated. Differential Item Functioning (DIF) by age was evaluated. Thirty of the 114 items were flagged for DIF based on an R-squared criterion of 0.02 or above. The expected total score was higher for respondents who were 18-49 than for those who were 50 or older. Respondents 50 years or older and those 18-49 years old with the same level of physical functioning responded differently to 30 of the 114 items in the PROMIS® physical functioning item bank. This study yields essential information about the equivalence of the physical functioning items in older versus younger individuals.
Criteria to Evaluate Interpretive Guides for Criterion-Referenced Tests
ERIC Educational Resources Information Center
Trapp, William J.
2007-01-01
This project provides a list of criteria against which the contents of interpretive guides written for customized, criterion-referenced tests can be evaluated. The criteria are based on the "Standards for Educational and Psychological Testing" (1999) and examine the content breadth of interpretive guides. Interpretive guides written for…
Criterion-Referenced Test Items for Small Engines.
ERIC Educational Resources Information Center
Herd, Amon
This notebook contains criterion-referenced test items for testing students' knowledge of small engines. The test items are based upon competencies found in the Missouri Small Engine Competency Profile. The test item bank is organized in 18 sections that cover the following duties: shop procedures; tools and equipment; fasteners; servicing fuel…
ERIC Educational Resources Information Center
Fidler, James R.
1993-01-01
Criterion-related validities of 2 laboratory practitioner certification examinations for medical technologists (MTs) and medical laboratory technicians (MLTs) were assessed for 81 MT and 70 MLT examinees. Validity coefficients are presented for both measures. Overall, summative ratings yielded stronger validity coefficients than ratings based on…
Two related numerical codes, 3DFEMWATER and 3DLEWASTE, are presented and used to delineate wellhead protection areas in agricultural regions using the assimilative capacity criterion. 3DFEMWATER (Three-dimensional Finite Element Model of Water Flow Through Saturated-Unsaturated Media) ...
Standards and Criteria. Paper #10 in Occasional Paper Series.
ERIC Educational Resources Information Center
Glass, Gene V.
The logical and psychological bases for setting cutting scores for criterion-referenced tests are examined; they are found to be intrinsically arbitrary and are often examples of misdirected precision and axiomatization. The term, criterion referenced, originally referred to a technique for making test scores meaningful by controlling the test…
Manitoba Schools Fitness 1989.
ERIC Educational Resources Information Center
Manitoba Dept. of Education, Winnipeg.
This manual outlines physical fitness tests that may be used in the schools. The tests are based on criterion standards which indicate the levels of achievement at which health risk factors may be reduced. Test theory, protocols, and criterion charts are presented for: (1) muscle strength and endurance, (2) body composition, (3) flexibility, and…
Exploring DSM-5 criterion A in Acute Stress Disorder symptoms following natural disaster.
Lavenda, Osnat; Grossman, Ephraim S; Ben-Ezra, Menachem; Hoffman, Yaakov
2017-10-01
The present study examines the DSM-5 Acute Stress Disorder (ASD) diagnostic criteria of exposure, in the context of a natural disaster. The study is based on the reports of 1001 Filipinos following the aftermath of super typhoon Haiyan in 2013. Participants reported exposure to injury, psychological distress and ASD symptoms. Findings indicated the association of criterion A with the prevalence of meeting all other ASD diagnostic criteria and high psychological distress. The diagnostic properties of Criterion A are discussed. Copyright © 2017 Elsevier B.V. All rights reserved.
Evaluation of a Progressive Failure Analysis Methodology for Laminated Composite Structures
NASA Technical Reports Server (NTRS)
Sleight, David W.; Knight, Norman F., Jr.; Wang, John T.
1997-01-01
A progressive failure analysis methodology has been developed for predicting the nonlinear response and failure of laminated composite structures. The progressive failure analysis uses C plate and shell elements based on classical lamination theory to calculate the in-plane stresses. Several failure criteria, including the maximum strain criterion, Hashin's criterion, and Christensen's criterion, are used to predict the failure mechanisms. The progressive failure analysis model is implemented into a general purpose finite element code and can predict the damage and response of laminated composite structures from initial loading to final failure.
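As a concrete illustration of one of the failure criteria named above, a maximum strain check reduces to comparing each ply strain component against its allowable. The strain and allowable values below are invented for the example, not material data from the report:

```python
def max_strain_index(eps, allow):
    # Maximum strain criterion: failure index per component is
    # |strain| / allowable; an index >= 1 signals failure in that mode.
    return {k: abs(eps[k]) / allow[k] for k in eps}

# Hypothetical ply strains and allowables (axial, transverse, shear).
ply_strain = {"e11": 0.004, "e22": 0.007, "g12": 0.010}
allowable  = {"e11": 0.010, "e22": 0.005, "g12": 0.015}

idx = max_strain_index(ply_strain, allowable)
failed = [k for k, v in idx.items() if v >= 1.0]   # -> ["e22"]
```

In a progressive failure analysis, the stiffness of a ply flagged this way would be degraded and the stresses recomputed, repeating until final failure.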
Parry, A O; Rascón, C; Willis, G; Evans, R
2014-09-03
We study the density-density correlation function G(r, r') in the interfacial region of a fluid (or Ising-like magnet) with short-ranged interactions using square gradient density functional theory. Adopting a simple double parabola approximation for the bulk free-energy density, we first show that the parallel Fourier transform G(z, z'; q) and local structure factor S(z; q) separate into bulk and excess contributions. We attempt to account for both contributions by deriving an interfacial Hamiltonian, characterised by a wavevector dependent surface tension σ(q), and then reconstructing density correlations from correlations in the interface position. We show that the standard crossing criterion identification of the interface, as a surface of fixed density (or magnetization), does not explain the separation of G(z, z'; q) and the form of the excess contribution. We propose an alternative definition of the interface position based on the properties of correlations between points that 'float' with the surface and show that this describes the full q and z dependence of the excess contributions to both G and S. However, neither the 'crossing-criterion' nor the new 'floating interface' definition of σ(q) are quantities directly measurable from the total structure factor S(tot)(q) which contains additional q dependence arising from the non-local relation between fluctuations in the interfacial position and local density. Since it is the total structure factor that is measured experimentally or in simulations, our results have repercussions for earlier attempts to extract and interpret σ(q).
Do Right- and Left-Handed Monkeys Differ on Cognitive Measures?
NASA Technical Reports Server (NTRS)
Hopkins, William D.; Washburn, David A.
1994-01-01
Twelve left- and 14 right-handed monkeys were compared on 6 measures of cognitive performance (2 maze-solving tasks, matching-to-sample, delayed matching-to-sample, delayed response using spatial cues, and delayed response using form cues). The dependent variable was trials-to-training criterion for each of the 6 tasks. Significant differences were found between left- and right-handed monkeys on the 2 versions of the delayed response task. Right-handed monkeys reached criterion significantly faster on the form cue version of the task, whereas left-handed monkeys reached criterion significantly faster on delayed response for spatial position (p < .05). The results suggest that sensitive hand preference measures of laterality can reveal differences in cognitive performance, which in turn may reflect underlying laterality in functional organization of the nervous system.
Linear discriminant analysis based on L1-norm maximization.
Zhong, Fujin; Zhang, Jiashu
2013-08-01
Linear discriminant analysis (LDA) is a well-known dimensionality reduction technique, which is widely used for many purposes. However, conventional LDA is sensitive to outliers because its objective function is based on the distance criterion using L2-norm. This paper proposes a simple but effective robust LDA version based on L1-norm maximization, which learns a set of local optimal projection vectors by maximizing the ratio of the L1-norm-based between-class dispersion and the L1-norm-based within-class dispersion. The proposed method is theoretically proved to be feasible and robust to outliers while overcoming the singular problem of the within-class scatter matrix for conventional LDA. Experiments on artificial datasets, standard classification datasets and three popular image databases demonstrate the efficacy of the proposed method.
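The L1-based objective can be illustrated with a tiny two-dimensional sketch: evaluate the ratio of L1 between-class dispersion to L1 within-class dispersion for candidate unit vectors and keep the best. An exhaustive angle search stands in here for the paper's iterative maximization, and the data are made up:

```python
import math

def l1_ratio(w, X, y):
    # Ratio of L1 between-class dispersion to L1 within-class
    # dispersion along the projection direction w.
    dot = lambda a, b: sum(p * q for p, q in zip(a, b))
    cls = sorted(set(y))
    m = [sum(col) / len(X) for col in zip(*X)]                 # overall mean
    mc = {k: [sum(col) / y.count(k)                            # class means
              for col in zip(*[x for x, t in zip(X, y) if t == k])]
          for k in cls}
    B = sum(y.count(k) * abs(dot(w, [a - b for a, b in zip(mc[k], m)]))
            for k in cls)
    W = sum(abs(dot(w, [a - b for a, b in zip(x, mc[t])]))
            for x, t in zip(X, y))
    return B / W

def best_direction_2d(X, y, n=360):
    # Exhaustive search over unit vectors in the plane (toy stand-in
    # for the paper's learned projection vectors).
    cands = [(math.cos(2 * math.pi * i / n), math.sin(2 * math.pi * i / n))
             for i in range(n)]
    return max(cands, key=lambda w: l1_ratio(w, X, y))

X = [[-2.0, 0.1], [-1.9, -0.2], [-2.1, 0.0],
     [2.0, 0.2], [2.1, -0.1], [1.9, 0.0]]   # two classes split along axis 0
y = [0, 0, 0, 1, 1, 1]
w = best_direction_2d(X, y)                 # points (nearly) along axis 0
```

Because every term is an absolute value rather than a square, a single outlying point contributes linearly rather than quadratically to the dispersions, which is the source of the robustness claimed in the abstract.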
Lee, C Ellen; Warden, Stuart J; Szuck, Beth; Lau, Y K James
2016-08-01
The aim of this study was to examine the effects of a 6-week community-based physical activity (PA) intervention on physical function-related risk factors for falls among 56 breast cancer survivors (BCS) who had completed treatments. This was a single-group longitudinal study. The multimodal PA intervention included aerobic, strengthening, and balance components. Physical function outcomes based on the 4-meter walk, chair stand, one-leg stance, tandem walk, and dynamic muscular endurance tests were assessed at 6-week pre-intervention (T1), baseline (T2), and post-intervention (T3). T1 to T2 and T2 to T3 were the control and intervention periods, respectively. All outcomes, except the tandem walk test, significantly improved after the intervention period (P < 0.05), with no change detected after the control period (P > 0.05). Based on the falls risk criterion in the one-leg stance test, the proportion at risk for falls was significantly lower after the intervention period (P = 0.04), but not after the control period. A community-based multimodal PA intervention for BCS may be efficacious in improving physical function-related risk factors for falls, and lowering the proportion of BCS at risk for falls based on specific physical function-related falls criteria. Further larger trials are needed to confirm these preliminary findings.
Analysis of non locality proofs in Quantum Mechanics
NASA Astrophysics Data System (ADS)
Nisticò, Giuseppe
2012-02-01
Two kinds of non-locality theorems in Quantum Mechanics are taken into account: the theorems based on the criterion of reality and the quite different theorem proposed by Stapp. In the present work, analyses are presented of the theorem due to Greenberger, Horne, Shimony and Zeilinger, based on the criterion of reality, and of Stapp's argument. The results of these analyses show that the alleged violations of locality cannot be considered definitive.
Risk-based containment and air monitoring criteria for work with dispersible radioactive materials.
Veluri, Venkateswara Rao; Justus, Alan L
2013-04-01
This paper presents readily understood, technically defensible, risk-based containment and air monitoring criteria, which are developed from fundamental physical principles. The key for the development of each criterion was the use of a calculational de minimis level, in this case chosen to be 100 mrem (or 40 DAC-h). Examples are provided that demonstrate the effective use of each criterion. Comparison to other often used criteria is provided.
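The 100 mrem (40 DAC-h) de minimis level lends itself to a one-line screening rule. The function below is an illustrative reduction of that idea, not the paper's actual criteria:

```python
DE_MINIMIS_DAC_H = 40.0  # 40 DAC-h, equivalent to 100 mrem (from the abstract)

def dac_hours(concentration_dac, hours):
    # Exposure in DAC-hours: airborne concentration expressed as a
    # multiple of the Derived Air Concentration, times exposure time.
    return concentration_dac * hours

def containment_required(concentration_dac, hours):
    # Hypothetical screening rule: require engineered containment when
    # the unmitigated potential exposure exceeds the de minimis level.
    return dac_hours(concentration_dac, hours) > DE_MINIMIS_DAC_H

containment_required(10, 8)  # 80 DAC-h -> True
containment_required(2, 8)   # 16 DAC-h -> False
```

A companion air-monitoring criterion would compare the same DAC-h quantity against an alarm or sampling threshold rather than a containment one.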
Scale-invariant entropy-based theory for dynamic ordering
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mahulikar, Shripad P., E-mail: spm@iitmandi.ac.in, E-mail: spm@aero.iitb.ac.in; Department of Aerospace Engineering, Indian Institute of Technology Bombay, Mumbai 400076; Kumari, Priti
2014-09-01
Dynamically Ordered self-organized dissipative structure exists in various forms and at different scales. This investigation first introduces the concept of an isolated embedding system, which embeds an open system, e.g., dissipative structure and its mass and/or energy exchange with its surroundings. Thereafter, scale-invariant theoretical analysis is presented using thermodynamic principles for Order creation, existence, and destruction. The sustainability criterion for Order existence based on its structured mass and/or energy interactions with the surroundings is mathematically defined. This criterion forms the basis for the interrelationship of physical parameters during sustained existence of dynamic Order. It is shown that the sufficient condition for dynamic Order existence is approached if its sustainability criterion is met, i.e., its destruction path is blocked. This scale-invariant approach has the potential to unify the physical understanding of universal dynamic ordering based on entropy considerations.
Determination of the mechanical properties of tin-based babbitt under static and cyclic loading
NASA Astrophysics Data System (ADS)
Zernin, M. V.
2018-03-01
Based on the results of studies of tin-based babbitt under static loading for three types of stress state, the parameters of the criterion for the equivalence of stressed states were refined and a single diagram of babbitt deformation was obtained. It is shown that the equivalence criterion for static loading should contain the first principal stress and the stress intensity. For cyclic loading, the first principal stress can be used as the criterion. The stages of development of fatigue cracks are described; it is logical to use a statistical approach to reveal the boundary of the transition from short cracks to macrocracks, based on the significant difference in the dispersion of crack growth rates at these two stages. The results of experimental studies of the cyclic crack resistance of babbitt are presented and the parameters of this boundary are obtained.
A criterion for establishing life limits. [for Space Shuttle Main Engine service
NASA Technical Reports Server (NTRS)
Skopp, G. H.; Porter, A. A.
1990-01-01
The development of a rigorous statistical method that would utilize hardware-demonstrated reliability to evaluate hardware capability and provide ground rules for safe flight margin is discussed. A statistical-based method using the Weibull/Weibayes cumulative distribution function is described. Its advantages and inadequacies are pointed out. Another, more advanced procedure, Single Flight Reliability (SFR), determines a life limit which ensures that the reliability of any single flight is never less than a stipulated value at a stipulated confidence level. Application of the SFR method is illustrated.
A new criterion for predicting rolling-element fatigue lives of through-hardened steels
NASA Technical Reports Server (NTRS)
Chevalier, J. L.; Zaretsky, E. V.; Parker, R. J.
1972-01-01
A carbide factor was derived based upon a statistical analysis which related rolling-element fatigue life to the total number of residual carbide particles per unit area, median residual carbide size, and percent residual carbide area. An equation was experimentally determined which predicts material hardness as a function of temperature. The limiting temperatures of all of the materials studied were dependent on initial room temperature hardness and tempering temperature. An equation was derived combining the effects of material hardness, carbide factor, and bearing temperature to predict rolling-element bearing life.
Duration ratio discrimination in pigeons: a criterion-setting analysis.
Fetterman, J Gregor
2006-02-28
Pigeons received trials beginning with a sequence of two colors (blue-->yellow) on the center key of a three-key array. The colors lasted different lengths of time. At the end of the sequence pigeons chose between two keys based on a criterial ratio of the temporal sequence. One choice was reinforced if the time ratio was less than the criterion and the alternate choice was reinforced if the time ratio was greater than the criterion. The criterial ratios (first to second duration) were 1:1, 1.5:1, and 3:1. The same set of intervals was used for the different criterion ratios, producing a balanced distribution of time ratios for the 1.5:1 condition, and unbalanced distributions for the 1:1 and 3:1 conditions. That is, for the 1.5:1 condition half of the duration pairs were less than the criterion and half were greater. However, for the 1:1 and 3:1 conditions, more duration pairs were less than (3:1) or greater than (1:1) the criterion. Accuracy was similar across criterion ratios, but response bias was influenced by the asymmetries of time ratios in the 1:1 and 3:1 conditions. When these asymmetries were controlled, the response biases were reduced or eliminated. These results indicate that pigeons are flexible in establishing a criterion for discriminating duration ratios, unlike humans, who are less flexible and are bound to categorical distinctions in the discrimination of duration ratios.
A consensus-based gold standard for the evaluation of mass casualty triage systems.
Lerner, E Brooke; McKee, Courtney H; Cady, Charles E; Cone, David C; Colella, M Riccardo; Cooper, Arthur; Coule, Phillip L; Lairet, Julio R; Liu, J Marc; Pirrallo, Ronald G; Sasser, Scott M; Schwartz, Richard; Shepherd, Greene; Swienton, Raymond E
2015-01-01
Accuracy and effectiveness analyses of mass casualty triage systems are limited because there are no gold standard definitions for each of the triage categories. Until there is agreement on which patients should be identified by each triage category, it will be impossible to calculate sensitivity and specificity or to compare accuracy between triage systems. To develop a consensus-based, functional gold standard definition for each mass casualty triage category. National experts were recruited through the lead investigators' contacts and their suggested contacts. Key informant interviews were conducted to develop a list of potential criteria for defining each triage category. Panelists were interviewed in order of their availability until redundancy of themes was achieved. Panelists were blinded to each other's responses during the interviews. A modified Delphi survey was developed with the potential criteria identified during the interview and delivered to all recruited experts. In the early rounds, panelists could add, remove, or modify criteria. In the final rounds edits were made to the criteria until at least 80% agreement was achieved. Thirteen national and local experts were recruited to participate in the project. Six interviews were conducted. Three rounds of voting were performed, with 12 panelists participating in the first round, 12 in the second round, and 13 in the third round. After the first two rounds, the criteria were modified according to respondent suggestions. In the final round, over 90% agreement was achieved for all but one criterion. A single e-mail vote was conducted on edits to the final criterion and consensus was achieved. A consensus-based, functional gold standard definition for each mass casualty triage category was developed. These gold standard definitions can be used to evaluate the accuracy of mass casualty triage systems after an actual incident, during training, or for research.
Ignition criterion for heterogeneous energetic materials based on hotspot size-temperature threshold
NASA Astrophysics Data System (ADS)
Barua, A.; Kim, S.; Horie, Y.; Zhou, M.
2013-02-01
A criterion for the ignition of granular explosives (GXs) and polymer-bonded explosives (PBXs) under shock and non-shock loading is developed. The formulation is based on integration of a quantification of the distributions of the sizes and locations of hotspots in loading events using a cohesive finite element method (CFEM) developed recently and the characterization by Tarver et al. [C. M. Tarver et al., "Critical conditions for impact- and shock-induced hot spots in solid explosives," J. Phys. Chem. 100, 5794-5799 (1996)] of the critical size-temperature threshold of hotspots required for chemical ignition of solid explosives. The criterion, along with the CFEM capability to quantify the thermal-mechanical behavior of GXs and PBXs, allows the critical impact velocity for ignition, time to ignition, and critical input energy at ignition to be determined as functions of material composition, microstructure, and loading conditions. The applicability of the relation between the critical input energy (E) and impact velocity of James [H. R. James, "An extension to the critical energy criterion used to predict shock initiation thresholds," Propellants, Explos., Pyrotech. 21, 8-13 (1996)] for shock loading is examined, leading to a modified interpretation, which is sensitive to microstructure and loading condition. As an application, numerical studies are undertaken to evaluate the ignition threshold of the granular high-melting-point explosive octahydro-1,3,5,7-tetranitro-1,3,5,7-tetrazocine (HMX) and HMX/Estane PBX under loading with impact velocities up to 350 m/s and strain rates up to 10^5 s^-1. Results show that, for the GX, the time to criticality (tc) is strongly influenced by initial porosity, but is insensitive to grain size. Analyses also lead to a quantification of the differences between the responses of the GXs and PBXs in terms of critical impact velocity for ignition, time to ignition, and critical input energy at ignition.
Since the framework permits explicit tracking of the influences of microstructure, loading, and mechanical constraints, the calculations also show the effects of stress wave reflection and confinement condition on the ignition behaviors of GXs and PBXs.
Formalization of the engineering science discipline - knowledge engineering
NASA Astrophysics Data System (ADS)
Peng, Xiao
Knowledge is the most precious ingredient facilitating aerospace engineering research and product development activities. Currently, the most common knowledge retention methods are paper-based documents, such as reports, books and journals. However, those media have innate weaknesses. For example, four generations of flying wing aircraft (Horten, Northrop XB-35/YB-49, Boeing BWB and many others) were mostly developed in isolation. The subsequent engineers were not aware of the previous developments, because these projects were documented in ways that prevented the next generation of engineers from benefiting from the previous lessons learned. In this manner, inefficient knowledge retention methods have become a primary obstacle to knowledge transfer from the experienced to the next generation of engineers. In addition, the quality of knowledge itself is a vital criterion; thus, an accurate measure of the quality of 'knowledge' is required. Although qualitative knowledge evaluation criteria have been researched in other disciplines, such as the AAA criterion by Ernest Sosa stemming from the field of philosophy, a quantitative knowledge evaluation criterion needs to be developed which is capable of numerically determining the quality of knowledge for aerospace engineering research and product development activities. To provide engineers with a high-quality knowledge management tool, the engineering science discipline Knowledge Engineering has been formalized to systematically address knowledge retention issues. This research undertaking formalizes Knowledge Engineering as follows: 1. Categorize knowledge according to its formats and representations for the first time, which serves as the foundation for the subsequent knowledge management function development. 2. Develop an efficiency evaluation criterion for knowledge management by analyzing the characteristics of both knowledge and the parties involved in the knowledge management processes. 3.
Propose and develop an innovative Knowledge-Based System (KBS), AVD KBS, forming a systematic approach facilitating knowledge management. 4. Demonstrate the efficiency advantages of AVD KBS over traditional knowledge management methods via selected design case studies. This research formalizes, for the first time, Knowledge Engineering as a distinct discipline by delivering a robust and high-quality knowledge management and process tool, AVD KBS. Formalizing knowledge proves to significantly impact the effectiveness of aerospace knowledge retention and utilization.
Automatic discovery of optimal classes
NASA Technical Reports Server (NTRS)
Cheeseman, Peter; Stutz, John; Freeman, Don; Self, Matthew
1986-01-01
A criterion, based on Bayes' theorem, is described that defines the optimal set of classes (a classification) for a given set of examples. This criterion is transformed into an equivalent minimum message length criterion with an intuitive information interpretation. This criterion does not require that the number of classes be specified in advance; that number is determined by the data. The minimum message length criterion includes the message length required to describe the classes, so there is a built-in bias against adding new classes unless they lead to a reduction in the message length required to describe the data. Unfortunately, the search space of possible classifications is too large to search exhaustively, so heuristic search methods, such as simulated annealing, are applied. Tutored learning and probabilistic prediction in particular cases are an important indirect result of optimal class discovery. Extensions to the basic class induction program include the ability to combine category and real-value data, hierarchical classes, independent classifications, and deciding for each class which attributes are relevant.
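The built-in bias against extra classes can be made concrete with a two-part code length: bits to describe the classes plus bits to describe the data under them. The sketch below scores 1-D Gaussian classes found by a crude k-means; it is a minimal stand-in for the Bayesian criterion described above, with an assumed fixed parameter precision and made-up data:

```python
import math

def fit_classes(data, k, iters=25):
    # Crude 1-D k-means, then per-class mean/spread estimates.
    lo, hi = min(data), max(data)
    cents = [lo + (i + 0.5) * (hi - lo) / k for i in range(k)]
    groups = [[] for _ in cents]
    for _ in range(iters):
        groups = [[] for _ in cents]
        for x in data:
            groups[min(range(k), key=lambda i: abs(x - cents[i]))].append(x)
        cents = [sum(g) / len(g) if g else c for g, c in zip(groups, cents)]
    out = []
    for g in groups:
        if not g:
            continue
        mu = sum(g) / len(g)
        sd = max(1e-3, (sum((x - mu) ** 2 for x in g) / len(g)) ** 0.5)
        out.append((mu, sd, g))
    return out

def message_length(data, k, bits_per_param=12.0):
    # Two-part code: bits to state each class's mean and spread, plus
    # bits to encode every point under its class (negative
    # log2-likelihood, additive constant dropped). Extra classes pay
    # off only if they shorten the data part by more than their own
    # description cost.
    classes = fit_classes(data, k)
    model_bits = 2 * len(classes) * bits_per_param
    data_bits = sum((math.log(sd) + 0.5 * ((x - mu) / sd) ** 2) / math.log(2)
                    for mu, sd, g in classes for x in g)
    return model_bits + data_bits

# Two well-separated clumps: the two-class description is shortest.
data = [i * 0.1 for i in range(10)] + [10 + i * 0.1 for i in range(10)]
best_k = min([1, 2, 3, 4], key=lambda k: message_length(data, k))
```

The `bits_per_param` precision is the assumption doing the work: a coarser precision cheapens classes and favors more of them, which is the trade-off the message-length criterion balances automatically.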
Mebane, C.A.
2010-01-01
Criteria to protect aquatic life are intended to protect diverse ecosystems, but in practice are usually developed from compilations of single-species toxicity tests using standard test organisms that were tested in laboratory environments. Species sensitivity distributions (SSDs) developed from these compilations are extrapolated to set aquatic ecosystem criteria. The protectiveness of the approach was critically reviewed with a chronic SSD for cadmium comprising 27 species within 21 genera. Within the data set, one genus had lower cadmium effects concentrations than the SSD fifth percentile-based criterion, so in theory this genus, the amphipod Hyalella, could be lost or at least allowed some level of harm under this criterion approach. However, population matrix modeling projected only slightly increased extinction risks for a temperate Hyalella population under scenarios similar to the SSD fifth percentile criterion. The criterion value was further compared to cadmium effects concentrations in ecosystem experiments and field studies. Generally, few adverse effects were inferred from ecosystem experiments at concentrations less than the SSD fifth percentile criterion. Exceptions were behavioral impairments in simplified food web studies. No adverse effects were apparent in field studies under conditions that seldom exceeded the criterion. At concentrations greater than the SSD fifth percentile, the magnitudes of adverse effects in the field studies were roughly proportional to the laboratory-based fraction of species with adverse effects in the SSD. Overall, the modeling and field validation comparisons of the chronic criterion values generally supported the relevance and protectiveness of the SSD fifth percentile approach with cadmium. © 2009 Society for Risk Analysis.
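The SSD fifth-percentile construction referred to above is commonly implemented by fitting a log-normal distribution to the species effect concentrations and taking its 5th percentile (the "HC5"). A minimal sketch, with made-up toxicity values rather than the paper's cadmium data:

```python
import math
import statistics

def hc5(effect_concs):
    # Fifth percentile of a log-normal species sensitivity distribution
    # fitted by moments to chronic effect concentrations.
    logs = [math.log10(c) for c in effect_concs]
    mu = statistics.mean(logs)
    sd = statistics.stdev(logs)      # sample standard deviation
    z05 = -1.645                     # standard-normal 5th percentile
    return 10 ** (mu + z05 * sd)

# Hypothetical chronic effect concentrations for 8 species (ug/L).
concs = [0.3, 0.8, 1.2, 2.5, 4.0, 6.5, 9.0, 15.0]
criterion = hc5(concs)               # falls near the most sensitive species
```

As the abstract notes, a species can still sit below this criterion (here the 0.3 value is close to it), which is why the authors follow up with population modeling and field validation rather than relying on the percentile alone.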
Filamentary and hierarchical pictures - Kinetic energy criterion
NASA Technical Reports Server (NTRS)
Klypin, Anatoly A.; Melott, Adrian L.
1992-01-01
We present a new criterion for formation of second-generation filaments. The criterion called the kinetic energy ratio, KR, is based on comparison of peculiar velocities at different scales. We suggest that the clumpiness of the distribution in some cases might be less important than the 'coldness' or 'hotness' of the flow for formation of coherent structures. The kinetic energy ratio is analogous to the Mach number except for one essential difference. If at some scale KR is greater than 1, as estimated at the linear stage, then when fluctuations of this scale reach nonlinearity, the objects they produce must be anisotropic ('filamentary'). In the case of power-law initial spectra the kinetic ratio criterion suggests that the border line is the power-spectrum with the slope n = -1.
NASA Astrophysics Data System (ADS)
Rashidi Moghaddam, M.; Ayatollahi, M. R.; Berto, F.
2018-01-01
The values of mode II fracture toughness reported in the literature for several rocks are studied theoretically by using a modified criterion based on strain energy density averaged over a control volume around the crack tip. The modified criterion takes into account the effect of T-stress in addition to the singular terms of stresses/strains. The experimental results are related to mode II fracture tests performed on the semicircular bend and Brazilian disk specimens. There are good agreements between theoretical predictions using the generalized averaged strain energy density criterion and the experimental results. The theoretical results reveal that the value of mode II fracture toughness is affected by the size of control volume around the crack tip and also the magnitude and sign of T-stress.
Code of Federal Regulations, 2011 CFR
2011-04-01
... residual functional capacity is not the same as the corresponding criterion of a rule. In these instances... economy. Appendix 2 provides rules using this data reflecting major functional and vocational patterns. We...
7 CFR 1940.592 - Community facilities grants.
Code of Federal Regulations, 2010 CFR
2010-01-01
...). (b) Basic formula criteria, data source, and weight. See § 1940.552(b). (1) The criteria used in the... percentage of National rural population with income below the poverty level—50 percent. (2) The data source for each of these criteria is based on the latest census data available. Each criterion is assigned a...
Criterion-Referenced Test Items for Auto Body.
ERIC Educational Resources Information Center
Tannehill, Dana, Ed.
This test item bank on auto body repair contains criterion-referenced test questions based upon competencies found in the Missouri Auto Body Competency Profile. Some test items are keyed for multiple competencies. The tests cover the following 26 competency areas in the auto body curriculum: auto body careers; measuring and mixing; tools and…
76 FR 16250 - Planning Resource Adequacy Assessment Reliability Standard
Federal Register 2010, 2011, 2012, 2013, 2014
2011-03-23
..., to utilize a ``one day in ten years'' loss of load criterion, and to document and post load and...'' loss of load criterion, and to document and post load and resource capability in each area or..., based on ``one day in ten years'' loss of load expectation principles, for the analysis, assessment and...
Evaluation of Weighted Scale Reliability and Criterion Validity: A Latent Variable Modeling Approach
ERIC Educational Resources Information Center
Raykov, Tenko
2007-01-01
A method is outlined for evaluating the reliability and criterion validity of weighted scales based on sets of unidimensional measures. The approach is developed within the framework of latent variable modeling methodology and is useful for point and interval estimation of these measurement quality coefficients in counseling and education…
Criterion-Referenced Job Proficiency Testing: A Large Scale Application. Research Report 1193.
ERIC Educational Resources Information Center
Maier, Milton H.; Hirshfeld, Stephen F.
The Army Skill Qualification Tests (SQT's) were designed to determine levels of competence in performance of the tasks crucial to an enlisted soldier's occupational specialty. SQT's are performance-based, criterion-referenced measures which offer two advantages over traditional proficiency and achievement testing programs: test content can be made…
An enhanced version of a bone-remodelling model based on the continuum damage mechanics theory.
Mengoni, M; Ponthot, J P
2015-01-01
The purpose of this work was to propose an enhancement of Doblaré and García's internal bone remodelling model based on continuum damage mechanics (CDM) theory. In their paper, they stated that the evolution of the internal variables of the bone microstructure, and its incidence on the modification of the elastic constitutive parameters, may be formulated following the principles of CDM, although no actual damage was considered. The resorption and apposition criteria (similar to the damage criterion) were expressed in terms of a mechanical stimulus. However, the resorption criterion lacks dimensional consistency with the remodelling rate. We propose here an enhancement to this resorption criterion, ensuring dimensional consistency while retaining the physical properties of the original remodelling model. We then analyse the change in the resorption criterion hypersurface in the stress space for a two-dimensional (2D) analysis. We finally apply the new formulation to analyse the structural evolution of a 2D femur. This analysis gives results consistent with the original model but with a faster and more stable convergence rate.
A Study of Shared-Memory Mutual Exclusion Protocols Using CADP
NASA Astrophysics Data System (ADS)
Mateescu, Radu; Serwe, Wendelin
Mutual exclusion protocols are an essential building block of concurrent systems: indeed, such a protocol is required whenever a shared resource has to be protected against concurrent non-atomic accesses. Hence, many variants of mutual exclusion protocols exist in the shared-memory setting, such as Peterson's or Dekker's well-known protocols. Although the functional correctness of these protocols has been studied extensively, relatively little attention has been paid to their non-functional aspects, such as their performance in the long run. In this paper, we report on experiments with the performance evaluation of mutual exclusion protocols using Interactive Markov Chains. Steady-state analysis provides an additional criterion for comparing protocols, which complements the verification of their functional properties. We also carefully re-examined the functional properties, whose accurate formulation as temporal logic formulas in the action-based setting turns out to be quite involved.
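The steady-state analysis used to compare protocols reduces to a linear solve on the chain's transition structure. As a simplified illustration only, a plain discrete-time Markov chain with hypothetical transition probabilities, not the Interactive Markov Chains or CADP tooling used in the paper:

```python
import numpy as np

# Toy 3-state chain for one process in a mutual-exclusion protocol:
# states = (idle, waiting, critical section); P is hypothetical.
P = np.array([
    [0.6, 0.4, 0.0],   # idle -> idle / waiting
    [0.0, 0.5, 0.5],   # waiting -> waiting / critical
    [0.7, 0.0, 0.3],   # critical -> idle / critical
])

def steady_state(P):
    """Solve pi @ P = pi with sum(pi) = 1 as a least-squares linear system."""
    n = P.shape[0]
    A = np.vstack([P.T - np.eye(n), np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pi

pi = steady_state(P)
print(pi)  # long-run fraction of time spent in each state
```

The stationary vector gives the long-run fraction of time in the critical section, which is the kind of quantitative criterion the abstract uses to complement functional verification.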
Analysis of Photothermal Characterization of Layered Materials: Design of Optimal Experiments
NASA Technical Reports Server (NTRS)
Cole, Kevin D.
2003-01-01
In this paper, numerical calculations are presented for the steady-periodic temperature in layered materials and functionally-graded materials to simulate photothermal methods for the measurement of thermal properties. No laboratory experiments were performed. The temperature is found from a new Green's function formulation which is particularly well-suited to machine calculation. The simulation method is verified by comparison with literature data for a layered material. The method is applied to a class of two-component functionally-graded materials and results for temperature and sensitivity coefficients are presented. An optimality criterion, based on the sensitivity coefficients, is used for choosing what experimental conditions will be needed for photothermal measurements to determine the spatial distribution of thermal properties. This method for optimal experiment design is completely general and may be applied to any photothermal technique and to any functionally-graded material.
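The abstract does not state which sensitivity-based optimality criterion is used; a common choice of this type is D-optimality, which scores a candidate experiment by det(JᵀJ) of its sensitivity matrix J. A sketch under that assumption, with hypothetical sensitivity columns standing in for the photothermal ones:

```python
import numpy as np

def d_optimality(J):
    """D-optimality score det(J^T J) of a sensitivity matrix J
    (rows = observations, columns = parameters). Larger is better."""
    return np.linalg.det(J.T @ J)

# Hypothetical two-parameter experiment: compare two candidate designs.
t = np.linspace(0.1, 1.0, 50)
J_a = np.column_stack([np.exp(-t), t * np.exp(-t)])  # independent sensitivities
J_b = np.column_stack([np.exp(-t), np.exp(-t)])      # collinear: parameters not separable
print(d_optimality(J_a) > d_optimality(J_b))  # True: design A identifies both parameters
```

A collinear sensitivity matrix drives det(JᵀJ) to zero, flagging conditions under which the parameters cannot be distinguished from the measurements.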
Morphing continuum theory for turbulence: Theory, computation, and visualization.
Chen, James
2017-10-01
A high-order morphing continuum theory (MCT) is introduced to model highly compressible turbulence. The theory is formulated under the rigorous framework of rational continuum mechanics. A set of linear constitutive equations and balance laws are deduced and presented from the Coleman-Noll procedure and Onsager's reciprocal relations. The governing equations are then arranged in conservation form and solved through the finite volume method with a second-order Lax-Friedrichs scheme for shock preservation. A numerical example of transonic flow over a three-dimensional bump is presented using MCT and the finite volume method. The comparison with experiments shows that MCT-based direct numerical simulation (DNS) provides a better prediction than Navier-Stokes (NS)-based DNS while using less than 10% of the mesh count. An MCT-based, frame-indifferent Q criterion is also derived to show the coherent eddy structure of the downstream turbulence in the numerical example. It should be emphasized that unlike the NS-based Q criterion, the MCT-based Q criterion is objective, without the limitation of Galilean invariance.
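For reference, the classical NS-based Q criterion that the abstract contrasts against (not the objective MCT variant derived in the paper) compares rotation to strain in the velocity gradient:

```python
import numpy as np

def q_criterion(grad_u):
    """Classical (NS-based) Q criterion from a 3x3 velocity-gradient tensor:
    Q = 0.5 * (||Omega||^2 - ||S||^2); Q > 0 flags rotation-dominated flow."""
    S = 0.5 * (grad_u + grad_u.T)      # strain-rate tensor
    Omega = 0.5 * (grad_u - grad_u.T)  # rotation-rate tensor
    return 0.5 * (np.sum(Omega**2) - np.sum(S**2))

# Solid-body rotation about z: rotation dominates, so Q > 0
rot = np.array([[0.0, -1.0, 0.0],
                [1.0,  0.0, 0.0],
                [0.0,  0.0, 0.0]])
# Planar extension: pure strain, so Q < 0
ext = np.diag([1.0, -1.0, 0.0])
print(q_criterion(rot), q_criterion(ext))  # 1.0 -1.0
```

Because it is built from the velocity gradient in a fixed frame, this classical form is Galilean- but not frame-invariant, which is exactly the limitation the MCT-based criterion removes.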
Jacobs, Jeremy M.; Evanson, J. Richard; Pniewski, Josh; Dickston, Michelle L.; Mueller, Terry; Bojescul, John A.
2017-01-01
Introduction Hip arthroscopy allows surgeons to address intra-articular pathology of the hip while avoiding more invasive open surgical dislocation. However, the post-operative rehabilitation protocols have varied greatly in the literature, with many having prolonged periods of limited motion and weight bearing. Purpose The purpose of this study was to describe a criterion-based early weight-bearing protocol following hip arthroscopy and to investigate functional outcomes in subjects who were active-duty military. Methods Active-duty personnel undergoing hip arthroscopy for symptomatic femoroacetabular impingement were prospectively assessed in a controlled environment for the ability to incorporate early postoperative weight bearing, with the following criteria: no increased pain complaint with weight bearing and a normalized gait pattern. The Modified Harris Hip Score (HHS) and Hip Outcome Score (HOS) were administered preoperatively and at six months post-op. Participants were progressed with a standard hip arthroscopy protocol. Hip flexion was limited to not exceed 90 degrees for the first three weeks post-op, with progression back to running beginning at three months. Final discharge depended upon the ability to run two miles at a military-specified pace and to perform a single-leg broad jump within six inches of the contralateral leg without an increase in pain. Results Eleven participants met inclusion criteria over the study period. Crutch use was discontinued at an average of five days following surgery based on the established weight-bearing criteria. Only one participant required continued crutch use at 15 days. Participants' functional outcome improved postoperatively, as demonstrated by significant increases in the HOS and HHS. At the six-month follow-up, eight of 11 participants were able to take and complete a full Army Physical Fitness Test.
Conclusions Following completion of the early weight-bearing rehabilitation protocol, 81% of participants were able to progress to full weight bearing by four days post-operative, with normalized pain-free gait patterns. Active-duty personnel utilizing an early weight-bearing protocol following hip arthroscopy demonstrated significant functional improvement at six months. Level of Evidence Level 4, Case-series PMID:29181261
Mayorga-Vega, Daniel; Bocanegra-Parrilla, Raúl; Ornelas, Martha; Viciana, Jesús
2016-01-01
Objectives The main purpose of the present meta-analysis was to examine the criterion-related validity of the distance- and time-based walk/run tests for estimating cardiorespiratory fitness among apparently healthy children and adults. Materials and Methods Relevant studies were searched from seven electronic bibliographic databases up to August 2015 and through other sources. The Hunter-Schmidt psychometric meta-analysis approach was conducted to estimate the population criterion-related validity of the following walk/run tests: 5,000 m, 3 miles, 2 miles, 3,000 m, 1.5 miles, 1 mile, 1,000 m, ½ mile, 600 m, 600 yd, ¼ mile, 15 min, 12 min, 9 min, and 6 min. Results From the 123 included studies, a total of 200 correlation values were analyzed. The overall results showed that the criterion-related validity of the walk/run tests for estimating maximum oxygen uptake ranged from low to moderate (rp = 0.42–0.79), with the 1.5 mile (rp = 0.79, 0.73–0.85) and 12 min walk/run tests (rp = 0.78, 0.72–0.83) having the highest criterion-related validity among the distance- and time-based field tests, respectively. The present meta-analysis also showed that sex, age and maximum oxygen uptake level do not seem to affect the criterion-related validity of the walk/run tests. Conclusions When the evaluation of an individual's maximum oxygen uptake attained during a laboratory test is not feasible, the 1.5 mile and 12 min walk/run tests represent useful alternatives for estimating cardiorespiratory fitness. As in the assessment with any physical fitness field test, evaluators must be aware that the performance score of the walk/run field tests is simply an estimation and not a direct measure of cardiorespiratory fitness. PMID:26987118
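The first step of a Hunter-Schmidt psychometric meta-analysis, before any artifact corrections, is a sample-size-weighted mean of the study correlations. A minimal sketch with hypothetical validity coefficients (the numbers below are illustrative, not from the meta-analysis):

```python
import numpy as np

def weighted_mean_r(r, n):
    """Sample-size-weighted mean correlation: the bare-bones first step
    of a Hunter-Schmidt psychometric meta-analysis."""
    r, n = np.asarray(r, float), np.asarray(n, float)
    return np.sum(n * r) / np.sum(n)

# Hypothetical validity coefficients from three studies of a 1.5-mile run
r = [0.75, 0.82, 0.70]
n = [40, 120, 60]
print(round(weighted_mean_r(r, n), 3))  # 0.775
```

Weighting by sample size lets large studies dominate the pooled estimate, which is why rp from many small studies can differ from a simple average of the reported correlations.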
1980-08-01
If the sample mean of the response variable is denoted by Ȳ, the total sum of squares of deviations from that mean is defined by SSTO = Σ_{i=1}^{n} (Y_i − Ȳ)² (2.6), and the regression sum of squares by SSR = SSTO − SSE (2.7). A selection criterion is a rule according to which a certain model out of the 2^p possible models is labeled "best"; several such criteria are discussed next. 1. The R² Criterion. The coefficient of determination is defined by R² = 1 − SSE/SSTO (2.8). It is clear that R² is the proportion of the total variation in Y that is explained by the regression.
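The quantities in this excerpt can be reproduced numerically; the small data set below is illustrative:

```python
import numpy as np

# Quantities from the text: SSTO = sum (Y_i - Ybar)^2, SSE from a fitted
# model, SSR = SSTO - SSE, and R^2 = 1 - SSE/SSTO.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.0, 3.1, 4.2, 4.8, 6.1])

b1, b0 = np.polyfit(x, y, 1)             # least-squares line: slope, intercept
resid = y - (b0 + b1 * x)
SSE = np.sum(resid**2)
SSTO = np.sum((y - y.mean())**2)
SSR = SSTO - SSE
R2 = 1.0 - SSE / SSTO
print(R2)  # proportion of the variation in y explained by the fit
```

For model selection, R² alone always favors the largest model; that is why the text goes on to discuss additional criteria beyond R².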
Approach to numerical safety guidelines based on a core melt criterion. [PWR; BWR
DOE Office of Scientific and Technical Information (OSTI.GOV)
Azarm, M.A.; Hall, R.E.
1982-01-01
A plausible approach is proposed for translating a single-level criterion into a set of numerical guidelines. The criterion for core melt probability is used to set numerical guidelines for various core melt sequences, systems and component unavailabilities. These guidelines can be used as a means for making decisions regarding the necessity of replacing a component or improving part of a safety system. This approach is applied to estimate a set of numerical guidelines for the various core melt sequences analyzed in the Reactor Safety Study for the Peach Bottom Nuclear Power Plant.
Heo, Hye Seon; An, MinJi; Lee, Ji Sun; Kim, Hee Kyong; Park, Yeong-Chul
2018-06-01
G-7% NANA is N-acetylneuraminic acid (NANA) containing 7% sialic acid isolated from glycomacropeptide (GMP), a milk-derived compound. Since NANA is likely to have immunotoxicity, the need to ensure safety for long-term administration has been raised. In this study, a 90-day repeated oral dose toxicity test was performed in rats using G-7% NANA at dosages of 0, 1250, 2500 and 5000 mg/kg/day. A toxicity determination criterion, based on the significant changes caused by administration of the substance, was developed for estimating the NOEL, NOAEL and LOAEL applied to this study. When analyzing the immunological markers, no significant changes were observed, even though other significant changes were observed in the high-dose group. In accordance with the toxicity determination criterion developed, the NOEL in males and females was determined to be 2500 mg/kg/day, and the NOAEL in females was determined to be 5000 mg/kg/day. The toxicity determination criterion, applied for the first time in repeated dose toxicity tests, could provide a basis for distinguishing the NOEL and NOAEL more clearly; nevertheless, it needs to be supplemented by differentiating adverse effects from non-adverse effects based on more experience with repeated dose toxicity tests. Copyright © 2018 Elsevier Inc. All rights reserved.
Comparison Of Eigenvector-Based Statistical Pattern Recognition Algorithms For Hybrid Processing
NASA Astrophysics Data System (ADS)
Tian, Q.; Fainman, Y.; Lee, Sing H.
1989-02-01
The pattern recognition algorithms based on eigenvector analysis (group 2) are theoretically and experimentally compared in this part of the paper. Group 2 consists of the Foley-Sammon (F-S) transform, Hotelling trace criterion (HTC), Fukunaga-Koontz (F-K) transform, linear discriminant function (LDF) and generalized matched filter (GMF). It is shown that all eigenvector-based algorithms can be represented in a generalized eigenvector form. However, the calculations of the discriminant vectors differ between algorithms. Summaries of how to calculate the discriminant functions for the F-S, HTC and F-K transforms are provided. Especially for the more practical, underdetermined case, where the number of training images is less than the number of pixels in each image, the calculations usually require the inversion of a large, singular, pixel correlation (or covariance) matrix. We suggest solving this problem by finding its pseudo-inverse, which requires inverting only the smaller, non-singular image correlation (or covariance) matrix plus multiplying several non-singular matrices. We also theoretically compare the classification effectiveness of the discriminant functions from F-S, HTC and F-K with those of LDF and GMF, and compare the linear-mapping-based algorithms with the eigenvector-based algorithms. Experimentally, we compare the eigenvector-based algorithms using a set of image databases, with each image consisting of 64 x 64 pixels.
NASA Astrophysics Data System (ADS)
Sigmund, Peter
The mean equilibrium charge of a penetrating ion can be estimated on the basis of Bohr's velocity criterion or Lamb's energy criterion. Qualitative and quantitative results are derived on the basis of the Thomas-Fermi model of the atom, which is discussed explicitly. This includes a brief introduction to the Thomas-Fermi-Dirac model. Special attention is paid to trial function approaches by Lenz and Jensen as well as Brandt and Kitagawa. The chapter also offers a preliminary discussion of the role of the stopping medium, gas-solid differences, and a survey of data compilations.
Review of the Functions of Archimedes’ Spiral Metallic Nanostructures
Li, Zixiang; Zhang, Jingran; Guo, Kai; Shen, Fei; Zhou, Qingfeng; Zhou, Hongping
2017-01-01
Here, we have reviewed some typical plasmonic structures based on Archimedes' spiral (AS) architectures, which can produce polarization-sensitive focusing phenomena and generate plasmonic vortices (PVs) carrying controllable orbital angular momentum (OAM) because of the relation between the incident polarization states and the chiralities of the spiral structures. These features can be used to analyze different circular polarization states, which has been one of the rapidly developing research topics in nanophotonics in recent years. Many investigations demonstrate that the multifunctional spiral-based plasmonic structures are excellent choices for chiral selection and for generating a transmitted field with well-defined OAM. The circular polarization extinction ratio, as an evaluation criterion for the polarization selectivity of a designed structure, can be effectively improved by properly modulating the parameters of the spiral structures. Such functional spiral plasmonic nanostructures are promising for applications in analyzing circularly polarized light, full Stokes vector polarimetric sensors, near-field imaging, and so on. PMID:29165382
The nonlinear viscoelastic response of resin matrix composite laminates
NASA Technical Reports Server (NTRS)
Hiel, C.; Cardon, A. H.; Brinson, H. F.
1984-01-01
Possible treatments of the nonlinear viscoelastic behavior of materials are reviewed. A thermodynamics-based approach, developed by Schapery, is discussed and used to interpret the nonlinear viscoelastic response of a graphite epoxy laminate, T300/934. Test data to verify the analysis for Fiberite 934 neat resin as well as transverse and shear properties of the unidirectional T300/934 composite are presented. Long-time creep characteristics as a function of stress level and temperature are generated. Favorable comparisons between the traditional graphical and the current analytical approaches are shown. A free-energy-based rupture criterion is proposed as a way to estimate the life that remains in a structure at any time.
Computation of forces from deformed visco-elastic biological tissues
NASA Astrophysics Data System (ADS)
Muñoz, José J.; Amat, David; Conte, Vito
2018-04-01
We present a least-squares based inverse analysis of visco-elastic biological tissues. The proposed method computes the set of contractile forces (dipoles) at the cell boundaries that induce the observed and quantified deformations. We show that the computation of these forces requires the regularisation of the problem functional for some load configurations that we study here. The functional measures the error of the dynamic problem being discretised in time with a second-order implicit time-stepping and in space with standard finite elements. We analyse the uniqueness of the inverse problem and estimate the regularisation parameter by means of an L-curve criterion. We apply the methodology to a simple toy problem and to an in vivo set of morphogenetic deformations of the Drosophila embryo.
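A minimal sketch of the L-curve criterion on a generic Tikhonov-regularised problem: sweep the regularisation parameter, trace the log-log curve of (residual norm, solution norm), and pick the point of maximum discrete curvature. The test problem below is hypothetical, and robust implementations (e.g. SVD-based curvature formulas) are preferred over these finite differences:

```python
import numpy as np

def tikhonov(A, b, lam):
    """Tikhonov-regularised least squares: argmin ||Ax - b||^2 + lam^2 ||x||^2."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam**2 * np.eye(n), A.T @ b)

def l_curve_corner(A, b, lams):
    """Pick lambda at the corner (max discrete curvature) of the
    log-log L-curve of (residual norm, solution norm)."""
    pts = []
    for lam in lams:
        x = tikhonov(A, b, lam)
        pts.append((np.log(np.linalg.norm(A @ x - b)),
                    np.log(np.linalg.norm(x))))
    pts = np.array(pts)
    d1 = np.gradient(pts, axis=0)   # first derivative along the curve
    d2 = np.gradient(d1, axis=0)    # second derivative
    kappa = np.abs(d1[:, 0] * d2[:, 1] - d1[:, 1] * d2[:, 0]) \
            / (d1[:, 0]**2 + d1[:, 1]**2)**1.5
    return lams[np.argmax(kappa)]

# Hypothetical ill-conditioned test problem with noisy data
rng = np.random.default_rng(1)
A = np.vander(np.linspace(0.0, 1.0, 40), 8, increasing=True)
b = A @ rng.standard_normal(8) + 1e-3 * rng.standard_normal(40)
lam = l_curve_corner(A, b, np.logspace(-8, 0, 60))
print(lam)  # regularisation parameter at the L-curve corner
```

The corner balances the two competing terms: to its left the solution norm blows up (under-regularisation), to its right the residual grows (over-regularisation).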
An efficient variable projection formulation for separable nonlinear least squares problems.
Gan, Min; Li, Han-Xiong
2014-05-01
We consider in this paper a class of nonlinear least squares problems in which the model can be represented as a linear combination of nonlinear functions. The variable projection algorithm projects the linear parameters out of the problem, leaving a nonlinear least squares problem involving only the nonlinear parameters. To implement the variable projection algorithm more efficiently, we propose a new variable projection functional based on matrix decomposition. The advantage of the proposed formulation is that the size of the decomposed matrix may be much smaller than in previous formulations. The Levenberg-Marquardt algorithm with a finite difference method is then applied to minimize the new criterion. Numerical results show that the proposed approach achieves a significant reduction in computing time.
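The variable projection idea can be sketched on a toy separable model: for each trial value of the nonlinear parameter, the linear coefficients are eliminated by linear least squares and only the projected residual is scored. A coarse 1-D scan stands in for the Levenberg-Marquardt step used in the paper; the model and data below are hypothetical:

```python
import numpy as np

# Separable model y(t) = c1 * exp(-a t) + c2: linear in (c1, c2),
# nonlinear only in a.
rng = np.random.default_rng(2)
t = np.linspace(0.0, 4.0, 200)
y = 2.0 * np.exp(-1.3 * t) + 0.5 + 0.01 * rng.standard_normal(t.size)

def vp_residual(a):
    """Project out the linear coefficients for a given a; return the
    projected sum of squared residuals and the eliminated coefficients."""
    Phi = np.column_stack([np.exp(-a * t), np.ones_like(t)])
    c, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    return np.sum((Phi @ c - y)**2), c

# 1-D search over the nonlinear parameter only
grid = np.linspace(0.1, 3.0, 291)
a_hat = min(grid, key=lambda a: vp_residual(a)[0])
sse, c_hat = vp_residual(a_hat)
print(a_hat, c_hat)  # close to a = 1.3, c = (2.0, 0.5)
```

Because the search runs over the nonlinear parameters alone, the dimension of the optimisation drops, which is the source of the speedups the paper quantifies.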
Wavelet decomposition and radial basis function networks for system monitoring
NASA Astrophysics Data System (ADS)
Ikonomopoulos, A.; Endou, A.
1998-10-01
Two approaches are coupled to develop a novel collection of black box models for monitoring operational parameters in a complex system. The idea springs from the intention of obtaining multiple predictions for each system variable and fusing them before they are used to validate the actual measurement. The proposed architecture pairs the analytical abilities of the discrete wavelet decomposition with the computational power of radial basis function networks. Members of a wavelet family are constructed in a systematic way and chosen through a statistical selection criterion that optimizes the structure of the network. Network parameters are further optimized through a quasi-Newton algorithm. The methodology is demonstrated utilizing data obtained during two transients of the Monju fast breeder reactor. The models developed are benchmarked with respect to similar regressors based on Gaussian basis functions.
Fatigue criterion to system design, life and reliability
NASA Technical Reports Server (NTRS)
Zaretsky, E. V.
1985-01-01
A generalized methodology for structural life prediction, design, and reliability based upon a fatigue criterion is advanced. The life prediction methodology is based in part on the work of W. Weibull and of G. Lundberg and A. Palmgren. The approach incorporates the computed lives of the elemental stress volumes of a complex machine element to predict system life. The results of coupon fatigue testing can be incorporated into the analysis, allowing for life prediction and component or structural renewal rates with reasonable statistical certainty.
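In the Lundberg-Palmgren tradition, elemental lives are commonly combined as a strict series system of Weibull-distributed elements; whether this exact combination rule matches the report's formulation is an assumption here, and the numbers are illustrative:

```python
import numpy as np

def system_life(lives, e):
    """L10-style life of a series system of Weibull elements with a
    common slope e (Lundberg-Palmgren combination):
        L_sys = (sum_i L_i^(-e))^(-1/e)."""
    lives = np.asarray(lives, float)
    return np.sum(lives**(-e))**(-1.0 / e)

# Hypothetical elemental stress-volume lives (hours), bearing-like slope
lives = [9000.0, 12000.0, 15000.0]
print(system_life(lives, e=1.5))  # shorter than the shortest element life
```

The system life is always below the weakest element's life, reflecting that the system survives only if every elemental stress volume survives.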
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hong, X; Gao, H; Schuemann, J
2015-06-15
Purpose: The Monte Carlo (MC) method is a gold standard for dose calculation in radiotherapy. However, it is not a priori clear how many particles need to be simulated to achieve a given dose accuracy; a prior error estimate and stopping criterion are not well established for MC. This work aims to fill this gap. Methods: Owing to the statistical nature of MC, our approach is based on the one-sample t-test. We design the prior error estimate method based on the t-test, and then use this t-test based error estimate to develop a simulation stopping criterion. The three major components are as follows. First, the source particles are randomized in energy, space and angle, so that the dose deposition from a particle to the voxel is independent and identically distributed (i.i.d.). Second, a sample under consideration in the t-test is the mean value of the dose deposited in the voxel by a sufficiently large number of source particles; according to the central limit theorem, this sample, as the mean of i.i.d. variables, is normally distributed with expectation equal to the true deposited dose. Third, the t-test is performed with the null hypothesis that the difference between the sample expectation (the true deposited dose) and the on-the-fly calculated mean sample dose from MC is larger than a given error threshold; in addition, users are free to specify the confidence probability and region of interest in the t-test based stopping criterion. Results: The method is validated for proton dose calculation. The difference between the MC result based on the t-test prior error estimate and the statistical result obtained by repeating numerous MC simulations is within 1%. Conclusion: The t-test based prior error estimate and stopping criterion are developed for MC and validated for proton dose calculation. Xiang Hong and Hao Gao were partially supported by the NSFC (#11405105), the 973 Program (#2015CB856000) and the Shanghai Pujiang Talent Program (#14PJ1404500).
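The stopping rule can be sketched as accumulating i.i.d. batch means until the confidence half-width for the mean falls below a tolerance. This is a simplified stand-in for the paper's criterion: the batch generator, the tolerance, and the large-sample z = 1.96 in place of the exact t quantile are all illustrative assumptions:

```python
import numpy as np

def mc_mean_with_stopping(sample_batch, tol, conf_z=1.96, max_batches=10_000):
    """Accumulate i.i.d. Monte Carlo batch means until the confidence
    half-width for the mean drops below tol (a t-test-style stopping
    rule; z = 1.96 approximates the 95% t quantile for large n)."""
    means = []
    for _ in range(max_batches):
        means.append(sample_batch())
        n = len(means)
        if n >= 10:                         # need a few batches to estimate spread
            s = np.std(means, ddof=1)       # sample std of the batch means
            if conf_z * s / np.sqrt(n) < tol:
                break
    return np.mean(means), n

# Toy "dose deposition": exponentially distributed deposition,
# batches of 1000 particles (true mean = 2.0)
rng = np.random.default_rng(3)
batch = lambda: rng.exponential(scale=2.0, size=1000).mean()
est, n_batches = mc_mean_with_stopping(batch, tol=0.005)
print(est, n_batches)  # est close to the true mean 2.0
```

Treating each batch mean as one sample makes the central-limit normality assumption behind the t-test far more defensible than testing raw per-particle depositions.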
Pasekov, V P
2013-03-01
The paper considers the problems in the adaptive evolution of life-history traits for individuals in the nonlinear Leslie model of age-structured population. The possibility to predict adaptation results as the values of organism's traits (properties) that provide for the maximum of a certain function of traits (optimization criterion) is studied. An ideal criterion of this type is Darwinian fitness as a characteristic of success of an individual's life history. Criticism of the optimization approach is associated with the fact that it does not take into account the changes in the environmental conditions (in a broad sense) caused by evolution, thereby leading to losses in the adequacy of the criterion. In addition, the justification for this criterion under stationary conditions is not usually rigorous. It has been suggested to overcome these objections in terms of the adaptive dynamics theory using the concept of invasive fitness. The reasons are given that favor the application of the average number of offspring for an individual, R(L), as an optimization criterion in the nonlinear Leslie model. According to the theory of quantitative genetics, the selection for fertility (that is, for a set of correlated quantitative traits determined by both multiple loci and the environment) leads to an increase in R(L). In terms of adaptive dynamics, the maximum R(L) corresponds to the evolutionary stability and, in certain cases, convergent stability of the values for traits. The search for evolutionarily stable values on the background of limited resources for reproduction is a problem of linear programming.
Three-Dimensional Dynamic Rupture in Brittle Solids and the Volumetric Strain Criterion
NASA Astrophysics Data System (ADS)
Uenishi, K.; Yamachi, H.
2017-12-01
As pointed out by Uenishi (2016 AGU Fall Meeting), the source dynamics of ordinary earthquakes is often studied in the framework of 3D rupture in brittle solids, but our knowledge of the mechanics of actual 3D rupture is limited. Typically, criteria derived from 1D frictional observations of sliding materials or from the post-failure behavior of solids are applied in seismic simulations, and although mode-I cracks are frequently encountered in earthquake-induced ground failures, rupture in tension is in most cases ignored. Even when it is included in analyses, the classical maximum principal tensile stress rupture criterion is repeatedly used. Our recent basic experiments on the dynamic rupture of spherical or cylindrical monolithic brittle solids, by applying high-voltage electric discharge impulses or impact loads, have indicated the generation of surprisingly simple and often flat rupture surfaces in 3D specimens, even without pre-existing planes of weakness. However, at the same time, the snapshots taken by a high-speed digital video camera have shown rather complicated histories of rupture development in these 3D solid materials, which seem difficult to explain with, for example, the maximum principal stress criterion. Instead, a (tensile) volumetric strain criterion, in which the volumetric strain (dilatation, the first invariant of the strain tensor) is the decisive parameter for rupture, seems more effective in computationally reproducing the multi-directionally propagating waves and rupture. In this study, we try to show the connection between this volumetric strain criterion and other classical rupture criteria or physical parameters employed in continuum mechanics, and indicate that the criterion has, to some degree, a physical meaning. First, we mathematically illustrate that the criterion is equivalent to a criterion based on the mean normal stress, a crucial parameter in plasticity.
Then, we mention the relation between the volumetric strain criterion and the failure envelope of the Mohr-Coulomb criterion that describes shear-related rupture. The critical value of the volumetric strain for rupture may be controlled by the apparent cohesion and apparent angle of internal friction of the Mohr-Coulomb criterion.
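The stated equivalence can be made explicit for linear isotropic elasticity (an assumption made here for illustration; the abstract does not specify the constitutive law):

```latex
\sigma_m = \tfrac{1}{3}\left(\sigma_1+\sigma_2+\sigma_3\right) = K\,\varepsilon_v,
\qquad K = \frac{E}{3(1-2\nu)},
\qquad \varepsilon_v = \varepsilon_1+\varepsilon_2+\varepsilon_3
```

Since the bulk modulus K is a positive constant, a critical volumetric strain threshold εv ≥ εv,cr is equivalent to a mean-normal-stress threshold σm ≥ K εv,cr, which is the plasticity-side reading of the criterion.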
Soil and Water Indicators of the Sustainable Rangelands Roundtable
M.G. Sherm Karl; D.A. Pyke; P.T. Tueller; G.E. Schuman; R.W. Shafer; S.J. Borchard; D.T. Booth; W.G. Ypsilantis; R.H. Jr. Barrett
2006-01-01
The Sustainable Rangelands Roundtable (SRR) has explicitly included conservation and maintenance of soil and water resources as a criterion, a category of conditions or processes that can be assessed nationally to determine if the current level of rangeland management will ensure sustainability. Within the soil/water criterion, 10 indicators, 5 soil-based and 5 water-...
Design of surface modifications for nanoscale sensor applications.
Reimhult, Erik; Höök, Fredrik
2015-01-14
Nanoscale biosensors provide the possibility to miniaturize optic, acoustic and electric sensors to the dimensions of biomolecules. This enables approaching single-molecule detection and new sensing modalities that probe molecular conformation. Nanoscale sensors are predominantly surface-based and label-free to exploit inherent advantages of physical phenomena allowing high sensitivity without distortive labeling. There are three main criteria to be optimized in the design of surface-based and label-free biosensors: (i) the biomolecules of interest must bind with high affinity and selectively to the sensitive area; (ii) the biomolecules must be efficiently transported from the bulk solution to the sensor; and (iii) the transducer concept must be sufficiently sensitive to detect low coverage of captured biomolecules within reasonable time scales. The majority of literature on nanoscale biosensors deals with the third criterion while implicitly assuming that solutions developed for macroscale biosensors to the first two, equally important, criteria are applicable also to nanoscale sensors. We focus on providing an introduction to and perspectives on the advanced concepts for surface functionalization of biosensors with nanosized sensor elements that have been developed over the past decades (criterion (iii)). We review in detail how patterning of molecular films designed to control interactions of biomolecules with nanoscale biosensor surfaces creates new possibilities as well as new challenges.
Kino-oka, Masahiro; Taya, Masahito
2009-10-01
Innovative techniques of cell and tissue processing, based on tissue engineering, have been developed for therapeutic applications. Cell expansion and tissue reconstruction through ex vivo cultures are core processes used to produce engineered tissues with sufficient structural integrity and functionality. In manufacturing, strict management of contamination and human error is required, owing respectively to the direct use of non-sterilizable products and the laboriousness of culture operations. Therefore, the development of processing systems for cell and tissue cultures is one of the critical issues for ensuring a stable process and quality of therapeutic products. However, the siting criterion for culture systems has not yet been clearly defined. This review article classifies some of the known processing systems into 'sealed-chamber' and 'sealed-vessel' culture systems based on the difference in their aseptic spaces, and describes the potential advantages of these systems and the current state of culture systems, especially those established by Japanese companies. Moreover, on the basis of the guidelines for isolator systems used in aseptic processing for healthcare products, which are issued by the International Organization for Standardization, the siting criterion of the processing systems for cell and tissue cultures is discussed from the perspective of manufacturing therapeutic products in compliance with Good Manufacturing Practice regulations.
Multiscale Analysis of Head Impacts in Contact Sports
NASA Astrophysics Data System (ADS)
Guttag, Mark; Sett, Subham; Franck, Jennifer; McNamara, Kyle; Bar-Kochba, Eyal; Crisco, Joseph; Blume, Janet; Franck, Christian
2012-02-01
Traumatic brain injury (TBI) is one of the world's major causes of death and disability. To aid companies in designing safer and improved protective gear and to aid the medical community in producing improved quantitative TBI diagnosis and assessment tools, a multiscale finite element model of the human brain, head and neck is being developed. Recorded impact data from football and hockey helmets instrumented with accelerometers are compared to simulated impact data in the laboratory. Using data from these carefully constructed laboratory experiments, we can quantify impact location, magnitude, and linear and angular accelerations of the head. The resultant forces and accelerations are applied to a fully meshed head-form created from MRI data by Simpleware. With appropriate material properties for each region of the head-form, the Abaqus finite element model can determine the stresses, strains, and deformations in the brain. Simultaneously, an in-vitro cellular TBI criterion is being developed to be incorporated into Abaqus models for the brain. The cell-based injury criterion functions in the same way that damage criteria for metals and other structural materials are used to predict failure.
NASA Astrophysics Data System (ADS)
Zhang, Jiu-Chang
2018-02-01
Triaxial compression tests are conducted on a quasi-brittle rock, limestone. The analyses show that elastoplastic deformation is coupled with damage. Based on the experimental investigation, a coupled elastoplastic damage model is developed within the framework of irreversible thermodynamics. The coupling effects between the plastic and damage dissipations are described by introducing an isotropic damage variable into the elastic stiffness and the yield criterion. The novelty of the model is in the description of the thermodynamic force associated with damage, which is formulated as a state function of both the elastic and plastic strain energies. The latter fully accounts for the combined effects of plastic strain and stress-change processes in the rock material on the development of damage. The damage criterion and potential are constructed to determine the onset and evolution of the damage variable. The return mapping algorithms of the coupled model are deduced for three different inelastic corrections. Comparisons between test data and numerical simulations show that the coupled elastoplastic damage model is capable of describing the main mechanical behaviours of the quasi-brittle rock.
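The return mapping algorithms mentioned in this abstract can be illustrated, in a much simplified form, by the classical one-dimensional elastic-predictor/plastic-corrector scheme for linear isotropic hardening. This is a generic textbook sketch, not the coupled damage model of the paper; the material constants E, sigma_y and H are hypothetical values chosen for illustration.

```python
def return_mapping_1d(eps, eps_p, alpha, E=200e3, sigma_y=250.0, H=10e3):
    """One step of elastic-predictor / plastic-corrector return mapping
    for 1D linear isotropic hardening (generic sketch; units in MPa)."""
    # Elastic trial state: freeze plastic flow and evaluate the yield function.
    sigma_trial = E * (eps - eps_p)
    f_trial = abs(sigma_trial) - (sigma_y + H * alpha)
    if f_trial <= 0.0:
        return sigma_trial, eps_p, alpha          # step is purely elastic
    # Plastic correction: consistency f = 0 gives the plastic multiplier,
    # and the stress is returned to the updated yield surface.
    dgamma = f_trial / (E + H)
    sign = 1.0 if sigma_trial >= 0.0 else -1.0
    sigma = sigma_trial - E * dgamma * sign
    return sigma, eps_p + dgamma * sign, alpha + dgamma
```

After a plastic step, the returned stress sits exactly on the hardened yield surface, which is the defining property of the corrector.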
Information theoretic methods for image processing algorithm optimization
NASA Astrophysics Data System (ADS)
Prokushkin, Sergey F.; Galil, Erez
2015-01-01
Modern image processing pipelines (e.g., those used in digital cameras) are full of advanced, highly adaptive filters that often have a large number of tunable parameters (sometimes > 100). This makes the calibration procedure for these filters very complex, and optimal results are barely achievable by manual calibration; thus an automated approach is a must. We discuss an information-theory-based metric for evaluating an algorithm's adaptive characteristics ("adaptivity criterion"), using noise reduction algorithms as an example. The method allows finding an "orthogonal decomposition" of the filter parameter space into the "filter adaptivity" and "filter strength" directions. This metric can be used as a cost function in automatic filter optimization. Since it is a measure of physical "information restoration" rather than perceived image quality, it helps to reduce the set of filter parameters to a smaller subset that is easier for a human operator to tune to achieve better subjective image quality. With appropriate adjustments, the criterion can be used for assessment of the whole imaging system (sensor plus post-processing).
Gu, Danan; Feng, Qiushi; Sautter, Jessica M; Yang, Fang; Ma, Lei; Zhen, Zhihong
2017-03-01
To investigate subtypes of successful aging (SA) based on concordance and discordance between self-rated and researcher-defined measures and their associations with demographic, psychosocial, and life satisfaction factors. We used multinomial logistic regression models to analyze 2013 cross-sectional survey data from 1,962 persons aged 65 and older in Shanghai that measured self-rated successful aging (SSA) with a single global assessment and researcher-defined successful aging (RSA) with a cumulative deficit index reflecting physical, physiological, cognitive, psychological, and social engagement domains. We generated four subtypes based on these two dichotomous variables: nonsuccessful aging (non-SA; meeting neither the criterion of RSA nor the criterion of SSA), RSA-only (meeting the criterion of RSA but not that of SSA), SSA-only (meeting the criterion of SSA but not that of RSA), and both-successful aging (both-SA; meeting the criteria of both RSA and SSA). In the sample, 32% were nonsuccessful agers, 7% RSA-only, 34% SSA-only, and 27% both-SA. Female gender and older age were associated with lower likelihood of RSA-only and both-SA relative to non-SA, but with greater likelihood of SSA-only. Good socioeconomic conditions and social networks were associated with greater likelihood of SSA-only and both-SA relative to non-SA or RSA-only. Satisfaction with life domains was robustly and positively associated with good successful aging outcomes. Researcher-defined successful aging and self-rated successful aging are different measures with distinct social correlates. Subtypes of concordance and discordance provide a more holistic biopsychosocial conceptualization of successful aging. © The Author 2016. Published by Oxford University Press on behalf of The Gerontological Society of America. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
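The four subtypes described above are a simple cross-classification of the two dichotomous measures; a minimal sketch of the assignment logic (function and label spellings are ours):

```python
def sa_subtype(rsa: bool, ssa: bool) -> str:
    """Classify a respondent by researcher-defined (RSA) and
    self-rated (SSA) successful-aging indicators."""
    if rsa and ssa:
        return "both-SA"      # meets both criteria
    if rsa:
        return "RSA-only"     # researcher-defined only
    if ssa:
        return "SSA-only"     # self-rated only
    return "non-SA"           # meets neither criterion
```

Each respondent falls into exactly one of the four cells, which is what makes the multinomial regression over subtypes well defined.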
The Relationship of Facilitative Functioning to Effective Peer Supervision
ERIC Educational Resources Information Center
Seligman, Linda
1978-01-01
This study investigates peer supervision. The criterion used was counselor trainees' growth in facilitative functioning. The study sought to ascertain whether the facilitative counselor trainee was also the effective peer supervisor, to provide information on evaluating peer-supervisory experience, and to shed light on the most effective…
DOT National Transportation Integrated Search
1978-08-01
If a functional age index for pilots is to be developed that can be used as a criterion for extending or terminating an aviator's career, means for the assessment of pilot proficiency must be available or devised. There are two major approaches used ...
Poulsen, Ingrid; Kreiner, Svend; Engberg, Aase W
2018-02-13
The Early Functional Abilities scale assesses the restoration of brain function after brain injury, based on 4 dimensions. The primary objective of this study was to evaluate the validity, objectivity, reliability and measurement precision of the Early Functional Abilities scale by Rasch model item analysis. A secondary objective was to examine the relationship between the Early Functional Abilities scale and the Functional Independence Measurement™, in order to establish the criterion validity of the Early Functional Abilities scale and to compare the sensitivity of measurements using the 2 instruments. The Rasch analysis was based on the assessment of 408 adult patients at admission to sub-acute rehabilitation in Copenhagen, Denmark after traumatic brain injury. The Early Functional Abilities scale provides valid and objective measurement of vegetative (autonomic), facio-oral, sensorimotor and communicative/cognitive functions. Removal of one item from the sensorimotor scale confirmed unidimensionality for each of the 4 subscales, but not for the entire scale. The Early Functional Abilities subscales are sensitive to differences between patients in ranges in which the Functional Independence Measurement™ has a floor effect. The Early Functional Abilities scale assesses the early recovery of important aspects of brain function after traumatic brain injury, but is not unidimensional. We recommend removal of the "standing" item and calculation of summary subscales for the separate dimensions.
Lee, C. Ellen; Warden, Stuart J.; Szuck, Beth; Lau, Y.K. James
2015-01-01
Objective The aim of this study was to examine the effects of a 6-week community-based physical activity (PA) intervention on physical function-related risk factors for falls among 56 breast cancer survivors (BCS) who had completed treatments. Design This was a single-group longitudinal study. The multimodal PA intervention included aerobic, strengthening and balance components. Physical function outcomes based on the 4-meter walk, chair stand, one-leg stance, tandem walk, and dynamic muscular endurance tests were assessed at 6-week pre-intervention (T1), baseline (T2), and post-intervention (T3). T1-T2 and T2-T3 were the control and intervention periods, respectively. Results All outcomes, except the tandem walk test, significantly improved after the intervention period (p < 0.05), with no change detected after the control period (p > 0.05). Based on the falls risk criterion in the one-leg stance test, the proportion at risk for falls was significantly lower after the intervention period (p = 0.04), but not after the control period. Conclusions A community-based multimodal PA intervention for BCS may be efficacious in improving physical function-related risk factors for falls, and lowering the proportion of BCS at risk for falls based on specific physical function-related falls criteria. Further larger trials are needed to confirm these preliminary findings. PMID:26829081
A new tracer‐density criterion for heterogeneous porous media
Barth, Gilbert R.; Illangasekare, Tissa H.; Hill, Mary C.; Rajaram, Harihar
2001-01-01
Tracer experiments provide information about aquifer material properties vital for accurate site characterization. Unfortunately, density‐induced sinking can distort tracer movement, leading to an inaccurate assessment of material properties. Yet existing criteria for selecting appropriate tracer concentrations are based on analysis of homogeneous media instead of media with heterogeneities typical of field sites. This work introduces a hydraulic‐gradient correction for heterogeneous media and applies it to a criterion previously used to indicate density‐induced instabilities in homogeneous media. The modified criterion was tested using a series of two‐dimensional heterogeneous intermediate‐scale tracer experiments and data from several detailed field tracer tests. The intermediate‐scale experimental facility (10.0×1.2×0.06 m) included both homogeneous and heterogeneous (σ²ln k = 1.22) zones. The field tracer tests were less heterogeneous (0.24 < σ²ln k < 0.37), but measurements were sufficient to detect density‐induced sinking. Evaluation of the modified criterion using the experiments and field tests demonstrates that the new criterion appears to account for the change in density‐induced sinking due to heterogeneity. The criterion demonstrates the importance of accounting for heterogeneity to predict density‐induced sinking and differences in the onset of density‐induced sinking in two‐ and three‐dimensional systems.
Contribution of criterion A2 to PTSD screening in the presence of traumatic events.
Pereda, Noemí; Forero, Carlos G
2012-10-01
Criterion A2 according to the Diagnostic and Statistical Manual of Mental Disorders (4th ed.; DSM-IV; American Psychiatric Association [APA], 1994) for posttraumatic stress disorder (PTSD) aims to assess the individual's subjective appraisal of an event, but it has been claimed that it might not be sufficiently specific for diagnostic purposes. We analyse the contribution of Criterion A2 and DSM-IV criteria to detect PTSD for the most distressing life events experienced by our subjects. Young adults (N = 1,033) reported their most distressing life events, together with PTSD criteria (Criteria A2, B, C, D, E, and F). PTSD prevalence and criterion specificity and agreement with probable diagnoses were estimated. Our results indicate 80.30% of the individuals experienced traumatic events and met one or more PTSD criteria; 13.22% of cases received a positive diagnosis of PTSD. Criterion A2 showed poor agreement with the final probable PTSD diagnosis (correlation with PTSD .13, specificity = .10); excluding it from PTSD diagnosis did not change the estimated disorder prevalence significantly. Based on these findings it appears that Criterion A2 is scarcely specific and provides little information to confirm a probable PTSD case. Copyright © 2012 International Society for Traumatic Stress Studies.
Above-real-time training (ARTT) improves transfer to a simulated flight control task.
Donderi, D C; Niall, Keith K; Fish, Karyn; Goldstein, Benjamin
2012-06-01
The aim of this study was to measure the effects of above-real-time training (ARTT) speed and screen resolution on a simulated flight control task. ARTT has been shown to improve transfer to the criterion task in some military simulation experiments. We tested training speed and screen resolution in a project, sponsored by Defence Research and Development Canada, to develop components for prototype air mission simulators. For this study, 54 participants used a single-screen PC-based flight simulation program to learn to chase and catch an F-18A fighter jet with another F-18A while controlling the chase aircraft with a throttle and side-stick controller. Screen resolution was varied between participants, and training speed was varied factorially across two sessions within participants. Pretest and posttest trials were at high resolution and criterion (900 knots) speed. Posttest performance was best with high-resolution training and when one ARTT training session was followed by a session of criterion-speed training. ARTT followed by criterion training improves performance on a visual-motor coordination task. We think that ARTT influences known facilitators of transfer, including similarity to the criterion task and contextual interference. Use high screen resolution, start with ARTT, and finish with criterion-speed training when preparing a mission simulation.
NASA Astrophysics Data System (ADS)
Prot, Olivier; SantolíK, OndřEj; Trotignon, Jean-Gabriel; Deferaudy, Hervé
2006-06-01
An entropy regularization algorithm (ERA) has been developed to compute the wave-energy density from electromagnetic field measurements. It is based on the wave distribution function (WDF) concept. To assess its suitability and efficiency, the algorithm is applied to experimental data that have already been analyzed using other inversion techniques. The FREJA satellite data used consist of six spectral matrices corresponding to six time-frequency points of an ELF hiss-event spectrogram. The WDF analysis is performed on these six points and the results are compared with those obtained previously. A statistical stability analysis confirms the stability of the solutions. The WDF computation is fast and without any prespecified parameters. The regularization parameter has been chosen in accordance with Morozov's discrepancy principle. The Generalized Cross Validation and L-curve criteria are then tentatively used to provide a fully data-driven method. However, these criteria fail to determine a suitable value of the regularization parameter. Although the entropy regularization leads to solutions that agree fairly well with those already published, some differences are observed, and these are discussed in detail. The main advantage of the ERA is to return the WDF that exhibits the largest entropy and to avoid the use of a priori models, which sometimes seem to be more accurate but without any justification.
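Morozov's discrepancy principle mentioned in this abstract can be sketched for ordinary Tikhonov regularization: among candidate regularization parameters, take the largest one whose residual still matches the assumed noise level. This NumPy toy uses a random linear model, not the WDF inversion itself; all names and the noise level are illustrative assumptions.

```python
import numpy as np

def tikhonov(A, b, lam):
    """Tikhonov-regularized least squares: argmin ||Ax - b||^2 + lam ||x||^2."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

def morozov_lambda(A, b, delta, lams):
    """Largest lambda whose residual does not exceed the noise level delta."""
    for lam in sorted(lams, reverse=True):
        x = tikhonov(A, b, lam)
        if np.linalg.norm(A @ x - b) <= delta:
            return lam, x
    lam = min(lams)
    return lam, tikhonov(A, b, lam)

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 10))
x_true = rng.standard_normal(10)
noise = 0.1 * rng.standard_normal(50)
b = A @ x_true + noise
delta = np.linalg.norm(noise)            # noise level assumed known here
lam, x = morozov_lambda(A, b, delta, np.logspace(-6, 2, 50))
```

The returned solution is the most strongly regularized one that still fits the data to within the noise, which is exactly the discrepancy-principle trade-off.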
NASA Astrophysics Data System (ADS)
Harmening, Corinna; Neuner, Hans
2016-09-01
Due to the establishment of terrestrial laser scanners, the analysis strategies in engineering geodesy are changing from pointwise approaches to areal ones. These areal analysis strategies are commonly built on the modelling of the acquired point clouds. Freeform curves and surfaces like B-spline curves/surfaces are one possible approach to obtain space-continuous information. A variety of parameters determines a B-spline's appearance; its complexity is mostly determined by the number of control points. Usually, this number of control points is chosen quite arbitrarily by intuitive trial-and-error procedures. In this paper, the Akaike Information Criterion and the Bayesian Information Criterion are investigated with regard to a justified and reproducible choice of the optimal number of control points of B-spline curves. Additionally, we develop a method based on the structural risk minimization of statistical learning theory. Unlike the Akaike and Bayesian Information Criteria, this method does not use the number of parameters as the complexity measure of the approximating functions but their Vapnik-Chervonenkis dimension. Furthermore, it is also valid for non-linear models. Thus, the three methods differ in the target function to be minimized and consequently in their definition of optimality. The present paper will be continued by a second paper dealing with the choice of the optimal number of control points of B-spline surfaces.
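For least-squares fits with Gaussian errors, AIC and BIC reduce to simple functions of the residual sum of squares and the number of parameters. The sketch below uses polynomial degree as the complexity parameter in place of the number of B-spline control points; it is purely illustrative of how the two criteria trade fit against complexity, not a reproduction of the paper's B-spline study.

```python
import numpy as np

def aic_bic(rss, n, k):
    """Gaussian least-squares forms: AIC = n ln(RSS/n) + 2k, BIC = n ln(RSS/n) + k ln n."""
    return n * np.log(rss / n) + 2 * k, n * np.log(rss / n) + k * np.log(n)

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 200)
y = np.sin(2 * np.pi * x) + 0.1 * rng.standard_normal(x.size)

scores = {}
for k in range(1, 12):                       # k parameters = polynomial degree k - 1
    coef = np.polynomial.polynomial.polyfit(x, y, k - 1)
    resid = y - np.polynomial.polynomial.polyval(x, coef)
    scores[k] = aic_bic(float(resid @ resid), x.size, k)

best_aic = min(scores, key=lambda k: scores[k][0])   # AIC-optimal complexity
best_bic = min(scores, key=lambda k: scores[k][1])   # BIC-optimal complexity
```

Because BIC's per-parameter penalty (ln n) exceeds AIC's (2) for n = 200, the BIC-optimal model is never more complex than the AIC-optimal one.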
High-Temperature Oxidation Behavior of Iridium-Rhenium Alloys
NASA Technical Reports Server (NTRS)
Reed, Brian D.
1995-01-01
The life-limiting mechanism for radiation-cooled rockets made from iridium-coated rhenium (Ir/Re) is the diffusion of Re into the Ir layer and the subsequent oxidation of the resulting Ir-Re alloy from the inner surface. In a previous study, a life model for Ir/Re rockets was developed. It incorporated Ir-Re diffusion and oxidation data to predict chamber lifetimes as a function of temperature and oxygen partial pressure. Oxidation testing at 1540 deg C suggested that a 20-wt percent Re concentration at the inner wall surface should be established as the failure criterion. The present study was performed to better define Ir-oxidation behavior as a function of Re concentration and to supplement the data base for the life model. Samples ranging from pure Ir to Ir-40 wt percent Re (Ir-40Re) were tested at 1500 deg C, in two different oxygen environments. There were indications that the oxidation rate of the Ir-Re alloy increased significantly when it went from a single-phase solid solution to a two-phase mixture, as was suggested in previous work. However, because of testing anomalies in this study, there were not enough dependable oxidation data to definitively raise the Ir/Re rocket failure criterion from 20-wt percent Re to a Re concentration corresponding to entry into the two-phase region.
Jafarzadeh, S Reza; Johnson, Wesley O; Gardner, Ian A
2016-03-15
The area under the receiver operating characteristic (ROC) curve (AUC) is used as a performance metric for quantitative tests. Although multiple biomarkers may be available for diagnostic or screening purposes, diagnostic accuracy is often assessed individually rather than in combination. In this paper, we consider the interesting problem of combining multiple biomarkers for use in a single diagnostic criterion with the goal of improving the diagnostic accuracy above that of an individual biomarker. The diagnostic criterion created from multiple biomarkers is based on the predictive probability of disease, conditional on given multiple biomarker outcomes. If the computed predictive probability exceeds a specified cutoff, the corresponding subject is allocated as 'diseased'. This defines a standard diagnostic criterion that has its own ROC curve, namely, the combined ROC (cROC). The AUC metric for cROC, namely, the combined AUC (cAUC), is used to compare the predictive criterion based on multiple biomarkers to one based on fewer biomarkers. A multivariate random-effects model is proposed for modeling multiple normally distributed dependent scores. Bayesian methods for estimating ROC curves and corresponding (marginal) AUCs are developed when a perfect reference standard is not available. In addition, cAUCs are computed to compare the accuracy of different combinations of biomarkers for diagnosis. The methods are evaluated using simulations and are applied to data for Johne's disease (paratuberculosis) in cattle. Copyright © 2015 John Wiley & Sons, Ltd.
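The gain from combining biomarkers can be illustrated with a rank-based (Mann-Whitney) estimate of the AUC: score subjects by a combined predictor and compare its AUC to a single biomarker's. This is a frequentist toy on simulated Gaussian scores; the Bayesian machinery and the imperfect-reference-standard setting of the paper are not reproduced, and the equal-weight combination is an assumption.

```python
import numpy as np

def auc(scores_pos, scores_neg):
    """Mann-Whitney estimate of the area under the ROC curve."""
    sp = np.asarray(scores_pos)[:, None]
    sn = np.asarray(scores_neg)[None, :]
    return np.mean(sp > sn) + 0.5 * np.mean(sp == sn)

rng = np.random.default_rng(2)
n = 2000
# Two correlated biomarkers; diseased subjects are shifted upward on both.
cov = [[1.0, 0.3], [0.3, 1.0]]
healthy = rng.multivariate_normal([0.0, 0.0], cov, n)
diseased = rng.multivariate_normal([0.8, 0.8], cov, n)

auc1 = auc(diseased[:, 0], healthy[:, 0])       # biomarker 1 alone
combo = lambda z: z[:, 0] + z[:, 1]             # equal-weight combined score
auc_c = auc(combo(diseased), combo(healthy))    # combined criterion ("cAUC" analogue)
```

Because the second biomarker carries partly independent information, the combined score separates the groups better than either marker alone.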
Beyond the swab: ecosystem sampling to understand the persistence of an amphibian pathogen.
Mosher, Brittany A; Huyvaert, Kathryn P; Bailey, Larissa L
2018-06-02
Understanding the ecosystem-level persistence of pathogens is essential for predicting and measuring host-pathogen dynamics. However, this process is often masked, in part due to a reliance on host-based pathogen detection methods. The amphibian pathogens Batrachochytrium dendrobatidis (Bd) and B. salamandrivorans (Bsal) are pathogens of global conservation concern. Despite having free-living life stages, little is known about the distribution and persistence of these pathogens outside of their amphibian hosts. We combine historic amphibian monitoring data with contemporary host- and environment-based pathogen detection data to obtain estimates of Bd occurrence independent of amphibian host distributions. We also evaluate differences in filter- and swab-based detection probability and assess inferential differences arising from using different decision criteria used to classify samples as positive or negative. Water filtration-based detection probabilities were lower than those from swabs but were > 10%, and swab-based detection probabilities varied seasonally, declining in the early fall. The decision criterion used to classify samples as positive or negative was important; using a more liberal criterion yielded higher estimates of Bd occurrence than when a conservative criterion was used. Different covariates were important when using the liberal or conservative criterion in modeling Bd detection. We found evidence of long-term Bd persistence for several years after an amphibian host species of conservation concern, the boreal toad (Anaxyrus boreas boreas), was last detected. Our work provides evidence of long-term Bd persistence in the ecosystem, and underscores the importance of environmental samples for understanding and mitigating disease-related threats to amphibian biodiversity.
Evaluating the role of functional impairment in personality psychopathology.
Boland, Jennifer K; Damnjanovic, Tatjana; Anderson, Jaime L
2018-03-22
The DSM-5 Section III Alternative Model for Personality Disorders (AMPD) states that an individual must show impairment in self and interpersonal functioning for PD diagnosis. The current study investigated dimensional personality trait associations with impairment, including differential patterns of impairment across specific PDs, and whether traits have improved our assessment of functional impairment in PDs. Two hundred seventy-seven participants were administered measures of Antisocial PD, Avoidant PD, Borderline PD, Narcissistic PD, Obsessive-Compulsive PD, and Schizotypal PD from the perspectives of the Section II (PDQ-4) and Section III (PID-5) PD models, as well as measures of functional impairment in interpersonal and intrapersonal domains. Pearson correlations showed associations between ratings of impairment and most Section II and Section III PDs and trait facets, with the exception of narcissistic PD. Hierarchical regression analyses revealed that Section III PDs added predictive validity beyond Section II PDs in predicting impairment, except for narcissistic PD. These findings provide support both for the impairment criterion in the AMPD and for the association between trait-based PDs and impairment, and suggest that this trait-based measurement adds uniquely to the understanding of functional impairment. Copyright © 2018. Published by Elsevier B.V.
A mesh gradient technique for numerical optimization
NASA Technical Reports Server (NTRS)
Willis, E. A., Jr.
1973-01-01
A class of successive-improvement optimization methods in which directions of descent are defined in the state space along each trial trajectory are considered. The given problem is first decomposed into two discrete levels by imposing mesh points. Level 1 consists of running optimal subarcs between each successive pair of mesh points. For normal systems, these optimal two-point boundary value problems can be solved by following a routine prescription if the mesh spacing is sufficiently close. A spacing criterion is given. Under appropriate conditions, the criterion value depends only on the coordinates of the mesh points, and its gradient with respect to those coordinates may be defined by interpreting the adjoint variables as partial derivatives of the criterion value function. In level 2, the gradient data is used to generate improvement steps or search directions in the state space which satisfy the boundary values and constraints of the given problem.
Muthukumar, P; Balasubramaniam, P; Ratnavelu, K
2017-07-26
This paper proposes a generalized robust synchronization method for fractional order dynamical systems of different dimensions with mismatched fractional derivatives, in the presence of function uncertainty and external disturbance, by designing a sliding mode controller. Based on the proposed generalized robust synchronization criterion, a novel audio cryptosystem is proposed for sending or sharing voice messages secretly via an insecure channel. Numerical examples are given to verify the effectiveness of the proposed theories. Copyright © 2017 ISA. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Jarabo-Amores, María-Pilar; la Mata-Moya, David de; Gil-Pita, Roberto; Rosa-Zurera, Manuel
2013-12-01
The application of supervised learning machines trained to minimize the cross-entropy error to radar detection is explored in this article. The detector is implemented with a learning machine that implements a discriminant function, whose output is compared to a threshold selected to fix a desired probability of false alarm. The study is based on the calculation of the function the learning machine approximates during training, and the application of a sufficient condition for a discriminant function to be used to approximate the optimum Neyman-Pearson (NP) detector. In this article, the function a supervised learning machine approximates after being trained to minimize the cross-entropy error is obtained. This discriminant function can be used to implement the NP detector, which maximizes the probability of detection while maintaining the probability of false alarm below or equal to a predefined value. Some experiments on signal detection using neural networks are also presented to test the validity of the study.
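The thresholding step described here, fixing the probability of false alarm from noise-only outputs of the discriminant, can be sketched independently of the particular learning machine. Below, a trivial energy statistic stands in for the trained discriminant (an assumption of this sketch; all names and signal parameters are ours):

```python
import numpy as np

rng = np.random.default_rng(3)

def discriminant(x):
    """Stand-in for a trained discriminant: energy of each observation window."""
    return np.sum(x**2, axis=1)

# Noise-only and signal-plus-noise observation windows (8 samples each).
noise = rng.standard_normal((20000, 8))
signal = rng.standard_normal((20000, 8)) + 0.7   # constant offset plays the signal

pfa_target = 0.01
# NP-style threshold: empirical (1 - Pfa) quantile of the statistic under noise only.
thr = np.quantile(discriminant(noise), 1 - pfa_target)

pfa = np.mean(discriminant(noise) > thr)     # ~ pfa_target by construction
pd = np.mean(discriminant(signal) > thr)     # resulting probability of detection
```

Fixing the threshold on noise-only data pins the false-alarm rate; whatever detection probability then results is the best this statistic can do at that false-alarm constraint.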
ERIC Educational Resources Information Center
Stewart, David G.; Arlt, Virginia K.; Siebert, Erin C.; Chapman, Meredith K.; Hu, Emily M.
2016-01-01
This study aimed to examine (a) the impact of the change in the "Diagnostic and Statistical Manual of Mental Disorders" ("DSM") from a categorical to dimensional classification of substance use diagnoses, (b) the elimination of the legal criterion, and (c) the inclusion of a craving criterion in the "DSM"-5.…
ERIC Educational Resources Information Center
Li, Zhi; Feng, Hui-Hsien; Saricaoglu, Aysel
2017-01-01
This classroom-based study employs a mixed-methods approach to exploring both short-term and long-term effects of Criterion feedback on ESL students' development of grammatical accuracy. The results of multilevel growth modeling indicate that Criterion feedback helps students in both intermediate-high and advanced-low levels reduce errors in eight…
ERIC Educational Resources Information Center
Rhodes, Matthew G.; Jacoby, Larry L.
2007-01-01
The authors examined whether participants can shift their criterion for recognition decisions in response to the probability that an item was previously studied. Participants in 3 experiments were given recognition tests in which the probability that an item was studied was correlated with its location during the test. Results from all 3…
Discriminant locality preserving projections based on L1-norm maximization.
Zhong, Fujin; Zhang, Jiashu; Li, Defang
2014-11-01
Conventional discriminant locality preserving projection (DLPP) is a dimensionality reduction technique based on manifold learning, which has demonstrated good performance in pattern recognition. However, because its objective function is based on the distance criterion using L2-norm, conventional DLPP is not robust to outliers which are present in many applications. This paper proposes an effective and robust DLPP version based on L1-norm maximization, which learns a set of local optimal projection vectors by maximizing the ratio of the L1-norm-based locality preserving between-class dispersion and the L1-norm-based locality preserving within-class dispersion. The proposed method is proven to be feasible and also robust to outliers while overcoming the small sample size problem. The experimental results on artificial datasets, Binary Alphadigits dataset, FERET face dataset and PolyU palmprint dataset have demonstrated the effectiveness of the proposed method.
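The flavor of L1-norm-based projection learning can be conveyed by the simpler greedy PCA-L1 fixed-point iteration (Kwak's scheme for maximizing the L1 dispersion of projections), a related but not identical problem to the DLPP ratio objective of this paper; the data and names below are our own illustration.

```python
import numpy as np

def pca_l1_direction(X, n_iter=100):
    """Fixed-point iteration maximizing sum_i |w^T x_i| subject to ||w|| = 1."""
    w = X[np.argmax(np.linalg.norm(X, axis=1))].copy()   # init: largest sample
    w /= np.linalg.norm(w)
    for _ in range(n_iter):
        s = np.sign(X @ w)
        s[s == 0] = 1.0                                  # break ties consistently
        w_new = X.T @ s                                  # polarity-weighted sum
        w_new /= np.linalg.norm(w_new)
        if np.allclose(w_new, w):
            break
        w = w_new
    return w

rng = np.random.default_rng(4)
# Data elongated along the x-axis, plus one gross outlier on the y-axis;
# the L1 objective weights the outlier linearly rather than quadratically.
X = rng.standard_normal((200, 2)) * np.array([3.0, 0.5])
X_out = np.vstack([X, [0.0, 50.0]])

w = pca_l1_direction(X_out)
```

Because each sample enters the objective through its absolute projection rather than its square, the single large outlier cannot dominate the learned direction the way it would in an L2 criterion.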
Evaluation of the Weather Research and Forecasting (WRF) Model over Portugal: Case study
NASA Astrophysics Data System (ADS)
Rodrigues, Mónica; Rocha, Alfredo; Monteiro, Ana
2013-04-01
Established in 1756, the Demarcated Douro Region was the first wine-growing region in the world to be delimited and regulated. The region covers an area of 250,000 hectares, of which 45,000 are occupied by continuous vineyards (IVDP, 2010). It stretches along the valleys of the Douro river and its main tributaries, from the region of Mesão Frio, about 100 km east of the city of Porto where the river discharges, to the Spanish frontier on its eastern border. Given its west-east extension along the Douro valley, the region is not homogeneous and comprises three sub-regions: Baixo Corgo, Cima Corgo and Douro Superior. The Baixo Corgo, the westernmost, is the "birthplace" of the wine-growing region. The main purpose of this work is to evaluate and test the quality of a criterion developed to determine the occurrence of frost. This criterion is to be used later with numerical weather forecasts (WRF-ARW) and put into practice at 16 meteorological stations in the Demarcated Douro Region. First, the criterion for the occurrence of frost was developed from the meteorological data observed at those 16 stations, using time series of temperature and precipitation spanning approximately 20 years. It was verified that the meteorological conditions associated with days with frost (SG) and without frost (CG) differ at each station. The model was then validated, especially with respect to the simulation of daily minimum temperature. Correcting functions applied to the model output considerably reduced the simulation errors. The frost-estimation criterion was then applied to the model output for a period of two frost seasons. The results show that WRF successfully simulates the occurrence of frost episodes and can therefore be used for frost forecasting.
Validity of two alternative systems for measuring vertical jump height.
Leard, John S; Cirillo, Melissa A; Katsnelson, Eugene; Kimiatek, Deena A; Miller, Tim W; Trebincevic, Kenan; Garbalosa, Juan C
2007-11-01
Vertical jump height is frequently used by coaches, health care professionals, and strength and conditioning professionals to objectively measure function. The purpose of this study is to determine the concurrent validity of the jump and reach method (Vertec) and the contact mat method (Just Jump) in assessing vertical jump height when compared with the criterion reference 3-camera motion analysis system. Thirty-nine college students, 25 females and 14 males between the ages of 18 and 25 (mean age 20.65 years), were instructed to perform the countermovement jump. Reflective markers were placed at the base of the individual's sacrum for the 3-camera motion analysis system to measure vertical jump height. The subject was then instructed to stand on the Just Jump mat beneath the Vertec and perform the jump. Measurements were recorded from each of the 3 systems simultaneously for each jump. The Pearson r statistic between the video and the jump and reach (Vertec) was 0.906. The Pearson r between the video and contact mat (Just Jump) was 0.967. Both correlations were significant at the 0.01 level. Analysis of variance showed a significant difference among the 3 means F(2,235) = 5.51, p < 0.05. The post hoc analysis showed a significant difference between the criterion reference (M = 0.4369 m) and the Vertec (M = 0.3937 m, p = 0.005) but not between the criterion reference and the Just Jump system (M = 0.4420 m, p = 0.972). The Just Jump method of measuring vertical jump height is a valid measure when compared with the 3-camera system. The Vertec was found to have a high correlation with the criterion reference, but the mean differed significantly. This study indicates that a higher degree of confidence is warranted when comparing Just Jump results with a 3-camera system study.
Diederich, A; Schreier, M
2010-09-01
In order to accomplish broad acceptance of priority setting in healthcare, a public debate seems essential, in particular, including the preferences of the general public. In Germany, objections to public involvement are to some extent based on the perception that individuals have an inherent personal bias and cannot represent interests other than their own. The following excerpt from a more comprehensive study reports on the acceptance of personal responsibility as a criterion for prioritizing. A mixed-methods design is used for combining a qualitative interview study and a quantitative survey representative of the German public. Both the interview study and the survey demonstrate that behavior that is harmful to one's health is generally accepted as a criterion for posteriorizing patients, mostly regardless of self interest. In addition, the interview study shows reasons for acceptance or refusal of the self-inflicted behavior criterion.
Testing the criterion for correct convergence in the complex Langevin method
NASA Astrophysics Data System (ADS)
Nagata, Keitaro; Nishimura, Jun; Shimasaki, Shinji
2018-05-01
Recently the complex Langevin method (CLM) has been attracting attention as a solution to the sign problem, which occurs in Monte Carlo calculations when the effective Boltzmann weight is not real positive. An undesirable feature of the method, however, is that in some parameter regions it yields wrong results even if the Langevin process reaches equilibrium without any problem. In our previous work, we proposed a practical criterion for correct convergence based on the probability distribution of the drift term that appears in the complex Langevin equation. Here we demonstrate the usefulness of this criterion in two solvable theories with many dynamical degrees of freedom, i.e., two-dimensional Yang-Mills theory with a complex coupling constant and the chiral Random Matrix Theory for finite density QCD, both of which were studied by the CLM before. Our criterion can indeed identify the parameter regions in which the CLM gives correct results.
Casero-Alonso, V; López-Fidalgo, J; Torsney, B
2017-01-01
Binary response models are used in many real applications. For these models the Fisher information matrix (FIM) is proportional to the FIM of a weighted simple linear regression model. The same is also true when the weight function has a finite integral. Thus, optimal designs for one binary model are also optimal for the corresponding weighted linear regression model. The main objective of this paper is to provide a tool for the construction of MV-optimal designs, minimizing the maximum of the variances of the estimates, for a general design space. MV-optimality is a potentially difficult criterion because of its nondifferentiability at equal variance designs. A methodology for obtaining MV-optimal designs where the design space is a compact interval [a, b] will be given for several standard weight functions. The methodology will allow us to build a user-friendly computer tool based on Mathematica to compute MV-optimal designs. Some illustrative examples will show a representation of MV-optimal designs in the Euclidean plane, taking a and b as the axes. The applet will be explained using two relevant models. In the first one the case of a weighted linear regression model is considered, where the weight function is directly chosen from a typical family. In the second example a binary response model is assumed, where the probability of the outcome is given by a typical probability distribution. Practitioners can use the provided applet to identify the solution and to know the exact support points and design weights. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
Jung, Sung-Hoon; Kwon, Oh-Yun; Jeon, In-Cheol; Hwang, Ui-Jae; Weon, Jong-Hyuck
2018-01-01
The purposes of this study were to determine the intra-rater test-retest reliability of a smart phone-based measurement tool (SBMT) and a three-dimensional (3D) motion analysis system for measuring the transverse rotation angle of the pelvis during single-leg lifting (SLL) and the criterion validity of the transverse rotation angle of the pelvis measurement using SBMT compared with a 3D motion analysis system (3DMAS). Seventeen healthy volunteers performed SLL with their dominant leg without bending the knee until they reached a target placed 20 cm above the table. This study used a 3DMAS, considered the gold standard, to measure the transverse rotation angle of the pelvis to assess the criterion validity of the SBMT measurement. Intra-rater test-retest reliability was determined using the SBMT and 3DMAS using intra-class correlation coefficient (ICC) [3,1] values. The criterion validity of the SBMT was assessed with ICC [3,1] values. Both the 3DMAS (ICC = 0.77) and SBMT (ICC = 0.83) showed excellent intra-rater test-retest reliability in the measurement of the transverse rotation angle of the pelvis during SLL in a supine position. Moreover, the SBMT showed an excellent correlation with the 3DMAS (ICC = 0.99). Measurement of the transverse rotation angle of the pelvis using the SBMT showed excellent reliability and criterion validity compared with the 3DMAS.
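For context, ICC(3,1) (two-way mixed model, consistency, single measurement) can be computed from an n-subjects by k-trials table as follows. This is the standard Shrout-Fleiss formula, sketched here for illustration, not code from the study:

```python
import numpy as np

def icc_3_1(ratings):
    """ICC(3,1): two-way mixed model, consistency, single measurement.
    ratings: (n_subjects, k_trials) array of scores."""
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    mean_r = ratings.mean(axis=1)    # per-subject means
    mean_c = ratings.mean(axis=0)    # per-trial means
    grand = ratings.mean()
    ss_total = ((ratings - grand) ** 2).sum()
    ss_rows = k * ((mean_r - grand) ** 2).sum()   # between-subjects SS
    ss_cols = n * ((mean_c - grand) ** 2).sum()   # between-trials SS
    ss_err = ss_total - ss_rows - ss_cols         # residual SS
    ms_rows = ss_rows / (n - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)
```

Perfectly consistent trials give an ICC of 1; values above about 0.75 are conventionally read as excellent reliability.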
Subsumption principles underlying medical concept systems and their formal reconstruction.
Bernauer, J.
1994-01-01
Conventional medical concept systems represent generic concept relations by hierarchical coding principles. Often, these coding principles constrain the concept system and reduce the potential for automatic derivation of subsumption. Formal reconstruction of medical concept systems is an approach that is based on the conceptual representation of meanings and that allows for the application of formal criteria for subsumption. Those criteria must reflect the intuitive principles of subordination which underlie conventional medical concept systems. In particular, these are: the subordinate concept results (1) from adding a specializing criterion to the superordinate concept, (2) from refining the primary category, or a criterion of the superordinate concept, by a concept that is less general, (3) from adding a partitive criterion to a criterion of the superordinate, (4) from refining a criterion by a concept that is less comprehensive, and finally (5) from coordinating the superordinate concept, or one of its criteria. This paper introduces a formalism called BERNWARD that aims at the formal reconstruction of medical concept systems according to these intuitive principles. The automatic derivation of hierarchical relations is supported primarily by explicit generic and explicit partitive hierarchies of concepts, and secondly by two formal criteria that are based on the structure of concept descriptions and on explicit hierarchical relations between their elements, namely formal subsumption and part-sensitive subsumption. Formal subsumption takes only generic relations into account; part-sensitive subsumption additionally regards partitive relations between criteria. This approach seems to be flexible enough to cope with unforeseeable effects of partitive criteria on subsumption. PMID:7949907
Abbas, Ismail; Rovira, Joan; Casanovas, Josep
2006-12-01
To develop and validate a model of a clinical trial that evaluates the changes in cholesterol level as a surrogate marker for lipodystrophy in HIV subjects under alternative antiretroviral regimes, i.e., treatment with protease inhibitors vs. a combination of nevirapine and other antiretroviral drugs. Five simulation models were developed based on different assumptions on treatment variability and the pattern of cholesterol reduction over time. The last recorded cholesterol level, the difference from baseline, the average difference from baseline, and the level evolution are the considered endpoints. Specific validation criteria, based on a standardized distance in means and variances within ±10%, were used to compare the real and the simulated data. The validity criterion was met by all models for the considered endpoints. However, only two models met the validity criterion when all endpoints were considered jointly. The model based on the assumption that the within-subjects variability of cholesterol levels changes over time is the one that minimizes the validity criterion, with a standardized distance within ±1%. Simulation is a useful technique for calibration, estimation, and evaluation of models, which allows us to relax the often overly restrictive assumptions regarding parameters required by analytical approaches. The validity criterion can also be used to select the preferred model for design optimization, until additional data are obtained allowing an external validation of the model.
Furuhama, A; Hasunuma, K; Aoki, Y
2015-01-01
In addition to molecular structure profiles, descriptors based on physicochemical properties are useful for explaining the eco-toxicities of chemicals. In a previous study we reported that a criterion based on the difference between the partition coefficient (log POW) and distribution coefficient (log D) values of chemicals enabled us to identify aromatic amines and phenols for which interspecies relationships with strong correlations could be developed for fish-daphnid and algal-daphnid toxicities. The chemicals that met the log D-based criterion were expected to have similar toxicity mechanisms (related to membrane penetration). Here, we investigated the applicability of log D-based criteria to the eco-toxicity of other kinds of chemicals, including aliphatic compounds. At pH 10, use of a log POW - log D > 0 criterion and omission of outliers resulted in the selection of more than 100 chemicals whose acute fish toxicities or algal growth inhibition toxicities were almost equal to their acute daphnid toxicities. The advantage of log D-based criteria is that they allow for simple, rapid screening and prioritizing of chemicals. However, inorganic molecules and chemicals containing certain structural elements cannot be evaluated, because calculated log D values are unavailable.
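A minimal sketch of the screening step described above, assuming hypothetical chemical records with precomputed log POW and log D values (the names and numbers below are illustrative, not from the study):

```python
# Hypothetical records; logPow and logD_pH10 values are made up for illustration.
chemicals = [
    {"name": "A", "logPow": 3.2, "logD_pH10": 1.1},
    {"name": "B", "logPow": 2.0, "logD_pH10": 2.0},
    {"name": "C", "logPow": 4.5, "logD_pH10": 3.9},
]

def meets_logd_criterion(chem):
    """log POW - log D > 0 screening criterion at pH 10."""
    return chem["logPow"] - chem["logD_pH10"] > 0.0

selected = [c["name"] for c in chemicals if meets_logd_criterion(c)]
```

Chemicals passing the filter (here A and C) are the ones expected to share a membrane-penetration-related toxicity mechanism; chemicals without a calculable log D simply cannot be screened this way.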
An investigation of the effects of pitch-roll (de)-coupling on helicopter handling qualities
NASA Technical Reports Server (NTRS)
Ockier, C. J.; Pausder, H. J.; Blanken, C. L.
1995-01-01
An investigation of the effects of pitch-roll coupling on helicopter handling qualities was performed by the US Army and DLR, using a NASA ground-based and a DLR in-flight simulator. Over 90 different coupling configurations were evaluated using a roll-axis tracking task. The results show that although the current ADS-33C coupling criterion discriminates against those types of coupling typical of conventionally controlled helicopters, it is not always suited for predicting the handling qualities of helicopters with modern control systems. Based on the observation that high-frequency inputs during tracking are used to alleviate coupling, a frequency-domain pitch-roll coupling criterion that uses the average coupling ratio between the bandwidth and neutral-stability frequencies is formulated. This criterion provides a more comprehensive coverage of the different types of coupling and shows excellent consistency.
The Validation of a Case-Based, Cumulative Assessment and Progressions Examination
Coker, Adeola O.; Copeland, Jeffrey T.; Gottlieb, Helmut B.; Horlen, Cheryl; Smith, Helen E.; Urteaga, Elizabeth M.; Ramsinghani, Sushma; Zertuche, Alejandra; Maize, David
2016-01-01
Objective. To assess content and criterion validity, as well as reliability of an internally developed, case-based, cumulative, high-stakes third-year Annual Student Assessment and Progression Examination (P3 ASAP Exam). Methods. Content validity was assessed through the writing-reviewing process. Criterion validity was assessed by comparing student scores on the P3 ASAP Exam with the nationally validated Pharmacy Curriculum Outcomes Assessment (PCOA). Reliability was assessed with psychometric analysis comparing student performance over four years. Results. The P3 ASAP Exam showed content validity through representation of didactic courses and professional outcomes. Similar scores on the P3 ASAP Exam and PCOA with Pearson correlation coefficient established criterion validity. Consistent student performance using Kuder-Richardson coefficient (KR-20) since 2012 reflected reliability of the examination. Conclusion. Pharmacy schools can implement internally developed, high-stakes, cumulative progression examinations that are valid and reliable using a robust writing-reviewing process and psychometric analyses. PMID:26941435
An Optimal Partial Differential Equations-based Stopping Criterion for Medical Image Denoising.
Khanian, Maryam; Feizi, Awat; Davari, Ali
2014-01-01
Improving the quality of medical images in pre- and post-surgery settings is necessary for beginning and speeding up the recovery process. Models based on partial differential equations have become a powerful and well-known tool in different areas of image processing, such as denoising, multiscale image analysis, edge detection, and other fields of image processing and computer vision. In this paper, an algorithm for medical image denoising using an anisotropic diffusion filter with a convenient stopping criterion is presented. The paper introduces two strategies: using the efficient explicit method, together with an effective software technique, to solve the anisotropic diffusion filter, which is mathematically unstable; and proposing an automatic stopping criterion that, unlike other stopping criteria, considers only the input image, while offering good denoised-image quality, ease of use, and short run time. Various medical images are examined to confirm the claim.
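A minimal sketch of one explicit anisotropic diffusion scheme of this family (Perona-Malik) with a simple relative-change stopping rule. The stopping rule here is a stand-in assumption, not the paper's criterion; note the explicit scheme is only conditionally stable (dt ≤ 0.25 for the four-neighbor 2D stencil), echoing the instability the abstract mentions:

```python
import numpy as np

def perona_malik(img, kappa=1.0, dt=0.2, tol=1e-4, max_iter=200):
    """Explicit Perona-Malik diffusion; stops when the update is small
    relative to the image range (illustrative stopping rule)."""
    u = img.astype(float).copy()
    for _ in range(max_iter):
        # differences toward the four neighbors (periodic boundary via roll)
        dn = np.roll(u, -1, 0) - u
        ds = np.roll(u, 1, 0) - u
        de = np.roll(u, -1, 1) - u
        dw = np.roll(u, 1, 1) - u
        # edge-stopping diffusivities g(s) = exp(-(s/kappa)^2)
        cn, cs = np.exp(-(dn / kappa) ** 2), np.exp(-(ds / kappa) ** 2)
        ce, cw = np.exp(-(de / kappa) ** 2), np.exp(-(dw / kappa) ** 2)
        update = dt * (cn * dn + cs * ds + ce * de + cw * dw)
        u += update
        if np.abs(update).max() < tol * (u.max() - u.min() + 1e-12):
            break
    return u
```

Because the diffusivity shrinks where gradients are large, noise is smoothed while strong edges are largely preserved.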
A Statistics-Based Cracking Criterion of Resin-Bonded Silica Sand for Casting Process Simulation
NASA Astrophysics Data System (ADS)
Wang, Huimin; Lu, Yan; Ripplinger, Keith; Detwiler, Duane; Luo, Alan A.
2017-02-01
Cracking of sand molds/cores can result in many casting defects such as veining. A robust cracking criterion is needed in casting process simulation for predicting/controlling such defects. A cracking probability map, relating to fracture stress and effective volume, was proposed for resin-bonded silica sand based on Weibull statistics. Three-point bending test results of sand samples were used to generate the cracking map and set up a safety line for cracking criterion. Tensile test results confirmed the accuracy of the safety line for cracking prediction. A laboratory casting experiment was designed and carried out to predict cracking of a cup mold during aluminum casting. The stress-strain behavior and the effective volume of the cup molds were calculated using a finite element analysis code ProCAST®. Furthermore, an energy dispersive spectroscopy fractographic examination of the sand samples confirmed the binder cracking in resin-bonded silica sand.
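The Weibull weakest-link form that underlies such a fracture-stress versus effective-volume cracking map can be sketched as follows; sigma0, m, and the reference volume v0 are material fit parameters assumed here for illustration:

```python
import math

def cracking_probability(stress, eff_volume, sigma0, m, v0=1.0):
    """Weibull failure probability with effective-volume scaling:
    P_f = 1 - exp(-(V_eff/V0) * (sigma/sigma0)^m).
    sigma0, m, v0 are illustrative fit parameters, not values from the study."""
    return 1.0 - math.exp(-(eff_volume / v0) * (stress / sigma0) ** m)
```

At the reference stress and volume the failure probability is 1 - 1/e (about 63%); a "safety line" then corresponds to a fixed low probability contour in the (stress, volume) plane.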
Hot-air forming of Al-Mg-Cr alloy and prediction of failure based on Zener-Hollomon parameter
NASA Astrophysics Data System (ADS)
Kim, W. J.; Kim, W. Y.; Kim, H. K.
2010-12-01
The microstructure of an Al-Mg-Cr alloy tube fabricated through indirect extrusion at 673 K showed elongated grains with a mean size of ~26 μm. The strain rate-stress relationship at high temperatures (753 K to 793 K) revealed that dislocation climb creep was the rate-controlling deformation mechanism. The hot-air forming process was successful at a pressure of 70 bar. A failure criterion based on the Zener-Hollomon parameter was used to explain the failure behavior of the deforming body. The forming and fracture behavior of the Al-Mg-Cr alloy tube was analyzed with the aid of finite element (FE) simulation, into which the failure criterion was incorporated. Comparison of the simulation and the experimental results indicated that the proposed fracture criterion was useful in predicting the fracture behavior of aluminum tubes deformed by means of gas pressure.
Development of a percutaneous penetration predictive model by SR-FTIR.
Jungman, E; Laugel, C; Rutledge, D N; Dumas, P; Baillet-Guffroy, A
2013-01-30
This work focused on developing a new evaluation criterion of percutaneous penetration, in complement to Log Pow and MW and based on high spatial resolution Fourier transformed infrared (FTIR) microspectroscopy with a synchrotron source (SR-FTIR). Classic Franz cell experiments were run and after 22 h molecule distribution in skin was determined either by HPLC or by SR-FTIR. HPLC data served as reference. HPLC and SR-FTIR results were compared and a new predictive criterion based from SR-FTIR results, named S(index), was determined using a multi-block data analysis technique (ComDim). A predictive cartography of the distribution of molecules in the skin was built and compared to OECD predictive cartography. This new criterion S(index) and the cartography using SR-FTIR/HPLC results provides relevant information for risk analysis regarding prediction of percutaneous penetration and could be used to build a new mathematical model. Copyright © 2012 Elsevier B.V. All rights reserved.
Prediction of the Dynamic Yield Strength of Metals Using Two Structural-Temporal Parameters
NASA Astrophysics Data System (ADS)
Selyutina, N. S.; Petrov, Yu. V.
2018-02-01
The behavior of the yield strength of steel and a number of aluminum alloys is investigated over a wide range of strain rates, based on the incubation time criterion of yield and the empirical Johnson-Cook and Cowper-Symonds models. In this paper, expressions for the parameters of the empirical models are derived through the characteristics of the incubation time criterion; satisfactory agreement between these data and experimental results is obtained. The parameters of the empirical models can themselves depend on the strain rate. The independence of the characteristics of the incubation time criterion of yield from the loading history, and their connection with the structural and temporal features of the plastic deformation process, give the incubation-time approach an advantage over the empirical models and yield an effective and convenient equation for determining the yield strength over a wider range of strain rates.
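For reference, the standard Cowper-Symonds scaling of the dynamic yield stress (one of the empirical models compared above) multiplies the static yield stress by 1 + (strain rate / D)^(1/q). A minimal sketch, with illustrative parameter values:

```python
def cowper_symonds(sigma_static, strain_rate, D, q):
    """Cowper-Symonds dynamic yield stress:
    sigma_y = sigma_static * (1 + (strain_rate / D)**(1/q)).
    D (1/s) and q are material fit parameters; values used here are illustrative."""
    return sigma_static * (1.0 + (strain_rate / D) ** (1.0 / q))
```

At strain_rate = D the dynamic yield stress is exactly double the static value, which makes D easy to read off rate-sensitivity data.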
Emitter Number Estimation by the General Information Theoretic Criterion from Pulse Trains
2002-12-01
negative log likelihood function plus a penalty function. The general information criteria by Yin and Krishnaiah [11] are different from the regular…
Experience with k-epsilon turbulence models for heat transfer computations in rotating
NASA Technical Reports Server (NTRS)
Tekriwal, Prabbat
1995-01-01
This viewgraph presentation discusses geometry and flow configuration, effect of y+ on heat transfer computations, standard and extended k-epsilon turbulence model results with wall function, low-Re model results (the Lam-Bremhorst model without wall function), a criterion for flow reversal in a radially rotating square duct, and a summary.
An orbital localization criterion based on the theory of "fuzzy" atoms.
Alcoba, Diego R; Lain, Luis; Torre, Alicia; Bochicchio, Roberto C
2006-04-15
This work proposes a new procedure for localizing molecular and natural orbitals. The localization criterion presented here is based on the partitioning of the overlap matrix into atomic contributions within the theory of "fuzzy" atoms. Our approach has several advantages over other schemes: it is computationally inexpensive, preserves the sigma/pi-separability in planar systems and provides a straightforward interpretation of the resulting orbitals in terms of their localization indices and atomic occupancies. The corresponding algorithm has been implemented and its efficiency tested on selected molecular systems. (c) 2006 Wiley Periodicals, Inc.
Energy Efficiency Building Code for Commercial Buildings in Sri Lanka
DOE Office of Scientific and Technical Information (OSTI.GOV)
Busch, John; Greenberg, Steve; Rubinstein, Francis
2000-09-30
1.1.1 To encourage energy efficient design or retrofit of commercial buildings so that they may be constructed, operated, and maintained in a manner that reduces the use of energy without constraining the building function, the comfort, health, or the productivity of the occupants and with appropriate regard for economic considerations. 1.1.2 To provide criteria and minimum standards for energy efficiency in the design or retrofit of commercial buildings and provide methods for determining compliance with them. 1.1.3 To encourage energy efficient designs that exceed these criteria and minimum standards.
Criterion for Bose-Einstein condensation in a harmonic trap in the case with attractive interactions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gajda, Mariusz
2006-02-15
Using a model many-body wave function I analyze the standard criterion for Bose-Einstein condensation and its relation to coherence properties of the system. I pay special attention to an attractive condensate under such a condition that a characteristic length scale of the spatial extension of its center of mass differs significantly from length scales of relative coordinates. I show that although no interference fringes are produced in the two-slit Young interference experiment performed on this system, fringes of a high visibility can be observed in a conditional simultaneous detection of two particles.
NASA Astrophysics Data System (ADS)
Zanraea, D. D. L.; Needham, D. J.
The depth-averaged hydraulic equations augmented with a suitable bed-load sediment transport function form a closed system which governs the one-dimensional flow in an alluvial river or channel. In this paper, it is shown that this system is hyperbolic and yields three families of shock-wave solutions. These are determined to be temporally stable in restricted regions of the (H, F0)-plane, via the Lax shock inequalities. Further, it is demonstrated that this criterion is equivalent to the energy dissipation criterion developed by Needham and Hey (1991).
A feedback control model for network flow with multiple pure time delays
NASA Technical Reports Server (NTRS)
Press, J.
1972-01-01
A control model describing a network flow hindered by multiple pure time (or transport) delays is formulated. Feedbacks connect each desired output with a single control sector situated at the origin. The dynamic formulation invokes the use of differential-difference equations, which causes the characteristic equation of the model to consist of transcendental functions instead of a common algebraic polynomial. A general graphical criterion is developed to evaluate the stability of such a system. A digital computer simulation confirms the validity of this criterion. An optimal decision-making process with multiple delays is presented.
Methods for threshold determination in multiplexed assays
Tammero, Lance F. Bentley; Dzenitis, John M; Hindson, Benjamin J
2014-06-24
Methods for determining the threshold values of the signatures in an assay are described. Each signature enables detection of a target. The methods determine a probability density function of negative samples and a corresponding false-positive-rate curve. A false-positive criterion is established, and the threshold for a signature is determined as the point at which the false-positive-rate curve intersects that criterion. A method for quantitative analysis and interpretation of assay results, together with a method for determining a desired limit of detection of a signature in an assay, are also described.
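A minimal empirical stand-in for this thresholding procedure, using a quantile of the negative samples in place of a fitted probability density (the density-based method in the patent is not reproduced here):

```python
import numpy as np

def detection_threshold(negative_samples, fpr_criterion=0.01):
    """Threshold where the empirical false-positive-rate curve of the
    negative samples meets the criterion: the (1 - criterion) quantile.
    Signals above this threshold are called positive."""
    return float(np.quantile(np.asarray(negative_samples), 1.0 - fpr_criterion))
```

With a 1% false-positive criterion, roughly 1% of negative-sample signals exceed the returned threshold, so each signature gets a threshold calibrated to its own negative distribution.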
NASA Astrophysics Data System (ADS)
Zhu, Kaiqun; Song, Yan; Zhang, Sunjie; Zhong, Zhaozhun
2017-07-01
In this paper, a non-fragile observer-based output feedback control problem for the polytopic uncertain system under a distributed model predictive control (MPC) approach is discussed. By decomposing the global system into subsystems, the computational complexity is reduced, so online design time can be saved. Moreover, an observer-based output feedback control algorithm is proposed in the framework of distributed MPC to deal with the difficulty of obtaining state measurements. In this way, the presented observer-based output-feedback MPC strategy is more flexible and applicable in practice than the traditional state-feedback one. Furthermore, the non-fragility of the controller has been taken into consideration to increase the robustness of the polytopic uncertain system. A sufficient stability criterion is then presented using a Lyapunov-like functional approach; meanwhile, the corresponding control law and the upper bound of the quadratic cost function are derived by solving an optimisation subject to convex constraints. Finally, simulation examples are employed to show the effectiveness of the method.
The logic of tax-based financing for health care.
Bodenheimer, T; Sullivan, K
1997-01-01
Employment-based health insurance faces serious problems. For the first time, the number of Americans covered by such health insurance is falling. Employers strongly oppose the employer mandate approach to extending health insurance. Employment-based financing is regressive and complex. Serious debate is needed on an alternative solution to financing health care for all Americans. Taxation represents a clear alternative to employment-based health care financing. The major criterion for choosing a tax is equity, with simplicity a second criterion. An earmarked, progressive individual income tax is a fair and potentially simple tax with which to finance health care. The political feasibility of such a tax is greater than that of employer mandate legislation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kaothekar, Sachin, E-mail: sackaothekar@gmail.com
I have studied the effects of finite electron inertia, finite ion Larmor radius (FLR) corrections, and a radiative heat-loss function on the thermal instability of an infinite homogeneous, viscous plasma, incorporating the effect of thermal conductivity, for star formation in the interstellar medium (ISM). A general dispersion relation is derived using the normal mode analysis method with the help of the relevant linearized perturbation equations of the problem. The wave propagation is discussed for directions longitudinal and transverse to the external magnetic field, and the conditions of modified thermal instability and stability are discussed in different cases. We find that the thermal instability criterion is modified into a radiative instability criterion by the inclusion of radiative heat-loss functions with thermal conductivity. The viscosity of the medium removes the effect of FLR corrections from the condition of radiative instability. Numerical calculation shows a stabilizing effect of the heat-loss function, viscosity, and FLR corrections, and a destabilizing effect of finite electron inertia on the thermal instability. The results of this paper show that stars form in the interstellar medium mainly due to thermal instability.
Multiobjective optimization in structural design with uncertain parameters and stochastic processes
NASA Technical Reports Server (NTRS)
Rao, S. S.
1984-01-01
The application of multiobjective optimization techniques to structural design problems involving uncertain parameters and random processes is studied. The design of a cantilever beam with a tip mass subjected to a stochastic base excitation is considered for illustration. Several of the problem parameters are assumed to be random variables and the structural mass, fatigue damage, and negative of natural frequency of vibration are considered for minimization. The solution of this three-criteria design problem is found by using global criterion, utility function, game theory, goal programming, goal attainment, bounded objective function, and lexicographic methods. It is observed that the game theory approach is superior in finding a better optimum solution, assuming the proper balance of the various objective functions. The procedures used in the present investigation are expected to be useful in the design of general dynamic systems involving uncertain parameters, stochastic process, and multiple objectives.
Nie, Xiaobing; Cao, Jinde
2011-11-01
In this paper, second-order interactions are introduced into competitive neural networks (NNs) and multistability is discussed for second-order competitive NNs (SOCNNs) with nondecreasing saturated activation functions. Firstly, based on decomposition of the state space, the Cauchy convergence principle, and an inequality technique, some sufficient conditions ensuring the local exponential stability of 2^N equilibrium points are derived. Secondly, some conditions are obtained for ascertaining equilibrium points to be locally exponentially stable and located in any designated region. Thirdly, the theory is extended to more general saturated activation functions with 2r corner points, and a sufficient criterion is given under which the SOCNNs can have (r+1)^N locally exponentially stable equilibrium points. Even in the absence of second-order interactions, the obtained results are less restrictive than those in some recent works. Finally, three examples with their simulations are presented to verify the theoretical analysis.
NASA Astrophysics Data System (ADS)
Diamant, Idit; Shalhon, Moran; Goldberger, Jacob; Greenspan, Hayit
2016-03-01
Classification of clustered breast microcalcifications into benign and malignant categories is an extremely challenging task for computerized algorithms and expert radiologists alike. In this paper we present a novel method for feature selection based on mutual information (MI) criterion for automatic classification of microcalcifications. We explored the MI based feature selection for various texture features. The proposed method was evaluated on a standardized digital database for screening mammography (DDSM). Experimental results demonstrate the effectiveness and the advantage of using the MI-based feature selection to obtain the most relevant features for the task and thus to provide for improved performance as compared to using all features.
Method to estimate center of rigidity using vibration recordings
Safak, Erdal; Çelebi, Mehmet
1990-01-01
A method to estimate the center of rigidity of buildings by using vibration recordings is presented. The method is based on the criterion that the coherence of translational motions with the rotational motion is minimum at the center of rigidity. Since the coherence is a function of frequency, a gross but frequency-independent measure of the coherency is defined as the integral of the coherence function over the frequency. The center of rigidity is determined by minimizing this integral. The formulation is given for two-dimensional motions. Two examples of the method are presented: a rectangular building with ambient-vibration recordings, and a triangular building with earthquake-vibration recordings. Although the examples given are for buildings, the method can be applied to any structure with two-dimensional motions.
ERIC Educational Resources Information Center
Sánchez-Rosas, Javier; Furlan, Luis Alberto
2017-01-01
Based on the control-value theory of achievement emotions and theory of achievement goals, this research provides evidence of convergent, divergent, and criterion validity of the Spanish Cognitive Test Anxiety Scale (S-CTAS). A sample of Argentinean undergraduates responded to several scales administered at three points. At time 1 and 3, the…
ERIC Educational Resources Information Center
Lee, Wan-Fung; Bulcock, Jeffrey Wilson
The purposes of this study are: (1) to demonstrate the superiority of simple ridge regression over ordinary least squares regression through theoretical argument and empirical example; (2) to modify ridge regression through use of the variance normalization criterion; and (3) to demonstrate the superiority of simple ridge regression based on the…
Computerized tomography with total variation and with shearlets
NASA Astrophysics Data System (ADS)
Garduño, Edgar; Herman, Gabor T.
2017-04-01
To reduce the x-ray dose in computerized tomography (CT), many constrained optimization approaches have been proposed, aiming at minimizing a regularizing function that measures a lack of consistency with some prior knowledge about the object that is being imaged, subject to a (predetermined) level of consistency with the detected attenuation of x-rays. One commonly investigated regularizing function is total variation (TV), while other publications advocate the use of some type of multiscale geometric transform in the definition of the regularizing function; a particular recent choice is the shearlet transform. Proponents of the shearlet transform in the regularizing function claim that the reconstructions so obtained are better than those produced using TV for texture preservation (but may be worse for noise reduction). In this paper we report results related to this claim. In our experiments using simulated CT data collection of the head, reconstructions whose shearlet transform has a small ℓ1-norm are not more efficacious than reconstructions that have a small TV value. Our experiments for making such comparisons use the recently developed superiorization methodology for both regularizing functions. Superiorization is an automated procedure for turning an iterative algorithm that produces images satisfying a primary criterion (such as consistency with the observed measurements) into its superiorized version, which will produce results that, according to the primary criterion, are as good as those produced by the original algorithm but in addition are superior to them according to a secondary (regularizing) criterion. The method presented for superiorization involving the ℓ1-norm of the shearlet transform is novel and quite general: it can be used for any regularizing function that is defined as the ℓ1-norm of a transform specified by the application of a matrix.
Because in the previous literature the split Bregman algorithm is used for similar purposes, a section is included comparing the results of the superiorization algorithm with the split Bregman algorithm.
NASA Astrophysics Data System (ADS)
Song, Yunquan; Lin, Lu; Jian, Ling
2016-07-01
Single-index varying-coefficient model is an important mathematical modeling method to model nonlinear phenomena in science and engineering. In this paper, we develop a variable selection method for high-dimensional single-index varying-coefficient models using a shrinkage idea. The proposed procedure can simultaneously select significant nonparametric components and parametric components. Under defined regularity conditions, with appropriate selection of tuning parameters, the consistency of the variable selection procedure and the oracle property of the estimators are established. Moreover, due to the robustness of the check loss function to outliers in the finite samples, our proposed variable selection method is more robust than the ones based on the least squares criterion. Finally, the method is illustrated with numerical simulations.
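The robustness claim about the check loss can be illustrated with a toy location problem: with τ = 0.5 the check loss reduces to (half) the absolute loss, whose minimizer is the median, so a single gross outlier barely moves the estimate, while it drags the least-squares estimate (the mean) far away. A minimal sketch, not the paper's estimator:

```python
def sq_loss_argmin(ys):
    """Least-squares location estimate: the sample mean."""
    return sum(ys) / len(ys)

def check_loss(u, tau=0.5):
    """Check (quantile) loss rho_tau(u) = u * (tau - 1{u < 0})."""
    return u * (tau - (1 if u < 0 else 0))

def check_argmin(ys, tau=0.5):
    """Grid minimizer of the summed check loss; tau = 0.5 gives the median."""
    lo, hi = min(ys), max(ys)
    grid = [lo + i * (hi - lo) / 2000 for i in range(2001)]
    return min(grid, key=lambda b: sum(check_loss(y - b, tau) for y in ys))

# Five well-behaved observations plus one gross outlier.
data = [1.0, 1.1, 0.9, 1.05, 0.95, 50.0]
```

On this data the mean is pulled above 9 by the outlier, while the check-loss estimate stays near 1, which is the robustness property the variable selection procedure inherits.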
Dissipation and traversal time in Josephson junctions
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cacciari, Ilaria; Ranfagni, Anedio; Moretti, Paolo
2010-05-01
The various ways of evaluating dissipative effects in macroscopic quantum tunneling are re-examined. The results obtained by using functional integration, while confirming those of previously given treatments, enable a comparison with available experimental results for Josephson junctions. A criterion based on the shortening of the semiclassical traversal time τ of the barrier due to dissipation can be established, according to which Δτ/τ ≳ N/Q, where Q is the quality factor of the junction and N is a numerical constant of order unity. The best agreement with the experiments is obtained for N = 1.11, as results from a semiempirical analysis based on an increase in the potential barrier caused by dissipative effects.
Improved Hierarchical Optimization-Based Classification of Hyperspectral Images Using Shape Analysis
NASA Technical Reports Server (NTRS)
Tarabalka, Yuliya; Tilton, James C.
2012-01-01
A new spectral-spatial method for classification of hyperspectral images is proposed. The HSegClas method is based on the integration of probabilistic classification and shape analysis within the hierarchical step-wise optimization algorithm. First, probabilistic support vector machines classification is applied. Then, at each iteration the two neighboring regions with the smallest Dissimilarity Criterion (DC) are merged, and classification probabilities are recomputed. An important contribution of this work is the estimation of a DC between regions as a function of statistical, classification, and geometrical (area and rectangularity) features. Experimental results are presented on a 102-band ROSIS image of the Center of Pavia, Italy. The developed approach yields more accurate classification results when compared to previously proposed methods.
NASA Astrophysics Data System (ADS)
Golinko, I. M.; Kovrigo, Yu. M.; Kubrak, A. I.
2014-03-01
An express method for optimally tuning analog PI and PID controllers is considered. An integral quality criterion with minimizing the control output is proposed for optimizing control systems. The suggested criterion differs from existing ones in that the control output applied to the technological process is taken into account in a correct manner, due to which it becomes possible to maximally reduce the expenditure of material and/or energy resources in performing control of industrial equipment sets. With control organized in such manner, smaller wear and longer service life of control devices are achieved. A unimodal nature of the proposed criterion for optimally tuning a controller is numerically demonstrated using the methods of optimization theory. A functional interrelation between the optimal controller parameters and dynamic properties of a controlled plant is numerically determined for a single-loop control system. The results obtained from simulation of transients in a control system carried out using the proposed and existing functional dependences are compared with each other. The proposed calculation formulas differ from the existing ones by a simple structure and highly accurate search for the optimal controller tuning parameters. The obtained calculation formulas are recommended for being used by specialists in automation for design and optimization of control systems.
NASA Technical Reports Server (NTRS)
Stothers, Richard B.; Chin, Chao-wen
1999-01-01
Interior layers of stars that have been exposed by surface mass loss reveal aspects of their chemical and convective histories that are otherwise inaccessible to observation. It must be significant that the surface hydrogen abundances of luminous blue variables (LBVs) show a remarkable uniformity, specifically X(sub surf) = 0.3 - 0.4, while those of hydrogen-poor Wolf-Rayet (WN) stars fall, almost without exception, below these values, ranging down to X(sub surf) = 0. According to our stellar model calculations, most LBVs are post-red-supergiant objects in a late blue phase of dynamical instability, and most hydrogen-poor WN stars are their immediate descendants. If this is so, stellar models constructed with the Schwarzschild (temperature-gradient) criterion for convection account well for the observed hydrogen abundances, whereas models built with the Ledoux (density-gradient) criterion fail. At the brightest luminosities, the observed hydrogen abundances of LBVs are too large to be explained by any of our highly evolved stellar models, but these LBVs may occupy transient blue loops that exist during an earlier phase of dynamical instability when the star first becomes a yellow supergiant. Independent evidence concerning the criterion for convection, which is based mostly on traditional color distributions of less massive supergiants on the Hertzsprung-Russell diagram, tends to favor the Ledoux criterion. It is quite possible that the true criterion for convection changes over from something like the Ledoux criterion to something like the Schwarzschild criterion as the stellar mass increases.
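The two convection criteria contrasted above differ only by a composition-gradient term: Schwarzschild instability requires the radiative temperature gradient to exceed the adiabatic one, while Ledoux adds a mean-molecular-weight term to the threshold, so a composition gradient left by nuclear burning can suppress convection. A minimal sketch of the comparison, with dimensionless gradients and illustrative values:

```python
def schwarzschild_unstable(grad_rad, grad_ad):
    """Temperature-gradient criterion: convection when grad_rad > grad_ad."""
    return grad_rad > grad_ad

def ledoux_unstable(grad_rad, grad_ad, grad_mu, phi_over_delta=1.0):
    """Density-gradient criterion: a positive molecular-weight gradient
    (grad_mu) raises the threshold and can stabilize the layer."""
    return grad_rad > grad_ad + phi_over_delta * grad_mu

# Illustrative layer above a burning shell: modest superadiabaticity but a
# strong composition gradient (values are hypothetical, not stellar-model output).
grad_rad, grad_ad, grad_mu = 0.45, 0.40, 0.10
sch = schwarzschild_unstable(grad_rad, grad_ad)    # convective by Schwarzschild
led = ledoux_unstable(grad_rad, grad_ad, grad_mu)  # stabilized by Ledoux
```

Layers like this one, convective under one criterion but stable under the other, are precisely where the two criteria produce the diverging mixing histories the abstract tests against observed hydrogen abundances.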
Economic weights for genetic improvement of lactation persistency and milk yield.
Togashi, K; Lin, C Y
2009-06-01
This study aimed to establish a criterion for measuring the relative weight of lactation persistency (the ratio of yield at 280 d in milk to peak yield) in a restricted selection index for the improvement of net merit comprising 3-parity total yield and total lactation persistency. The restricted selection index was compared with selection based on first-lactation total milk yield (I1), the total yield of the first two lactations (I2), and the total yield of the first three lactations (I3). Results show that genetic response in net merit due to selection on the restricted selection index could be greater than, equal to, or less than that due to the unrestricted index, depending upon the relative weight of lactation persistency and the restriction level imposed. When the relative weight of total lactation persistency is equal to the criterion, the restricted selection index is equal to the selection method compared (I1, I2, or I3). The restricted selection index yielded a greater response when the relative weight of total lactation persistency was above the criterion, but a lower response when it was below the criterion. The criterion varied depending upon the restriction level (c) imposed and the selection criteria compared. A curvilinear (concave) relationship exists between the criterion and the restriction level: the criterion increases as the restriction level deviates in either direction from 1.5. Without prior information on the economic weight of lactation persistency, imposing a restriction level of 1.5 on lactation persistency would maximize the change in net merit. The procedure presented allows for simultaneous modification of multi-parity lactation curves.
NASA Astrophysics Data System (ADS)
Anoukou, K.; Pastor, F.; Dufrenoy, P.; Kondo, D.
2016-06-01
The present two-part study aims at investigating the specific effects of a Mohr-Coulomb matrix on the strength of ductile porous materials by using a kinematic limit analysis approach. While in Part II static and kinematic bounds are numerically derived and used for validation purposes, the present Part I focuses on the theoretical formulation of a macroscopic strength criterion for porous Mohr-Coulomb materials. To this end, we consider a hollow sphere model with a rigid perfectly plastic Mohr-Coulomb matrix, subjected to axisymmetric uniform strain rate boundary conditions. Taking advantage of an appropriate family of three-parameter trial velocity fields accounting for the specific plastic deformation mechanisms of the Mohr-Coulomb matrix, we then provide a solution of the constrained minimization problem required for the determination of the macroscopic dissipation function. The macroscopic strength criterion is then obtained by means of the Lagrangian method combined with the Karush-Kuhn-Tucker conditions. After a careful analysis and discussion of the plastic admissibility condition associated with the Mohr-Coulomb criterion, the above procedure leads to a parametric closed-form expression of the macroscopic strength criterion. The latter explicitly shows a dependence on the three stress invariants. In the special case of a friction angle equal to zero, the established criterion reduces to recently available results for porous Tresca materials. Finally, both effects of matrix friction angle and porosity are briefly illustrated and, for completeness, the macroscopic plastic flow rule and the void evolution law are fully furnished.
Desktop publishing and validation of custom near visual acuity charts.
Marran, Lynn; Liu, Lei; Lau, George
2008-11-01
Customized visual acuity (VA) assessment is an important part of basic and clinical vision research. Desktop computer based distance VA measurements have been utilized, and shown to be accurate and reliable, but computer based near VA measurements have not been attempted, mainly due to the limited spatial resolution of computer monitors. In this paper, we demonstrate how to use desktop publishing to create printed custom near VA charts. We created a set of six near VA charts in a logarithmic progression, 20/20 through 20/63, with multiple lines of the same acuity level, different letter arrangements in each line and a random noise background. This design allowed repeated measures of subjective accommodative amplitude without the potential artifact of familiarity of the optotypes. The background maintained a constant and spatial frequency rich peripheral stimulus for accommodation across the six different acuity levels. The paper describes in detail how pixel-wise accurate black and white bitmaps of Sloan optotypes were used to create the printed custom VA charts. At all acuity levels, the physical sizes of the printed custom optotypes deviated no more than 0.034 log units from that of the standard, satisfying the 0.05 log unit ISO criterion we used to demonstrate physical equivalence. Also, at all acuity levels, log unit differences in the mean target distance for which reliable recognition of letters first occurred for the printed custom optotypes compared to the standard were found to be below 0.05, satisfying the 0.05 log unit ISO criterion we used to demonstrate functional equivalence. It is possible to use desktop publishing to create custom near VA charts that are physically and functionally equivalent to standard VA charts produced by a commercial printing process.
A Resonance Overlap Criterion for the Onset of Chaos in Systems of Two Eccentric Planets
NASA Astrophysics Data System (ADS)
Hadden, Sam; Lithwick, Yoram
2018-04-01
We describe a new analytic criterion to predict the onset of chaos in systems consisting of two massive, eccentric planets. Given a planet pair's spacing and masses, the criterion predicts the eccentricities at which the onset of large-scale chaos occurs. The onset of chaos is predicted based on the overlap of mean motion resonances, as in Wisdom (1980)'s pioneering work. Whereas Wisdom's work was limited to the overlap of first-order resonances, and therefore to nearly circular planets, we account for resonances of all orders. This allows us to consider resonance overlap for planets with arbitrary eccentricities (up to orbit-crossing). Our results show excellent agreement with numerical simulations.
Establishment of an equivalence acceptance criterion for accelerated stability studies.
Burdick, Richard K; Sidor, Leslie
2013-01-01
In this article, the use of statistical equivalence testing for providing evidence of process comparability in an accelerated stability study is advocated over the use of a test of differences. The objective of such a study is to demonstrate comparability by showing that the stability profiles under nonrecommended storage conditions of two processes are equivalent. Because it is difficult at accelerated conditions to find a direct link to product specifications, and hence product safety and efficacy, an equivalence acceptance criterion is proposed that is based on the statistical concept of effect size. As with all statistical tests of equivalence, it is important to collect input from appropriate subject-matter experts when defining the acceptance criterion.
A passivity criterion for sampled-data bilateral teleoperation systems.
Jazayeri, Ali; Tavakoli, Mahdi
2013-01-01
A teleoperation system consists of a teleoperator, a human operator, and a remote environment. Conditions involving system and controller parameters that ensure teleoperator passivity can serve as control design guidelines to attain maximum teleoperation transparency while maintaining system stability. In this paper, sufficient conditions for teleoperator passivity are derived for the case in which position-error-based controllers are implemented in discrete time. This new analysis is necessary because discretization causes energy leaks and does not necessarily preserve the passivity of the system. The proposed criterion for sampled-data teleoperator passivity imposes lower bounds on the damping of the teleoperator's robots, an upper bound on the sampling time, and bounds on the control gains. The criterion is verified through simulations and experiments.
Modeling of cw OIL energy performance based on similarity criteria
NASA Astrophysics Data System (ADS)
Mezhenin, Andrey V.; Pichugin, Sergey Y.; Azyazov, Valeriy N.
2012-01-01
A simplified two-level generation model predicts that power extraction from a cw oxygen-iodine laser (OIL) with a stable resonator depends on three similarity criteria. Criterion τd is the ratio of the residence time of the active medium in the resonator to the O2(1Δ) reduction time at infinitely large intraresonator intensity. Criterion Π is the ratio of the small-signal gain to the threshold gain. Criterion Λ is the ratio of the relaxation rate to the excitation rate for the electronically excited iodine atoms I(2P1/2). Effective power extraction from a cw OIL is achieved when the values of the similarity criteria lie in the intervals τd = 5-8, Π = 3-8, and Λ ≤ 0.01.
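The quoted operating windows translate directly into a simple feasibility check. The sketch below is illustrative bookkeeping around the abstract's stated intervals, with hypothetical operating points, and is not part of the generation model itself:

```python
def effective_extraction(tau_d, pi_ratio, lam):
    """True when a cw OIL operating point lies inside the windows quoted in
    the abstract: tau_d in [5, 8], small-signal-gain-to-threshold ratio in
    [3, 8], and relaxation-to-excitation ratio <= 0.01."""
    return 5.0 <= tau_d <= 8.0 and 3.0 <= pi_ratio <= 8.0 and lam <= 0.01

# Hypothetical operating points.
inside = effective_extraction(6.0, 5.0, 0.005)
outside = effective_extraction(2.0, 5.0, 0.005)  # residence-time ratio too small
```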
Predictability of Seasonal Rainfall over the Greater Horn of Africa
NASA Astrophysics Data System (ADS)
Ngaina, J. N.
2016-12-01
The El Niño-Southern Oscillation (ENSO) is a primary mode of climate variability in the Greater Horn of Africa (GHA). The expected impacts of climate variability and change on water, agriculture, and food resources in the GHA underscore the importance of reliable and accurate seasonal climate predictions. The study evaluated different model selection criteria, including the coefficient of determination (R2), Akaike's Information Criterion (AIC), the Bayesian Information Criterion (BIC), and the Fisher information approximation (FIA). A forecast scheme based on the optimal model was developed to predict the October-November-December (OND) and March-April-May (MAM) rainfall. The predictability of GHA rainfall based on ENSO was quantified using composite analysis, correlations, and contingency tables. A test for field significance accounting for the finiteness and interdependence of the spatial grid was applied to avoid correlations arising by chance. The study identified FIA as the optimal model selection criterion; the complex model selection criteria (FIA followed by BIC) performed better than the simple approaches (R2 and AIC). Notably, operational seasonal rainfall predictions over the GHA make use of simple model selection procedures, e.g., R2. Rainfall is modestly predictable based on ENSO during the OND and MAM seasons. El Niño typically leads to wetter conditions during OND and drier conditions during MAM. The correlations of ENSO indices with rainfall are statistically significant for the OND and MAM seasons. Analysis based on contingency tables shows higher predictability of OND rainfall when ENSO indices derived from Pacific and Indian Ocean sea surfaces are used, with significant improvement during the OND season. The predictability of OND rainfall based on ENSO is robust on a decadal scale compared to MAM. An ENSO-based scheme built on an optimal model selection criterion can thus provide skillful rainfall predictions over the GHA.
This study concludes that the negative phase of ENSO (La Niña) leads to dry conditions, while the positive phase (El Niño) is associated with enhanced wet conditions.
Queri, Silvia; Eggart, Michael; Wendel, Maren; Peter, Ulrike
2017-11-28
Background: An instrument was to be developed to measure participation as one possible criterion for evaluating the inclusion of elderly people with intellectual disability. The ICF was used because participation is one component of health-related functioning and disability. Furthermore, the ICF includes environmental (contextual) factors and attributes to them an essential influence on health-related functioning, in particular on participation; the ICF Checklist therefore additionally identifies environmental barriers to be eliminated. Methodology: A linking process with VINELAND-II yielded 138 ICF items for the Checklist. The sample consists of 50 persons with mild or moderate intellectual disability; two-thirds are female and the average age is 68. They were asked directly about their perceived quality of life. Additionally, proxy interviews were carried out with responsible staff members concerning necessary support and behavioral deviances. The ICF Checklist was administered twice: once (t2) the current staff member rated health-related functioning at the present time, and in addition a staff member who had known the person for at least 10 years (t1) rated the former functioning. Content validity was investigated with factor analysis, and criterion validity with correlational analysis in relation to support needs, behavioral deviances, and perceived quality of life. The quantitative analysis was validated by qualitative content analysis of patient documentation. Results: Factor analysis shows logical variable clusters across the extracted factors but no interpretable factors. The Checklist is reliable, valid with respect to the chosen criteria, and shows the expected age-related shifts. The qualitative analysis corresponds with the quantitative data. Conclusion: The ICF Checklist is appropriate for managing and evaluating patient-centered care. © Georg Thieme Verlag KG Stuttgart · New York.
Scemama, Anthony; Renon, Nicolas; Rapacioli, Mathias
2014-06-10
We present an algorithm and its parallel implementation for solving a self-consistent problem as encountered in Hartree-Fock or density functional theory. The algorithm takes advantage of the sparsity of matrices through the use of local molecular orbitals. The implementation allows one to efficiently exploit modern symmetric multiprocessing (SMP) computer architectures. As a first application, the algorithm is used within the density-functional-based tight binding method, for which most of the computational time is spent in the linear algebra routines (diagonalization of the Fock/Kohn-Sham matrix). We show that with this algorithm (i) single point calculations on very large systems (millions of atoms) can be performed on large SMP machines, (ii) calculations involving intermediate size systems (1000-100 000 atoms) are also strongly accelerated and can run efficiently on standard servers, and (iii) the error on the total energy due to the use of a cutoff in the molecular orbital coefficients can be controlled such that it remains smaller than the SCF convergence criterion.
Paolucci, S; Traballesi, M; Emberti Gialloreti, L; Pratesi, L; Lubich, S; Salvia, A; Grasso, M G; Morelli, D; Pulcini, M; Troisi, E; Coiro, P; Caltagirone, C
1998-02-01
The aim of this study was to evaluate: 1) whether the reduction in duration of in-patient rehabilitation imposed by the Italian Ministry of Health's circular of 29/6/95 has been accompanied by a decline in the results achieved; and 2) whether the system of basing payments on diagnosis related group (DRG) criteria is capable of correctly evaluating differences in post-stroke clinical pictures. The study involved 461 of 497 patients consecutively admitted between 1991 and 1996 for rehabilitation after a first stroke. The average duration of hospitalisation for the period 1995-1996 was significantly shorter (p<0.001) than that of the previous years; at the same time, there was a significant increase (p<0.05) in the number of poor responders in both neurological and functional (mobility) terms. Furthermore, the early discharge after 60 days of the 1995-1996 patients compromised the stabilisation of recovery and led to a subsequent functional decline. It is therefore hoped that the current regulations will be revised and that payments based on a functional related group (FRG) criterion will be introduced.
Brookes, V J; Hernández-Jover, M; Neslo, R; Cowled, B; Holyoake, P; Ward, M P
2014-01-01
We describe stakeholder preference modelling using a combination of new and recently developed techniques to elicit criterion weights to incorporate into a multi-criteria decision analysis framework to prioritise exotic diseases for the pig industry in Australia. Australian pig producers were requested to rank disease scenarios comprising nine criteria in an online questionnaire. Parallel coordinate plots were used to visualise stakeholder preferences, which aided identification of two diverse groups of stakeholders: one group prioritised diseases with impacts on livestock, and the other group placed more importance on diseases with zoonotic impacts. Probabilistic inversion was used to derive weights for the criteria to reflect the values of each of these groups, modelling their choice using a weighted sum value function. Validation of weights against stakeholders' rankings for scenarios based on real diseases showed that the elicited criterion weights for the group who prioritised diseases with livestock impacts were a good reflection of their values, indicating that the producers were able to consistently infer impacts from the disease information in the scenarios presented to them. The highest weighted criteria for this group were attack rate and length of clinical disease in pigs, and market loss to the pig industry. The values of the stakeholders who prioritised zoonotic diseases were less well reflected by validation, indicating either that the criteria were inadequate to consistently describe zoonotic impacts, that the weighted sum model did not describe stakeholder choice, or that preference modelling for zoonotic diseases should be undertaken separately from livestock diseases. Limitations of this study included sampling bias, as the participating group was not necessarily representative of all pig producers in Australia, and response bias within this group.
The method used to elicit criterion weights in this study ensured value trade-offs between a range of potential impacts, and that the weights were implicitly related to the scale of measurement of disease criteria. Validation of the results of the criterion weights against real diseases - a step rarely used in MCDA - added scientific rigour to the process. The study demonstrated that these are useful techniques for elicitation of criterion weights for disease prioritisation by stakeholders who are not disease experts. Preference modelling for zoonotic diseases needs further characterisation in this context. Copyright © 2013 Elsevier B.V. All rights reserved.
Hydrogeological controls of groundwater - land surface interactions
NASA Astrophysics Data System (ADS)
Bresciani, Etienne; Batelaan, Okke; Goderniaux, Pascal
2017-04-01
Interaction of groundwater with the land surface impacts a wide range of climatic, hydrologic, ecologic and geomorphologic processes. Many site-specific studies have successfully focused on measuring and modelling groundwater-surface water interaction, but upscaling or estimation at catchment or regional scale appears to be challenging. The factors controlling the interaction at regional scale are still poorly understood. In this contribution, a new 2-D (cross-sectional) analytical groundwater flow solution is used to derive a dimensionless criterion that expresses the conditions under which the groundwater outcrops at the land surface (Bresciani et al., 2016). The criterion gives insights into the functional relationships between geology, topography, climate and the locations of groundwater discharge along river systems. This sheds light on the debate about the topographic control of groundwater flow and groundwater-surface water interaction, as effectively the topography only influences the interaction when the groundwater table reaches the land surface. The criterion provides a practical tool to predict locations of groundwater discharge if a limited number of geomorphological and hydrogeological parameters (recharge, hydraulic conductivity and depth to impervious base) are known, and conversely it can provide regional estimates of the ratio of recharge over hydraulic conductivity if locations of groundwater discharge are known. A case study with known groundwater discharge locations located in South-West Brittany, France shows the feasibility of regional estimates of the ratio of recharge over hydraulic conductivity. Bresciani, E., Goderniaux, P. and Batelaan, O., 2016, Hydrogeological controls of water table-land surface interactions. Geophysical Research Letters 43(18): 9653-9661. http://dx.doi.org/10.1002/2016GL070618
Psychometric evaluation of the Swedish version of Rosenberg's self-esteem scale.
Eklund, Mona; Bäckström, Martin; Hansson, Lars
2018-04-01
The widely used Rosenberg's self-esteem scale (RSES) has not been evaluated for psychometric properties in Sweden. This study aimed at analyzing its factor structure, internal consistency, criterion, convergent and discriminant validity, sensitivity to change, and whether a four-graded Likert-type response scale increased its reliability and validity compared to a yes/no response scale. People with mental illness participating in intervention studies to (1) promote everyday life balance (N = 223) or (2) remedy self-stigma (N = 103) were included. Both samples completed the RSES and questionnaires addressing quality of life and sociodemographic data. Sample 1 also completed instruments chosen to assess convergent and discriminant validity: self-mastery (convergent validity), level of functioning and occupational engagement (discriminant validity). Confirmatory factor analysis (CFA), structural equation modeling, and conventional inferential statistics were used. Based on both samples, the Swedish RSES formed one factor and exhibited high internal consistency (>0.90). The two response scales were equivalent. Criterion validity in relation to quality of life was demonstrated. RSES could distinguish between women and men (women scoring lower) and between diagnostic groups (people with depression scoring lower). Correlations >0.5 with variables chosen to reflect convergent validity and around 0.2 with variables used to address discriminant validity further highlighted the construct validity of RSES. The instrument also showed sensitivity to change. The Swedish RSES exhibited a one-component factor structure and showed good psychometric properties in terms of good internal consistency, criterion, convergent and discriminant validity, and sensitivity to change. The yes/no and the four-graded Likert-type response scales worked equivalently.
A new criterion for acquisition of nicotine self-administration in rats.
Peartree, Natalie A; Sanabria, Federico; Thiel, Kenneth J; Weber, Suzanne M; Cheung, Timothy H C; Neisewander, Janet L
2012-07-01
Acquisition of nicotine self-administration in rodents is relatively difficult to establish, and measures of acquisition rate are sometimes confounded by manipulations used to facilitate the process. This study examined acquisition of nicotine self-administration without such manipulations and used mathematical modeling to define the criterion for acquisition. Rats were given 20 daily 2-h sessions occurring 6 days/week in chambers equipped with active and inactive levers. Each active lever press resulted in nicotine reinforcement (0-0.06 mg/kg, IV) and retraction of both levers for a 20-s time out, whereas inactive lever presses had no consequences. Acquisition was defined for individual rats by whether the number of reinforcers obtained across sessions was more likely to fit a logistic function than a constant function, according to the corrected Akaike Information Criterion (AICc). For rats that acquired self-administration, an AICc-based multi-model comparison demonstrated that the asymptote (highest number of reinforcers/session) and mid-point of the acquisition curve (h; the number of sessions necessary to reach half the asymptote) varied by nicotine dose, with both exhibiting a negative relationship (the higher the dose, the lower the number of reinforcers and the lower the value of h). The modeling approach used in this study provides a way of defining acquisition of nicotine self-administration that takes advantage of all data from individual subjects, and the procedure used is sensitive to dose differences in the absence of manipulations that influence acquisition (e.g., food restriction, prior food reinforcement, conditioned reinforcers). Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
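The acquisition criterion described above can be sketched numerically: fit both a constant model and a logistic curve to one rat's reinforcers-per-session series and compare corrected AIC scores. This is a minimal illustration with hypothetical data, assuming Gaussian residuals; it is not the authors' exact fitting procedure.

```python
import numpy as np
from scipy.optimize import curve_fit

def aicc(rss, n, k):
    """Corrected AIC for a least-squares fit with k parameters
    (count includes the residual-variance parameter)."""
    aic = n * np.log(rss / n) + 2 * k
    return aic + 2 * k * (k + 1) / (n - k - 1)

def logistic(x, asym, h, slope):
    """Logistic acquisition curve: asymptote, mid-point h, slope."""
    return asym / (1.0 + np.exp(-(x - h) / slope))

sessions = np.arange(1, 21)
# hypothetical reinforcers/session for one rat that acquires the behavior
reinforcers = np.array([1, 1, 2, 2, 3, 5, 8, 12, 16, 19,
                        21, 22, 23, 23, 24, 23, 24, 24, 23, 24], float)

# constant model: best fit is the mean (1 mean + 1 variance = 2 params)
rss_const = np.sum((reinforcers - reinforcers.mean()) ** 2)
aicc_const = aicc(rss_const, len(sessions), 2)

# logistic model: 3 curve params + 1 variance = 4 params
popt, _ = curve_fit(logistic, sessions, reinforcers, p0=[24, 8, 2])
rss_log = np.sum((reinforcers - logistic(sessions, *popt)) ** 2)
aicc_log = aicc(rss_log, len(sessions), 4)

acquired = aicc_log < aicc_const  # criterion: logistic fits better
```

For a rat whose responding never departs from a flat baseline, the extra parameters of the logistic are not repaid and the constant model wins the AICc comparison.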
Zhang, Xujun; Pang, Yuanyuan; Cui, Mengjing; Stallones, Lorann; Xiang, Huiyun
2015-02-01
Road traffic injuries have become a major public health problem in China. This study aimed to develop statistical models for predicting road traffic deaths and to analyze the seasonality of deaths in China. A seasonal autoregressive integrated moving average (SARIMA) model was used to fit the data from 2000 to 2011. The Akaike Information Criterion, Bayesian Information Criterion, and mean absolute percentage error were used to evaluate the constructed models. The autocorrelation function and partial autocorrelation function of the residuals and the Ljung-Box test were used to compare goodness-of-fit between the different models. The SARIMA model was then used to forecast monthly road traffic deaths in 2012. The seasonal pattern of the road traffic mortality data was statistically significant in China. The SARIMA(1,1,1)(0,1,1)₁₂ model was the best-fitting model among the various candidates; its Akaike Information Criterion, Bayesian Information Criterion, and mean absolute percentage error were -483.679, -475.053, and 4.937, respectively. Goodness-of-fit testing showed no autocorrelation in the residuals of the model (Ljung-Box test, Q = 4.86, P = .993). The fitted deaths using the SARIMA(1,1,1)(0,1,1)₁₂ model for the years 2000 to 2011 closely followed the observed number of road traffic deaths for the same years. The predicted and observed deaths were also very close for 2012. This study suggests that accurate forecasting of road traffic death incidence is possible using a SARIMA model. The SARIMA model applied to historical road traffic death data could provide important evidence of the burden of road traffic injuries in China. Copyright © 2015 Elsevier Inc. All rights reserved.
Hierarchical semi-numeric method for pairwise fuzzy group decision making.
Marimin, M; Umano, M; Hatono, I; Tamura, H
2002-01-01
Gradual improvements to a single-level semi-numeric method for pairwise fuzzy group decision making, i.e., representing linguistic-label preferences by fuzzy-set computation, are summarized. The method is extended to solve multiple-criteria, hierarchically structured pairwise fuzzy group decision-making problems. The problems are hierarchically structured into focus, criteria, and alternatives. Decision makers express their evaluations of criteria, and of alternatives under each criterion, by using linguistic labels. The labels are converted into, and processed as, triangular fuzzy numbers (TFNs). Evaluations of criteria yield relative criteria weights. Evaluations of the alternatives, based on each criterion, yield a degree of preference for each alternative or a degree of satisfaction for each preference value. By using a neat ordered weighted average (OWA) or a fuzzy weighted average operator, solutions obtained for each criterion are aggregated into final solutions. The hierarchical semi-numeric method is suitable for solving larger and more complex pairwise fuzzy group decision-making problems. The proposed method has been verified and applied to some real cases and is compared to Saaty's (1996) analytic hierarchy process (AHP) method.
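The TFN arithmetic underlying the aggregation step is simple: linguistic labels map to triangles (l, m, u), criterion weights scale them componentwise, and a defuzzified centroid gives a crisp ranking score. A minimal sketch with hypothetical labels and weights (the paper's actual label sets and OWA weighting are not reproduced):

```python
from dataclasses import dataclass

@dataclass
class TFN:
    """Triangular fuzzy number (lower, modal, upper)."""
    l: float
    m: float
    u: float

    def __add__(self, other):
        return TFN(self.l + other.l, self.m + other.m, self.u + other.u)

    def scale(self, c):
        return TFN(c * self.l, c * self.m, c * self.u)

    def centroid(self):
        """Defuzzify by the centroid of the triangle."""
        return (self.l + self.m + self.u) / 3.0

def fuzzy_weighted_average(scores, weights):
    """Aggregate per-criterion TFN scores with crisp normalized weights."""
    total = sum(weights)
    acc = TFN(0.0, 0.0, 0.0)
    for s, w in zip(scores, weights):
        acc = acc + s.scale(w / total)
    return acc

# hypothetical linguistic labels mapped to TFNs on a 0-1 scale
GOOD, FAIR = TFN(0.6, 0.8, 1.0), TFN(0.3, 0.5, 0.7)
alt_score = fuzzy_weighted_average([GOOD, FAIR, GOOD], [0.5, 0.3, 0.2])
```

Ranking the alternatives by `centroid()` of their aggregated TFNs reproduces the final-solution step described in the abstract.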
Zhang, Xingwu; Wang, Chenxi; Gao, Robert X.; Yan, Ruqiang; Chen, Xuefeng; Wang, Shibin
2016-01-01
Milling vibration is one of the most serious factors affecting machining quality and precision. In this paper, a novel hybrid error criterion-based frequency-domain LMS active control method is constructed and used for vibration suppression of milling processes by piezoelectric actuators and sensors, in which only one Fast Fourier Transform (FFT) is used and no Inverse Fast Fourier Transform (IFFT) is involved. The correction formulas are derived by a steepest descent procedure and the control parameters are analyzed and optimized. Then, a novel hybrid error criterion is constructed to improve the adaptability, reliability and anti-interference ability of the control algorithm. Finally, based on piezoelectric actuators and acceleration sensors, a simulation of a spindle and a milling process experiment are presented to verify the proposed method. In addition, a protection program is added in the control flow to enhance the reliability of the control method in applications. The simulation and experiment results indicate that the proposed method is an effective and reliable way for on-line vibration suppression, and that machining quality can be markedly improved. PMID:26751448
Wiggins, Paul A
2015-07-21
This article describes the application of a change-point algorithm to the analysis of stochastic signals in biological systems whose underlying state dynamics consist of transitions between discrete states. Applications of this analysis include molecular-motor stepping, fluorophore bleaching, electrophysiology, particle and cell tracking, detection of copy number variation by sequencing, tethered-particle motion, etc. We present a unified approach to the analysis of processes whose noise can be modeled by Gaussian, Wiener, or Ornstein-Uhlenbeck processes. To fit the model, we exploit explicit, closed-form algebraic expressions for maximum-likelihood estimators of model parameters and estimated information loss of the generalized noise model, which can be computed extremely efficiently. We implement change-point detection using the frequentist information criterion (which, to our knowledge, is a new information criterion). The frequentist information criterion specifies a single, information-based statistical test that is free from ad hoc parameters and requires no prior probability distribution. We demonstrate this information-based approach in the analysis of simulated and experimental tethered-particle-motion data. Copyright © 2015 Biophysical Society. Published by Elsevier Inc. All rights reserved.
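The core of change-point analysis on a stepping signal can be sketched compactly: for Gaussian noise, the maximum-likelihood cost of a segment has the closed form n·ln(σ̂²), and a change is accepted when the best split beats the no-change model by more than an information-criterion penalty. The sketch below uses a generic BIC-style penalty as a stand-in; the paper defines its own frequentist information criterion, which is not reproduced here.

```python
import numpy as np

def best_single_changepoint(x, penalty=None):
    """Scan for one change in the mean of a Gaussian signal.

    Compares the no-change model against the best split using a
    BIC-style penalty (a generic stand-in for the paper's frequentist
    information criterion). Returns the split index, or None."""
    x = np.asarray(x, float)
    n = len(x)
    if penalty is None:
        penalty = 2.0 * np.log(n)  # extra params: second mean + location

    def cost(seg):
        # n * log(MLE variance): Gaussian -2*loglik up to a constant
        return len(seg) * np.log(np.var(seg) + 1e-12)

    no_change = cost(x)
    best_k, best_cost = None, no_change
    for k in range(2, n - 2):
        c = cost(x[:k]) + cost(x[k:])
        if c + penalty < best_cost:
            best_k, best_cost = k, c + penalty
    return best_k

rng = np.random.default_rng(0)
signal = np.concatenate([rng.normal(0.0, 0.5, 100),
                         rng.normal(3.0, 0.5, 100)])
k = best_single_changepoint(signal)
```

Recursing this test on the two halves yields the usual binary-segmentation scheme for multiple steps, such as molecular-motor stepping traces.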
Crack propagation in functionally graded strip under thermal shock
NASA Astrophysics Data System (ADS)
Ivanov, I. V.; Sadowski, T.; Pietras, D.
2013-09-01
The thermal shock problem in a strip made of a functionally graded composite with an interpenetrating network micro-structure of Al2O3 and Al is analysed numerically. The material considered here could be used in brake disks or cylinder liners; in both applications it is subjected to thermal shock. The description of the position-dependent properties of the considered functionally graded material is based on experimental data. Continuous functions were constructed for the Young's modulus, thermal expansion coefficient, thermal conductivity and thermal diffusivity, and implemented as user-defined material properties in user subroutines of the commercial finite element software ABAQUS™. The distributions of the thermal stress and of the residual stress from the manufacturing process inside the strip are considered. The solution of the transient heat conduction problem for thermal shock is used for crack propagation simulation with XFEM. The crack length developed during the thermal shock is the criterion for crack resistance of the different graduation profiles, as a step towards optimization of the composition gradient with respect to thermal shock sensitivity.
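The transient conduction sub-problem that drives the crack simulation can be illustrated with a one-dimensional explicit finite-difference sketch: one face of the strip is suddenly raised to a high temperature and the interior field evolves by diffusion. This assumes constant diffusivity and illustrative values only; the paper's properties vary with position through the graded micro-structure.

```python
import numpy as np

def thermal_shock_1d(alpha, L, t_end, T_init, T_surface, nx=101):
    """Explicit finite-difference solution of 1-D transient conduction
    for a strip suddenly heated on one face.

    alpha: thermal diffusivity (assumed constant here; the paper's is
    position-dependent), L: strip thickness, t_end: simulated time."""
    dx = L / (nx - 1)
    dt = 0.4 * dx**2 / alpha          # stable: Fourier number < 0.5
    steps = int(t_end / dt)
    T = np.full(nx, T_init, float)
    T[0] = T_surface                  # shocked face held at T_surface
    for _ in range(steps):
        T[1:-1] += alpha * dt / dx**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
        T[-1] = T[-2]                 # insulated back face
    return T

# illustrative values: 10 mm strip, metal-like diffusivity, 1 s of shock
T = thermal_shock_1d(alpha=1e-5, L=0.01, t_end=1.0,
                     T_init=20.0, T_surface=500.0)
```

The steep near-surface gradient in the resulting profile is what generates the transient thermal stresses that load the crack tip in the full XFEM model.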
Gaussian processes with optimal kernel construction for neuro-degenerative clinical onset prediction
NASA Astrophysics Data System (ADS)
Canas, Liane S.; Yvernault, Benjamin; Cash, David M.; Molteni, Erika; Veale, Tom; Benzinger, Tammie; Ourselin, Sébastien; Mead, Simon; Modat, Marc
2018-02-01
Gaussian Processes (GP) are a powerful tool to capture the complex time-variations of a dataset. In the context of medical imaging analysis, they allow robust modelling even in the case of highly uncertain or incomplete datasets. Predictions from a GP depend on the covariance kernel function selected to explain the data variance. To overcome this limitation, we propose a framework to identify the optimal covariance kernel function to model the data. The optimal kernel is defined as a composition of base kernel functions used to identify correlation patterns between data points. Our approach includes a modified version of the Compositional Kernel Learning (CKL) algorithm, in which we score the kernel families using a new energy function that depends on both the Bayesian Information Criterion (BIC) and the explained variance score. We applied the proposed framework to model the progression of neurodegenerative diseases over time, in particular the progression of autosomal dominantly-inherited Alzheimer's disease, and used it to predict the time to clinical onset of subjects carrying the genetic mutation.
NASA Technical Reports Server (NTRS)
Mavris, Dimitri N.; Bandte, Oliver; Schrage, Daniel P.
1996-01-01
This paper outlines an approach for the determination of economically viable robust design solutions using the High Speed Civil Transport (HSCT) as a case study. Furthermore, the paper states the advantages of a probability-based aircraft design over the traditional point design approach. It also proposes a new methodology called Robust Design Simulation (RDS) which treats customer satisfaction as the ultimate design objective. RDS is based on a probabilistic approach to aerospace systems design, which views the chosen objective as a distribution function introduced by so-called noise or uncertainty variables. Since the designer has no control over these variables, a variability distribution is defined for each one of them. The cumulative effect of all these distributions causes the overall variability of the objective function. For cases where the selected objective function depends heavily on these noise variables, it may be desirable to obtain a design solution that minimizes this dependence. The paper outlines a step-by-step approach on how to achieve such a solution for the HSCT case study and introduces an evaluation criterion which guarantees the highest customer satisfaction. This customer satisfaction is expressed by the probability of achieving objective function values less than a desired target value.
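The probabilistic evaluation criterion described above reduces to estimating P(objective < target) under sampled noise variables, which is straightforward by Monte Carlo. The sketch below uses entirely hypothetical noise variables and a toy objective; the HSCT's actual noise variables, distributions, and economic objective are not given in the abstract.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# hypothetical noise (uncertainty) variables with assumed distributions
fuel_price = rng.normal(1.0, 0.15, n)             # relative to forecast
demand     = rng.triangular(0.8, 1.0, 1.3, n)     # market uncertainty
fare_yield = rng.uniform(0.9, 1.1, n)             # revenue uncertainty

# toy objective (illustrative only): ticket price required for a
# target return; its spread is driven entirely by the noise variables
required_price = 120.0 * fuel_price / (demand * fare_yield)

target = 130.0
# the evaluation criterion: probability of meeting the target value
p_satisfied = np.mean(required_price <= target)
```

Comparing `p_satisfied` across candidate designs, rather than comparing single-point objective values, is the essence of the robust-design argument: the preferred design is the one whose objective distribution puts the most probability mass below the target.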
Signal-Preserving Erratic Noise Attenuation via Iterative Robust Sparsity-Promoting Filter
Zhao, Qiang; Du, Qizhen; Gong, Xufei; ...
2018-04-06
Thresholding filters operating in a sparse domain are highly effective in removing Gaussian random noise under the Gaussian distribution assumption. Erratic noise, which designates non-Gaussian noise consisting of large isolated events with known or unknown distribution, also needs to be taken into account explicitly. However, conventional sparse-domain thresholding filters based on the least-squares (LS) criterion are severely sensitive to high-amplitude, non-Gaussian noise, i.e., erratic noise, which makes its suppression extremely challenging. In this paper, we present a robust sparsity-promoting denoising model in which the LS criterion is replaced by the Huber criterion to weaken the effects of erratic noise. Random and erratic noise are distinguished by a data-adaptive parameter in the presented method, where random noise is described by the mean square while erratic noise is downweighted through a damped weight. Different from conventional sparse-domain thresholding filters, defining the misfit between noisy data and recovered signal via the Huber criterion results in a nonlinear optimization problem. With the help of theoretical pseudoseismic data, an iterative robust sparsity-promoting filter is proposed to transform the nonlinear optimization problem into a linear LS problem through an iterative procedure. The main advantage of this transformation is that the nonlinear denoising filter can be solved by conventional LS solvers. Finally, tests with several data sets demonstrate that the proposed denoising filter can successfully attenuate erratic noise without damaging the useful signal when compared with conventional denoising approaches based on the LS criterion.
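The transformation the abstract describes — turning the nonlinear Huber minimization into a sequence of linear LS problems — is the classical iteratively reweighted least squares (IRLS) idea: small residuals get full weight, large (erratic) residuals get damped weights δ/|r|. The sketch below applies it to a plain linear regression rather than the paper's sparse-domain seismic setting, purely to illustrate the mechanism.

```python
import numpy as np

def huber_irls(A, b, delta=1.0, iters=30):
    """Minimize the Huber misfit of A x - b by iteratively reweighted LS.

    Each iteration solves a weighted linear LS problem: residuals at the
    Gaussian scale (|r| <= delta) keep full weight, while large erratic
    residuals are downweighted by delta/|r| - the damped weight that
    makes the fit robust."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]
    for _ in range(iters):
        r = A @ x - b
        w = np.where(np.abs(r) <= delta, 1.0, delta / np.abs(r))
        sw = np.sqrt(w)
        x = np.linalg.lstsq(sw[:, None] * A, sw * b, rcond=None)[0]
    return x

rng = np.random.default_rng(1)
t = np.linspace(0, 1, 200)
A = np.column_stack([t, np.ones_like(t)])
clean = 2.0 * t + 0.5
noisy = clean + rng.normal(0, 0.05, t.size)
noisy[::25] += 5.0                      # erratic spikes (non-Gaussian)

x_ls = np.linalg.lstsq(A, noisy, rcond=None)[0]   # plain LS: biased by spikes
x_huber = huber_irls(A, noisy, delta=0.1)         # robust Huber fit
```

The LS solution is pulled toward the spikes, while the Huber/IRLS solution recovers the underlying line, mirroring the signal-preserving behavior claimed for the sparse-domain filter.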
NASA Astrophysics Data System (ADS)
Wu, Li; Adoko, Amoussou Coffi; Li, Bo
2018-04-01
In tunneling, determining quantitatively the rock mass strength parameters of the Hoek-Brown (HB) failure criterion is useful since it can improve the reliability of the design of tunnel support systems. In this study, a quantitative method is proposed to determine the rock mass quality parameters of the HB failure criterion, namely the Geological Strength Index (GSI) and the disturbance factor (D), based on the structure of the drilling core and the weathering condition of the rock mass, combined with acoustic wave testing, to calculate the strength of the rock mass. The Rock Mass Structure Index and the Rock Mass Weathering Index are used to quantify the GSI, while the longitudinal wave velocity (Vp) is employed to derive the value of D. The DK383+338 tunnel face of the Yaojia tunnel of the Shanghai-Kunming passenger dedicated line served as an illustration of how the methodology is implemented. The values of GSI and D are obtained using the HB criterion and then using the proposed method. The measured in situ stress is used to evaluate their accuracy. To this end, the major and minor principal stresses are calculated based on the GSI and D given by the HB criterion and by the proposed method. The results indicated that both methods were close to the field observation, which suggests that the proposed method can be used for determining quantitatively the rock quality parameters as well. However, these results remain valid only for rock mass quality and rock type similar to those of the DK383+338 tunnel face of the Yaojia tunnel.
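The principal-stress calculation referred to above follows the generalized Hoek-Brown criterion (2002 edition), whose parameters mb, s and a are standard functions of GSI and D. A minimal sketch, with illustrative input values (the tunnel's actual σci, mi, GSI and D are not reproduced here):

```python
import numpy as np

def hoek_brown_sigma1(sigma3, sigma_ci, m_i, gsi, D):
    """Major principal stress at failure from the generalized Hoek-Brown
    criterion: sigma1 = sigma3 + sigma_ci*(mb*sigma3/sigma_ci + s)**a,
    with mb, s, a from the standard 2002-edition GSI/D relations."""
    m_b = m_i * np.exp((gsi - 100.0) / (28.0 - 14.0 * D))
    s = np.exp((gsi - 100.0) / (9.0 - 3.0 * D))
    a = 0.5 + (np.exp(-gsi / 15.0) - np.exp(-20.0 / 3.0)) / 6.0
    return sigma3 + sigma_ci * (m_b * sigma3 / sigma_ci + s) ** a

# illustrative values (MPa): sigma_ci and m_i for a sandstone-like rock
sigma1 = hoek_brown_sigma1(sigma3=2.0, sigma_ci=80.0, m_i=17.0,
                           gsi=60.0, D=0.3)
```

Comparing σ1 computed this way from the two (GSI, D) estimates against the measured in situ stress is exactly the accuracy check described in the abstract.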
Density functional theory and chromium: Insights from the dimers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Würdemann, Rolf; Kristoffersen, Henrik H.; Moseler, Michael
2015-03-28
The binding in small Cr clusters is re-investigated, where the correct description of the dimer in three charge states is used as the criterion to assign the most suitable density functional theory approximation. The difficulty in chromium arises from the subtle interplay between energy gain from hybridization and energetic cost due to exchange between s- and d-based molecular orbitals. Variations in published bond lengths and binding energies are shown to arise from insufficient numerical representation of the electron density and Kohn-Sham wave functions. The best functional performance is found for gradient-corrected (GGA) functionals and meta-GGAs, where we find severe differences between functionals from the same family due to the importance of exchange. Only the "best fit" from Bayesian error estimation is able to predict the correct energetics for all three charge states unambiguously. With this knowledge, we predict small bond lengths to be exclusively present in Cr₂ and Cr₂⁻. Already for the dimer cation, solely long bond lengths appear, similar to what is found in the trimer and in chromium bulk.
ERIC Educational Resources Information Center
Deng, Weiling; Monfils, Lora
2017-01-01
Using simulated data, this study examined the impact of different levels of stringency of the valid case inclusion criterion on item response theory (IRT)-based true score equating over 5 years in the context of K-12 assessment when growth in student achievement is expected. Findings indicate that the use of the most stringent inclusion criterion…
Non-Markovianity of Gaussian Channels.
Torre, G; Roga, W; Illuminati, F
2015-08-14
We introduce a necessary and sufficient criterion for the non-Markovianity of Gaussian quantum dynamical maps based on the violation of divisibility. The criterion is derived by defining a general vectorial representation of the covariance matrix, which is then exploited to determine the condition for the complete positivity of partial maps associated with arbitrary time intervals. Such a construction does not rely on the Choi-Jamiolkowski representation and does not require optimization over states.
NASA Astrophysics Data System (ADS)
Gromov, V. A.; Sharygin, G. S.; Mironov, M. V.
2012-08-01
An interval method for radar signal detection and selection based on a non-energetic polarization parameter, the ellipticity angle, is suggested. The examined method is optimal by the Neyman-Pearson criterion. The probability of correct detection for a preset probability of false alarm is calculated for different signal-to-noise ratios. Recommendations for optimization of the method are provided.
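The Neyman-Pearson recipe mentioned above fixes the threshold from the allowed false-alarm probability and then reads off the detection probability at each SNR. The sketch below uses the textbook mean-shift (known signal in Gaussian noise) model, not the paper's polarization-interval statistic, purely to show the Pfa-to-Pd calculation.

```python
from scipy.stats import norm

def detection_probability(pfa, snr_db):
    """Pd of a Neyman-Pearson threshold test for a known signal in
    Gaussian noise: Pd = Q(Q^{-1}(Pfa) - d), with deflection
    d = sqrt(2*SNR) for coherent (matched-filter) detection."""
    d = (2.0 * 10.0 ** (snr_db / 10.0)) ** 0.5
    # norm.isf is the inverse Q-function, norm.sf is the Q-function
    return norm.sf(norm.isf(pfa) - d)

pd_10db = detection_probability(pfa=1e-3, snr_db=10.0)
```

Sweeping `snr_db` at a preset `pfa` produces the Pd-versus-SNR curves that the abstract reports for its detector.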
ERIC Educational Resources Information Center
Daly-Smith, Andy J. W.; McKenna, Jim; Radley, Duncan; Long, Jonathan
2011-01-01
Objective: To investigate the value of additional days of active commuting for meeting a criterion of 300+ minutes of moderate-to-vigorous physical activity (MVPA; 60+ mins/day x 5) during the school week. Methods: Based on seven-day diaries supported by teachers, binary logistic regression analyses were used to predict achievement of MVPA…
Taki, Yasuyuki; Hashizume, Hiroshi; Thyreau, Benjamin; Sassa, Yuko; Takeuchi, Hikaru; Wu, Kai; Kotozaki, Yuka; Nouchi, Rui; Asano, Michiko; Asano, Kohei; Fukuda, Hiroshi; Kawashima, Ryuta
2013-08-01
We examined linear and curvilinear correlations of gray matter volume and density in cortical and subcortical gray matter with age using magnetic resonance images (MRI) in a large number of healthy children. We applied voxel-based morphometry (VBM) and region-of-interest (ROI) analyses with the Akaike information criterion (AIC), which was used to determine the best-fit model by selecting which predictor terms should be included. We collected data on brain structural MRI in 291 healthy children aged 5-18 years. Structural MRI data were segmented and normalized using a custom template by applying the diffeomorphic anatomical registration using exponentiated lie algebra (DARTEL) procedure. Next, we analyzed the correlations of gray matter volume and density with age in VBM with AIC by estimating linear, quadratic, and cubic polynomial functions. Several regions such as the prefrontal cortex, the precentral gyrus, and the cerebellum showed significant linear or curvilinear correlations between gray matter volume and age on an increasing trajectory, and between gray matter density and age on a decreasing trajectory in VBM and ROI analyses with AIC. Because the trajectory of gray matter volume and density with age suggests the progress of brain maturation, our results may contribute to clarifying brain maturation in healthy children from the viewpoint of brain structure. Copyright © 2012 Wiley Periodicals, Inc.
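The model-selection step described above — choosing among linear, quadratic, and cubic age terms by AIC — can be sketched with synthetic data. The curvilinear trajectory below is invented for illustration; it is not the study's measured data.

```python
import numpy as np

def aic_ls(rss, n, k):
    """AIC for a Gaussian least-squares fit with k coefficients
    (+1 for the residual-variance parameter)."""
    return n * np.log(rss / n) + 2 * (k + 1)

def best_polynomial(age, volume, max_degree=3):
    """Pick the linear, quadratic, or cubic age trajectory with lowest AIC."""
    best = None
    for deg in range(1, max_degree + 1):
        coef = np.polyfit(age, volume, deg)
        rss = float(np.sum((volume - np.polyval(coef, age)) ** 2))
        score = aic_ls(rss, len(age), deg + 1)
        if best is None or score < best[0]:
            best = (score, deg)
    return best[1]

rng = np.random.default_rng(3)
age = rng.uniform(5, 18, 291)                      # same n as the study
# synthetic inverted-U trajectory: volume rises, peaks, then declines
volume = 100 + 8 * age - 0.3 * age**2 + rng.normal(0, 2.0, age.size)
degree = best_polynomial(age, volume)
```

On data with genuine curvature, AIC rejects the linear model; whether it settles on the quadratic or the cubic depends on how much the extra term reduces the residual sum of squares relative to its penalty.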
A multidimensional anisotropic strength criterion based on Kelvin modes
NASA Astrophysics Data System (ADS)
Arramon, Yves Pierre
A new theory for the prediction of multiaxial strength of anisotropic elastic materials was proposed by Biegler and Mehrabadi (1993). This theory is based on the premise that the total elastic strain energy of an anisotropic material subjected to multiaxial stress can be decomposed into dilatational and deviatoric modes. A multidimensional strength criterion may thus be formulated by postulating that failure would occur when the energy stored in one of these modes has reached a critical value. However, the logic employed by these authors to formulate a failure criterion based on this theory could not be extended to multiaxial stress. In this thesis, an alternate criterion is presented which redresses the biaxial restriction by reformulating the surfaces of constant modal energy as surfaces of constant eigenstress magnitude. The resulting failure envelope, in a multidimensional stress space, is piecewise smooth. Each facet of the envelope is expected to represent the locus of failure data by a particular Kelvin mode. It is further shown that the Kelvin mode theory alone provides an incomplete description of the failure of some materials, but that this weakness can be addressed by the introduction of a set of complementary modes. A revised theory which combines both Kelvin and complementary modes is thus proposed and applied to seven example materials: an isotropic concrete, tetragonal paperboard, two orthotropic softwoods, two orthotropic hardwoods and an orthotropic cortical bone. The resulting failure envelopes for these examples were plotted and, with the exception of concrete, shown to produce intuitively correct failure predictions.
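Kelvin modes are the eigenvectors of the stiffness tensor written as a symmetric 6×6 matrix in Mandel (normalized Voigt) notation. For an isotropic material the decomposition is exact and well known: one dilatational mode with eigenvalue 3K and five deviatoric modes with eigenvalue 2G. A minimal sketch for the isotropic case (anisotropic materials such as the woods and bone above require their full stiffness matrices):

```python
import numpy as np

def kelvin_modes(E, nu):
    """Eigen-decomposition of an isotropic stiffness matrix in Mandel
    notation. Eigenvalues: 3K (dilatational mode, once) and
    2G (deviatoric modes, five times)."""
    lam = E * nu / ((1 + nu) * (1 - 2 * nu))   # Lame's first parameter
    G = E / (2 * (1 + nu))                     # shear modulus
    C = np.zeros((6, 6))
    C[:3, :3] = lam                            # normal-normal coupling
    C[np.arange(3), np.arange(3)] += 2 * G
    # Mandel notation carries a factor 2 on the shear-shear block
    C[np.arange(3, 6), np.arange(3, 6)] = 2 * G
    eigvals, eigvecs = np.linalg.eigh(C)
    return np.sort(eigvals), eigvecs

vals, modes = kelvin_modes(E=10.0, nu=0.3)
```

The eigenstress magnitudes that define the facets of the failure envelope in the thesis are the projections of an applied stress (as a Mandel 6-vector) onto these eigenvectors.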
Multimodal Hierarchical Dirichlet Process-Based Active Perception by a Robot
Taniguchi, Tadahiro; Yoshino, Ryo; Takano, Toshiaki
2018-01-01
In this paper, we propose an active perception method for recognizing object categories based on the multimodal hierarchical Dirichlet process (MHDP). The MHDP enables a robot to form object categories using multimodal information, e.g., visual, auditory, and haptic information, which can be observed by performing actions on an object. However, performing many actions on a target object requires a long time. In a real-time scenario, i.e., when the time is limited, the robot has to determine the set of actions that is most effective for recognizing a target object. We propose an active perception for MHDP method that uses the information gain (IG) maximization criterion and lazy greedy algorithm. We show that the IG maximization criterion is optimal in the sense that the criterion is equivalent to a minimization of the expected Kullback–Leibler divergence between a final recognition state and the recognition state after the next set of actions. However, a straightforward calculation of IG is practically impossible. Therefore, we derive a Monte Carlo approximation method for IG by making use of a property of the MHDP. We also show that the IG has submodular and non-decreasing properties as a set function because of the structure of the graphical model of the MHDP. Therefore, the IG maximization problem is reduced to a submodular maximization problem. This means that greedy and lazy greedy algorithms are effective and have a theoretical justification for their performance. We conducted an experiment using an upper-torso humanoid robot and a second one using synthetic data. The experimental results show that the method enables the robot to select a set of actions that allow it to recognize target objects quickly and accurately. The numerical experiment using the synthetic data shows that the proposed method can work appropriately even when the number of actions is large and a set of target objects involves objects categorized into multiple classes. 
The results support our theoretical outcomes. PMID:29872389
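The lazy greedy algorithm mentioned above exploits submodularity: an element's marginal gain can only shrink as the selected set grows, so stale gains stored in a max-heap are valid upper bounds and most re-evaluations can be skipped. A minimal sketch with a toy coverage objective standing in for the information gain (the MHDP's Monte Carlo IG estimate is not reproduced here):

```python
import heapq

def lazy_greedy(candidates, gain_fn, budget):
    """Lazy greedy selection for a monotone submodular set function.

    gain_fn(selected, a) returns the marginal gain of adding action a.
    Stale gains sit in a max-heap and are only re-evaluated when an
    element reaches the top (Minoux's lazy evaluation)."""
    selected = []
    heap = [(-gain_fn([], a), a) for a in candidates]
    heapq.heapify(heap)
    while heap and len(selected) < budget:
        _, a = heapq.heappop(heap)
        fresh = gain_fn(selected, a)            # re-evaluate at the top
        if not heap or fresh >= -heap[0][0]:    # still the best: take it
            selected.append(a)
        else:
            heapq.heappush(heap, (-fresh, a))   # push back with fresh gain
    return selected

# toy submodular objective: coverage of "information sources" per action
coverage = {"look": {1, 2}, "grasp": {2, 3, 4},
            "shake": {4, 5}, "listen": {5}}

def gain(sel, a):
    covered = set().union(*(coverage[s] for s in sel)) if sel else set()
    return len(coverage[a] - covered)

chosen = lazy_greedy(list(coverage), gain, budget=2)
```

Because coverage is monotone and submodular, this greedy selection carries the usual (1 − 1/e) approximation guarantee — the theoretical justification the abstract refers to.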
Lee, Sunpyo; Choi, Kee Don; Han, Minkyu; Na, Hee Kyong; Ahn, Ji Yong; Jung, Kee Wook; Lee, Jeong Hoon; Kim, Do Hoon; Song, Ho June; Lee, Gin Hyug; Yook, Jeong-Hwan; Kim, Byung Sik; Jung, Hwoon-Yong
2018-05-01
Endoscopic submucosal dissection (ESD) for early gastric cancer (EGC) meeting the expanded indication is considered investigational. We aimed to compare long-term outcomes of ESD and surgery for EGC in the expanded indication based on each criterion. This study included 1823 consecutive EGC patients meeting expanded indication conditions and treated at a tertiary referral center: 916 and 907 patients underwent surgery or ESD, respectively. The expanded indication included four discrete criteria: (I) intramucosal differentiated tumor, without ulcers, size >2 cm; (II) intramucosal differentiated tumor, with ulcers, size ≤3 cm; (III) intramucosal undifferentiated tumor, without ulcers, size ≤2 cm; and (IV) submucosal invasion <500 μm (sm1), differentiated tumor, size ≤3 cm. We selected 522 patients in each group by propensity score matching and retrospectively evaluated each group. The primary outcome was overall survival (OS); the secondary outcomes were disease-specific survival (DSS), recurrence-free survival (RFS), and treatment-related complications. In all patients and in the subgroups meeting each criterion, OS and DSS were not significantly different between groups (OS and DSS, all patients: p = 0.354 and p = 0.930; criterion I: p = 0.558 and p = 0.688; criterion II: p = 1.000 and p = 1.000; criterion III: p = 0.750 and p = 0.799; and criterion IV: p = 0.599 and p = 0.871). RFS, in all patients and in criterion I, was significantly shorter in the ESD group than in the surgery group (p < 0.001 and p < 0.003, respectively). The surgery group showed higher rates of late and severe treatment-related complications than the ESD group. ESD may be an alternative treatment option to surgery for EGCs meeting expanded indications, including undifferentiated-type tumors.
Electrocardiographic screening for emphysema: the frontal plane P axis.
Baljepally, R; Spodick, D H
1999-03-01
Because the most characteristic and sensitive electrocardiographic (ECG) correlate of pulmonary emphysema in adults is verticalization of the frontal plane P-wave vector (P axis), we investigated its strength as a lone criterion to screen for obstructive pulmonary disease (OPD) in an adult hospital population. In all, 954 consecutive unselected ECGs were required to yield 100 with P axis > or = +70 degrees (unequivocally negative P in aVL during sinus rhythm) and pulmonary function tests, and 100 with P axis < or = +50 degrees (unequivocally positive P-aVL). Obstructive pulmonary disease by both pulmonary function test and clinical criteria was present in 89 of 100 patients with vertical P axes and in 4 of 100 patients without vertical P axes. The high sensitivity (89% for this series) and high specificity (96%) make vertical P axis a useful screening criterion. Its at-a-glance simplicity makes it "user-friendly."
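The screening figures quoted above follow directly from a 2 × 2 table; a small helper makes the arithmetic explicit (the counts below are illustrative, chosen only to reproduce the reported 89% and 96%):

```python
def screening_metrics(tp, fn, fp, tn):
    """Sensitivity and specificity from a 2x2 screening table."""
    sensitivity = tp / (tp + fn)   # diseased correctly flagged
    specificity = tn / (tn + fp)   # non-diseased correctly cleared
    return sensitivity, specificity

# Illustrative counts reproducing the series values in the abstract.
sens, spec = screening_metrics(tp=89, fn=11, fp=4, tn=96)
```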
Hydrodynamics with strength: scaling-invariant solutions for elastic-plastic cavity expansion models
NASA Astrophysics Data System (ADS)
Albright, Jason; Ramsey, Scott; Baty, Roy
2017-11-01
Spherical cavity expansion (SCE) models are used to describe idealized detonation and high-velocity impact in a variety of materials. The common theme in SCE models is the presence of a pressure-driven cavity or void within a domain comprised of plastic and elastic response sub-regions. In past work, the yield criterion characterizing material strength in the plastic sub-region is usually taken for granted and assumed to take a known functional form restrictive to certain classes of materials, e.g. ductile metals or brittle geologic materials. Our objective is to systematically determine a general functional form for the yield criterion under the additional requirement that the SCE admits a similarity solution. Solutions determined under this additional requirement have immediate implications toward development of new compressible flow algorithm verification test problems. However, more importantly, these results also provide novel insight into modeling the yield criteria from the perspective of hydrodynamic scaling.
AHP for Risk Management Based on Expected Utility Theory
NASA Astrophysics Data System (ADS)
Azuma, Rumiko; Miyagi, Hayao
This paper presents a model of decision-making that takes risk assessment into account. The conventional evaluation in AHP can be considered a kind of utility. When dealing with risk, however, it is necessary to consider the probability of damage. In order to incorporate risk into the decision-making problem, we construct an AHP based on expected utility. Risk is treated as an element related to a criterion rather than as a criterion itself. The expected utility is integrated by treating satisfaction as positive utility and damage due to risk as negative utility. Then, evaluation in the AHP is executed using the expected utility.
Structure-based conformational preferences of amino acids
Koehl, Patrice; Levitt, Michael
1999-01-01
Proteins can be very tolerant to amino acid substitution, even within their core. Understanding the factors responsible for this behavior is of critical importance for protein engineering and design. Mutations in proteins have been quantified in terms of the changes in stability they induce. For example, guest residues in specific secondary structures have been used as probes of conformational preferences of amino acids, yielding propensity scales. Predicting these amino acid propensities would be a good test of any new potential energy functions used to mimic protein stability. We have recently developed a protein design procedure that optimizes whole sequences for a given target conformation based on the knowledge of the template backbone and on a semiempirical potential energy function. This energy function is purely physical, including steric interactions based on a Lennard-Jones potential, electrostatics based on a Coulomb potential, and hydrophobicity in the form of an environment free energy based on accessible surface area and interatomic contact areas. Sequences designed by this procedure for 10 different proteins were analyzed to extract conformational preferences for amino acids. The resulting structure-based propensity scales show significant agreements with experimental propensity scale values, both for α-helices and β-sheets. These results indicate that amino acid conformational preferences are a natural consequence of the potential energy we use. This confirms the accuracy of our potential and indicates that such preferences should not be added as a design criterion. PMID:10535955
Progress in multirate digital control system design
NASA Technical Reports Server (NTRS)
Berg, Martin C.; Mason, Gregory S.
1991-01-01
A new methodology for multirate sampled-data control design based on a new generalized control law structure, two new parameter-optimization-based control law synthesis methods, and a new singular-value-based robustness analysis method are described. The control law structure can represent multirate sampled-data control laws of arbitrary structure and dynamic order, with arbitrarily prescribed sampling rates for all sensors and update rates for all processor states and actuators. The two control law synthesis methods employ numerical optimization to determine values for the control law parameters. The robustness analysis method is based on the multivariable Nyquist criterion applied to the loop transfer function for the sampling period equal to the period of repetition of the system's complete sampling/update schedule. The complete methodology is demonstrated by application to the design of a combination yaw damper and modal suppression system for a commercial aircraft.
Perales, José C; Catena, Andrés; Shanks, David R; González, José A
2005-09-01
A number of studies using trial-by-trial learning tasks have shown that judgments of covariation between a cue c and an outcome o deviate from normative metrics. Parameters based on trial-by-trial predictions were estimated from signal detection theory (SDT) in a standard causal learning task. Results showed that manipulations of P(c) when contingency (ΔP) was held constant did not affect participants' ability to predict the appearance of the outcome (d') but had a significant effect on response criterion (c) and numerical causal judgments. The association between criterion c and judgment was further demonstrated in 2 experiments in which the criterion was directly manipulated by linking payoffs to the predictive responses made by learners. In all cases, the more liberal the criterion c was, the higher judgments were. The results imply that the mechanisms underlying the elaboration of judgments and those involved in the elaboration of predictive responses are partially dissociable.
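Under the equal-variance Gaussian SDT model used in such analyses, d' and the response criterion c are computed from hit and false-alarm rates; a minimal sketch (rates below are illustrative):

```python
from statistics import NormalDist

def sdt_indices(hit_rate, fa_rate):
    """Equal-variance Gaussian SDT: sensitivity d' and response criterion c."""
    z = NormalDist().inv_cdf
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))  # c > 0: conservative responding
    return d_prime, criterion

d, c = sdt_indices(0.8, 0.2)   # symmetric rates give a neutral criterion
```

A payoff manipulation that makes "outcome present" responses more rewarding shifts c downward (more liberal) while leaving d' unchanged, which is the dissociation the abstract describes.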
NASA Technical Reports Server (NTRS)
Parada, N. D. J. (Principal Investigator); Dutra, L. V.; Mascarenhas, N. D. A.; Mitsuo, Fernando Augusta, II
1984-01-01
A study area near Ribeirao Preto in Sao Paulo state, with a predominance of sugar cane, was selected. Eight features were extracted from the 4 original bands of the LANDSAT image, using low-pass and high-pass filtering to obtain spatial features. There were 5 training sites used to acquire the necessary parameters. Two groups of four channels were selected from the 12 channels using the JM-distance and entropy criteria. The number of selected channels was defined by physical restrictions of the image analyzer and computational costs. The evaluation was performed by extracting the confusion matrix for training and test areas, with a maximum likelihood classifier, and by defining performance indexes based on those matrixes for each group of channels. Results show that for spatial features and supervised classification, the entropy criterion is better in the sense that it allows a more accurate and generalized definition of class signatures. On the other hand, the JM-distance criterion strongly reduces the misclassification within training areas.
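The JM-distance used for channel selection can be sketched for 1-D Gaussian class signatures (the multiband case replaces these scalars with mean vectors and covariance matrices; parameters below are illustrative):

```python
import math

def jm_distance(mu1, var1, mu2, var2):
    """Jeffries-Matusita distance between two 1-D Gaussian class signatures."""
    avg_var = 0.5 * (var1 + var2)
    # Bhattacharyya distance, then the JM transform, which saturates at sqrt(2)
    b = (mu1 - mu2) ** 2 / (8.0 * avg_var) \
        + 0.5 * math.log(avg_var / math.sqrt(var1 * var2))
    return math.sqrt(2.0 * (1.0 - math.exp(-b)))

jm_distance(0.0, 1.0, 0.0, 1.0)    # identical classes: 0
jm_distance(0.0, 1.0, 10.0, 1.0)   # well-separated classes: close to sqrt(2)
```

The saturation at √2 is why JM-distance tends to favour channels that separate already-separable class pairs, consistent with the reduced within-training-area misclassification reported above.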
NASA Astrophysics Data System (ADS)
Li, Yongming; Li, Fan; Wang, Pin; Zhu, Xueru; Liu, Shujun; Qiu, Mingguo; Zhang, Jingna; Zeng, Xiaoping
2016-10-01
Traditional age estimation methods are based on the same idea that uses the real age as the training label. However, these methods ignore that there is a deviation between the real age and the brain age due to accelerated brain aging. This paper considers this deviation and searches for it by maximizing the separability distance value rather than by minimizing the difference between the estimated brain age and the real age. Firstly, the search range of the deviation is set as the deviation candidates according to prior knowledge. Secondly, support vector regression (SVR) is used as the age estimation model to minimize the difference between the estimated age and the real age plus the deviation, rather than the real age itself. Thirdly, the fitness function is designed based on the separability distance criterion. Fourthly, age estimation is conducted on the validation dataset using the trained age estimation model, the estimated age is put into the fitness function, and the fitness value of the deviation candidate is obtained. Fifthly, the iteration is repeated until all the deviation candidates have been evaluated, and the optimal deviation with the maximum fitness value is obtained. The real age plus the optimal deviation is taken as the brain pathological age. The experimental results showed that the separability was apparently improved. For normal control-Alzheimer's disease (NC-AD), normal control-mild cognitive impairment (NC-MCI), and MCI-AD, the average improvements were 0.178 (35.11%), 0.033 (14.47%), and 0.017 (39.53%), respectively. For NC-MCI-AD, the average improvement was 0.2287 (64.22%). The estimated brain pathological age could not only be more helpful for the classification of AD but also more precisely reflect accelerated brain aging. In conclusion, this paper offers a new method for brain age estimation that can distinguish different states of AD and can better reflect the extent of accelerated aging.
An Irreversible Constitutive Law for Modeling the Delamination Process using Interface Elements
NASA Technical Reports Server (NTRS)
Goyal, Vinay K.; Johnson, Eric R.; Davila, Carlos G.; Jaunky, Navin; Ambur, Damodar (Technical Monitor)
2002-01-01
An irreversible constitutive law is postulated for the formulation of interface elements to predict initiation and progression of delamination in composite structures. An exponential function is used for the constitutive law such that it satisfies a multi-axial stress criterion for the onset of delamination, and satisfies a mixed mode fracture criterion for the progression of delamination. A damage parameter is included to prevent the restoration of the previous cohesive state between the interfacial surfaces. To demonstrate the irreversibility capability of the constitutive law, steady-state crack growth is simulated for a quasi-static loading-unloading cycle of various fracture test specimens.
An Irreversible Constitutive Law for Modeling the Delamination Process Using Interface Elements
NASA Technical Reports Server (NTRS)
Goyal, Vinay K.; Johnson, Eric R.; Davila, Carlos G.; Jaunky, Navin; Bushnell, Dennis M. (Technical Monitor)
2002-01-01
An irreversible constitutive law is postulated for the formulation of interface elements to predict initiation and progression of delamination in composite structures. An exponential function is used for the constitutive law such that it satisfies a multi-axial stress criterion for the onset of delamination, and satisfies a mixed mode fracture criterion for the progression of delamination. A damage parameter is included to prevent the restoration of the previous cohesive state between the interfacial surfaces. To demonstrate the irreversibility capability of the constitutive law, steady-state crack growth is simulated for a quasi-static loading-unloading cycle of various fracture test specimens.
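The irreversibility described in these entries can be sketched as a history variable that tracks the largest opening ever reached; unloading then follows a degraded secant instead of climbing back up the exponential envelope. This is a single-mode illustrative law, not the papers' exact mixed-mode formulation; σ_max and δ_c are assumed material parameters:

```python
import math

class ExponentialCohesiveLaw:
    """Exponential traction-separation law with an irreversible damage variable."""

    def __init__(self, sigma_max, delta_c):
        self.sigma_max = sigma_max   # peak traction
        self.delta_c = delta_c       # opening at peak traction
        self.delta_max = 0.0         # damage history: never decreases

    def _envelope(self, d):
        return self.sigma_max * (d / self.delta_c) * math.exp(1.0 - d / self.delta_c)

    def traction(self, delta):
        self.delta_max = max(self.delta_max, delta)   # irreversibility
        if delta >= self.delta_max:                   # monotonic loading
            return self._envelope(delta)
        # Unloading/reloading: linear secant through the damaged state,
        # so the previous cohesive state is never restored.
        return self._envelope(self.delta_max) * delta / self.delta_max

law = ExponentialCohesiveLaw(sigma_max=10.0, delta_c=1.0)
t_peak = law.traction(1.0)       # on the envelope, at the peak
law.traction(2.0)                # softening past the peak accumulates damage
t_unloaded = law.traction(1.0)   # damaged secant: well below t_peak
```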
Transverse liquid fuel jet breakup, burning, and ignition
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Hsi-shang
1990-01-01
An analytical/numerical study of the breakup, burning, and ignition of liquid fuels injected transversely into a hot air stream is conducted. The non-reacting liquid jet breakup location is determined by the local sonic point criterion first proposed by Schetz, et al. (1980). Two models, one employing analysis of an elliptical jet cross-section and the other employing a two-dimensional blunt body to represent the transverse jet, have been used for sonic point calculations. An auxiliary criterion based on surface tension stability is used as a separate means of determining the breakup location. For the reacting liquid jet problem, a diffusion flame supported by a one-step chemical reaction within the gaseous boundary layer is solved along the ellipse surface in subsonic crossflow. Typical flame structures and concentration profiles have been calculated for various locations along the jet cross-section as a function of upstream Mach numbers. The integrated reaction rate along the jet cross-section is used to predict ignition position, which is found to be situated near the stagnation point. While a multi-step reaction is needed to represent the ignition process more accurately, the present calculation does yield reasonable predictions concerning ignition along a curved surface.
[Medical expert assessment, objectivity and justice in disability pension cases].
Solli, Hans Magnus
2003-08-14
The formal principle of justice is often interpreted as the requirement of objectivity when a person's situation is to be evaluated in the light of social justice. The aim of this article is to analyse whether or not the formal principle of justice is fulfilled by the ontological and the epistemological concepts of objectivity when disability claims are evaluated medically in relation to the Norwegian legislation on disability benefits. The material consists of legal and medical texts about medical disability evaluation. The method is text analysis. The main result is that the concept of ontological objectivity functions as the criterion of objectivity when the causal relations between sickness, impairment and disability are explained. This criterion is, however, problematic because it is based on the assumption that there is a linear causal model of these relations, which precludes the explanation of many cases of disability. The ontological concept of objectivity is not a necessary condition for impartiality and formal justice in relation to the causal relation between sickness and disability. In some situations this concept is a sufficient condition. The epistemological concept of objectivity is a sufficient condition, but it is not a necessary condition. Some cases must be reviewed on a discretionary basis.
INFO-RNA--a fast approach to inverse RNA folding.
Busch, Anke; Backofen, Rolf
2006-08-01
The structure of RNA molecules is often crucial for their function. Therefore, secondary structure prediction has gained much interest. Here, we consider the inverse RNA folding problem, which means designing RNA sequences that fold into a given structure. We introduce a new algorithm for the inverse folding problem (INFO-RNA) that consists of two parts: a dynamic programming method for generating good initial sequences, followed by an improved stochastic local search that uses an effective neighbor selection method. During the initialization, we design a sequence that, among all sequences, adopts the given structure with the lowest possible energy. For the selection of neighbors during the search, we use a kind of look-ahead of one selection step, applying an additional energy-based criterion. Afterwards, the pre-ordered neighbors are tested using the actual optimization criterion of minimizing the structure distance between the target structure and the minimum free energy (mfe) structure of the considered neighbor. We compared our algorithm to RNAinverse and RNA-SSD for artificial and biological test sets. Using INFO-RNA, we performed better than RNAinverse and, in most cases, we gained better results than RNA-SSD, probably the best inverse RNA folding tool on the market. www.bioinf.uni-freiburg.de?Subpages/software.html.
Enhancing phonon flow through one-dimensional interfaces by impedance matching
NASA Astrophysics Data System (ADS)
Polanco, Carlos A.; Ghosh, Avik W.
2014-08-01
We extend concepts from microwave engineering to thermal interfaces and explore the principles of impedance matching in 1D. The extension is based on the generalization of acoustic impedance to nonlinear dispersions using the contact broadening matrix Γ(ω), extracted from the phonon self-energy. For a single junction, we find that for coherent and incoherent phonons, the optimal thermal conductance occurs when the matching Γ(ω) equals the geometric mean of the contact broadenings. This criterion favors the transmission of both low and high frequency phonons by requiring that (1) the low frequency acoustic impedance of the junction matches that of the two contacts by minimizing the sum of interfacial resistances and (2) the cut-off frequency is near the minimum of the two contacts, thereby reducing the spillage of the states into the tunneling regime. For an ultimately scaled single atom/spring junction, the matching criterion transforms to the arithmetic mean for mass and the harmonic mean for spring constant. The matching can be further improved using a composite graded junction with an exponentially varying broadening that functions like a broadband antireflection coating. There is, however, a trade-off, as the increased length of the interface brings in additional intrinsic sources of scattering.
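For incoherent phonons the interfacial resistances add in series, and a brute-force scan reproduces the geometric-mean matching rule in the low-frequency (acoustic-impedance) limit. This is a toy acoustic-mismatch sketch, not the paper's full Γ(ω) formalism; the impedance values are illustrative:

```python
def interface_resistance(z1, z2):
    """Inverse transmission of a single acoustic-mismatch interface."""
    return (z1 + z2) ** 2 / (4.0 * z1 * z2)

def series_resistance(z_left, z_junction, z_right):
    # Incoherent phonons: the two interfacial resistances add in series.
    return (interface_resistance(z_left, z_junction)
            + interface_resistance(z_junction, z_right))

z1, z2 = 1.0, 9.0
candidates = [1.0 + 0.01 * k for k in range(801)]   # scan 1.0 .. 9.0
best = min(candidates, key=lambda zj: series_resistance(z1, zj, z2))
# The minimizer sits at the geometric mean sqrt(z1 * z2) = 3.
```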
Transverse liquid fuel jet breakup, burning, and ignition. M.S. Thesis
NASA Technical Reports Server (NTRS)
Li, Hsi-Shang
1990-01-01
An analytical study of the breakup, burning, and ignition of liquid fuels injected transversely into a hot air stream is conducted. The non-reacting liquid jet breakup location is determined by the local sonic point criterion. Two models, one employing analysis of an elliptical jet cross-section and the other employing a two-dimensional blunt body to represent the transverse jet, were used for sonic point calculations. An auxiliary criterion based on surface tension stability is used as a separate means of determining the breakup location. For the reacting liquid jet problem, a diffusion flame supported by a one-step chemical reaction within the gaseous boundary layer is solved along the ellipse surface in subsonic cross flow. Typical flame structures and concentration profiles were calculated for various locations along the jet cross-section as a function of upstream Mach numbers. The integrated reaction rate along the jet cross-section is used to predict ignition position, which is found to be situated near the stagnation point. While a multi-step reaction is needed to represent the ignition process more accurately, the present calculation does yield reasonable predictions concerning ignition along a curved surface.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Klawitter, A.L.; Hoak, T.E.; Decker, A.D.
In 1993, the San Juan Basin accounted for approximately 605 Bcf of the 740 Bcf of all coalbed gas produced in the United States. The San Juan "cavitation fairway", in which production occurs in open-hole cavity completions, is responsible for over 60% of all U.S. coalbed methane production. Perhaps most striking is the fact that over 17,000 wells had penetrated the Fruitland formation in the San Juan Basin prior to recognition of the coalbed methane potential. To understand the dynamic cavity fairway reservoir in the San Juan Basin, an exploration rationale for coalbed methane was developed that permits a sequential reduction in total basin exploration area based on four primary exploration criteria. One of the most significant criteria is the existence of thick, thermally mature, friable coals. A second criterion is the existence of fully gas-charged coals. Evaluation of this criterion requires reservoir geochemical data to delineate zones of meteoric influx where breaching has occurred. A third criterion is the presence of adequate reservoir permeability. Natural fracturing in coals is due to cleating and tectonic processes. Because of the general relationship between coal cleating and coal rank, coal cleating intensity can be estimated by analysis of regional coal rank maps. The final criterion is determining whether natural fractures are open or closed. To make this determination, remote sensing imagery interpretation is supported by ancillary data compiled from regional tectonic studies. Application of these four criteria to the San Juan Basin in a heuristic, stepwise process resulted in an overall 94% reduction in total basin exploration area. Application of the first criterion reduced the total basin exploration area by 80%. Application of the second criterion further winnowed this area by an additional 9%. Application of the third criterion reduced the exploration area to 6% of the total original exploration area.
Evaluation of peak-picking algorithms for protein mass spectrometry.
Bauer, Chris; Cramer, Rainer; Schuchhardt, Johannes
2011-01-01
Peak picking is an early key step in MS data analysis. We compare three commonly used approaches to peak picking and discuss their merits by means of statistical analysis. Methods investigated encompass signal-to-noise ratio, continuous wavelet transform, and a correlation-based approach using a Gaussian template. Functionality of the three methods is illustrated and discussed in a practical context using a mass spectral data set created with MALDI-TOF technology. Sensitivity and specificity are investigated using a manually defined reference set of peaks. As an additional criterion, the robustness of the three methods is assessed by a perturbation analysis and illustrated using ROC curves.
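Of the three families compared, the signal-to-noise approach is the simplest to sketch: estimate the noise level robustly, then keep local maxima that exceed a multiple of it. The spectrum below is synthetic, and the window and threshold are illustrative, not those of any evaluated tool:

```python
import numpy as np

def snr_peaks(intensity, window=5, snr_min=5.0):
    """Keep local maxima exceeding snr_min times a robust (MAD-based) noise level."""
    intensity = np.asarray(intensity, dtype=float)
    # MAD scaled to a Gaussian sigma: robust against the peaks themselves.
    noise = 1.4826 * np.median(np.abs(intensity - np.median(intensity)))
    peaks = []
    for i in range(window, len(intensity) - window):
        seg = intensity[i - window:i + window + 1]
        if intensity[i] == seg.max() and intensity[i] > snr_min * noise:
            peaks.append(i)
    return peaks

# Synthetic "spectrum": two Gaussian peaks over unit-variance noise.
x = np.arange(200)
rng = np.random.default_rng(1)
spectrum = (20 * np.exp(-(x - 50) ** 2 / 8)
            + 15 * np.exp(-(x - 120) ** 2 / 8)
            + rng.normal(0, 1, x.size))
peaks = snr_peaks(spectrum)   # indices near the two true peak positions
```

Raising `snr_min` trades sensitivity for specificity, which is exactly the axis the ROC analysis in the abstract explores.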
Quadrotor trajectory tracking using PID cascade control
NASA Astrophysics Data System (ADS)
Idres, M.; Mustapha, O.; Okasha, M.
2017-12-01
Quadrotors have been applied to collect information for traffic and weather monitoring, surveillance and aerial photography. In order to accomplish their missions, quadrotors have to follow specific trajectories. This paper presents proportional-integral-derivative (PID) cascade control of a quadrotor for the path tracking problem when velocity and acceleration are small. It is based on a near-hover controller for small attitude angles. The integral of time-weighted absolute error (ITAE) criterion is used to determine the PID gains as a function of the quadrotor modeling parameters. The controller is evaluated in a three-dimensional environment in Simulink. Overall, the tracking performance is found to be excellent for the small-velocity condition.
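The ITAE criterion can be sketched on a toy hover model: a unit-mass double integrator standing in for the near-hover altitude dynamics, with the time weight penalizing errors that persist. The gains below are illustrative, not the paper's tuned values:

```python
def itae_for_gains(kp, ki, kd, dt=0.01, t_end=10.0):
    """Integral of time-weighted absolute error for a unit step command,
    simulated on a unit-mass double integrator under PID control."""
    z, v, integral, prev_err = 0.0, 0.0, 0.0, 1.0
    itae, t = 0.0, 0.0
    while t < t_end:
        err = 1.0 - z
        integral += err * dt
        deriv = (err - prev_err) / dt
        prev_err = err
        u = kp * err + ki * integral + kd * deriv   # PID control law
        v += u * dt                                  # unit mass: u is acceleration
        z += v * dt
        t += dt
        itae += t * abs(err) * dt                    # time weight punishes slow settling
    return itae

well_damped = itae_for_gains(4.0, 0.5, 3.0)
underdamped = itae_for_gains(4.0, 0.5, 0.2)   # low derivative gain: ringing
```

Gain selection by ITAE amounts to minimizing `itae_for_gains` over (kp, ki, kd); the time weighting favours responses that settle quickly with little sustained oscillation.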
Gomez, Rapson; Hafetz, Nina; Gomez, Rashika Miranjani
2013-08-01
This study examined the prevalence rate of Oppositional Defiant Disorder (ODD) in Malaysian primary school children. In all, 934 Malaysian parents and teachers completed ratings of their children using a scale comprising DSM-IV-TR ODD symptoms. Results showed rates of 3.10%, 3.85%, 7.49% and 0.64% for parent, teacher, parent or teacher ("or-rule"), and parent and teacher ("and-rule") ratings, respectively. When the functional impairment criterion was not considered, the rate reported by parents was higher, at 13.28%. The theoretical, diagnostic and cultural implications of the findings are discussed. Copyright © 2013 Elsevier B.V. All rights reserved.
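The "or-rule" and "and-rule" combinations of informant ratings reduce to simple set logic over per-child diagnostic flags (a minimal sketch with hypothetical flags, not the study's data):

```python
def combined_prevalence(parent_flags, teacher_flags):
    """Prevalence under the or-rule (either informant) and and-rule (both)."""
    n = len(parent_flags)
    or_rule = sum(p or t for p, t in zip(parent_flags, teacher_flags)) / n
    and_rule = sum(p and t for p, t in zip(parent_flags, teacher_flags)) / n
    return or_rule, and_rule

# Four hypothetical children: flagged by both, parent only, teacher only, neither.
rates = combined_prevalence([True, True, False, False], [True, False, True, False])
```

The or-rule rate is always at least as high as either single-informant rate, and the and-rule rate at most as high, matching the ordering of the percentages reported above.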
[Registration and 3D rendering of serial tissue section images].
Liu, Zhexing; Jiang, Guiping; Dong, Wu; Zhang, Yu; Xie, Xiaomian; Hao, Liwei; Wang, Zhiyuan; Li, Shuxiang
2002-12-01
Reconstructing 3D images from serial tissue section images is an important morphological research method. Registration of the serial images is a key step in 3D reconstruction. Firstly, an introduction to the segmentation-counting registration algorithm is presented, which is based on the joint histogram. After thresholding of the two images to be registered, the criterion function is defined as a count over a specific region of the joint histogram, which greatly speeds up the alignment process. Then, the method is used to conduct the serial tissue image matching task, laying a solid foundation for 3D rendering. Finally, preliminary surface rendering results are presented.
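The segmentation-counting idea can be sketched as: binarize both images, then score each candidate shift by counting pixels that are foreground in both, which is one cell of the joint histogram of the thresholded images. This is a toy translation-only version; the threshold and search radius are illustrative:

```python
import numpy as np

def overlap_count(fixed, moving, shift, threshold=128):
    """Count pixels foreground in both binarized images after shifting `moving`."""
    rolled = np.roll(moving, shift, axis=(0, 1))
    return int(np.logical_and(fixed > threshold, rolled > threshold).sum())

def register(fixed, moving, search=3):
    """Exhaustive search for the integer shift maximizing the overlap count."""
    shifts = [(dy, dx) for dy in range(-search, search + 1)
                        for dx in range(-search, search + 1)]
    return max(shifts, key=lambda s: overlap_count(fixed, moving, s))

# Toy sections: a bright square, and the same square translated by (2, 1).
fixed = np.zeros((32, 32)); fixed[10:20, 10:20] = 255
moving = np.roll(fixed, (-2, -1), axis=(0, 1))
best = register(fixed, moving)   # recovers the (2, 1) offset
```

Counting only one cell of the joint histogram, instead of accumulating the full histogram per candidate pose, is what makes the criterion cheap to evaluate.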
Dominant partition method. [based on a wave function formalism
NASA Technical Reports Server (NTRS)
Dixon, R. M.; Redish, E. F.
1979-01-01
By use of the L'Huillier, Redish, and Tandy (LRT) wave function formalism, a partially connected method, the dominant partition method (DPM) is developed for obtaining few body reductions of the many body problem in the LRT and Bencze, Redish, and Sloan (BRS) formalisms. The DPM maps the many body problem to a fewer body one by using the criterion that the truncated formalism must be such that consistency with the full Schroedinger equation is preserved. The DPM is based on a class of new forms for the irreducible cluster potential, which is introduced in the LRT formalism. Connectivity is maintained with respect to all partitions containing a given partition, which is referred to as the dominant partition. Degrees of freedom corresponding to the breakup of one or more of the clusters of the dominant partition are treated in a disconnected manner. This approach for simplifying the complicated BRS equations is appropriate for physical problems where a few body reaction mechanism prevails.
Physical employment standards for U.K. fire and rescue service personnel.
Blacker, S D; Rayson, M P; Wilkinson, D M; Carter, J M; Nevill, A M; Richmond, V L
2016-01-01
Evidence-based physical employment standards are vital for recruiting, training and maintaining the operational effectiveness of personnel in physically demanding occupations. (i) Develop criterion tests for in-service physical assessment, which simulate the role-related physical demands of UK fire and rescue service (UK FRS) personnel. (ii) Develop practical physical selection tests for FRS applicants. (iii) Evaluate the validity of the selection tests to predict criterion test performance. Stage 1: we conducted a physical demands analysis involving seven workshops and an expert panel to document the key physical tasks required of UK FRS personnel and to develop 'criterion' and 'selection' tests. Stage 2: we measured the performance of 137 trainee and 50 trained UK FRS personnel on selection, criterion and 'field' measures of aerobic power, strength and body size. Statistical models were developed to predict criterion test performance. Stage 3: subject matter experts derived minimum performance standards. We developed single-person simulations of the key physical tasks required of UK FRS personnel as criterion and selection tests (rural fire, domestic fire, ladder lift, ladder extension, ladder climb, pump assembly, enclosed space search). Selection tests were marginally stronger predictors of criterion test performance (r = 0.88-0.94, 95% Limits of Agreement [LoA] 7.6-14.0%) than field test scores (r = 0.84-0.94, 95% LoA 8.0-19.8%) and offered greater face and content validity and more practical implementation. This study outlines the development of role-related, gender-free physical employment tests for the UK FRS, which conform to equal opportunities law. © The Author 2015. Published by Oxford University Press on behalf of the Society of Occupational Medicine. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
NASA Astrophysics Data System (ADS)
Lehmann, Thomas M.
2002-05-01
Reliable evaluation of medical image processing is of major importance for routine applications. Nonetheless, evaluation is often omitted or methodically defective when novel approaches or algorithms are introduced. Adopted from medical diagnosis, we define the following criteria to classify reference standards: 1. Reliance, if the generation or capturing of test images for evaluation follows an exactly determined and reproducible protocol. 2. Equivalence, if the image material or relationships considered within an algorithmic reference standard equal real-life data with respect to structure, noise, or other parameters of importance. 3. Independence, if any reference standard relies on a different procedure than that to be evaluated, or on other images or image modalities than those used routinely. This criterion bans the simultaneous use of one image for both the training and the test phase. 4. Relevance, if the algorithm to be evaluated is self-reproducible. If random parameters or optimization strategies are applied, reliability of the algorithm must be shown before the reference standard is applied for evaluation. 5. Significance, if the number of reference standard images that are used for evaluation is sufficiently large to enable statistically founded analysis. We demand that a true gold standard must satisfy Criteria 1 to 3. Any standard satisfying only two criteria, i.e., Criterion 1 and Criterion 2 or Criterion 1 and Criterion 3, is referred to as a silver standard. Other standards are termed plastic. Before exhaustive evaluation based on gold or silver standards is performed, its relevance must be shown (Criterion 4) and sufficient tests must be carried out to support statistical analysis (Criterion 5). In this paper, examples are given for each class of reference standards.
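The gold/silver/plastic classification over Criteria 1-3 is a small decision rule, sketched directly from the scheme above (function name is illustrative):

```python
def classify_reference_standard(reliance, equivalence, independence):
    """Classify a reference standard from Criteria 1-3 of the scheme above:
    gold = all three; silver = Criterion 1 plus one other; else plastic."""
    if reliance and equivalence and independence:
        return "gold"
    if reliance and (equivalence or independence):
        return "silver"
    return "plastic"

classify_reference_standard(True, True, True)    # gold
classify_reference_standard(True, False, True)   # silver (Criteria 1 and 3)
classify_reference_standard(False, True, True)   # plastic: Reliance is required
```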
Variation, Repetition, And Choice
Abreu-Rodrigues, Josele; Lattal, Kennon A; dos Santos, Cristiano V; Matos, Ricardo A
2005-01-01
Experiment 1 investigated the controlling properties of variability contingencies on choice between repeated and variable responding. Pigeons were exposed to concurrent-chains schedules with two alternatives. In the REPEAT alternative, reinforcers in the terminal link depended on a single sequence of four responses. In the VARY alternative, a response sequence in the terminal link was reinforced only if it differed from the n previous sequences (lag criterion). The REPEAT contingency generated low, constant levels of sequence variation whereas the VARY contingency produced levels of sequence variation that increased with the lag criterion. Preference for the REPEAT alternative tended to increase directly with the degree of variation required for reinforcement. Experiment 2 examined the potential confounding effects in Experiment 1 of immediacy of reinforcement by yoking the interreinforcer intervals in the REPEAT alternative to those in the VARY alternative. Again, preference for REPEAT was a function of the lag criterion. Choice between varying and repeating behavior is discussed with respect to obtained behavioral variability, probability of reinforcement, delay of reinforcement, and switching within a sequence. PMID:15828592
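The Lag-n contingency in the VARY terminal link can be sketched directly: a sequence is reinforced only if it differs from each of the previous n sequences (sequences encoded here as tuples of key pecks; the examples are illustrative):

```python
def meets_lag(sequence, history, lag):
    """True if `sequence` differs from each of the previous `lag` sequences
    (the Lag-n variability criterion)."""
    return all(sequence != past for past in history[-lag:])

history = [("L", "L", "R", "R"), ("L", "R", "L", "R")]
meets_lag(("R", "R", "L", "L"), history, lag=2)   # novel: reinforced
meets_lag(("L", "R", "L", "R"), history, lag=2)   # repeats the last: not reinforced
meets_lag(("L", "L", "R", "R"), history, lag=1)   # only the last sequence counts
```

Raising the lag tightens the variability requirement, which is the manipulation that shifted preference toward the REPEAT alternative in the experiments above.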
Acquisition of control skill with delayed and compensated displays.
Ricard, G L
1995-09-01
The difficulty of mastering a two-axis, compensatory, manual control task was manipulated by introducing transport delays into the feedback loop of the controlled element. Realistic aircraft dynamics were used. Subjects' display was a simulation of an "inside-out" artificial horizon instrument perturbed by atmospheric turbulence. The task was to maintain straight and level flight, and delays tested were representative of those found in current training simulators. Delay compensations in the form of first-order lead and first-order lead/lag transfer functions, along with an uncompensated condition, were factorially combined with added delays. Subjects were required to meet a relatively strict criterion for performance. Control activity showed no differences during criterion performance, but the trials needed to achieve the criterion were linearly related to the magnitude of the delay and the compensation condition. These data were collected in the context of aircraft attitude control, but the results can be applied to the simulation of other vehicles, to remote manipulation, and to maneuvering in graphical environments.
Stability of the iterative solutions of integral equations as one phase freezing criterion.
Fantoni, R; Pastore, G
2003-10-01
A recently proposed connection between the threshold for the stability of the iterative solution of integral equations for the pair correlation functions of a classical fluid and the structural instability of the corresponding real fluid is carefully analyzed. Direct calculation of the Lyapunov exponent of the standard iterative solution of the hypernetted chain and Percus-Yevick integral equations for the one-dimensional (1D) hard-rods fluid shows the same behavior observed in 3D systems. Since no phase transition is allowed in such a 1D system, our analysis shows that the proposed one-phase criterion, at least in this case, fails. We argue that the observed proximity between the numerical and the structural instability in 3D originates from the enhanced structure present in the fluid but, in view of the arbitrary dependence on the iteration scheme, it seems difficult to relate the numerical stability analysis to a robust one-phase criterion for predicting a thermodynamic phase transition.
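The quantity at the heart of this analysis, the Lyapunov exponent of a fixed-point (Picard) iteration, can be sketched with a scalar toy map. This is only an illustration of the estimate, not the actual HNC/PY iteration operator:

```python
import math

def lyapunov_exponent(f, dfdx, x0, n_iter=2000, n_burn=200):
    """Estimate the Lyapunov exponent of the iteration x_{k+1} = f(x_k)
    as the orbit average of log|f'(x_k)| after a burn-in transient.
    A negative value means perturbations shrink: the iteration is stable."""
    x = x0
    for _ in range(n_burn):
        x = f(x)
    total = 0.0
    for _ in range(n_iter):
        total += math.log(abs(dfdx(x)))
        x = f(x)
    return total / n_iter

# Toy stand-in for a Picard iteration: x -> cos(x) converges to x* ~ 0.739,
# and the exponent log|sin(x*)| is negative, signalling a stable iteration.
lam = lyapunov_exponent(math.cos, lambda x: -math.sin(x), x0=0.5)
print(lam)  # ≈ -0.395
```

In the paper's setting the analogous exponent is computed for the functional iteration on the pair correlation function; the criterion under scrutiny identifies the point where this exponent crosses zero with a structural instability of the fluid.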
The psychometric properties of the Portuguese version of the Personality Inventory for DSM-5.
Pires, Rute; Sousa Ferreira, Ana; Guedes, David
2017-10-01
The DSM-5 Section III proposes a hybrid dimensional-categorical model of conceptualizing personality and its disorders that includes assessment of impairments in personality functioning (Criterion A) and maladaptive personality traits (Criterion B). The Personality Inventory for DSM-5 (PID-5) is a new dimensional tool, composed of 220 items organized into 25 facets that delineate five higher-order domains of clinically relevant personality differences, and was developed to operationalize the DSM-5 model of pathological personality traits. The current studies address the internal consistency (Study 1), the test-retest reliability (Study 2) and the criterion validity (Studies 3 and 4) of the Portuguese version of the PID-5 in samples of native-speaking psychology students. Results indicated good internal consistency reliabilities and good temporal stability reliabilities for the majority of the PID-5 traits. The correlational pattern of the PID-5 traits with two measures of personality was in accordance with theoretical expectations and showed its concurrent validity. © 2017 Scandinavian Psychological Associations and John Wiley & Sons Ltd.
Audio visual speech source separation via improved context dependent association model
NASA Astrophysics Data System (ADS)
Kazemi, Alireza; Boostani, Reza; Sobhanmanesh, Fariborz
2014-12-01
In this paper, we exploit the non-linear relation between a speech source and its associated lip video as a source of extra information to propose an improved audio-visual speech source separation (AVSS) algorithm. The audio-visual association is modeled using a neural associator which estimates the visual lip parameters from a temporal context of acoustic observation frames. We define an objective function based on the mean square error (MSE) between estimated and target visual parameters. This function is minimized to estimate the de-mixing vector/filters that separate the relevant source from linear instantaneous or time-domain convolutive mixtures. We also propose a hybrid criterion which uses AV coherency together with kurtosis as a non-Gaussianity measure. Experimental results are presented and compared in terms of visually relevant speech detection accuracy and output signal-to-interference ratio (SIR) of source separation. The suggested audio-visual model significantly improves relevant speech classification accuracy compared to the existing GMM-based model, and the proposed AVSS algorithm improves the speech separation quality compared to reference ICA- and AVSS-based methods.
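The kurtosis term of the hybrid criterion can be sketched as below. The function names, the plain MSE stand-in for the AV coherency term, and the combination weight are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

def excess_kurtosis(s):
    """Excess kurtosis: zero for Gaussian data, large and positive for spiky
    (super-Gaussian) signals such as speech; used as a non-Gaussianity measure."""
    s = (s - s.mean()) / s.std()
    return np.mean(s ** 4) - 3.0

def hybrid_criterion(estimated_visual, target_visual, separated, weight=0.5):
    """Hypothetical hybrid objective: low AV mismatch (MSE between estimated and
    target visual lip parameters) plus high non-Gaussianity of the separated
    source. Minimizing it favours de-mixing vectors that are both visually
    coherent and speech-like. The weight is illustrative."""
    mse = np.mean((estimated_visual - target_visual) ** 2)
    return mse - weight * abs(excess_kurtosis(separated))

rng = np.random.default_rng(0)
gauss = rng.normal(size=10000)
spiky = rng.laplace(size=10000)   # super-Gaussian, like speech
print(excess_kurtosis(gauss))     # ≈ 0
print(excess_kurtosis(spiky))     # ≈ 3 (excess kurtosis of a Laplace density)
```

The kurtosis term is what lets the criterion keep discriminating when the visual association alone is ambiguous, since speech is strongly super-Gaussian while typical mixtures are closer to Gaussian.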
Element analysis: a wavelet-based method for analysing time-localized events in noisy time series.
Lilly, Jonathan M
2017-04-01
A method is derived for the quantitative analysis of signals that are composed of superpositions of isolated, time-localized 'events'. Here, these events are taken to be well represented as rescaled and phase-rotated versions of generalized Morse wavelets, a broad family of continuous analytic functions. Analysing a signal composed of replicates of such a function using another Morse wavelet allows one to directly estimate the properties of events from the values of the wavelet transform at its own maxima. The distribution of events in general power-law noise is determined in order to establish significance based on an expected false detection rate. Finally, an expression for an event's 'region of influence' within the wavelet transform permits the formation of a criterion for rejecting spurious maxima due to numerical artefacts or other unsuitable events. Signals can then be reconstructed based on a small number of isolated points on the time/scale plane. This method, termed element analysis, is applied to the identification of long-lived eddy structures in ocean currents as observed by along-track measurements of sea surface elevation from satellite altimetry.
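The detection step, locating maxima of the wavelet transform modulus, can be sketched as follows. The Morlet-style wavelet, the scale grid, and the simple amplitude threshold are stand-ins for the paper's generalized Morse wavelets and noise-based significance criterion:

```python
import numpy as np

def morlet(t, scale, w0=6.0):
    """Morlet-like analytic wavelet at a given scale (a stand-in for the
    generalized Morse wavelets used in element analysis)."""
    x = t / scale
    return np.exp(1j * w0 * x) * np.exp(-0.5 * x ** 2) / np.sqrt(scale)

def cwt_maxima(signal, scales, dt=1.0, threshold=0.5):
    """Continuous wavelet transform by direct convolution; returns the
    (time index, scale index) of local maxima of |W| exceeding a fraction
    of the global maximum -- the candidate 'events'."""
    n = len(signal)
    t = (np.arange(n) - n // 2) * dt
    W = np.array([np.convolve(signal, np.conj(morlet(t, s))[::-1], mode='same')
                  for s in scales])
    mag = np.abs(W)
    peaks = []
    for i in range(1, len(scales) - 1):
        for j in range(1, n - 1):
            if (mag[i, j] >= threshold * mag.max()
                    and mag[i, j] == mag[i - 1:i + 2, j - 1:j + 2].max()):
                peaks.append((j, i))
    return peaks

# One isolated oscillatory event centred at sample 256: the strongest
# transform maximum should fall near that time index.
t = np.arange(512)
sig = np.cos(0.5 * (t - 256)) * np.exp(-0.5 * ((t - 256) / 20.0) ** 2)
peaks = cwt_maxima(sig, scales=np.arange(4, 40, 2.0))
```

In element analysis proper, the event's amplitude, scale, and phase are then read off directly from the transform value at each retained maximum, and the 'region of influence' test prunes maxima that belong to a neighbouring event.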
Ultra-stiff metallic glasses through bond energy density design.
Schnabel, Volker; Köhler, Mathias; Music, Denis; Bednarcik, Jozef; Clegg, William J; Raabe, Dierk; Schneider, Jochen M
2017-07-05
The elastic properties of crystalline metals scale with their valence electron density. Similar observations have been made for metallic glasses. However, for metallic glasses where covalent bonding predominates, such as metalloid metallic glasses, this relationship appears to break down. At present, the reasons for this are not understood. Using high-energy X-ray diffraction analysis of melt-spun and thin-film metallic glasses combined with density functional theory based molecular dynamics simulations, we show that the physical origin of the ultrahigh stiffness in both metalloid and non-metalloid metallic glasses is best understood in terms of the bond energy density. Using the bond energy density as a novel materials design criterion for ultra-stiff metallic glasses, we are able to predict a Co33.0Ta3.5B63.5 short-range ordered material by density functional theory based molecular dynamics simulations with a high bond energy density of 0.94 eV Å⁻³ and a bulk modulus of 263 GPa, which is 17% greater than that of the stiffest Co-B based metallic glasses reported in the literature.
Development and validation of criterion-referenced clinically relevant fitness standards for maintaining physical independence in later years.
Rikli, Roberta E; Jones, C Jessie
2013-04-01
To develop and validate criterion-referenced fitness standards for older adults that predict the level of capacity needed for maintaining physical independence into later life. The proposed standards were developed for use with a previously validated test battery for older adults, the Senior Fitness Test (Rikli, R. E., & Jones, C. J. (1999a). Development and validation of a functional fitness test for community-residing older adults. Journal of Aging and Physical Activity, 7, 129-161; Rikli, R. E., & Jones, C. J. (2001). Senior fitness test manual. Champaign, IL: Human Kinetics.). A criterion measure to assess physical independence was identified. Next, scores from a subset of 2,140 "moderate-functioning" older adults from a larger cross-sectional database, together with findings from longitudinal research on physical capacity and aging, were used as the basis for proposing fitness standards (performance cut points) associated with having the ability to function independently. Validity and reliability analyses were conducted to test the standards for their accuracy and consistency as predictors of physical independence. Performance standards are presented for men and women ages 60-94 indicating the level of fitness associated with remaining physically independent until late in life. Reliability and validity indicators for the standards ranged between .79 and .97. The proposed standards provide easy-to-use, previously unavailable methods for evaluating physical capacity in older adults relative to that associated with physical independence. Most importantly, the standards can be used in planning interventions that target specific areas of weakness, thus reducing risk for premature loss of mobility and independence.
Examining the Latent Structure of the Delis-Kaplan Executive Function System.
Karr, Justin E; Hofer, Scott M; Iverson, Grant L; Garcia-Barrera, Mauricio A
2018-05-04
The current study aimed to determine whether the Delis-Kaplan Executive Function System (D-KEFS) taps into three executive function factors (inhibition, shifting, fluency) and to assess the relationship between these factors and tests of executive-related constructs less often measured in latent variable research: reasoning, abstraction, and problem solving. Participants included 425 adults from the D-KEFS standardization sample (20-49 years old; 50.1% female; 70.1% White). Eight alternative measurement models were compared based on model fit, with test scores assigned a priori to three factors: inhibition (Color-Word Interference, Tower), shifting (Trail Making, Sorting, Design Fluency), and fluency (Verbal/Design Fluency). The Twenty Questions, Word Context, and Proverb Tests were predicted in separate structural models. The three-factor model fit the data well (CFI = 0.938; RMSEA = 0.047), although a two-factor model, with shifting and fluency merged, fit similarly well (CFI = 0.929; RMSEA = 0.048). A bifactor model fit best (CFI = 0.977; RMSEA = 0.032) and explained the most variance in shifting indicators, but rarely converged among 5,000 bootstrapped samples. When the three first-order factors simultaneously predicted the criterion variables, only shifting was uniquely predictive (p < .05; R2 = 0.246-0.408). The bifactor significantly predicted all three criterion variables (p < .001; R2 = 0.141-0.242). Results supported a three-factor D-KEFS model (i.e., inhibition, shifting, and fluency), although shifting and fluency were highly related (r = 0.696). The bifactor model showed superior fit, but converged less often than other models. Shifting best predicted tests of reasoning, abstraction, and problem solving. These findings support the validity of D-KEFS scores for measuring executive-related constructs and provide a framework through which clinicians can interpret D-KEFS results.
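The fit indices reported above (CFI, RMSEA) have standard closed forms based on model and baseline chi-square statistics. A minimal sketch with illustrative numbers, not the study's actual statistics:

```python
def rmsea(chi2, df, n):
    """Root mean square error of approximation from the model chi-square,
    its degrees of freedom, and the sample size (conventional formula)."""
    return (max(chi2 - df, 0.0) / (df * (n - 1))) ** 0.5

def cfi(chi2_m, df_m, chi2_b, df_b):
    """Comparative fit index: 1 minus the ratio of the fitted model's
    noncentrality to the baseline (independence) model's noncentrality."""
    d_m = max(chi2_m - df_m, 0.0)
    d_b = max(chi2_b - df_b, d_m)
    return 1.0 - d_m / d_b if d_b > 0 else 1.0

# Illustrative chi-square values only (not the D-KEFS study's statistics):
print(round(rmsea(chi2=150.0, df=80, n=425), 3))                      # ≈ 0.045
print(round(cfi(chi2_m=150.0, df_m=80, chi2_b=1200.0, df_b=105), 3))  # ≈ 0.936
```

Both indices reward a small chi-square relative to degrees of freedom; RMSEA values near or below 0.05 and CFI values above roughly 0.95 are the conventional benchmarks against which the models above were judged.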