ERIC Educational Resources Information Center
Beretvas, S. Natasha; Murphy, Daniel L.
2013-01-01
The authors assessed correct model identification rates of Akaike's information criterion (AIC), the corrected AIC (AICC), the consistent AIC (CAIC), Hannan and Quinn's information criterion (HQIC), and the Bayesian information criterion (BIC) for selecting among cross-classified random effects models. Performance of default values for the 5…
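For reference (not part of the cited study), the standard textbook forms of the five criteria named above are given below, where L is the maximized likelihood, k the number of estimated parameters, and n the sample size; particular software or simulation studies may use slightly different penalty constants.

```latex
\begin{align*}
\mathrm{AIC}  &= -2\ln L + 2k \\
\mathrm{AICC} &= -2\ln L + 2k + \frac{2k(k+1)}{n-k-1} \\
\mathrm{CAIC} &= -2\ln L + k(\ln n + 1) \\
\mathrm{HQIC} &= -2\ln L + 2k\ln(\ln n) \\
\mathrm{BIC}  &= -2\ln L + k\ln n
\end{align*}
```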
NASA Astrophysics Data System (ADS)
Alahmadi, F.; Rahman, N. A.; Abdulrazzak, M.
2014-09-01
Rainfall frequency analysis is an essential tool for the design of water-related infrastructure. It can be used to estimate the magnitude of extreme rainfall events associated with a given frequency of occurrence, and hence future flood magnitudes. This study analyses the application of a rainfall partial duration series (PDS) in the fast-growing city of Madinah, located in the western part of Saudi Arabia. Several statistical distributions were applied (Normal, Log Normal, Extreme Value type I, Generalized Extreme Value, Pearson Type III, and Log Pearson Type III) and their parameters were estimated using the L-moments method. Different model selection criteria were also applied: the Akaike Information Criterion (AIC), the corrected Akaike Information Criterion (AICc), the Bayesian Information Criterion (BIC) and the Anderson-Darling Criterion (ADC). The analysis indicated that the Generalized Extreme Value distribution provides the best fit to the Madinah partial duration daily rainfall series. The outcome of such an evaluation can contribute toward better design criteria for flood management, especially flood protection measures.
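A minimal sketch (not the study's code) of this kind of comparison: candidate distributions are fitted to a rainfall series and ranked by AIC, AICc and BIC. SciPy's maximum-likelihood fit is used here instead of L-moments, and the input file name is hypothetical.

```python
# Hedged sketch: fit candidate distributions to a partial-duration rainfall series
# and compare them with AIC, AICc and BIC (the study estimated parameters by L-moments;
# MLE via SciPy is used here as a stand-in).
import numpy as np
from scipy import stats

rain = np.loadtxt("madinah_pds_mm.txt")   # hypothetical file of PDS rainfall depths
n = len(rain)

candidates = {
    "Normal": stats.norm,
    "Log-Normal": stats.lognorm,
    "Gumbel (EV-I)": stats.gumbel_r,
    "GEV": stats.genextreme,
    "Pearson III": stats.pearson3,
}

for name, dist in candidates.items():
    params = dist.fit(rain)                       # MLE parameter estimates
    k = len(params)                               # number of fitted parameters
    loglik = np.sum(dist.logpdf(rain, *params))   # maximized log-likelihood
    aic = 2 * k - 2 * loglik
    aicc = aic + 2 * k * (k + 1) / (n - k - 1)
    bic = k * np.log(n) - 2 * loglik
    print(f"{name:14s}  AIC={aic:8.1f}  AICc={aicc:8.1f}  BIC={bic:8.1f}")
```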
Saha, Tulshi D; Compton, Wilson M; Chou, S Patricia; Smith, Sharon; Ruan, W June; Huang, Boji; Pickering, Roger P; Grant, Bridget F
2012-04-01
Prior research has demonstrated the dimensionality of alcohol, nicotine and cannabis use disorder criteria. The purpose of this study was to examine the unidimensionality of DSM-IV cocaine, amphetamine and prescription drug abuse and dependence criteria and to determine the impact of eliminating the legal problems criterion on the information value of the aggregate criteria. Factor analyses and Item Response Theory (IRT) analyses were used to explore the unidimensionality and psychometric properties of the illicit drug use criteria using a large representative sample of the U.S. population. All illicit drug abuse and dependence criteria formed unidimensional latent traits. For amphetamines, cocaine, sedatives, tranquilizers and opioids, IRT models without the legal problems criterion fit better than models with that criterion, and there were no differences in the information value of the IRT models with and without the legal problems criterion, supporting the elimination of that criterion. Consistent with findings for alcohol, nicotine and cannabis, the amphetamine, cocaine, sedative, tranquilizer and opioid abuse and dependence criteria reflect underlying unitary dimensions of severity. The legal problems criterion associated with each of these substance use disorders can be eliminated with no loss in informational value and with the advantage of parsimony. Taken together, these findings support the changes to substance use disorder diagnoses recommended by the American Psychiatric Association's DSM-5 Substance and Related Disorders Workgroup. Published by Elsevier Ireland Ltd.
Kerridge, Bradley T.; Saha, Tulshi D.; Smith, Sharon; Chou, Patricia S.; Pickering, Roger P.; Huang, Boji; Ruan, June W.; Pulay, Attila J.
2012-01-01
Background: Prior research has demonstrated the dimensionality of Diagnostic and Statistical Manual of Mental Disorders - Fourth Edition (DSM-IV) alcohol, nicotine, cannabis, cocaine and amphetamine abuse and dependence criteria. The purpose of this study was to examine the dimensionality of hallucinogen and inhalant/solvent abuse and dependence criteria. In addition, we assessed the impact of elimination of the legal problems abuse criterion on the information value of the aggregate abuse and dependence criteria, another proposed change for DSM-IV currently lacking empirical justification. Methods: Factor analyses and item response theory (IRT) analyses were used to explore the unidimensionality and psychometric properties of hallucinogen and inhalant/solvent abuse and dependence criteria using a large representative sample of the United States (U.S.) general population. Results: Hallucinogen and inhalant/solvent abuse and dependence criteria formed unidimensional latent traits. For both substances, IRT models without the legal problems abuse criterion demonstrated better fit than the corresponding models with the legal problems abuse criterion. Further, there were no differences in the information value of the IRT models with and without the legal problems abuse criterion, supporting the elimination of that criterion. No bias in the new diagnoses was observed by sex, age or race-ethnicity. Conclusion: Consistent with findings for alcohol, nicotine, cannabis, cocaine and amphetamine abuse and dependence criteria, hallucinogen and inhalant/solvent criteria reflect underlying dimensions of severity. The legal problems criterion associated with each of these substance use disorders can be eliminated with no loss in informational value and with the advantage of parsimony. Taken together, these findings support the changes to substance use disorder diagnoses recommended by the DSM-V Substance and Related Disorders Workgroup, that is, combining DSM-IV abuse and dependence criteria and eliminating the legal problems abuse criterion. PMID:21621334
A stopping criterion for the iterative solution of partial differential equations
NASA Astrophysics Data System (ADS)
Rao, Kaustubh; Malan, Paul; Perot, J. Blair
2018-01-01
A stopping criterion for iterative solution methods is presented that accurately estimates the solution error using low computational overhead. The proposed criterion uses information from prior solution changes to estimate the error. When the solution changes are noisy or stagnating it reverts to a less accurate but more robust, low-cost singular value estimate to approximate the error given the residual. This estimator can also be applied to iterative linear matrix solvers such as Krylov subspace or multigrid methods. Examples of the stopping criterion's ability to accurately estimate the non-linear and linear solution error are provided for a number of different test cases in incompressible fluid dynamics.
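A minimal sketch, not the paper's estimator: one common way to turn "prior solution changes" into an error estimate is to assume roughly geometric convergence with contraction factor rho, so that the remaining error is about rho/(1-rho) times the latest update; when the updates are noisy or stagnating, the caller falls back to a residual-based bound, in the spirit of the criterion described above.

```python
# Hedged sketch of an update-history-based error estimate (illustrative only).
import numpy as np

def estimated_error(x_prev2, x_prev, x_curr):
    d1 = np.linalg.norm(x_prev - x_prev2)   # previous update size
    d2 = np.linalg.norm(x_curr - x_prev)    # latest update size
    if d1 == 0.0 or d2 >= d1:               # noisy or stagnating: no safe estimate
        return None                         # caller reverts to a residual-based bound
    rho = d2 / d1                           # estimated contraction factor
    return rho / (1.0 - rho) * d2           # extrapolated remaining solution error

def converged(history, tol):
    """history: list of the three most recent iterates (NumPy arrays)."""
    if len(history) < 3:
        return False
    err = estimated_error(*history[-3:])
    return err is not None and err < tol
```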
[Information value of "additional tasks" method to evaluate pilot's work load].
Gorbunov, V V
2005-01-01
"Additional task" method was used to evaluate pilot's work load in prolonged flight. Calculated through durations of latent periods of motor responses, quantitative criterion of work load is more informative for objective evaluation of pilot's involvement in his piloting functions rather than of other registered parameters.
Automatic discovery of optimal classes
NASA Technical Reports Server (NTRS)
Cheeseman, Peter; Stutz, John; Freeman, Don; Self, Matthew
1986-01-01
A criterion, based on Bayes' theorem, is described that defines the optimal set of classes (a classification) for a given set of examples. This criterion is transformed into an equivalent minimum message length criterion with an intuitive information interpretation. The criterion does not require that the number of classes be specified in advance; this is determined by the data. The minimum message length criterion includes the message length required to describe the classes, so there is a built-in bias against adding new classes unless they lead to a reduction in the message length required to describe the data. Unfortunately, the search space of possible classifications is too large to search exhaustively, so heuristic search methods, such as simulated annealing, are applied. Tutored learning and probabilistic prediction in particular cases are an important indirect result of optimal class discovery. Extensions to the basic class induction program include the ability to combine category and real-valued data, hierarchical classes, independent classifications, and deciding for each class which attributes are relevant.
Analysis of the observed and intrinsic durations of Swift/BAT gamma-ray bursts
NASA Astrophysics Data System (ADS)
Tarnopolski, Mariusz
2016-07-01
The duration distributions of 947 GRBs observed by Swift/BAT, and of a subsample of 347 events with measured redshift (which allows the durations to be examined in both the observer and rest frames), are analyzed. Using a maximum log-likelihood method, mixtures of two and three standard Gaussians are fitted to each sample, and the adequate model is chosen based on the difference in the log-likelihoods, the Akaike information criterion and the Bayesian information criterion. It is found that a two-Gaussian mixture is a better description than a three-Gaussian one, and that the presumed intermediate-duration class is unlikely to be present in the Swift duration data.
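A minimal sketch of this kind of comparison (not the paper's code): one-dimensional Gaussian mixtures with two and three components are fitted to log-durations and compared by AIC and BIC; scikit-learn's EM-based GaussianMixture is used as a stand-in, and the input file name is hypothetical.

```python
# Hedged sketch: compare 2- vs 3-component Gaussian mixtures for log T90 durations.
import numpy as np
from sklearn.mixture import GaussianMixture

log_t90 = np.log10(np.loadtxt("swift_t90.txt"))   # hypothetical file of T90 durations [s]
X = log_t90.reshape(-1, 1)

for k in (2, 3):
    gm = GaussianMixture(n_components=k, n_init=10, random_state=0).fit(X)
    print(f"{k} components: logL={gm.score(X) * len(X):9.2f}  "
          f"AIC={gm.aic(X):9.2f}  BIC={gm.bic(X):9.2f}")
# The fit with the lower AIC/BIC is preferred; little or no improvement from the
# third component argues against a distinct intermediate-duration class.
```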
Gao, Yingbin; Kong, Xiangyu; Zhang, Huihui; Hou, Li'an
2017-05-01
Minor components (MCs) play an important role in signal processing and data analysis, so developing MC extraction algorithms is valuable work. Based on the concepts of weighted subspaces and optimization theory, a weighted information criterion is proposed for searching for the optimum solution of a linear neural network. This information criterion exhibits a unique global minimum, attained if and only if the state matrix is composed of the desired MCs of the autocorrelation matrix of the input signal. By using the gradient ascent method and the recursive least squares (RLS) method, two algorithms are developed for extracting multiple MCs. The global convergence of the proposed algorithms is also analyzed by the Lyapunov method. The proposed algorithms can extract multiple MCs in parallel and have an advantage in dealing with high-dimensional matrices. Since the weighting matrix does not require an accurate value, it facilitates the system design of the proposed algorithms for practical applications. The speed and computational advantages of the proposed algorithms are verified through simulations. Copyright © 2017 Elsevier Ltd. All rights reserved.
James Howard; Rebecca Westby; Kenneth Skog
2010-01-01
This report provides a wide range of specific and statistical information on forest products markets in terms of production, trade, prices and consumption, employment, and other factors influencing forest sustainability.
ERIC Educational Resources Information Center
Vrieze, Scott I.
2012-01-01
This article reviews the Akaike information criterion (AIC) and the Bayesian information criterion (BIC) in model selection and the appraisal of psychological theory. The focus is on latent variable models, given their growing use in theory testing and construction. Theoretical statistical results in regression are discussed, and more important…
Frank, Matthias; Bockholdt, Britta; Peters, Dieter; Lange, Joern; Grossjohann, Rico; Ekkernkamp, Axel; Hinz, Peter
2011-05-20
Blunt ballistic impact trauma is a current research topic due to the widespread use of kinetic energy munitions in law enforcement. In the civilian setting, an automatic dummy launcher has recently been identified as source of blunt impact trauma. However, there is no data on the injury risk of conventional dummy launchers. It is the aim of this investigation to predict potential impact injury to the human head and chest on the basis of the Blunt Criterion which is an energy based blunt trauma model to assess vulnerability to blunt weapons, projectile impacts, and behind-armor-exposures. Based on experimentally investigated kinetic parameters, the injury risk of two commercially available gundog retrieval devices (Waidwerk Telebock, Germany; Turner Richards, United Kingdom) was assessed using the Blunt Criterion trauma model for blunt ballistic impact trauma to the head and chest. Assessing chest impact, the Blunt Criterion values for both shooting devices were higher than the critical Blunt Criterion value of 0.37, which represents a 50% risk of sustaining a thoracic skeletal injury of AIS 2 (moderate injury) or AIS 3 (serious injury). The maximum Blunt Criterion value (1.106) was higher than the Blunt Criterion value corresponding to AIS 4 (severe injury). With regard to the impact injury risk to the head, both devices surpass by far the critical Blunt Criterion value of 1.61, which represents a 50% risk of skull fracture. Highest Blunt Criterion values were measured for the Turner Richards Launcher (2.884) corresponding to a risk of skull fracture of higher than 80%. Even though the classification as non-guns by legal authorities might implicate harmlessness, the Blunt Criterion trauma model illustrates the hazardous potential of these shooting devices. The Blunt Criterion trauma model links the laboratory findings to the impact injury patterns of the head and chest that might be expected. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
Variable selection with stepwise and best subset approaches
2016-01-01
While purposeful selection is performed partly by software and partly by hand, the stepwise and best subset approaches are performed automatically by software. Two R functions, stepAIC() and bestglm(), are well designed for stepwise and best subset regression, respectively. The stepAIC() function begins with a full or null model, and the method for stepwise regression can be specified in the direction argument with the character values “forward”, “backward” and “both”. The bestglm() function begins with a data frame containing the explanatory variables and the response variable; the response variable should be in the last column. A variety of goodness-of-fit criteria can be specified in the IC argument. The Bayesian information criterion (BIC) usually results in a more parsimonious model than the Akaike information criterion. PMID:27162786
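The abstract describes R's stepAIC() and bestglm(); as a rough Python analogue (a hedged sketch, not those functions), forward stepwise selection by AIC can be written with statsmodels as below. Replacing .aic with .bic gives the more parsimonious selections noted above.

```python
# Hedged sketch: forward stepwise variable selection by AIC (Python analogue of stepAIC).
import numpy as np
import statsmodels.api as sm

def forward_step_aic(X, y):
    """X: pandas DataFrame of candidate predictors; y: response series."""
    selected, remaining = [], list(X.columns)
    best_aic = sm.OLS(y, np.ones(len(y))).fit().aic        # intercept-only (null) model
    improved = True
    while improved and remaining:
        improved = False
        scores = [(sm.OLS(y, sm.add_constant(X[selected + [c]])).fit().aic, c)
                  for c in remaining]
        aic, cand = min(scores)
        if aic < best_aic:                                  # keep the variable that lowers AIC most
            best_aic = aic
            selected.append(cand)
            remaining.remove(cand)
            improved = True
    return selected, best_aic
```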
González-Moreno, A; Bordera, S; Leirana-Alcocer, J; Delfín-González, H
2012-06-01
The biology and behavior of insects are strongly influenced by environmental conditions such as temperature and precipitation. Because some of these factors vary within the day, they may cause variation in insect diurnal flight activity, but scant information exists on the issue. The aim of this work was to describe the patterns of diurnal variation in the abundance of Ichneumonoidea and their relation to relative humidity, temperature, light intensity, and wind speed. The study site was a tropical dry forest at Ría Lagartos Biosphere Reserve, Mexico, where correlations between environmental factors (relative humidity, temperature, light, and wind speed) and the abundance of Ichneumonidae and Braconidae (Hymenoptera: Ichneumonoidea) were estimated. The best regression model for explaining abundance variation was selected using the second-order Akaike Information Criterion. The optimum values of temperature, humidity, and light for flight activity of both families were also estimated. Ichneumonid and braconid abundances were significantly correlated with relative humidity, temperature, and light intensity; ichneumonids also showed significant correlations with wind speed. The second-order Akaike Information Criterion suggests that in tropical dry conditions, relative humidity is more important than temperature for Ichneumonoidea diurnal activity. Ichneumonid wasps showed selection toward intermediate values of relative humidity and temperature and the lowest wind speeds, while Braconidae selected for low values of relative humidity. For light intensity, braconids presented a positive selection for moderately high values.
NASA Astrophysics Data System (ADS)
Kotchasarn, Chirawat; Saengudomlert, Poompat
We investigate the problem of joint transmitter and receiver power allocation with the minimax mean square error (MSE) criterion for uplink transmissions in a multi-carrier code division multiple access (MC-CDMA) system. The objective of power allocation is to minimize the maximum MSE among all users each of which has limited transmit power. This problem is a nonlinear optimization problem. Using the Lagrange multiplier method, we derive the Karush-Kuhn-Tucker (KKT) conditions which are necessary for a power allocation to be optimal. Numerical results indicate that, compared to the minimum total MSE criterion, the minimax MSE criterion yields a higher total MSE but provides a fairer treatment across the users. The advantages of the minimax MSE criterion are more evident when we consider the bit error rate (BER) estimates. Numerical results show that the minimax MSE criterion yields a lower maximum BER and a lower average BER. We also observe that, with the minimax MSE criterion, some users do not transmit at full power. For comparison, with the minimum total MSE criterion, all users transmit at full power. In addition, we investigate robust joint transmitter and receiver power allocation where the channel state information (CSI) is not perfect. The CSI error is assumed to be unknown but bounded by a deterministic value. This problem is formulated as a semidefinite programming (SDP) problem with bilinear matrix inequality (BMI) constraints. Numerical results show that, with imperfect CSI, the minimax MSE criterion also outperforms the minimum total MSE criterion in terms of the maximum and average BERs.
Cysewski, Piotr; Przybyłek, Maciej
2017-09-30
A new theoretical screening procedure is proposed for the selection of potential cocrystal formers able to enhance the dissolution rates of drugs. The procedure relies on a training set comprising 102 positive and 17 negative cases of cocrystals found in the literature. Although the only available data were of a qualitative character, a statistical analysis using binary classification allowed quantitative criteria to be formulated. Among the 3679 molecular descriptors considered, the relative value of the lipoaffinity index, expressed as the difference between the values calculated for the active compound and the excipient, was found to be the most appropriate measure for discriminating between positive and negative cases. Assuming 5% precision, the applied classification criterion led to the inclusion of 70% of positive cases in the final prediction. Since the lipoaffinity index is a molecular descriptor computed using only 2D information about a chemical structure, its estimation is straightforward and computationally inexpensive. The inclusion of an additional criterion quantifying the cocrystallization probability leads to the conjunction of criteria H_mix < -0.18 and ΔLA > 3.61, allowing for the identification of dissolution rate enhancers. The screening procedure was applied to find the most promising coformers of drugs such as Iloperidone, Ritonavir, Carbamazepine and Enthenzamide. Copyright © 2017 Elsevier B.V. All rights reserved.
Platzer, Christine; Bröder, Arndt; Heck, Daniel W
2014-05-01
Decision situations are typically characterized by uncertainty: Individuals do not know the values of different options on a criterion dimension. For example, consumers do not know which is the healthiest of several products. To make a decision, individuals can use information about cues that are probabilistically related to the criterion dimension, such as sugar content or the concentration of natural vitamins. In two experiments, we investigated how the accessibility of cue information in memory affects which decision strategy individuals rely on. The accessibility of cue information was manipulated by means of a newly developed paradigm, the spatial-memory-cueing paradigm, which is based on a combination of the looking-at-nothing phenomenon and the spatial-cueing paradigm. The results indicated that people use different decision strategies, depending on the validity of easily accessible information. If the easily accessible information is valid, people stop information search and decide according to a simple take-the-best heuristic. If, however, information that comes to mind easily has a low predictive validity, people are more likely to integrate all available cue information in a compensatory manner.
Precoded spatial multiplexing MIMO system with spatial component interleaver.
Gao, Xiang; Wu, Zhanji
In this paper, the performance of precoded bit-interleaved coded modulation (BICM) spatial multiplexing multiple-input multiple-output (MIMO) system with spatial component interleaver is investigated. For the ideal precoded spatial multiplexing MIMO system with spatial component interleaver based on singular value decomposition (SVD) of the MIMO channel, the average pairwise error probability (PEP) of coded bits is derived. Based on the PEP analysis, the optimum spatial Q-component interleaver design criterion is provided to achieve the minimum error probability. For the limited feedback precoded proposed scheme with linear zero forcing (ZF) receiver, in order to minimize a bound on the average probability of a symbol vector error, a novel effective signal-to-noise ratio (SNR)-based precoding matrix selection criterion and a simplified criterion are proposed. Based on the average mutual information (AMI)-maximization criterion, the optimal constellation rotation angles are investigated. Simulation results indicate that the optimized spatial multiplexing MIMO system with spatial component interleaver can achieve significant performance advantages compared to the conventional spatial multiplexing MIMO system.
Information hidden in the velocity distribution of ions and the exact kinetic Bohm criterion
NASA Astrophysics Data System (ADS)
Tsankov, Tsanko V.; Czarnetzki, Uwe
2017-05-01
Non-equilibrium distribution functions of electrons and ions play an important role in plasma physics. A prominent example is the kinetic Bohm criterion. Since its first introduction it has been controversial for theoretical reasons and due to the lack of experimental data, in particular on the ion distribution function. Here we resolve the theoretical as well as the experimental difficulties by an exact solution of the kinetic Boltzmann equation including charge exchange collisions and ionization. This also allows for the first time non-invasive measurement of spatially resolved ion velocity distributions, absolute values of the ion and electron densities, temperatures, and mean energies as well as the electric field and the plasma potential in the entire plasma. The non-invasive access to the spatially resolved distribution functions of electrons and ions is applied to the problem of the kinetic Bohm criterion. Theoretically a so far missing term in the criterion is derived and shown to be of key importance. With the new term the validity of the kinetic criterion at high collisionality and its agreement with the fluid picture are restored. All findings are supported by experimental data, theory and a numerical model with excellent agreement throughout.
The limited use of the fluency heuristic: Converging evidence across different procedures.
Pohl, Rüdiger F; Erdfelder, Edgar; Michalkiewicz, Martha; Castela, Marta; Hilbig, Benjamin E
2016-10-01
In paired comparisons based on which of two objects has the larger criterion value, decision makers could use the subjectively experienced difference in retrieval fluency of the objects as a cue. According to the fluency heuristic (FH) theory, decision makers use fluency (as indexed by recognition speed) as the only cue for pairs of recognized objects, and infer that the object retrieved more speedily has the larger criterion value (ignoring all other cues and information). Model-based analyses, however, have previously revealed that only a small portion of such inferences are indeed based on fluency alone. In the majority of cases, other information enters the decision process. However, due to the specific experimental procedures, the estimates of FH use are potentially biased: some procedures may have led to an overestimated and others to an underestimated, or even to actually reduced, FH use. In the present article, we discuss and test the impacts of such procedural variations by reanalyzing 21 data sets. The results show noteworthy consistency across the procedural variations, revealing low FH use. We discuss potential explanations and implications of this finding.
Stochastic isotropic hyperelastic materials: constitutive calibration and model selection
NASA Astrophysics Data System (ADS)
Mihai, L. Angela; Woolley, Thomas E.; Goriely, Alain
2018-03-01
Biological and synthetic materials often exhibit intrinsic variability in their elastic responses under large strains, owing to microstructural inhomogeneity or when elastic data are extracted from viscoelastic mechanical tests. For these materials, although hyperelastic models calibrated to mean data are useful, stochastic representations accounting also for data dispersion carry extra information about the variability of material properties found in practical applications. We combine finite elasticity and information theories to construct homogeneous isotropic hyperelastic models with random field parameters calibrated to discrete mean values and standard deviations of either the stress-strain function or the nonlinear shear modulus, which is a function of the deformation, estimated from experimental tests. These quantities can take on different values, corresponding to possible outcomes of the experiments. As multiple models can be derived that adequately represent the observed phenomena, we apply Occam's razor by providing an explicit criterion for model selection based on Bayesian statistics. We then employ this criterion to select a model among competing models calibrated to experimental data for rubber and brain tissue under single or multiaxial loads.
Nicolas, Renaud; Sibon, Igor; Hiba, Bassem
2015-01-01
The diffusion-weighting-dependent attenuation of the MRI signal E(b) is extremely sensitive to microstructural features. The aim of this study was to determine which mathematical model of the E(b) signal most accurately describes it in the brain. The models compared were the monoexponential model, the stretched exponential model, the truncated cumulant expansion (TCE) model, the biexponential model, and the triexponential model. Acquisition was performed with nine b-values up to 2500 s/mm² in 12 healthy volunteers. The goodness-of-fit was studied with F-tests and with the Akaike information criterion. Tissue contrasts were differentiated with a multiple-comparison-corrected nonparametric analysis of variance. The F-test showed that the TCE model was better than the biexponential model in gray and white matter. The corrected Akaike information criterion showed that the TCE model has the best accuracy and produced the most reliable contrasts in white matter among all models studied. In conclusion, the TCE model was found to be the best model to infer the microstructural properties of brain tissue.
Sun, Min; Wong, David; Kronenfeld, Barry
2016-01-01
Despite conceptual and technological advancements in cartography over the decades, choropleth map design and classification fail to address a fundamental issue: estimates that are statistically indistinguishable may be assigned to different classes on maps, or vice versa. Recently, the class separability concept was introduced as a map classification criterion to evaluate the likelihood that estimates in two classes are statistically different. Unfortunately, choropleth maps created according to the separability criterion usually have highly unbalanced classes. To produce reasonably separable but more balanced classes, we propose a heuristic classification approach that considers not just the class separability criterion but also other classification criteria such as evenness and intra-class variability. A geovisual-analytic package was developed to support the heuristic mapping process, to evaluate the trade-off between relevant criteria and to select the most preferable classification. Class break values can be adjusted to improve the performance of a classification. PMID:28286426
CA-125 AUC as a predictor for epithelial ovarian cancer relapse.
Mano, António; Falcão, Amílcar; Godinho, Isabel; Santos, Jorge; Leitão, Fátima; de Oliveira, Carlos; Caramona, Margarida
2008-01-01
The aim of the present work was to evaluate the usefulness of the time-normalized CA-125 area under the curve (CA-125 AUC) for signalling epithelial ovarian cancer relapse. Data from one hundred and eleven patients were submitted to two different approaches based on increases in the CA-125 AUC to predict patient relapse. In Criterion A, the total time-normalized CA-125 AUC value (AUC(i)) was compared with the immediately previous one (AUC(i-1)) using the formula AUC(i) ≥ F * AUC(i-1) (several F values were tested) to find the increment most closely associated with patient relapse. In Criterion B, the total time-normalized CA-125 AUC was calculated and several cut-off values were correlated with the capacity to predict patient relapse. In Criterion A the best accuracy was achieved with a factor (F) of 1.25 (an increment of 25% over the previous status), while in Criterion B the best accuracies were achieved with cut-offs of 25, 50, 75 and 100 IU/mL. The mean lead time to relapse achieved with Criterion A was 181 days, while with Criterion B the mean lead times were, respectively, 131, 111, 63 and 11 days. Based on our results we believe that the conjugation and sequential application of both criteria in patient relapse detection is highly advisable. A rapid burst in the CA-125 AUC in asymptomatic patients should first be evaluated using Criterion A, with its high accuracy (0.85) and substantial mean lead time to relapse (181 days). If a negative answer is obtained, then Criterion B should be performed to confirm the absence of relapse.
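A minimal sketch of the two decision rules as described in the abstract (not the authors' code); auc is assumed to be the sequence of time-normalized CA-125 AUC values computed at successive visits.

```python
# Hedged sketch of Criterion A and Criterion B from the abstract.
def criterion_a(auc, f=1.25):
    """Flag relapse when AUC_i >= F * AUC_(i-1), i.e. a >=25% rise over the previous value."""
    return any(curr >= f * prev for prev, curr in zip(auc, auc[1:]))

def criterion_b(auc, cutoff=25.0):
    """Flag relapse when the time-normalized AUC exceeds a fixed cut-off (IU/mL)."""
    return any(value >= cutoff for value in auc)

def sequential_screen(auc):
    # Sequential use suggested by the authors: screen with Criterion A first,
    # then use Criterion B to confirm the absence of relapse.
    return criterion_a(auc) or criterion_b(auc)
```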
The stopping rules for winsorized tree
NASA Astrophysics Data System (ADS)
Ch'ng, Chee Keong; Mahat, Nor Idayu
2017-11-01
The winsorized tree is a modified tree-based classifier that investigates and handles outliers in all nodes during the construction of the tree. It overcomes the tedious process of constructing a classical tree, in which the splitting of branches and pruning proceed concurrently so that the constructed tree does not grow bushy. This mechanism is controlled by the proposed algorithm. In the winsorized tree, data are screened to identify outliers. If an outlier is detected, the value is neutralized using the winsorize approach. Both outlier identification and value neutralization are executed recursively in every node until a predetermined stopping criterion is met. The aim of this paper is to search for a significant stopping criterion to stop the tree from further splitting before overfitting. The results obtained from an experiment conducted on the Pima Indian dataset show that a node can produce its final successor nodes (leaves) when it has achieved about 70% information gain.
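A hedged sketch of the two ingredients named above, not the authors' algorithm: winsorizing a node's feature values before splitting, and measuring the information gain of the best split; the 70% figure is interpreted here (one possible reading) as the best split recovering about 70% of the node's entropy, after which the children are treated as leaves.

```python
# Hedged sketch: winsorized splitting with an information-gain-based stopping rule.
import numpy as np
from scipy.stats.mstats import winsorize

def entropy(y):
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log2(p))

def info_gain(x, y, threshold):
    """Information gain of splitting on x <= threshold."""
    left = x <= threshold
    return entropy(y) - (left.mean() * entropy(y[left]) +
                         (~left).mean() * entropy(y[~left]))

def split_node(x, y, stop_fraction=0.70):
    x = np.asarray(winsorize(x, limits=(0.05, 0.05)))   # neutralize outliers at this node
    candidates = np.unique(x)[:-1]
    if len(candidates) == 0:
        return None, True                               # nothing to split on: leaf
    best_t = max(candidates, key=lambda t: info_gain(x, y, t))
    # Assumed reading of the 70% rule: once the best split recovers ~70% of the
    # node's entropy, split once more and declare the children leaves.
    children_are_leaves = info_gain(x, y, best_t) >= stop_fraction * entropy(y)
    return best_t, children_are_leaves
```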
Adaptive Resource Utilization Prediction System for Infrastructure as a Service Cloud.
Zia Ullah, Qazi; Hassan, Shahzad; Khan, Gul Muhammad
2017-01-01
Infrastructure as a Service (IaaS) cloud provides resources as a service from a pool of compute, network, and storage resources. Cloud providers can manage their resource usage by predicting future usage demand from current and past usage patterns of resources. Resource usage prediction is of great importance for dynamic scaling of cloud resources to achieve efficiency in terms of cost and energy consumption while maintaining quality of service. The purpose of this paper is to present a real-time resource usage prediction system. The system takes real-time utilization of resources and feeds the utilization values into several buffers based on the type of resource and the size of the time span. The buffers are read by an R-language-based statistical system. The buffered data are checked to determine whether or not they follow a Gaussian distribution. If they follow a Gaussian distribution, an Autoregressive Integrated Moving Average (ARIMA) model is applied; otherwise an Autoregressive Neural Network (AR-NN) is applied. In the ARIMA process, a model is selected based on the minimum Akaike Information Criterion (AIC) value. Similarly, in the AR-NN process, the network with the lowest Network Information Criterion (NIC) value is selected. We have evaluated our system with real traces of CPU utilization from an IaaS cloud of one hundred and twenty servers.
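A minimal sketch of the ARIMA branch described above (not the paper's R implementation): a small grid of (p, d, q) orders is fitted with statsmodels and the model with the lowest AIC is kept; forecasts can then be produced from the returned results object.

```python
# Hedged sketch: select an ARIMA order by minimum AIC for a utilization series.
import itertools
import warnings
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

def best_arima_by_aic(series, max_p=3, max_d=1, max_q=3):
    best = (np.inf, None, None)
    for p, d, q in itertools.product(range(max_p + 1), range(max_d + 1), range(max_q + 1)):
        try:
            with warnings.catch_warnings():
                warnings.simplefilter("ignore")
                fit = ARIMA(series, order=(p, d, q)).fit()
            if fit.aic < best[0]:
                best = (fit.aic, (p, d, q), fit)
        except Exception:
            continue                      # skip orders that fail to converge
    return best                           # (aic, order, fitted results)

# usage: aic, order, fit = best_arima_by_aic(cpu_utilization); fit.forecast(steps=5)
```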
Prediction of Hot Tearing Using a Dimensionless Niyama Criterion
NASA Astrophysics Data System (ADS)
Monroe, Charles; Beckermann, Christoph
2014-08-01
The dimensionless form of the well-known Niyama criterion is extended to include the effect of applied strain. Under applied tensile strain, the pressure drop in the mushy zone is enhanced and pores grow beyond typical shrinkage porosity without deformation. This porosity growth can be expected to align perpendicular to the applied strain and to contribute to hot tearing. A model to capture this coupled effect of solidification shrinkage and applied strain on the mushy zone is derived. The dimensionless Niyama criterion can be used to determine the critical liquid fraction value below which porosity forms. This critical value is a function of alloy properties, solidification conditions, and strain rate. Once a dimensionless Niyama criterion value is obtained from thermal and mechanical simulation results, the corresponding shrinkage and deformation pore volume fractions can be calculated. The novelty of the proposed method lies in using the critical liquid fraction at the critical pressure drop within the mushy zone to determine the onset of hot tearing. The magnitude of pore growth due to shrinkage and deformation is plotted as a function of the dimensionless Niyama criterion for an Al-Cu alloy as an example. Furthermore, a typical hot tear "lambda"-shaped curve showing deformation pore volume as a function of alloy content is produced for two Niyama criterion values.
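For context (not taken from the abstract above), the classical Niyama criterion evaluates, near the end of solidification,

```latex
Ny = \frac{G}{\sqrt{\dot{T}}}
```

where G is the local thermal gradient and \(\dot{T}\) the cooling rate; low values indicate a risk of shrinkage porosity. The dimensionless form referred to above additionally folds in alloy properties, the critical pressure drop across the mushy zone and, in this extension, the applied strain rate.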
MMA, A Computer Code for Multi-Model Analysis
Poeter, Eileen P.; Hill, Mary C.
2007-01-01
This report documents the Multi-Model Analysis (MMA) computer code. MMA can be used to evaluate results from alternative models of a single system using the same set of observations for all models. As long as the observations, the observation weighting, and system being represented are the same, the models can differ in nearly any way imaginable. For example, they may include different processes, different simulation software, different temporal definitions (for example, steady-state and transient models could be considered), and so on. The multiple models need to be calibrated by nonlinear regression. Calibration of the individual models needs to be completed before application of MMA. MMA can be used to rank models and calculate posterior model probabilities. These can be used to (1) determine the relative importance of the characteristics embodied in the alternative models, (2) calculate model-averaged parameter estimates and predictions, and (3) quantify the uncertainty of parameter estimates and predictions in a way that integrates the variations represented by the alternative models. There is a lack of consensus on what model analysis methods are best, so MMA provides four default methods. Two are based on Kullback-Leibler information, and use the AIC (Akaike Information Criterion) or AICc (second-order-bias-corrected AIC) model discrimination criteria. The other two default methods are the BIC (Bayesian Information Criterion) and the KIC (Kashyap Information Criterion) model discrimination criteria. Use of the KIC criterion is equivalent to using the maximum-likelihood Bayesian model averaging (MLBMA) method. AIC, AICc, and BIC can be derived from Frequentist or Bayesian arguments. The default methods based on Kullback-Leibler information have a number of theoretical advantages, including that they tend to favor more complicated models as more data become available than do the other methods, which makes sense in many situations. Many applications of MMA will be well served by the default methods provided. To use the default methods, the only required input for MMA is a list of directories where the files for the alternate models are located. Evaluation and development of model-analysis methods are active areas of research. To facilitate exploration and innovation, MMA allows the user broad discretion to define alternatives to the default procedures. For example, MMA allows the user to (a) rank models based on model criteria defined using a wide range of provided and user-defined statistics in addition to the default AIC, AICc, BIC, and KIC criteria, (b) create their own criteria using model measures available from the code, and (c) define how each model criterion is used to calculate related posterior model probabilities. The default model criteria rate models based on model fit to observations, the number of observations and estimated parameters, and, for KIC, the Fisher information matrix. In addition, MMA allows the analysis to include an evaluation of estimated parameter values. This is accomplished by allowing the user to define unreasonable estimated parameter values or relative estimated parameter values. An example of the latter is that it may be expected that one parameter value will be less than another, as might be the case if two parameters represented the hydraulic conductivity of distinct materials such as fine and coarse sand. Models with parameter values that violate the user-defined conditions are excluded from further consideration by MMA.
Ground-water models are used as examples in this report, but MMA can be used to evaluate any set of models for which the required files have been produced. MMA needs to read files from a separate directory for each alternative model considered. The needed files are produced when using the Sensitivity-Analysis or Parameter-Estimation mode of UCODE_2005, or, possibly, the equivalent capability of another program. MMA is constructed using
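A minimal sketch, not MMA itself, of how posterior model probabilities are commonly derived from an information criterion (AIC, AICc, BIC or KIC values of competing models); the criterion values in the example are illustrative only.

```python
# Hedged sketch: information-criterion-based model weights ("Akaike weights").
import numpy as np

def model_probabilities(ic_values):
    ic = np.asarray(ic_values, dtype=float)
    delta = ic - ic.min()                  # differences from the best (lowest-IC) model
    w = np.exp(-0.5 * delta)
    return w / w.sum()                     # normalized weights, read as model probabilities

aic = {"model_A": 312.4, "model_B": 315.1, "model_C": 330.9}   # illustrative values
probs = dict(zip(aic, model_probabilities(list(aic.values()))))
# probs can then be used for model-averaged parameter estimates and predictions.
```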
Brookes, V J; Hernández-Jover, M; Neslo, R; Cowled, B; Holyoake, P; Ward, M P
2014-01-01
We describe stakeholder preference modelling using a combination of new and recently developed techniques to elicit criterion weights to incorporate into a multi-criteria decision analysis framework to prioritise exotic diseases for the pig industry in Australia. Australian pig producers were requested to rank disease scenarios comprising nine criteria in an online questionnaire. Parallel coordinate plots were used to visualise stakeholder preferences, which aided identification of two diverse groups of stakeholders - one group prioritised diseases with impacts on livestock, and the other group placed more importance on diseases with zoonotic impacts. Probabilistic inversion was used to derive weights for the criteria to reflect the values of each of these groups, modelling their choice using a weighted sum value function. Validation of weights against stakeholders' rankings for scenarios based on real diseases showed that the elicited criterion weights for the group who prioritised diseases with livestock impacts were a good reflection of their values, indicating that the producers were able to consistently infer impacts from the disease information in the scenarios presented to them. The highest weighted criteria for this group were attack rate and length of clinical disease in pigs, and market loss to the pig industry. The values of the stakeholders who prioritised zoonotic diseases were less well reflected by validation, indicating either that the criteria were inadequate to consistently describe zoonotic impacts, the weighted sum model did not describe stakeholder choice, or that preference modelling for zoonotic diseases should be undertaken separately from livestock diseases. Limitations of this study included sampling bias, as the group participating were not necessarily representative of all pig producers in Australia, and response bias within this group. The method used to elicit criterion weights in this study ensured value trade-offs between a range of potential impacts, and that the weights were implicitly related to the scale of measurement of disease criteria. Validation of the results of the criterion weights against real diseases - a step rarely used in MCDA - added scientific rigour to the process. The study demonstrated that these are useful techniques for elicitation of criterion weights for disease prioritisation by stakeholders who are not disease experts. Preference modelling for zoonotic diseases needs further characterisation in this context. Copyright © 2013 Elsevier B.V. All rights reserved.
Takahashi, M; Onozawa, S; Ogawa, R; Uesawa, Y; Echizen, H
2015-02-01
Clinical pharmacists face a challenging task when answering patients' questions about whether they can take specific drugs with grapefruit juice (GFJ) without risk of drug interaction. To identify the most practicable method for predicting clinically relevant changes in the plasma concentrations of orally administered drugs caused by the ingestion of GFJ, we compared the predictive performance of three methods using data obtained from the literature. We undertook a systematic search of drug interactions associated with GFJ using MEDLINE and the Metabolism & Transport Drug Interaction Database (DIDB version 4.0). We considered an elevation of the area under the plasma concentration-time curve (AUC) to 2 or more times the control value [AUC ratio (AUCR) ≥ 2.0] as a clinically significant interaction. Data from 74 drugs (194 data sets) were analysed. When the reported involvement of CYP3A in the metabolism of a drug of interest was adopted as a predictive criterion for GFJ-drug interaction, the performance assessed by the positive predictive value (PPV) was low (0.26), but that assessed by the negative predictive value (NPV) and sensitivity was high (1.00 for both). When a reported oral bioavailability of ≤ 0.1 was used as the criterion, the PPV improved to 0.50 with an acceptable NPV of 0.81, but sensitivity was reduced to 0.21. When the criterion was a reported AUCR of ≥ 10 after co-administration of a typical CYP3A inhibitor, the corresponding values were 0.64, 0.79 and 0.19, respectively. We consider that an oral bioavailability of ≤ 0.1, or an AUCR of ≥ 10 caused by a CYP3A inhibitor, may be a practical prediction criterion for avoiding significant interactions of a drug of interest with GFJ. Information about the involvement of CYP3A in metabolism should also be taken into account for drugs with narrow therapeutic ranges. © 2014 John Wiley & Sons Ltd.
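A minimal sketch of how the performance measures quoted above are computed from a 2x2 table of predicted versus observed interactions (observed positive meaning AUCR ≥ 2.0); the counts in the example call are hypothetical, not the study's data.

```python
# Hedged sketch: PPV, NPV, sensitivity and specificity from a confusion matrix.
def predictive_performance(tp, fp, fn, tn):
    ppv = tp / (tp + fp)           # positive predictive value
    npv = tn / (tn + fn)           # negative predictive value
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return ppv, npv, sensitivity, specificity

# hypothetical counts for one prediction criterion, e.g. oral bioavailability <= 0.1
print(predictive_performance(tp=10, fp=10, fn=30, tn=100))
```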
NASA Astrophysics Data System (ADS)
Uilhoorn, F. E.
2016-10-01
In this article, the stochastic modelling approach proposed by Box and Jenkins is treated as a mixed-integer nonlinear programming (MINLP) problem solved with a mesh adaptive direct search and a real-coded genetic class of algorithms. The aim is to estimate the real-valued parameters and non-negative integer, correlated structure of stationary autoregressive moving average (ARMA) processes. The maximum likelihood function of the stationary ARMA process is embedded in Akaike's information criterion and the Bayesian information criterion, whereas the estimation procedure is based on Kalman filter recursions. The constraints imposed on the objective function enforce stability and invertibility. The best ARMA model is regarded as the global minimum of the non-convex MINLP problem. The robustness and computational performance of the MINLP solvers are compared with brute-force enumeration. Numerical experiments are done for existing time series and one new data set.
Final Environmental Assessment of Military Service Station Privatization at Five AETC Installations
2013-10-01
[Indexed excerpt only: the assessment applies the National Register eligibility criteria, including Criterion C (distinctive characteristics) and Criterion D (having yielded, or being likely to yield, information important in prehistory or history), and recommends Building 2109 as not eligible under Criterion D.]
Fong, Ted C T; Ho, Rainbow T H
2015-01-01
The aim of this study was to reexamine the dimensionality of the widely used 9-item Utrecht Work Engagement Scale using the maximum likelihood (ML) approach and Bayesian structural equation modeling (BSEM) approach. Three measurement models (1-factor, 3-factor, and bi-factor models) were evaluated in two split samples of 1,112 health-care workers using confirmatory factor analysis and BSEM, which specified small-variance informative priors for cross-loadings and residual covariances. Model fit and comparisons were evaluated by posterior predictive p-value (PPP), deviance information criterion, and Bayesian information criterion (BIC). None of the three ML-based models showed an adequate fit to the data. The use of informative priors for cross-loadings did not improve the PPP for the models. The 1-factor BSEM model with approximately zero residual covariances displayed a good fit (PPP>0.10) to both samples and a substantially lower BIC than its 3-factor and bi-factor counterparts. The BSEM results demonstrate empirical support for the 1-factor model as a parsimonious and reasonable representation of work engagement.
Numerical and Experimental Validation of a New Damage Initiation Criterion
NASA Astrophysics Data System (ADS)
Sadhinoch, M.; Atzema, E. H.; Perdahcioglu, E. S.; van den Boogaard, A. H.
2017-09-01
Most commercial finite element software packages, like Abaqus, have a built-in coupled damage model where the damage evolution needs to be defined in terms of a single fracture energy value for all stress states. The Johnson-Cook criterion has been modified to be Lode-parameter dependent, and this Modified Johnson-Cook (MJC) criterion is used as a Damage Initiation Surface (DIS) in combination with the built-in Abaqus ductile damage model. An exponential damage evolution law has been used with a single fracture energy value. Ultimately, the simulated force-displacement curves are compared with experiments to validate the MJC criterion. Seven out of nine fracture experiments were predicted accurately. The limitations and accuracy of the failure predictions of the newly developed damage initiation criterion are briefly discussed.
Adaptive model training system and method
Bickford, Randall L; Palnitkar, Rahul M; Lee, Vo
2014-04-15
An adaptive model training system and method for filtering asset operating data values acquired from a monitored asset for selectively choosing asset operating data values that meet at least one predefined criterion of good data quality while rejecting asset operating data values that fail to meet at least the one predefined criterion of good data quality; and recalibrating a previously trained or calibrated model having a learned scope of normal operation of the asset by utilizing the asset operating data values that meet at least the one predefined criterion of good data quality for adjusting the learned scope of normal operation of the asset for defining a recalibrated model having the adjusted learned scope of normal operation of the asset.
Adaptive model training system and method
Bickford, Randall L; Palnitkar, Rahul M
2014-11-18
An adaptive model training system and method for filtering asset operating data values acquired from a monitored asset for selectively choosing asset operating data values that meet at least one predefined criterion of good data quality while rejecting asset operating data values that fail to meet at least the one predefined criterion of good data quality; and recalibrating a previously trained or calibrated model having a learned scope of normal operation of the asset by utilizing the asset operating data values that meet at least the one predefined criterion of good data quality for adjusting the learned scope of normal operation of the asset for defining a recalibrated model having the adjusted learned scope of normal operation of the asset.
ERIC Educational Resources Information Center
Ding, Cody S.; Davison, Mark L.
2010-01-01
Akaike's information criterion is suggested as a tool for evaluating fit and dimensionality in metric multidimensional scaling that uses least squares methods of estimation. This criterion combines the least squares loss function with the number of estimated parameters. Numerical examples are presented. The results from analyses of both simulation…
A Novel Statistical Analysis and Interpretation of Flow Cytometry Data
2013-03-31
[Indexed excerpt only: the report discusses two-stage estimation of model parameters, scaled residuals, and model comparison via information-theoretic criteria such as Akaike's Information Criterion (AIC).]
NASA Astrophysics Data System (ADS)
He, Lirong; Cui, Guangmang; Feng, Huajun; Xu, Zhihai; Li, Qi; Chen, Yueting
2015-03-01
Coded exposure photography makes motion de-blurring a well-posed problem. The integration pattern of light is modulated by opening and closing the shutter within the exposure time, changing the traditional shutter frequency spectrum into a wider frequency band in order to preserve more image information in the frequency domain. The method used to search for the optimal code is significant for coded exposure. In this paper, an improved criterion for optimal code searching is proposed by analyzing the relationship between the code length and the number of ones in the code, and by considering the effect of noise on code selection with an affine noise model. The optimal code is then obtained using a genetic search algorithm based on the proposed selection criterion. Experimental results show that the time consumed in searching for the optimal code decreases with the presented method, and that the restored image has better subjective quality and superior objective evaluation values.
Pak, Mehmet; Gülci, Sercan; Okumuş, Arif
2018-01-06
This study focuses on the geo-statistical assessment of spatial estimation models for forest crimes. Geographic information systems (GIS), widely used in the assessment of crime and crime-dependent variables, help in detecting forest crimes in rural regions. In this study, forest crimes (forest encroachment, illegal use, illegal timber logging, etc.) are assessed holistically, and modelling was performed with ten different independent variables in a GIS environment. The research areas are three Forest Enterprise Chiefs (Baskonus, Cinarpinar, and Hartlap) affiliated to the Kahramanmaras Forest Regional Directorate in Kahramanmaras. An estimation model was designed using the ordinary least squares (OLS) and geographically weighted regression (GWR) methods, which are often used in spatial association. Three different models were proposed in order to increase the accuracy of the estimation model: variables with a variance inflation factor (VIF) value lower than 7.5 were used in Model I, variables with a VIF lower than 4 in Model II, and variables with significant robust probability values in Model III. Afterwards, the model with the lowest corrected Akaike Information Criterion (AICc) and the highest R² value was selected as the comparison criterion. Consequently, Model III proved to be more accurate than the other models. For Model III, while AICc was 328,491 and R² was 0.634 for the OLS-3 model, AICc was 318,489 and R² was 0.741 for the GWR-3 model. In this respect, the use of GIS for combating forest crimes provides different scenarios and tangible information that will help take political and strategic measures.
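A minimal sketch of the collinearity screening step described above (not the authors' workflow): predictors are dropped one at a time until every remaining variance inflation factor is below the chosen cut-off (7.5 for Model I, 4 for Model II), after which the competing models can be compared by AICc and R².

```python
# Hedged sketch: VIF-based screening of candidate predictors with statsmodels.
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.outliers_influence import variance_inflation_factor

def screen_by_vif(X, cutoff):
    """X: pandas DataFrame of candidate predictors; drops the worst predictor until all VIFs pass."""
    X = X.copy()
    while X.shape[1] > 1:
        exog = sm.add_constant(X).values
        vifs = [variance_inflation_factor(exog, i + 1)    # index 0 is the constant
                for i in range(X.shape[1])]
        worst = int(np.argmax(vifs))
        if vifs[worst] < cutoff:
            break
        X = X.drop(columns=X.columns[worst])
    return X

# usage: X_model1 = screen_by_vif(X, cutoff=7.5); X_model2 = screen_by_vif(X, cutoff=4.0)
```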
Guidelines for Interpreting and Reporting Subscores
ERIC Educational Resources Information Center
Feinberg, Richard A.; Jurich, Daniel P.
2017-01-01
Recent research has proposed a criterion to evaluate the reportability of subscores. This criterion is a value-added ratio ("VAR"), where values greater than 1 suggest that the true subscore is better approximated by the observed subscore than by the total score. This research extends the existing literature by quantifying statistical…
Ercanli, İlker; Kahriman, Aydın
2015-03-01
We assessed the effect of stand structural diversity, described by the Shannon, improved Shannon, Simpson, McIntosh, Margalef, and Berger-Parker indices, on stand aboveground biomass (AGB), and developed statistical prediction models for the stand AGB values that include stand structural diversity indices and some stand attributes. The AGB prediction model including only stand attributes accounted for 85% of the total variance in AGB (R²), with an Akaike's information criterion (AIC) of 807.2407, a Bayesian information criterion (BIC) of 809.5397, a Schwarz Bayesian criterion (SBC) of 818.0426, and a root mean square error (RMSE) of 38.529 Mg. After the stand structural diversity was included in the model structure, considerable improvement was observed in statistical accuracy, with the model accounting for 97.5% of the total variance in AGB, with an AIC of 614.1819, a BIC of 617.1242, an SBC of 633.0853, and an RMSE of 15.8153 Mg. The predictive fitting results indicate that some indices describing stand structural diversity can be employed as significant independent variables to predict the AGB production of the Scotch pine stand. Further, including the stand diversity indices in the AGB prediction model together with the stand attributes provided important predictive contributions in estimating the total variance in AGB.
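A minimal sketch (not the study's code) of the diversity indices named above, computed from the abundance of each species or structural class in a stand; published variants of these indices differ slightly, and the "improved Shannon" form is omitted here.

```python
# Hedged sketch: common structural diversity indices from class abundance counts.
import numpy as np

def diversity_indices(counts):
    counts = np.asarray(counts, dtype=float)
    counts = counts[counts > 0]
    n, s = counts.sum(), len(counts)          # total individuals, number of classes
    p = counts / n
    return {
        "Shannon": -np.sum(p * np.log(p)),                              # H' = -sum p ln p
        "Simpson": 1.0 - np.sum(p ** 2),                                # 1 - D
        "Margalef": (s - 1) / np.log(n),                                # richness
        "Berger-Parker": counts.max() / n,                              # dominance
        "McIntosh": (n - np.sqrt(np.sum(counts ** 2))) / (n - np.sqrt(n)),
    }

print(diversity_indices([120, 45, 30, 5]))   # hypothetical class abundances
```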
Model selection with multiple regression on distance matrices leads to incorrect inferences.
Franckowiak, Ryan P; Panasci, Michael; Jarvis, Karl J; Acuña-Rodriguez, Ian S; Landguth, Erin L; Fortin, Marie-Josée; Wagner, Helene H
2017-01-01
In landscape genetics, model selection procedures based on Information Theoretic and Bayesian principles have been used with multiple regression on distance matrices (MRM) to test the relationship between multiple vectors of pairwise genetic, geographic, and environmental distance. Using Monte Carlo simulations, we examined the ability of model selection criteria based on Akaike's information criterion (AIC), its small-sample correction (AICc), and the Bayesian information criterion (BIC) to reliably rank candidate models when applied with MRM while varying the sample size. The results showed a serious problem: all three criteria exhibit a systematic bias toward selecting unnecessarily complex models containing spurious random variables and erroneously suggest a high level of support for the incorrectly ranked best model. These problems effectively increased with increasing sample size. The failure of AIC, AICc, and BIC was likely driven by the inflated sample size and different sum-of-squares partitioned by MRM, and the resulting effect on delta values. Based on these findings, we strongly discourage the continued application of AIC, AICc, and BIC for model selection with MRM.
Analysis of survival in breast cancer patients by using different parametric models
NASA Astrophysics Data System (ADS)
Enera Amran, Syahila; Asrul Afendi Abdullah, M.; Kek, Sie Long; Afiqah Muhamad Jamil, Siti
2017-09-01
In biomedical applications and clinical trials, right censoring often arises when studying time-to-event data: some individuals are still alive at the end of the study or are lost to follow-up at a certain time. Handling censored data properly is important in order to prevent biased information in the analysis. Therefore, this study analyzed right-censored data with three different parametric models: the exponential, Weibull and log-logistic models. Data on breast cancer patients from Hospital Sultan Ismail, Johor Bahru, from 30 December 2008 until 15 February 2017 were used to illustrate right-censored data. The variables included in this study are the survival time t of each breast cancer patient, the patient's age X1 and the treatment given to the patient X2. In order to determine the best parametric model for analyzing the survival of breast cancer patients, the performance of each model was compared based on the Akaike Information Criterion (AIC), Bayesian Information Criterion (BIC) and log-likelihood value using the statistical software R. When analyzing the breast cancer data, all three distributions showed consistency with the data, with the cumulative hazard function resembling a straight line through the origin. As a result, the log-logistic model was the best-fitting parametric model compared with the exponential and Weibull models, since it had the smallest AIC and BIC values and the largest log-likelihood.
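For readers who want to reproduce this kind of comparison outside R, the sketch below fits the same three parametric families to right-censored data by maximizing the censored log-likelihood (log density for events, log survival for censored observations) and then reports AIC, BIC and the log-likelihood. The data, starting values and optimizer settings are made up for illustration; they are not the hospital data used in the study.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical data: survival times in months and event indicator (1 = death, 0 = censored)
t = np.array([5., 8., 12., 20., 24., 30., 36., 48., 60., 72.])
d = np.array([1,  1,  0,   1,   0,   1,   1,   0,   1,   0])

def neg_loglik(params, model):
    # log f(t) contributes for events, log S(t) for right-censored observations
    if model == "exponential":
        lam, = np.exp(params)                    # rate > 0
        log_f = np.log(lam) - lam * t
        log_S = -lam * t
    elif model == "weibull":
        lam, k = np.exp(params)                  # scale, shape > 0
        z = (t / lam) ** k
        log_f = np.log(k / lam) + (k - 1) * np.log(t / lam) - z
        log_S = -z
    elif model == "loglogistic":
        a, b = np.exp(params)                    # scale, shape > 0
        z = (t / a) ** b
        log_f = np.log(b / a) + (b - 1) * np.log(t / a) - 2 * np.log1p(z)
        log_S = -np.log1p(z)
    return -(d * log_f + (1 - d) * log_S).sum()

for model, n_par in [("exponential", 1), ("weibull", 2), ("loglogistic", 2)]:
    fit = minimize(neg_loglik, x0=np.zeros(n_par), args=(model,), method="Nelder-Mead")
    logL = -fit.fun
    aic = 2 * n_par - 2 * logL
    bic = n_par * np.log(len(t)) - 2 * logL
    print(f"{model:12s}  logL={logL:7.2f}  AIC={aic:7.2f}  BIC={bic:7.2f}")
```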
Wu, Nan; Yuan, Suomao; Liu, Jiaqi; Chen, Jun; Fei, Qi; Liu, Sen; Su, Xinlin; Wang, Shengru; Zhang, Jianguo; Li, Shugang; Wang, Yipeng; Qiu, Guixing; Wu, Zhihong
2014-10-01
A genetic association study of single nucleotide polymorphisms (SNPs) of the LMX1A gene with congenital scoliosis (CS) in the Chinese Han population. To determine whether LMX1A genetic polymorphisms are associated with susceptibility to CS. CS is a lateral curvature of the spine due to congenital vertebral defects, whose exact genetic cause has not been well established. The LMX1A gene was suggested as a potential human candidate gene for CS; however, no genetic study of LMX1A in CS has been reported. We genotyped 13 SNPs of the LMX1A gene in 154 patients with CS and 144 controls matched for sex and age. After conducting the Hardy-Weinberg equilibrium test, the data for the 13 SNPs were analyzed for allelic and genotypic association with logistic regression analysis. Genotype-phenotype association and haplotype association analyses were also performed. The 13 SNPs of the LMX1A gene were in Hardy-Weinberg equilibrium in the controls but not in the cases. None of the allelic or genotypic frequencies of these SNPs differed significantly between the case and control groups (P > 0.05). However, the genotypic frequencies of rs1354510 and rs16841013 in the LMX1A gene were associated with CS predisposition in the unconditional logistic regression analysis (P = 0.02 and 0.018, respectively). Genotypic frequencies of 3 SNPs, rs6671290, rs1354510, and rs16841013, exhibited significant differences between patients with CS with failure of formation and the healthy controls (P = 0.019, 0.007, and 0.006, respectively). In addition, in the unconditional logistic regression model analysis, the optimal models for these 3 positive SNPs in the failure-of-formation subgroup were rs6671290 (codominant; P = 0.025, Akaike information criterion = 316.6, Bayesian information criterion = 333.9), rs1354510 (overdominant; P = 0.0017, Akaike information criterion = 312.1, Bayesian information criterion = 325.9), and rs16841013 (overdominant; P = 0.0016, Akaike information criterion = 311.1, Bayesian information criterion = 325), respectively. However, the haplotype distributions in the case group were not significantly different from those of the control group in the 3 haplotype blocks. To our knowledge, this is the first study to identify that SNPs of the LMX1A gene might be associated with susceptibility to CS and with different clinical phenotypes of CS in the Chinese Han population. Level of evidence: 4.
NASA Astrophysics Data System (ADS)
Lehmann, Rüdiger; Lösler, Michael
2017-12-01
Geodetic deformation analysis can be interpreted as a model selection problem. The null model indicates that no deformation has occurred. It is opposed to a number of alternative models, which stipulate different deformation patterns. A common way to select the right model is the use of a statistical hypothesis test. However, since we have to test a series of deformation patterns, this must be a multiple test. As an alternative solution for the test problem, we propose the p-value approach. Another approach arises from information theory. Here, the Akaike information criterion (AIC) or some alternative is used to select an appropriate model for a given set of observations. Both approaches are discussed and applied to two test scenarios: a synthetic levelling network and the Delft test data set. It is demonstrated that they work but behave differently, sometimes even producing different results. Hypothesis tests are well established in geodesy, but may suffer from an unfavourable choice of the decision error rates. The multiple test also suffers from statistical dependencies between the test statistics, which are neglected. Both problems are overcome by applying information criteria such as AIC.
Criteria for clinical audit of women friendly care and providers' perception in Malawi.
Kongnyuy, Eugene J; van den Broek, Nynke
2008-07-22
There are two dimensions of quality of maternity care, namely quality of health outcomes and quality as perceived by clients. The feasibility of using clinical audit to assess and improve the quality of maternity care as perceived by women was studied in Malawi. We sought to (a) establish standards for women friendly care and (b) explore attitudinal barriers which could impede the proper implementation of clinical audit. We used evidence from Malawi national guidelines and World Health Organisation manuals to establish local standards for women friendly care in three districts. We also conducted a survey of health care providers to explore their attitudes towards criterion based audit. The standards addressed different aspects of care given to women in maternity units, namely (i) reception, (ii) attitudes towards women, (iii) respect for culture, (iv) respect for women, (v) waiting time, (vi) enabling environment, (vii) provision of information, (viii) individualised care, (ix) provision of skilled attendance at birth and emergency obstetric care, (x) confidentiality, and (xi) proper management of patient information. The health providers in Malawi generally held a favourable attitude towards clinical audit: 100.0% (54/54) agreed that criterion based audit will improve the quality of care and 92.6% believed that clinical audit is a good educational tool. However, there are concerns that criterion based audit would create a feeling of blame among providers (35.2%), and that managers would use clinical audit to identify and punish providers who fail to meet standards (27.8%). Developing standards of maternity care that are acceptable to, and valued by, women requires consideration of both the research evidence and cultural values. Clinical audit is acceptable to health professionals in Malawi although there are concerns about its negative implications for the providers.
Mulder, Han A; Rönnegård, Lars; Fikse, W Freddy; Veerkamp, Roel F; Strandberg, Erling
2013-07-04
Genetic variation for environmental sensitivity indicates that animals are genetically different in their response to environmental factors. Environmental factors are either identifiable (e.g. temperature) and called macro-environmental or unknown and called micro-environmental. The objectives of this study were to develop a statistical method to estimate genetic parameters for macro- and micro-environmental sensitivities simultaneously, to investigate bias and precision of resulting estimates of genetic parameters and to develop and evaluate use of Akaike's information criterion using h-likelihood to select the best fitting model. We assumed that genetic variation in macro- and micro-environmental sensitivities is expressed as genetic variance in the slope of a linear reaction norm and environmental variance, respectively. A reaction norm model to estimate genetic variance for macro-environmental sensitivity was combined with a structural model for residual variance to estimate genetic variance for micro-environmental sensitivity using a double hierarchical generalized linear model in ASReml. Akaike's information criterion was constructed as model selection criterion using approximated h-likelihood. Populations of sires with large half-sib offspring groups were simulated to investigate bias and precision of estimated genetic parameters. Designs with 100 sires, each with at least 100 offspring, are required to have standard deviations of estimated variances lower than 50% of the true value. When the number of offspring increased, standard deviations of estimates across replicates decreased substantially, especially for genetic variances of macro- and micro-environmental sensitivities. Standard deviations of estimated genetic correlations across replicates were quite large (between 0.1 and 0.4), especially when sires had few offspring. Practically, no bias was observed for estimates of any of the parameters. Using Akaike's information criterion the true genetic model was selected as the best statistical model in at least 90% of 100 replicates when the number of offspring per sire was 100. Application of the model to lactation milk yield in dairy cattle showed that genetic variance for micro- and macro-environmental sensitivities existed. The algorithm and model selection criterion presented here can contribute to better understand genetic control of macro- and micro-environmental sensitivities. Designs or datasets should have at least 100 sires each with 100 offspring.
Criterion-Related Validity: Assessing the Value of Subscores
ERIC Educational Resources Information Center
Davison, Mark L.; Davenport, Ernest C., Jr.; Chang, Yu-Feng; Vue, Kory; Su, Shiyang
2015-01-01
Criterion-related profile analysis (CPA) can be used to assess whether subscores of a test or test battery account for more criterion variance than does a single total score. Application of CPA to subscore evaluation is described, compared to alternative procedures, and illustrated using SAT data. Considerations other than validity and reliability…
A Review of the CTOA/CTOD Fracture Criterion: Why it Works
NASA Technical Reports Server (NTRS)
Newman, J. C., Jr.; James, M. A.
2001-01-01
The CTOA/CTOD fracture criterion is one of the oldest fracture criteria applied to fracture of metallic materials with cracks. During the past two decades, the use of elastic-plastic finite-element analyses to simulate fracture of laboratory specimens and structural components using the CTOA criterion has expanded rapidly. But the early applications were restricted to two-dimensional analyses, assuming either plane-stress or plane-strain behavior, which led to generally non-constant values of CTOA, especially in the early stages of crack extension. Later, the non-constant CTOA values were traced to inappropriate state-of-stress (or constraint) assumptions in the crack-front region and severe crack tunneling in thin-sheet materials. More recently, the CTOA fracture criterion has been used with three-dimensional analyses to study constraint effects, crack tunneling, and the fracture process. The constant CTOA criterion (from crack initiation to failure) has been successfully applied to numerous structural applications, such as aircraft fuselages and pipelines. But why does the "constant CTOA" fracture criterion work so well? This paper reviews the results from several studies, discusses the issues of why CTOA works, and discusses its limitations.
Comparison of case note review methods for evaluating quality and safety in health care.
Hutchinson, A; Coster, J E; Cooper, K L; McIntosh, A; Walters, S J; Bath, P A; Pearson, M; Young, T A; Rantell, K; Campbell, M J; Ratcliffe, J
2010-02-01
To determine which of two methods of case note review--holistic (implicit) and criterion-based (explicit)--provides the most useful and reliable information for quality and safety of care, and the level of agreement within and between groups of health-care professionals when they use the two methods to review the same record. To explore the process-outcome relationship between holistic and criterion-based quality-of-care measures and hospital-level outcome indicators. Case notes of patients at randomly selected hospitals in England. In the first part of the study, retrospective multiple reviews of 684 case notes were undertaken at nine acute hospitals using both holistic and criterion-based review methods. Quality-of-care measures included evidence-based review criteria and a quality-of-care rating scale. Textual commentary on the quality of care was provided as a component of holistic review. Review teams comprised combinations of: doctors (n = 16), specialist nurses (n = 10) and clinically trained audit staff (n = 3) and non-clinical audit staff (n = 9). In the second part of the study, process (quality and safety) of care data were collected from the case notes of 1565 people with either chronic obstructive pulmonary disease (COPD) or heart failure in 20 hospitals. Doctors collected criterion-based data from case notes and used implicit review methods to derive textual comments on the quality of care provided and score the care overall. Data were analysed for intrarater consistency, inter-rater reliability between pairs of staff using intraclass correlation coefficients (ICCs) and completeness of criterion data capture, and comparisons were made within and between staff groups and between review methods. To explore the process-outcome relationship, a range of publicly available health-care indicator data were used as proxy outcomes in a multilevel analysis. Overall, 1473 holistic and 1389 criterion-based reviews were undertaken in the first part of the study. When same staff-type reviewer pairs/groups reviewed the same record, holistic scale score inter-rater reliability was moderate within each of the three staff groups [intraclass correlation coefficient (ICC) 0.46-0.52], and inter-rater reliability for criterion-based scores was moderate to good (ICC 0.61-0.88). When different staff-type pairs/groups reviewed the same record, agreement between the reviewer pairs/groups was weak to moderate for overall care (ICC 0.24-0.43). Comparison of holistic review score and criterion-based score of case notes reviewed by doctors and by non-clinical audit staff showed a reasonable level of agreement (p-values for difference 0.406 and 0.223, respectively), although results from all three staff types showed no overall level of agreement (p-value for difference 0.057). Detailed qualitative analysis of the textual data indicated that the three staff types tended to provide different forms of commentary on quality of care, although there was some overlap between some groups. In the process-outcome study there generally were high criterion-based scores for all hospitals, whereas there was more interhospital variation between the holistic review overall scale scores. Textual commentary on the quality of care verified the holistic scale scores. Differences among hospitals with regard to the relationship between mortality and quality of care were not statistically significant. 
Using the holistic approach, the three groups of staff appeared to interpret the recorded care differently when they each reviewed the same record. When the same clinical record was reviewed by doctors and non-clinical audit staff, there was no significant difference between the assessments of quality of care generated by the two groups. All three staff groups performed reasonably well when using criterion-based review, although the quality and type of information provided by doctors were of greater value. Therefore, when measuring quality of care from case notes, consideration needs to be given to the method of review, the type of staff undertaking the review, and the methods of analysis available to the review team. Review can be enhanced using a combination of both criterion-based and structured holistic methods with textual commentary, and variation in quality of care can best be identified from a combination of holistic scale scores and textual data review.
Sindall, Paul; Lenton, John P.; Whytock, Katie; Tolfrey, Keith; Oyster, Michelle L.; Cooper, Rory A.; Goosey-Tolfrey, Victoria L.
2013-01-01
Purpose To compare the criterion validity and accuracy of a 1 Hz non-differential global positioning system (GPS) and data logger device (DL) for the measurement of wheelchair tennis court movement variables. Methods Initial validation of the DL device was performed. GPS and DL were fitted to the wheelchair and used to record distance (m) and speed (m/second) during (a) tennis field (b) linear track, and (c) match-play test scenarios. Fifteen participants were monitored at the Wheelchair British Tennis Open. Results Data logging validation showed underestimations for distance in right (DLR) and left (DLL) logging devices at speeds >2.5 m/second. In tennis-field tests, GPS underestimated distance in five drills. DLL was lower than both (a) criterion and (b) DLR in drills moving forward. Reversing drill direction showed that DLR was lower than (a) criterion and (b) DLL. GPS values for distance and average speed for match play were significantly lower than equivalent values obtained by DL (distance: 2816 (844) vs. 3952 (1109) m, P = 0.0001; average speed: 0.7 (0.2) vs. 1.0 (0.2) m/second, P = 0.0001). Higher peak speeds were observed in DL (3.4 (0.4) vs. 3.1 (0.5) m/second, P = 0.004) during tennis match play. Conclusions Sampling frequencies of 1 Hz are too low to accurately measure distance and speed during wheelchair tennis. GPS units with a higher sampling rate should be advocated in further studies. Modifications to existing DL devices may be required to increase measurement precision. Further research into the validity of movement devices during match play will further inform the demands and movement patterns associated with wheelchair tennis. PMID:23820154
Swedish PE Teachers Struggle with Assessment in a Criterion-Referenced Grading System
ERIC Educational Resources Information Center
Svennberg, Lena; Meckbach, Jane; Redelius, Karin
2018-01-01
In the field of education, the international trend is to turn to criterion-referenced grading in the hope of achieving accountable and consistent grades. Despite a national criterion-referenced grading system emphasising knowledge as the only base for grading, Swedish physical education (PE) grades have been shown to value non-knowledge factors,…
Study on the criterion to determine the bottom deployment modes of a coilable mast
NASA Astrophysics Data System (ADS)
Ma, Haibo; Huang, Hai; Han, Jianbin; Zhang, Wei; Wang, Xinsheng
2017-12-01
A practical design criterion that allows the bottom of a coilable mast to deploy in local coil mode is proposed. The criterion is defined in terms of the initial bottom helical angle and was obtained from bottom deformation analyses. Discretizing the longerons into short rods, the analyses were conducted based on the cylinder assumption and Kirchhoff's kinetic analogy theory. Iterative calculations were then carried out for the bottom four rods. A critical bottom helical angle was obtained where the rate of change of the angle equaled zero, and this critical value was taken as the criterion for judging the bottom deployment mode. Subsequently, micro-gravity deployment tests were carried out and bottom deployment simulations based on the finite element method were developed. Through comparisons of the bottom helical angles in the critical state, the proposed criterion was evaluated and modified: an initial bottom helical angle smaller than the critical value, with a design margin of -13.7%, ensures that the mast bottom deploys in local coil mode and thereby secures a successful local coil deployment of the entire coilable mast.
Tomatis, Laura; Krebs, Andreas; Siegenthaler, Jessica; Murer, Kurt; de Bruin, Eling D
2015-01-01
Health is closely linked to physical activity and fitness. It is therefore important to monitor fitness in children. Although many reports on physical tests have been published, data comparison between studies is an issue. This study reports Swiss first grade norm values of fitness tests and compares these with criterion reference data. A total of 10,565 boys (7.18 ± 0.42 years) and 10,204 girls (7.14 ± 0.41 years) were tested for standing long jump, plate tapping, 20-m shuttle run, lateral jump and 20-m sprint. Average values for six-, seven- and eight-year-olds were analysed and reference curves for age were constructed. Z-values were generated for comparisons with criterion references reported in the literature. Results were better for all disciplines in seven-year-old first grade children compared to six-year-old children (p < 0.01). Eight-year-old children did not perform better compared to seven-year-old children in the sprint run (p = 0.11), standing long jump (p > 0.99) and shuttle run (p = 0.43), whereas they were better in all other disciplines compared to their younger peers. The average performance of boys was better than girls except for tapping at the age of 8 (p = 0.06). Differences in performance due to testing protocol and setting must be considered when test values from a first grade setting are compared to criterion-based benchmarks. In a classroom setting, younger children tended to have better results and older children tended to have worse outcomes when compared to their age group criterion reference values. Norm reference data are valid allowing comparison with other data generated by similar test protocols applied in a classroom setting.
[Acoustic conditions in open plan offices - Pilot test results].
Mikulski, Witold
The main source of noise in open plan offices is conversation. Office work standards in such premises are attained by applying specific acoustic adaptation. This article presents the results of pilot tests and acoustic evaluation of open space rooms. Acoustic properties of 6 open plan office rooms were the subject of the tests. Evaluation parameters, measurement methods and criterion values were adopted according to the following standards: PN-EN ISO 3382-3:2012, PN-EN ISO 3382-2:2010, PN-B-02151-4:2015-06 and PN-B-02151-3:2015-10. The reverberation time was 0.33-0.55 s (maximum permissible value in offices - 0.6 s; the criterion was met), the sound absorption coefficient in relation to 1 m² of the room's floor plan was 0.77-1.58 m² (minimum permissible value - 1.1 m²; 2 out of 6 rooms met the criterion), the distraction distance was 8.5-14 m (maximum permissible value - 5 m; none of the rooms met the criterion), the A-weighted sound pressure level of speech at a distance of 4 m was 43.8-54.7 dB (maximum permissible value - 48 dB; 2 out of 6 rooms met the criterion), and the spatial decay rate of speech was 1.8-6.3 dB (minimum permissible value - 7 dB; none of the rooms met the criterion). Standard acoustic treatment, comprising a sound-absorbing suspended ceiling, sound-absorbing materials on the walls, carpet flooring and sound-absorbing workplace barriers, is not sufficient. These rooms require specific advanced acoustic solutions. Med Pr 2016;67(5):653-662. This work is available in Open Access model and licensed under a CC BY-NC 3.0 PL license.
Intelligibility for Binaural Speech with Discarded Low-SNR Speech Components.
Schoenmaker, Esther; van de Par, Steven
2016-01-01
Speech intelligibility in multitalker settings improves when the target speaker is spatially separated from the interfering speakers. A factor that may contribute to this improvement is the improved detectability of target-speech components due to binaural interaction in analogy to the Binaural Masking Level Difference (BMLD). This would allow listeners to hear target speech components within specific time-frequency intervals that have a negative SNR, similar to the improvement in the detectability of a tone in noise when these contain disparate interaural difference cues. To investigate whether these negative-SNR target-speech components indeed contribute to speech intelligibility, a stimulus manipulation was performed where all target components were removed when local SNRs were smaller than a certain criterion value. It can be expected that for sufficiently high criterion values target speech components will be removed that do contribute to speech intelligibility. For spatially separated speakers, assuming that a BMLD-like detection advantage contributes to intelligibility, degradation in intelligibility is expected already at criterion values below 0 dB SNR. However, for collocated speakers it is expected that higher criterion values can be applied without impairing speech intelligibility. Results show that degradation of intelligibility for separated speakers is only seen for criterion values of 0 dB and above, indicating a negligible contribution of a BMLD-like detection advantage in multitalker settings. These results show that the spatial benefit is related to a spatial separation of speech components at positive local SNRs rather than to a BMLD-like detection improvement for speech components at negative local SNRs.
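A rough sketch of the stimulus manipulation described above is shown below: the target and interferer are transformed to the time-frequency domain, target components whose local SNR falls below a criterion (in dB) are discarded, and the target is resynthesized. The STFT parameters and the toy signals are assumptions for illustration; they are not the stimuli or processing chain used in the experiment.

```python
import numpy as np
from scipy.signal import stft, istft

def discard_low_snr(target, interferer, fs, criterion_db=0.0, nperseg=512):
    """Zero out target time-frequency components whose local SNR falls
    below the criterion (in dB), then resynthesize the target signal."""
    _, _, T = stft(target, fs=fs, nperseg=nperseg)
    _, _, I = stft(interferer, fs=fs, nperseg=nperseg)
    local_snr_db = 10 * np.log10((np.abs(T) ** 2 + 1e-12) / (np.abs(I) ** 2 + 1e-12))
    T_masked = np.where(local_snr_db >= criterion_db, T, 0.0)
    _, target_masked = istft(T_masked, fs=fs, nperseg=nperseg)
    return target_masked

# Toy demonstration with noise standing in for the speech signals
fs = 16000
rng = np.random.default_rng(1)
target = rng.standard_normal(fs)            # stand-in for target speech
interferer = 0.5 * rng.standard_normal(fs)  # stand-in for interfering speech
processed = discard_low_snr(target, interferer, fs, criterion_db=0.0)
```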
Tousignant, Michel; Smeesters, Cécil; Breton, Anne-Marie; Breton, Emilie; Corriveau, Hélène
2006-04-01
This study compared range of motion (ROM) measurements using a cervical range of motion device (CROM) and an optoelectronic system (OPTOTRAK). To examine the criterion validity of the CROM for the measurement of cervical ROM on healthy adults. Whereas measurements of cervical ROM are recognized as part of the assessment of patients with neck pain, few devices are available in clinical settings. Two papers published previously showed excellent criterion validity for measurements of cervical flexion/extension and lateral flexion using the CROM. Subjects performed neck rotation, flexion/extension, and lateral flexion while sitting on a wooden chair. The ROM values were measured by the CROM as well as the OPTOTRAK. The cervical rotational ROM values using the CROM demonstrated a good to excellent linear relationship with those using the OPTOTRAK: right rotation, r = 0.89 (95% confidence interval, 0.81-0.94), and left rotation, r = 0.94 (95% confidence interval, 0.90-0.97). Similar results were also obtained for flexion/extension and lateral flexion ROM values. The CROM showed excellent criterion validity for measurements of cervical rotation. We propose using ROM values measured by the CROM as outcome measures for patients with neck pain.
A mesh gradient technique for numerical optimization
NASA Technical Reports Server (NTRS)
Willis, E. A., Jr.
1973-01-01
A class of successive-improvement optimization methods in which directions of descent are defined in the state space along each trial trajectory is considered. The given problem is first decomposed into two discrete levels by imposing mesh points. Level 1 consists of running optimal subarcs between each successive pair of mesh points. For normal systems, these optimal two-point boundary value problems can be solved by following a routine prescription if the mesh spacing is sufficiently close. A spacing criterion is given. Under appropriate conditions, the criterion value depends only on the coordinates of the mesh points, and its gradient with respect to those coordinates may be defined by interpreting the adjoint variables as partial derivatives of the criterion value function. In level 2, the gradient data are used to generate improvement steps or search directions in the state space which satisfy the boundary values and constraints of the given problem.
NASA Astrophysics Data System (ADS)
Jia, Chen; Chen, Yong
2015-05-01
In the work of Amann, Schmiedl and Seifert (2010 J. Chem. Phys. 132 041102), the authors derived a sufficient criterion to identify a non-equilibrium steady state (NESS) in a three-state Markov system based on the coarse-grained information of two-state trajectories. In this paper, we present a mathematical derivation and provide a probabilistic interpretation of the Amann-Schmiedl-Seifert (ASS) criterion. Moreover, the ASS criterion is compared with some other criteria for a NESS.
The Impact of Various Class-Distinction Features on Model Selection in the Mixture Rasch Model
ERIC Educational Resources Information Center
Choi, In-Hee; Paek, Insu; Cho, Sun-Joo
2017-01-01
The purpose of the current study is to examine the performance of four information criteria (Akaike's information criterion [AIC], corrected AIC [AICC] Bayesian information criterion [BIC], sample-size adjusted BIC [SABIC]) for detecting the correct number of latent classes in the mixture Rasch model through simulations. The simulation study…
A new self-report inventory of dyslexia for students: criterion and construct validity.
Tamboer, Peter; Vorst, Harrie C M
2015-02-01
The validity of a Dutch self-report inventory of dyslexia was ascertained in two samples of students. Six biographical questions, 20 general language statements and 56 specific language statements were based on dyslexia as a multi-dimensional deficit. Dyslexia and non-dyslexia were assessed with two criteria: identification with test results (Sample 1) and classification using biographical information (both samples). Using discriminant analyses, these criteria were predicted with various groups of statements. Altogether, 11 discriminant functions were used to estimate classification accuracy of the inventory. In Sample 1, 15 statements predicted the test criterion with classification accuracy of 98%, and 18 statements predicted the biographical criterion with classification accuracy of 97%. In Sample 2, 16 statements predicted the biographical criterion with classification accuracy of 94%. Estimations of positive and negative predictive value were 89% and 99%. Items of various discriminant functions were factor analysed to find characteristic difficulties of students with dyslexia, resulting in a five-factor structure in Sample 1 and a four-factor structure in Sample 2. Answer bias was investigated with measures of internal consistency reliability. Fewer than 20 self-report items are sufficient to accurately classify students with and without dyslexia. This supports the usefulness of self-assessment of dyslexia as a valid alternative to diagnostic test batteries. Copyright © 2015 John Wiley & Sons, Ltd.
Nøhr, Christian; Botin, Lars; Zhu, Xinxin
2017-01-01
This paper discusses how health information technologies such as tele-care, tele-health and tele-medicine can improve conditions for high-need patients, specifically in relation to access. The paper specifically addresses the values of timeliness and equity and how tele-technological solutions can support and enhance these values. The paper introduces the concept of scaffolding, which constitutes the framework for dynamic, appropriate, caring and embracing approaches for engaging and involving high-need patients who are vulnerable and exposed. A number of specific considerations for designing tele-technologies for high-need patients are derived, and the paper concludes that ethical and epistemological criteria for design are needed in order to meet the needs and requirements of the weak and exposed.
The Brockport Physical Fitness Test Training Manual. [Project Target].
ERIC Educational Resources Information Center
Winnick, Joseph P.; Short, Francis X., Ed.
This training manual presents information on the Brockport Physical Fitness Test (BPFT), a criterion-referenced fitness test for children and adolescents with disabilities. The first chapter of the test training manual includes information dealing with health-related criterion-referenced testing, the interaction between physical activity and…
How to (properly) strengthen Bell's theorem using counterfactuals
NASA Astrophysics Data System (ADS)
Bigaj, Tomasz
Bell's theorem in its standard version demonstrates that the joint assumptions of the hidden-variable hypothesis and the principle of local causation lead to a conflict with quantum-mechanical predictions. In his latest counterfactual strengthening of Bell's theorem, Stapp attempts to prove that the locality assumption itself contradicts the quantum-mechanical predictions in the Hardy case. His method relies on constructing a complex, non-truth functional formula which consists of statements about measurements and outcomes in some region R, and whose truth value depends on the selection of a measurement setting in a space-like separated location L. Stapp argues that this fact shows that the information about the measurement selection made in L has to be present in R. I give detailed reasons why this conclusion can and should be resisted. Next I correct and formalize an informal argument by Shimony and Stein showing that the locality condition coupled with Einstein's criterion of reality is inconsistent with quantum-mechanical predictions. I discuss the possibility of avoiding the inconsistency by rejecting Einstein's criterion rather than the locality assumption.
Observational constraints on Hubble parameter in viscous generalized Chaplygin gas
NASA Astrophysics Data System (ADS)
Thakur, P.
2018-04-01
A cosmological model with viscous generalized Chaplygin gas (VGCG) is considered here to determine observational constraints on its equation of state (EoS) parameters from background data. These data consist of H(z)-z (OHD) data, the Baryonic Acoustic Oscillations peak parameter, the CMB shift parameter and SN Ia data (Union 2.1). Best-fit values of the EoS parameters, including the present Hubble parameter (H0), and their acceptable ranges at different confidence limits are determined. In this model the permitted values for the present Hubble parameter and the transition redshift (zt) at the 1σ confidence limit are H0 = 70.24^{+0.34}_{-0.36} and zt = 0.76^{+0.07}_{-0.07}, respectively. These EoS parameters are then compared with those of other models. The present age of the Universe (t0) has also been determined. The Akaike information criterion and the Bayesian information criterion have been adopted for model selection and comparison with other models. It is noted that the VGCG model satisfactorily accommodates the present accelerating phase of the Universe.
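For Gaussian errors, the information criteria used in this kind of comparison reduce to simple functions of the minimum chi-square: AIC = χ²_min + 2k and BIC = χ²_min + k ln N, where k is the number of free parameters and N the number of data points. The sketch below shows the arithmetic; the chi-square values, parameter counts and data-point tally are invented placeholders, not the paper's results.

```python
import numpy as np

def aic_bic(chi2_min, k, n):
    """AIC and BIC for a fit with minimum chi-square chi2_min,
    k free parameters and n data points (Gaussian likelihood)."""
    return chi2_min + 2 * k, chi2_min + k * np.log(n)

# Illustrative (made-up) numbers: compare a VGCG-like model against a reference model
models = {"reference model": (562.2, 2), "VGCG-like model": (560.8, 4)}
n_data = 580 + 43 + 2   # e.g. SN Ia + OHD + BAO/CMB points (assumed counts)
for name, (chi2, k) in models.items():
    aic, bic = aic_bic(chi2, k, n_data)
    print(f"{name:16s} AIC = {aic:7.1f}  BIC = {bic:7.1f}")
```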
Li, Qizhai; Hu, Jiyuan; Ding, Juan; Zheng, Gang
2014-04-01
A classical approach to combining independent test statistics is Fisher's combination of p-values, which follows the χ² distribution. When the test statistics are dependent, the gamma distribution (GD) is commonly used for the Fisher's combination test (FCT). We propose to use two generalizations of the GD: the generalized and the exponentiated GDs. We study some properties of mis-using the GD for the FCT to combine dependent statistics when one of the two proposed distributions is true. Our results show that both generalizations have better control of type I error rates than the GD, which tends to have inflated type I error rates at the more extreme tails. In practice, common model selection criteria (e.g. the Akaike information criterion or the Bayesian information criterion) can be used to help select a better distribution for the FCT. A simple strategy for applying the two generalizations of the GD in genome-wide association studies is discussed. Applications of the results to genetic pleiotropic associations are described, where multiple traits are tested for association with a single marker.
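A minimal sketch of the classical Fisher combination and its gamma-distributed variant is shown below. Under independence the combined statistic T = -2 Σ log p follows a χ² distribution with 2k degrees of freedom, which is the gamma distribution with shape k and scale 2; for dependent statistics the shape and scale would have to be estimated from the correlation structure, which is simply assumed known here.

```python
import numpy as np
from scipy import stats

def fisher_combination(p_values):
    """Classical Fisher combination: T = -2*sum(log p) ~ chi2 with 2k df
    when the individual tests are independent."""
    t = -2 * np.sum(np.log(p_values))
    k = len(p_values)
    return t, stats.chi2.sf(t, df=2 * k)

def fisher_combination_gamma(p_values, shape, scale):
    """Gamma reference distribution for dependent statistics; shape and scale
    would normally be estimated from the correlation structure (assumed known here)."""
    t = -2 * np.sum(np.log(p_values))
    return t, stats.gamma.sf(t, a=shape, scale=scale)

p = [0.04, 0.11, 0.20, 0.03]
print(fisher_combination(p))
# Under independence the GD reduces to chi2(2k) = Gamma(shape=k, scale=2):
print(fisher_combination_gamma(p, shape=len(p), scale=2.0))
```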
Growth curves for ostriches (Struthio camelus) in a Brazilian population.
Ramos, S B; Caetano, S L; Savegnago, R P; Nunes, B N; Ramos, A A; Munari, D P
2013-01-01
The objective of this study was to fit growth curves using nonlinear and linear functions to describe the growth of ostriches in a Brazilian population. The data set consisted of 112 animals with BW measurements from hatching to 383 d of age. Two nonlinear growth functions (Gompertz and logistic) and a third-order polynomial function were applied. The parameters for the models were estimated using the least-squares method and the Gauss-Newton algorithm. The goodness-of-fit of the models was assessed using R² and the Akaike information criterion. The R² calculated for the logistic growth model was 0.945 for hens and 0.928 for cockerels and, for the Gompertz growth model, 0.938 for hens and 0.924 for cockerels. The third-order polynomial fit gave R² of 0.938 for hens and 0.924 for cockerels. Among the Akaike information criterion calculations, the logistic growth model presented the lowest values in this study, both for hens and for cockerels. Nonlinear models are more appropriate for describing the sigmoid nature of ostrich growth.
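A minimal sketch of this kind of growth-curve comparison is given below: Gompertz and logistic functions are fitted by least squares with scipy's curve_fit (a Levenberg-Marquardt routine rather than the Gauss-Newton algorithm used in the paper) and ranked with a least-squares form of AIC. The body-weight data and starting values are invented for illustration and are not the Brazilian ostrich data.

```python
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, a, b, k):
    # a: asymptotic weight, b: displacement, k: growth rate
    return a * np.exp(-b * np.exp(-k * t))

def logistic(t, a, b, k):
    return a / (1 + b * np.exp(-k * t))

def aic_ls(y, y_hat, n_params):
    # AIC for least-squares fits: n*log(RSS/n) + 2*(p + 1), the +1 counting sigma
    n = len(y)
    rss = np.sum((y - y_hat) ** 2)
    return n * np.log(rss / n) + 2 * (n_params + 1)

# Hypothetical body-weight data (age in days, weight in kg)
age = np.array([0, 30, 60, 90, 120, 180, 240, 300, 383], dtype=float)
bw  = np.array([0.9, 4.0, 12.0, 25.0, 42.0, 70.0, 88.0, 97.0, 103.0])

fits = [("Gompertz", gompertz, [110, 5, 0.02]),
        ("logistic", logistic, [110, 100, 0.03])]
for name, f, p0 in fits:
    popt, _ = curve_fit(f, age, bw, p0=p0, maxfev=10000)
    print(name, "params:", np.round(popt, 3),
          "AIC:", round(aic_ls(bw, f(age, *popt), len(popt)), 2))
```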
The Information a Test Provides on an Ability Parameter. Research Report. ETS RR-07-18
ERIC Educational Resources Information Center
Haberman, Shelby J.
2007-01-01
In item-response theory, if a latent-structure model has an ability variable, then elementary information theory may be employed to provide a criterion for evaluation of the information the test provides concerning ability. This criterion may be considered even in cases in which the latent-structure model is not valid, although interpretation of…
Pasekov, V P
2013-03-01
The paper considers problems in the adaptive evolution of life-history traits for individuals in the nonlinear Leslie model of an age-structured population. The possibility of predicting adaptation outcomes as the values of an organism's traits (properties) that maximize a certain function of the traits (an optimization criterion) is studied. An ideal criterion of this type is Darwinian fitness as a characteristic of the success of an individual's life history. Criticism of the optimization approach stems from the fact that it does not take into account the changes in environmental conditions (in a broad sense) caused by evolution, thereby reducing the adequacy of the criterion. In addition, the justification for this criterion under stationary conditions is usually not rigorous. It has been suggested that these objections can be overcome within adaptive dynamics theory using the concept of invasion fitness. Reasons are given that favor using the average number of offspring per individual, R(L), as an optimization criterion in the nonlinear Leslie model. According to the theory of quantitative genetics, selection for fertility (that is, for a set of correlated quantitative traits determined by both multiple loci and the environment) leads to an increase in R(L). In terms of adaptive dynamics, the maximum of R(L) corresponds to evolutionary stability and, in certain cases, convergent stability of the trait values. The search for evolutionarily stable values under limited resources for reproduction is a linear programming problem.
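As a small illustration of the quantities involved, the sketch below builds a linear Leslie matrix from hypothetical vital rates and computes the average lifetime number of offspring per individual (the net reproductive rate R0, the linear analogue of the R(L) criterion discussed above) together with the asymptotic growth rate λ. The nonlinear, density-dependent Leslie model of the paper is not reproduced here; all rates are made up.

```python
import numpy as np

# Hypothetical age-structured vital rates (3 age classes)
fecundity = np.array([0.0, 1.2, 1.8])   # offspring per individual in each age class
survival  = np.array([0.6, 0.5])        # survival from class i to class i+1

# Leslie matrix: fecundities in the first row, survival on the subdiagonal
L = np.zeros((3, 3))
L[0, :] = fecundity
L[1, 0], L[2, 1] = survival

# Net reproductive rate R0 = sum_i F_i * l_i, with l_i the cumulative survival to class i
l = np.concatenate(([1.0], np.cumprod(survival)))
R0 = np.sum(fecundity * l)

# Asymptotic growth rate lambda = dominant eigenvalue of the Leslie matrix
lam = np.max(np.abs(np.linalg.eigvals(L)))
print(f"R0 = {R0:.3f}, lambda = {lam:.3f}")   # R0 > 1 and lambda > 1 indicate growth
```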
ERIC Educational Resources Information Center
Tan, Xuan; Xiang, Bihua; Dorans, Neil J.; Qu, Yanxuan
2010-01-01
The nature of the matching criterion (usually the total score) in the study of differential item functioning (DIF) has been shown to impact the accuracy of different DIF detection procedures. One of the topics related to the nature of the matching criterion is whether the studied item should be included. Although many studies exist that suggest…
Density dependence and risk of extinction in a small population of sea otters
Gerber, L.R.; Buenau, K.E.; VanBlaricom, G.
2004-01-01
Sea otters (Enhydra lutris (L.)) were hunted to extinction off the coast of Washington State early in the 20th century. A new population was established by translocations from Alaska in 1969 and 1970 and currently numbers at least 550 animals. A major threat to the population is the ongoing risk of major oil spills in sea otter habitat. We apply population models to census and demographic data in order to evaluate the status of the population. We fit several density-dependent models to test for density dependence and determine plausible values for the carrying capacity (K) by comparing model goodness of fit to an exponential model. Model fits were compared using the Akaike Information Criterion (AIC). A significant negative relationship was found between the population growth rate and population size (r² = 0.27, F = 5.57, df = 16, p < 0.05), suggesting density dependence in Washington State sea otters. Information criterion statistics suggest that the model is the most parsimonious, followed closely by the logistic Beverton-Holt model. Values of K ranged from 612 to 759, with best-fit parameter estimates for the Beverton-Holt model of 0.26 for r and 612 for K. The latest (2001) population index count (555) puts the population at 87-92% of the estimated carrying capacity, above the suggested range for optimum sustainable population (OSP). Elasticity analysis was conducted to examine the effects of proportional changes in vital rates on the population growth rate (λ). The elasticity values indicate the population is most sensitive to changes in survival rates (particularly adult survival).
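The sketch below illustrates the general approach of the analysis: realized per-capita growth rates are regressed on population size under density-independent (exponential), logistic and Beverton-Holt formulations, and the fits are ranked with a least-squares AIC. The count series, parameterizations and starting values are hypothetical and are not the Washington sea otter data.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical annual index counts
N = np.array([60, 80, 104, 130, 175, 210, 270, 320, 390, 430, 470, 505, 540, 555], float)
growth = np.log(N[1:] / N[:-1])          # realized per-capita growth rates
N_t = N[:-1]

def exponential(n, r):                    # density-independent growth
    return np.full_like(n, r)

def logistic(n, r, K):                    # logistic (Ricker-type) density dependence
    return r * (1 - n / K)

def beverton_holt(n, r, K):               # per-capita growth declining hyperbolically
    return np.log((1 + r) / (1 + r * n / K))

def aic_ls(y, y_hat, p):
    n = len(y)
    rss = np.sum((y - y_hat) ** 2)
    return n * np.log(rss / n) + 2 * (p + 1)

for name, f, p0 in [("exponential", exponential, [0.2]),
                    ("logistic", logistic, [0.3, 700]),
                    ("Beverton-Holt", beverton_holt, [0.3, 700])]:
    popt, _ = curve_fit(f, N_t, growth, p0=p0, maxfev=10000)
    print(f"{name:14s} params={np.round(popt, 3)}  "
          f"AIC={aic_ls(growth, f(N_t, *popt), len(popt)):.2f}")
```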
Strömberg, Eric A; Nyberg, Joakim; Hooker, Andrew C
2016-12-01
With the increasing popularity of optimal design in drug development it is important to understand how the approximations and implementations of the Fisher information matrix (FIM) affect the resulting optimal designs. The aim of this work was to investigate the impact on design performance when using two common approximations to the population model and the full or block-diagonal FIM implementations for optimization of sampling points. Sampling schedules for two example experiments based on population models were optimized using the FO and FOCE approximations and the full and block-diagonal FIM implementations. The number of support points was compared between the designs for each example experiment. The performance of these designs based on simulation/estimations was investigated by computing bias of the parameters as well as through the use of an empirical D-criterion confidence interval. Simulations were performed when the design was computed with the true parameter values as well as with misspecified parameter values. The FOCE approximation and the Full FIM implementation yielded designs with more support points and less clustering of sample points than designs optimized with the FO approximation and the block-diagonal implementation. The D-criterion confidence intervals showed no performance differences between the full and block diagonal FIM optimal designs when assuming true parameter values. However, the FO approximated block-reduced FIM designs had higher bias than the other designs. When assuming parameter misspecification in the design evaluation, the FO Full FIM optimal design was superior to the FO block-diagonal FIM design in both of the examples.
Reum, J C P
2011-12-01
Three lipid correction models were evaluated for liver and white dorsal muscle from Squalus acanthias. For muscle, all three models performed well, based on the Akaike Information Criterion value corrected for small sample sizes (AICc), and predicted similar lipid corrections to δ13C that were up to 2.8 ‰ higher than those predicted using previously published models based on multispecies data. For liver, which possessed higher bulk C:N values compared to that of white muscle, all three models performed poorly and lipid-corrected δ13C values were best approximated by simply adding 5.74 ‰ to bulk δ13C values. © 2011 The Author. Journal of Fish Biology © 2011 The Fisheries Society of the British Isles.
Two-component gravitational instability in spiral galaxies
NASA Astrophysics Data System (ADS)
Marchuk, A. A.; Sotnikova, N. Y.
2018-04-01
We applied a criterion of gravitational instability, valid for two-component and infinitesimally thin discs, to observational data along the major axis for seven spiral galaxies of early types. Unlike most papers, the dispersion equation corresponding to the criterion was solved directly, without using any approximation. The radial velocity dispersion of the stars σR was constrained to a range of possible values instead of a fixed value. For all galaxies, the outer regions of the disc were analysed up to R ≤ 130 arcsec. The maximal and sub-maximal disc models were used to translate surface brightness into surface density. The largest destabilizing disturbance that stars can exert on a gaseous disc was estimated. It was shown that the two-component criterion differs little from the one-fluid criterion for galaxies with a large gas surface density, but it can explain large-scale star formation in those regions where the gaseous disc is stable. In the galaxy NGC 1167, star formation is entirely driven by the self-gravity of the stars. A comparison is made with the conventional approximations, which also include the thickness effect, and with models for different sound speeds cg. It is shown that the values of the effective Toomre parameter correspond to the instability criterion of a two-component disc, Qeff < 1.5-2.5. This result is consistent with previous theoretical and observational studies.
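For orientation, the sketch below evaluates a simplified two-fluid form of the two-component instability criterion (both stars and gas treated as fluids), in which the disc is unstable at a given radius if F(k) = 2πGk[Σs/(κ² + k²σs²) + Σg/(κ² + k²cg²)] exceeds unity for some wavenumber k. This is one of the conventional approximations mentioned above, not the full dispersion relation with the stellar reduction factor solved in the paper, and all input values are illustrative.

```python
import numpy as np

G = 4.30091e-3  # gravitational constant in pc * (km/s)^2 / Msun

def two_fluid_F(k, kappa, sigma_s, Sigma_s, c_g, Sigma_g):
    """Left-hand side of the two-fluid neutral-stability condition
    (both components treated as fluids); F > 1 for some k means unstable."""
    star = 2 * np.pi * G * k * Sigma_s / (kappa**2 + (k * sigma_s)**2)
    gas  = 2 * np.pi * G * k * Sigma_g / (kappa**2 + (k * c_g)**2)
    return star + gas

# Illustrative values at one radius (units: pc, km/s, Msun/pc^2)
kappa   = 0.03      # epicyclic frequency, km/s per pc
sigma_s = 40.0      # stellar radial velocity dispersion
Sigma_s = 50.0      # stellar surface density
c_g     = 8.0       # gas sound speed
Sigma_g = 10.0      # gas surface density

k = np.linspace(1e-4, 0.1, 5000)   # wavenumbers in 1/pc
F = two_fluid_F(k, kappa, sigma_s, Sigma_s, c_g, Sigma_g)
print("max F =", round(F.max(), 2), "->", "unstable" if F.max() > 1 else "stable")
```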
Roy, C; Le Bras, Y; Mangold, L; Tuchmann, C; Vasilescu, C; Saussine, C; Jacqmin, D
1996-12-01
The purpose of this study was to determine whether lymph node asymmetry in small (< 1.0 cm) pelvic nodes is a significant feature in determining metastatic disease. 216 patients who presented with pelvic carcinoma underwent MR imaging, and the imaging findings were correlated with the pathological findings obtained at surgery. On the axial plane, we considered the maximum diameter (MAD) of round or oval-shaped suspicious masses. Two different cut-off values were assessed: node diameter greater than 1.0 cm (criterion 1), and node diameter greater than 0.5 cm with asymmetry relative to the opposite side for nodes ranging from 0.5 cm to 1.0 cm (criterion 2). With criterion 1, MR imaging had an accuracy of 88%, a sensitivity of 65%, a specificity of 96%, a PPV of 88% and an NPV of 88% in the detection of pelvic node metastasis. With criterion 2, MR imaging had an accuracy of 85%, a sensitivity of 75%, a specificity of 89%, a PPV of 71% and an NPV of 91%. Normal small asymmetric lymph nodes were present in 5.6% of cases. Asymmetry of normal or inflammatory pelvic nodes is therefore not uncommon, and it cannot be relied on to diagnose metastatic involvement in small suspicious lymph nodes, especially because of its low specificity and positive predictive value.
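The reported performance figures follow from a standard 2x2 comparison of the imaging criterion against the surgical pathology reference. The sketch below shows the arithmetic with hypothetical counts, not the study's actual table.

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV, NPV and accuracy from a 2x2 table."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv":         tp / (tp + fp),
        "npv":         tn / (tn + fn),
        "accuracy":    (tp + tn) / (tp + fp + fn + tn),
    }

# Hypothetical counts for a size/asymmetry criterion versus surgical pathology
print(diagnostic_metrics(tp=30, fp=4, fn=16, tn=166))
```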
Value and role of intensive care unit outcome prediction models in end-of-life decision making.
Barnato, Amber E; Angus, Derek C
2004-07-01
In the United States, intensive care unit (ICU) admission at the end of life is commonplace. What is the value and role of ICU mortality prediction models for informing the utility of ICU care? In this article, we review the history, statistical underpinnings, and current deployment of these models in clinical care. We conclude that the use of outcome prediction models to ration care that is unlikely to provide an expected benefit is hampered by imperfect performance, the lack of real-time availability, failure to consider functional outcomes beyond survival, and physician resistance to the use of probabilistic information when death is guaranteed by the decision it informs. Among these barriers, the most important technical deficiency is the lack of automated information systems to provide outcome predictions to decision makers, and the most important research and policy agenda is to understand and address our national ambivalence toward rationing care based on any criterion.
Territories typification technique with use of statistical models
NASA Astrophysics Data System (ADS)
Galkin, V. I.; Rastegaev, A. V.; Seredin, V. V.; Andrianov, A. V.
2018-05-01
Typification of territories is required for the solution of many problems. The results of geological zoning obtained by different methods do not always agree. The main goal of this research is therefore to develop a technique for obtaining a multidimensional standard classified indicator for geological zoning. A probabilistic approach was used in the course of the research. In order to increase the reliability of the classification of geological information, the authors suggest using the complex multidimensional probabilistic indicator PK as a classification criterion. The second criterion chosen is the multidimensional standard classified indicator Z. Both can serve as classification characteristics in geological-engineering zoning. The indicators PK and Z are in good correlation: the correlation coefficient for the entire territory, regardless of structural solidity, equals r = 0.95, so each indicator can be used in geological-engineering zoning. The suggested method has been tested and a schematic zoning map has been drawn.
Kurzeja, Patrick
2016-05-01
Modern imaging techniques, increased simulation capabilities and extended theoretical frameworks naturally drive the development of multiscale modelling with the question: which new information should be considered? Given the need for concise constitutive relationships and efficient data evaluation, however, one important question is often neglected: which information is sufficient? For this reason, this work introduces the formalized criterion of subscale sufficiency. This criterion states whether a chosen constitutive relationship transfers all necessary information from the micro to the macroscale within a multiscale framework. It further provides a scheme to improve constitutive relationships. Direct application to static capillary pressure demonstrates the usefulness of the criterion and the conditions for subscale sufficiency of saturation and interfacial areas.
Rational Approximations with Hankel-Norm Criterion
1980-01-01
[Partially legible report documentation page.] The approximation problem is proved to be reducible to obtaining a two-variable all-pass rational function, interpolating a set of parametric values at specified points inside... Y. Genin, Philips Research Laboratory.
Cai, Qianqian; Turner, Brett D; Sheng, Daichao; Sloan, Scott
2018-03-01
The kinetics of fluoride sorption by calcite in the presence of metal ions (Co, Mn, Cd and Ba) have been investigated and modelled using the intra-particle diffusion (IPD), pseudo-second order (PSO), and the Hill 4 and Hill 5 kinetic models. Model comparison using the Akaike Information Criterion (AIC), the Schwarz Bayesian Information Criterion (BIC) and the Bayes Factor allows direct comparison of model results irrespective of the number of model parameters. Information Criterion results indicate "very strong" evidence that the Hill 5 model was the best fitting model for all observed data due to its ability to fit sigmoidal data, with confidence contour analysis showing the model parameters were well constrained by the data. Kinetic results were used to determine the thickness of a calcite permeable reactive barrier required to achieve up to 99.9% fluoride removal at a groundwater flow of 0.1 m day⁻¹. Fluoride removal half-life (t0.5) values were found to increase in the order Ba ≈ stonedust (a 99% pure natural calcite) < Cd < Co < Mn. A barrier width of 0.97 ± 0.02 m was found to be required for the fluoride/calcite (stonedust) only system when using no factor of safety, whilst in the presence of Mn and Co, the width increased to 2.76 ± 0.28 and 19.83 ± 0.37 m respectively. In comparison, the PSO model predicted a required barrier thickness of ∼46.0, 62.6 & 50.3 m respectively for the fluoride/calcite, Mn and Co systems under the same conditions. Crown Copyright © 2017. Published by Elsevier Ltd. All rights reserved.
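For readers who want to reproduce this style of comparison, the sketch below (hypothetical data and parameter names, not code from the study) fits a pseudo-second-order and a five-parameter Hill curve to a kinetic series with SciPy and ranks them by AIC and BIC computed from the residual sum of squares.

```python
import numpy as np
from scipy.optimize import curve_fit

def pso(t, qe, k2):
    # Integrated pseudo-second-order uptake: q(t) = k2*qe^2*t / (1 + k2*qe*t)
    return k2 * qe**2 * t / (1.0 + k2 * qe * t)

def hill5(t, a, d, c, b, e):
    # One common five-parameter logistic ("Hill 5") form able to fit sigmoidal data
    return d + (a - d) / (1.0 + (t / c) ** b) ** e

def aic_bic(rss, n, k):
    # Gaussian-error information criteria computed from the residual sum of squares
    aic = n * np.log(rss / n) + 2 * k
    bic = n * np.log(rss / n) + k * np.log(n)
    return aic, bic

# Hypothetical fluoride uptake data (mg/g) over time (h)
t = np.array([0.5, 1, 2, 4, 8, 16, 24, 48], dtype=float)
q = np.array([0.2, 0.5, 1.4, 3.0, 4.6, 5.4, 5.7, 5.8])

for name, f, p0 in [("PSO", pso, [6.0, 0.05]),
                    ("Hill5", hill5, [0.0, 6.0, 4.0, 2.0, 1.0])]:
    popt, _ = curve_fit(f, t, q, p0=p0, maxfev=10000)
    rss = np.sum((q - f(t, *popt)) ** 2)
    print(name, aic_bic(rss, len(t), len(popt)))
```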
Information Centralization of Organization Information Structures via Reports of Exceptions.
ERIC Educational Resources Information Center
Moskowitz, Herbert; Murnighan, John Keith
A team theoretic model that establishes a criterion (decision rule) for a financial institution branch to report exceptional loan requests to headquarters for action was compared to such choices made by graduate industrial management students acting as financial vice-presidents. Results showed that the loan size criterion specified by subjects was…
The cross-validated AUC for MCP-logistic regression with high-dimensional data.
Jiang, Dingfeng; Huang, Jian; Zhang, Ying
2013-10-01
We propose a cross-validated area under the receiver operating characteristic (ROC) curve (CV-AUC) criterion for tuning parameter selection for penalized methods in sparse, high-dimensional logistic regression models. We use this criterion in combination with the minimax concave penalty (MCP) method for variable selection. The CV-AUC criterion is specifically designed for optimizing the classification performance for binary outcome data. To implement the proposed approach, we derive an efficient coordinate descent algorithm to compute the MCP-logistic regression solution surface. Simulation studies are conducted to evaluate the finite sample performance of the proposed method and to compare it with existing methods, including the Akaike information criterion (AIC), Bayesian information criterion (BIC) and Extended BIC (EBIC). The model selected based on the CV-AUC criterion tends to have a larger predictive AUC and smaller classification error than those with tuning parameters selected using the AIC, BIC or EBIC. We illustrate the application of the MCP-logistic regression with the CV-AUC criterion on three microarray datasets from studies that attempt to identify genes related to cancers. Our simulation studies and data examples demonstrate that the CV-AUC is an attractive method for tuning parameter selection for penalized methods in high-dimensional logistic regression models.
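The tuning-selection idea can be sketched in a few lines. Since MCP-penalized logistic regression is not available in scikit-learn, the example below uses an L1-penalized fit as a stand-in and simply picks the penalty value with the largest cross-validated AUC; the function name and penalty grid are illustrative, not the authors' implementation.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def cv_auc_select(X, y, lambdas, folds=5):
    """Return the penalty with the largest cross-validated AUC and the AUC path."""
    aucs = []
    for lam in lambdas:
        # L1 logistic regression as a stand-in for the MCP penalty
        clf = LogisticRegression(penalty="l1", solver="liblinear", C=1.0 / lam)
        aucs.append(cross_val_score(clf, X, y, cv=folds, scoring="roc_auc").mean())
    return lambdas[int(np.argmax(aucs))], aucs
```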
Use of reflectance spectroscopy for early detection of calcium deficiency in plants
NASA Astrophysics Data System (ADS)
Li, Bingqing; Wah, Liew Oi; Asundi, Anand K.
2005-04-01
This article investigates calcium deficiency symptoms of plants grown under hydroponic conditions. Leaf reflectance data were collected from plants and then transformed to L*, a*, b* values, which provide color information about the leaves. After comparing the color information of deficient plants with that of control plants, a set of deficiency criteria was established for early detection of calcium deficiency in the plants. Calcium deficiency could be detected as early as two days from the onset of stress in mature plants when optical data were collected from terminal young leaves. Young plants subjected to calcium stress for 9 days could not be distinguished from nutrient-sufficient plants.
NASA Astrophysics Data System (ADS)
Perekhodtseva, E. V.
2012-04-01
This paper presents the results of probabilistic forecast methods for summer storms and hazardous winds over the territories of Russia and Europe. The methods use a hydrodynamic-statistical model of these phenomena. The statistical model was developed for recognition of situations involving these phenomena; for this purpose, samples of atmospheric parameter values (n = 40) were accumulated for the presence and for the absence of storm and hazardous winds. The predictor space was compressed without loss of information by a special algorithm (k=7< 24m/s, the values of 75% 29m/s or the area of the tornado and strong squalls. The probability forecast was evaluated with the Brier criterion; the result was judged successful and, for the European part of Russia, was B = 0.37. Application of the probabilistic forecast of storm and hazardous winds can mitigate economic losses when the errors of the first and second kind of a categorical storm-wind forecast are not small. Many examples of the storm-wind probability forecast are presented in this report.
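As a reference point for the verification measure mentioned above, here is a minimal sketch of the Brier score for a probabilistic yes/no forecast (the arrays are invented, not the authors' data):

```python
import numpy as np

def brier_score(forecast_prob, occurred):
    """Brier score B = mean((p_i - o_i)^2); 0 is a perfect forecast."""
    p = np.asarray(forecast_prob, dtype=float)
    o = np.asarray(occurred, dtype=float)  # 1 if storm/hazard wind occurred, else 0
    return float(np.mean((p - o) ** 2))

# Example: five forecast probabilities against what actually happened
print(brier_score([0.9, 0.2, 0.7, 0.1, 0.4], [1, 0, 1, 0, 1]))
```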
Hall, William
2017-01-14
As healthcare resources become increasingly scarce due to growing demand and stagnating budgets, the need for effective priority setting and resource allocation will become ever more critical to providing sustainable care to patients. While societal values should certainly play a part in guiding these processes, the methodology used to capture these values need not necessarily be limited to multi-criterion decision analysis (MCDA)-based processes including 'evidence-informed deliberative processes.' However, if decision-makers intend not only to incorporate the values of the public they serve into decisions but also to have those decisions enacted, consideration should be given to more direct involvement of stakeholders. Based on the examples provided by Baltussen et al, MCDA-based processes like 'evidence-informed deliberative processes' could be one way of achieving this laudable goal. © 2017 The Author(s); Published by Kerman University of Medical Sciences. This is an open-access article distributed under the terms of the Creative Commons Attribution License (http://creativecommons.org/licenses/by/4.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
An optimal strategy for functional mapping of dynamic trait loci.
Jin, Tianbo; Li, Jiahan; Guo, Ying; Zhou, Xiaojing; Yang, Runqing; Wu, Rongling
2010-02-01
As an emerging powerful approach for mapping quantitative trait loci (QTLs) responsible for dynamic traits, functional mapping models the time-dependent mean vector with biologically meaningful equations and is likely to generate biologically relevant and interpretable results. Given the autocorrelated nature of a dynamic trait, functional mapping requires models for the structure of the covariance matrix. In this article, we have provided a comprehensive set of approaches for modelling the covariance structure and incorporated each of these approaches into the framework of functional mapping. Bayesian information criterion (BIC) values are used as a model selection criterion to choose the optimal combination of submodels for the mean vector and covariance structure. In an example of leaf age growth from a rice molecular genetics project, the best submodel combination was found to be the Gaussian model for the correlation structure, a power equation of order 1 for the variance, and the power curve for the mean vector. Under this combination, several significant QTLs for leaf age growth trajectories were detected on different chromosomes. Our model can be readily used to study the genetic architecture of dynamic traits of agricultural value.
Yu, Fang; Chen, Ming-Hui; Kuo, Lynn; Talbott, Heather; Davis, John S
2015-08-07
Recently, Bayesian methods have become more popular for analyzing high-dimensional gene expression data, as they allow us to borrow information across different genes and provide powerful estimators for evaluating gene expression levels. It is crucial to develop a simple but efficient gene selection algorithm for detecting differentially expressed (DE) genes based on the Bayesian estimators. In this paper, by extending the two-criterion idea of Chen et al. (Chen M-H, Ibrahim JG, Chi Y-Y. A new class of mixture models for differential gene expression in DNA microarray data. J Stat Plan Inference. 2008;138:387-404), we propose two new gene selection algorithms for general Bayesian models and name these new methods the confident difference criterion methods. One is based on the standardized differences between two mean expression values among genes; the other adds the differences between two variances to it. The proposed confident difference criterion methods first evaluate the posterior probability of a gene having different gene expressions between competitive samples and then declare a gene to be DE if the posterior probability is large. The theoretical connection between the first proposed method, based on the means, and the Bayes factor approach proposed by Yu et al. (Yu F, Chen M-H, Kuo L. Detecting differentially expressed genes using calibrated Bayes factors. Statistica Sinica. 2008;18:783-802) is established under the normal-normal model with equal variances between two samples. The empirical performance of the proposed methods is examined and compared to that of several existing methods via simulations. The results from these simulation studies show that the proposed confident difference criterion methods outperform the existing methods when comparing gene expressions across different conditions for both microarray studies and sequence-based high-throughput studies. A real dataset is used to further demonstrate the proposed methodology. In the real data application, the confident difference criterion methods successfully identified more clinically important DE genes than the other methods. The confident difference criterion method proposed in this paper provides a new efficient approach for both microarray studies and sequence-based high-throughput studies to identify differentially expressed genes.
Mebane, Christopher A.
2006-01-01
In 2001, the U.S. Environmental Protection Agency (EPA) released updated aquatic life criteria for cadmium. Since then, additional data on the effects of cadmium to aquatic life have become available from studies supported by the EPA, Idaho Department of Environmental Quality (IDEQ), and the U.S. Geological Survey, among other sources. Updated data on the effects of cadmium to aquatic life were compiled and reviewed and low-effect concentrations were estimated. Low-effect values were calculated using EPA's guidelines for deriving numerical national water-quality criteria for the protection of aquatic organisms and their uses. Data on the short-term (acute) effects of cadmium on North American freshwater species that were suitable for criteria derivation were located for 69 species representing 57 genera and 33 families. For longer-term (chronic) effects of cadmium on North American freshwater species, suitable data were located for 28 species representing 21 genera and 17 families. Both the acute and chronic toxicity of cadmium were dependent on the hardness of the test water. Hardness-toxicity regressions were developed for both acute and chronic datasets so that effects data from different tests could be adjusted to a common water hardness. Hardness-adjusted effects values were pooled to obtain species and genus mean acute and chronic values, which then were ranked by their sensitivity to cadmium. The four most sensitive genera to acute exposures were, in order of increasing cadmium resistance, Oncorhynchus (Pacific trout and salmon), Salvelinus ('char' trout), Salmo (Atlantic trout and salmon), and Cottus (sculpin). The four most sensitive genera to chronic exposures were Hyalella (amphipod), Cottus, Gammarus (amphipod), and Salvelinus. Using the updated datasets, hardness-dependent criteria equations were calculated for acute and chronic exposures to cadmium. At a hardness of 50 mg/L as calcium carbonate, the criterion maximum concentration (CMC, or 'acute' criterion) was calculated as 0.75 µg/L cadmium using the hardness-dependent equation CMC = e^(0.8403 × ln(hardness) − 3.572), where 'ln(hardness)' is the natural logarithm of the water hardness. Likewise, the criterion continuous concentration (CCC, or 'chronic' criterion) was calculated as 0.37 µg/L cadmium using the hardness-dependent equation CCC = e^(0.6247 × ln(hardness) − 3.384) × (1.101672 − 0.041838 × ln(hardness)). Using data that were independent of those used to derive the criteria, the criteria concentrations were evaluated to estimate whether adverse effects were expected to the biological integrity of natural waters or to selected species listed as threatened or endangered. One species was identified that would not be fully protected by the derived CCC, the amphipod Hyalella azteca. Exposure to CCC conditions likely would lead to population decreases in Hyalella azteca, the food web consequences of which probably would be slight if macroinvertebrate communities were otherwise diverse. Some data also suggested adverse behavioral changes are possible in fish following long-term exposures to low levels of cadmium, particularly in char (genus Salvelinus). Although ambiguous, these data indicate a need to periodically review the literature on behavioral changes in fish following metals exposure as more information becomes available. Most data reviewed indicated that criteria conditions were unlikely to contribute to overt adverse effects to either biological integrity or listed species.
If elevated cadmium concentrations that approach the chronic criterion values occur in ambient waters, careful biological monitoring of invertebrate and fish assemblages would be prudent to validate the prediction that the assemblages would not be adversely affected by cadmium at criterion concentrations.
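To make the hardness dependence concrete, here is a small sketch that evaluates the two criterion equations quoted above (the function name is mine; at a hardness of 50 mg/L it reproduces the reported ≈0.75 and ≈0.37 µg/L values):

```python
import math

def cadmium_criteria(hardness_mg_per_l):
    """Acute (CMC) and chronic (CCC) cadmium criteria, in µg/L, from the
    hardness-dependent equations quoted in the abstract."""
    ln_h = math.log(hardness_mg_per_l)
    cmc = math.exp(0.8403 * ln_h - 3.572)
    ccc = math.exp(0.6247 * ln_h - 3.384) * (1.101672 - 0.041838 * ln_h)
    return cmc, ccc

print(cadmium_criteria(50.0))  # ≈ (0.75, 0.37)
```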
Follow Your Heart: How Is Willingness to Pay Formed under Multiple Anchors?
Lin, Chien-Huang; Chen, Ming
2017-01-01
In sales, a common promotional tactic is to supplement a required purchase (i.e., a focal product) by offering a free or discounted product (i.e., a supplementary product). The present research examines the contextual factors driving consumer evaluations of the supplementary product after the promotion has been terminated. Two experiments are used to demonstrate that consumers use multiple anchors to determine the value of a supplementary product. Consumers use other types of price information, such as the internal reference price (IRP), promotional price, and original price of the supplementary product, as anchors to adjust their willingness to pay. Among the multiple anchors, the consumer’s IRP is not only the crucial anchor to estimate the willingness to pay but also the criterion to determine whether other price information can serve as anchors. Price information, such as the promotional and original price of the supplementary product, which is higher (lower) than the IRP, will increase (decrease) the willingness to pay. However, these anchors are only employed when the price information is considered to be plausible. Assimilation and contrast effects occur when the IRP is used by consumers as a criterion to judge the reasonableness of other anchors. When the external price information belongs (does not belong) to consumers’ distribution of IRP, assimilation (contrast) effects occur, and consumers will regard the external reference price (ERP) to be a plausible (implausible) price. Limitations and future avenues for research are also discussed. PMID:29312098
Tateishi, Seiichiro; Watase, Mariko; Fujino, Yoshihisa; Mori, Koji
2016-01-01
In Japan, employee fitness for work is determined by annual medical examinations. It may be possible to reduce the variability in the results of work fitness determination if there is consensus among experts regarding consideration of limitation of work on the basis of a single parameter. Consensus building was attempted among 104 occupational physicians by employing a 3-round Delphi method. Among the medical examination parameters for which at least 50% of participants agreed in the 3rd round of the survey that the parameter would independently merit consideration for limitation of work, the values proposed as criterion values that trigger consideration of limitation of work were sought. Parameters, along with their most frequently proposed criterion values, were defined in the study group meeting as parameters for which consensus was reached. Consensus was obtained for 8 parameters: systolic blood pressure 180 mmHg (86.6%), diastolic blood pressure 110 mmHg (85.9%), postprandial plasma glucose 300 mg/dl (76.9%), fasting plasma glucose 200 mg/dl (69.1%), Cre 2.0 mg/dl (67.2%), HbA1c (JDS) 10% (62.3%), ALT 200 U/l (61.6%), and Hb 8 g/l (58.5%). To support physicians who give advice to employers about work-related measures based on the results of general medical examinations of employees, expert consensus information was obtained that can serve as background material for making judgements. It is expected that the use of this information will facilitate the ability to take appropriate measures after medical examination of employees.
N'gattia, A K; Coulibaly, D; Nzussouo, N Talla; Kadjo, H A; Chérif, D; Traoré, Y; Kouakou, B K; Kouassi, P D; Ekra, K D; Dagnan, N S; Williams, T; Tiembré, I
2016-09-13
In temperate regions, influenza epidemics occur in the winter and correlate with certain climatological parameters. In African tropical regions, the effects of climatological parameters on influenza epidemics are not well defined. This study aims to identify and model the effects of climatological parameters on seasonal influenza activity in Abidjan, Cote d'Ivoire. We studied the effects of weekly rainfall, humidity, and temperature on laboratory-confirmed influenza cases in Abidjan from 2007 to 2010. We used the Box-Jenkins method with the autoregressive integrated moving average (ARIMA) process to create models using data from 2007-2010 and to assess the predictive value of the best model on data from 2011 to 2012. The weekly number of influenza cases showed significant cross-correlation with certain prior weeks for both rainfall and relative humidity. The best fitting multivariate model (ARIMAX(2,0,0)_RF) included the number of influenza cases during 1 week and 2 weeks prior, and the rainfall during the current week and 5 weeks prior. The performance of this model showed an increase of >3% for the Akaike Information Criterion (AIC) and 2.5% for the Bayesian Information Criterion (BIC) compared to the reference univariate ARIMA(2,0,0). The prediction of the weekly number of influenza cases during 2011-2012 with the best fitting multivariate model (ARIMAX(2,0,0)_RF) showed that the observed values were within the 95% confidence interval of the predicted values during 97 of 104 weeks. Including rainfall improves the performance of the fitted and predictive models. The timing of influenza in Abidjan can be partially explained by the influence of rainfall, in a setting with little change in temperature throughout the year. These findings can help clinicians anticipate influenza cases during the rainy season by implementing preventive measures.
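A compact way to reproduce this kind of ARIMAX comparison is sketched below with statsmodels; the column names, lag choices, and AIC/BIC comparison are illustrative assumptions, not the authors' code.

```python
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

def compare_arima_arimax(cases: pd.Series, rain: pd.Series):
    """Fit ARIMA(2,0,0) and an ARIMAX(2,0,0) with current and 5-week-lagged
    rainfall, and return their AIC/BIC for comparison."""
    exog = pd.DataFrame({"rain_t": rain, "rain_lag5": rain.shift(5)}).dropna()
    endog = cases.loc[exog.index]
    arima = SARIMAX(endog, order=(2, 0, 0)).fit(disp=False)
    arimax = SARIMAX(endog, exog=exog, order=(2, 0, 0)).fit(disp=False)
    return {"ARIMA": (arima.aic, arima.bic), "ARIMAX_RF": (arimax.aic, arimax.bic)}
```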
Fatigue and Fracture-Toughness Characterization of SAW and SMA A537 Class I Ship-Steel Weldments.
1981-12-01
Charpy criterion and proposed NDT-DT criterion of Rolfe. Recommendations are made and further research is suggested to help clarify the assessment of ... acceptable performance at -60°F. Likewise, at -60°F the NDT and DT data for these weldments marginally exceed the criteria proposed by Rolfe when the ... exceed the CVN values equivalent to the 5/8 DT values required by Rolfe. The 5/8-inch dynamic-tear specimen is not recommended as a quality-control test
NASA Astrophysics Data System (ADS)
Rashidi Moghaddam, M.; Ayatollahi, M. R.; Berto, F.
2018-01-01
The values of mode II fracture toughness reported in the literature for several rocks are studied theoretically using a modified criterion based on the strain energy density averaged over a control volume around the crack tip. The modified criterion takes into account the effect of the T-stress in addition to the singular terms of the stresses/strains. The experimental results relate to mode II fracture tests performed on semicircular bend and Brazilian disk specimens. There is good agreement between the theoretical predictions of the generalized averaged strain energy density criterion and the experimental results. The theoretical results reveal that the value of mode II fracture toughness is affected by the size of the control volume around the crack tip and also by the magnitude and sign of the T-stress.
Newgard, Craig D; Kampp, Michael; Nelson, Maria; Holmes, James F; Zive, Dana; Rea, Thomas; Bulger, Eileen M; Liao, Michael; Sherck, John; Hsia, Renee Y; Wang, N Ewen; Fleischman, Ross J; Barton, Erik D; Daya, Mohamud; Heineman, John; Kuppermann, Nathan
2012-05-01
"Emergency medical services (EMS) provider judgment" was recently added as a field triage criterion to the national guidelines, yet its predictive value and real world application remain unclear. We examine the use and independent predictive value of EMS provider judgment in identifying seriously injured persons. We analyzed a population-based retrospective cohort, supplemented by qualitative analysis, of injured children and adults evaluated and transported by 47 EMS agencies to 94 hospitals in five regions across the Western United States from 2006 to 2008. We used logistic regression models to evaluate the independent predictive value of EMS provider judgment for Injury Severity Score ≥ 16. EMS narratives were analyzed using qualitative methods to assess and compare common themes for each step in the triage algorithm, plus EMS provider judgment. 213,869 injured patients were evaluated and transported by EMS over the 3-year period, of whom 41,191 (19.3%) met at least one of the field triage criteria. EMS provider judgment was the most commonly used triage criterion (40.0% of all triage-positive patients; sole criterion in 21.4%). After accounting for other triage criteria and confounders, the adjusted odds ratio of Injury Severity Score ≥ 16 for EMS provider judgment was 1.23 (95% confidence interval, 1.03-1.47), although there was variability in predictive value across sites. Patients meeting EMS provider judgment had concerning clinical presentations qualitatively similar to those meeting mechanistic and other special considerations criteria. Among this multisite cohort of trauma patients, EMS provider judgment was the most commonly used field trauma triage criterion, independently associated with serious injury, and useful in identifying high-risk patients missed by other criteria. However, there was variability in predictive value between sites.
Link, William; Sauer, John R.
2016-01-01
The analysis of ecological data has changed in two important ways over the last 15 years. The development and easy availability of Bayesian computational methods has allowed and encouraged the fitting of complex hierarchical models. At the same time, there has been increasing emphasis on acknowledging and accounting for model uncertainty. Unfortunately, the ability to fit complex models has outstripped the development of tools for model selection and model evaluation: familiar model selection tools such as Akaike's information criterion and the deviance information criterion are widely known to be inadequate for hierarchical models. In addition, little attention has been paid to the evaluation of model adequacy in context of hierarchical modeling, i.e., to the evaluation of fit for a single model. In this paper, we describe Bayesian cross-validation, which provides tools for model selection and evaluation. We describe the Bayesian predictive information criterion and a Bayesian approximation to the BPIC known as the Watanabe-Akaike information criterion. We illustrate the use of these tools for model selection, and the use of Bayesian cross-validation as a tool for model evaluation, using three large data sets from the North American Breeding Bird Survey.
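For readers unfamiliar with the Watanabe-Akaike information criterion mentioned here, a minimal computation from posterior draws looks like the sketch below; the S×N matrix of pointwise log-likelihoods is assumed to come from your own MCMC output.

```python
import numpy as np

def waic(log_lik):
    """WAIC from an (S posterior draws x N data points) matrix of pointwise
    log-likelihoods; smaller values indicate better expected predictive fit."""
    log_lik = np.asarray(log_lik)
    lppd = np.sum(np.log(np.mean(np.exp(log_lik), axis=0)))   # log pointwise predictive density
    p_waic = np.sum(np.var(log_lik, axis=0, ddof=1))          # effective number of parameters
    return -2.0 * (lppd - p_waic)
```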
Gomez-Lazaro, Emilio; Bueso, Maria C.; Kessler, Mathieu; ...
2016-02-02
Here, the Weibull probability distribution has been widely applied to characterize wind speeds for wind energy resources. Wind power generation modeling is different, however, due in particular to power curve limitations, wind turbine control methods, and transmission system operation requirements. These differences are even greater for aggregated wind power generation in power systems with high wind penetration. Consequently, models based on a single Weibull component can provide poor characterizations for aggregated wind power generation. With this aim, the present paper focuses on discussing Weibull mixtures to characterize the probability density function (PDF) for aggregated wind power generation. PDFs of wind power data are first classified according to hourly and seasonal patterns. The selection of the number of components in the mixture is analyzed through two well-known criteria: the Akaike information criterion (AIC) and the Bayesian information criterion (BIC). Finally, the optimal number of Weibull components for maximum likelihood is explored for the defined patterns, including the estimated weight, scale, and shape parameters. Results show that multi-Weibull models are more suitable to characterize aggregated wind power data due to the impact of distributed generation, the variety of wind speed values and wind power curtailment.
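The component-count selection step can be illustrated without the full Weibull-mixture machinery. scikit-learn does not ship a Weibull mixture, so the sketch below uses a Gaussian mixture purely as a stand-in to show how AIC and BIC pick the number of components; it is not the paper's model.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def select_mixture_order(power_values, max_components=5):
    """Return the component counts minimizing AIC and BIC over 1..max_components."""
    x = np.asarray(power_values, dtype=float).reshape(-1, 1)
    aic, bic = [], []
    for k in range(1, max_components + 1):
        gm = GaussianMixture(n_components=k, n_init=3, random_state=0).fit(x)
        aic.append(gm.aic(x))
        bic.append(gm.bic(x))
    return int(np.argmin(aic)) + 1, int(np.argmin(bic)) + 1
```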
Roberts, Steven; Martin, Michael A
2006-12-15
The shape of the dose-response relation between particulate matter air pollution and mortality is crucial for public health assessment, and departures of this relation from linearity could have important regulatory consequences. A number of investigators have studied the shape of the particulate matter-mortality dose-response relation and concluded that the relation could be adequately described by a linear model. Some of these researchers examined the hypothesis of linearity by comparing Akaike's Information Criterion (AIC) values obtained under linear, piecewise linear, and spline alternative models. However, at the current time, the efficacy of the AIC in this context has not been assessed. The authors investigated AIC as a means of comparing competing dose-response models, using data from Cook County, Illinois, for the period 1987-2000. They found that if nonlinearities exist, the AIC is not always successful in detecting them. In a number of the scenarios considered, AIC was equivocal, picking the correct simulated dose-response model about half of the time. These findings suggest that further research into the shape of the dose-response relation using alternative model selection criteria may be warranted.
Han, Sanghoon; Dobbins, Ian G.
2009-01-01
Recognition models often assume that subjects use specific evidence values (decision criteria) to adaptively parse continuous memory evidence into response categories (e.g., “old” or “new”). Although explicit pre-test instructions influence criterion placement, these criteria appear extremely resistant to change once testing begins. We tested criterion sensitivity to local feedback using a novel, biased feedback technique designed to tacitly encourage certain errors by indicating they were correct choices. Experiment 1 demonstrated that fully correct feedback had little effect on criterion placement, whereas biased feedback during Experiments 2 and 3 yielded prominent, durable, and adaptive criterion shifts, with observers reporting they were unaware of the manipulation in Experiment 3. These data suggest recognition criteria can be easily modified during testing through a form of feedback learning that operates independent of stimulus characteristics and observer awareness of the nature of the manipulation. This mechanism may be fundamentally different than criterion shifts following explicit instructions and warnings, or shifts linked to manipulations of stimulus characteristics combined with feedback highlighting those manipulations. PMID:18604954
Validation of the peak bilirubin criterion for outcome after partial hepatectomy.
van Mierlo, Kim M C; Lodewick, Toine M; Dhar, Dipok K; van Woerden, Victor; Kurstjens, Ralph; Schaap, Frank G; van Dam, Ronald M; Vyas, Soumil; Malagó, Massimo; Dejong, Cornelis H C; Olde Damink, Steven W M
2016-10-01
Postoperative liver failure (PLF) is a dreaded complication after partial hepatectomy. The peak bilirubin criterion (>7.0 mg/dL or ≥120 μmol/L) is used to define PLF. This study aimed to validate the peak bilirubin criterion as postoperative risk indicator for 90-day liver-related mortality. Characteristics of 956 consecutive patients who underwent partial hepatectomy at the Maastricht University Medical Centre or Royal Free London between 2005 and 2012 were analyzed by uni- and multivariable analyses with odds ratios (OR) and 95% confidence intervals (95%CI). Thirty-five patients (3.7%) met the postoperative peak bilirubin criterion at median day 19 with a median bilirubin level of 183 [121-588] μmol/L. Sensitivity and specificity for liver-related mortality after major hepatectomy were 41.2% and 94.6%, respectively. The positive predictive value was 22.6%. Predictors of liver-related mortality were the peak bilirubin criterion (p < 0.001, OR = 15.9 [95%CI 5.2-48.7]), moderate-severe steatosis and fibrosis (p = 0.013, OR = 8.5 [95%CI 1.6-46.6]), ASA 3-4 (p = 0.047, OR = 3.0 [95%CI 1.0-8.8]) and age (p = 0.044, OR = 1.1 [95%CI 1.0-1.1]). The peak bilirubin criterion has a low sensitivity and positive predictive value for 90-day liver-related mortality after major hepatectomy. Copyright © 2016 International Hepato-Pancreato-Biliary Association Inc. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Tong, Hengmao
2012-03-01
Zheng et al (Zheng and Wang, 2004; Zheng et al., 2011) proposed a new mechanism for ductile deformation zone formation which is related to the effective moment instead of the shear stress, and in which the deformation zone develops along the plane of maximum effective moment. The mathematical expression of the maximum effective moment (the criterion of maximum effective moment, abbreviated as the MEM criterion; Zheng and Wang, 2004; Zheng et al., 2011) is Meff = 0.5(σ1 − σ3)L sin2α sinα, where σ1 − σ3 is the yield strength of a material or rock, L is the unit length (of cleavage) in the σ1 direction, and α is the angle between σ1 and a certain plane. The effective moment reaches its maximum value when α is ±54.7°, and deformation zones tend to appear in pairs with a conjugate angle of 2α, 109.4°, facing σ1. There is no remarkable drop of Meff from the maximum value within the range of 54.7° ± 10°, which is favorable for the formation of ductile deformation zones. As a result, the origin of low-angle normal faults, high-angle reverse faults and certain types of conjugate strike-slip faults, which are incompatible with the Mohr-Coulomb criterion, can be reasonably explained with the MEM criterion (Zheng et al., 2011). Furthermore, many natural and experimental cases have been found or collected that support the criterion.
Wiggins, Paul A
2015-07-21
This article describes the application of a change-point algorithm to the analysis of stochastic signals in biological systems whose underlying state dynamics consist of transitions between discrete states. Applications of this analysis include molecular-motor stepping, fluorophore bleaching, electrophysiology, particle and cell tracking, detection of copy number variation by sequencing, tethered-particle motion, etc. We present a unified approach to the analysis of processes whose noise can be modeled by Gaussian, Wiener, or Ornstein-Uhlenbeck processes. To fit the model, we exploit explicit, closed-form algebraic expressions for maximum-likelihood estimators of model parameters and estimated information loss of the generalized noise model, which can be computed extremely efficiently. We implement change-point detection using the frequentist information criterion (which, to our knowledge, is a new information criterion). The frequentist information criterion specifies a single, information-based statistical test that is free from ad hoc parameters and requires no prior probability distribution. We demonstrate this information-based approach in the analysis of simulated and experimental tethered-particle-motion data. Copyright © 2015 Biophysical Society. Published by Elsevier Inc. All rights reserved.
Model selection for multi-component frailty models.
Ha, Il Do; Lee, Youngjo; MacKenzie, Gilbert
2007-11-20
Various frailty models have been developed and are now widely used for analysing multivariate survival data. It is therefore important to develop an information criterion for model selection. However, in frailty models there are several alternative ways of forming a criterion and the particular criterion chosen may not be uniformly best. In this paper, we study an Akaike information criterion (AIC) on selecting a frailty structure from a set of (possibly) non-nested frailty models. We propose two new AIC criteria, based on a conditional likelihood and an extended restricted likelihood (ERL) given by Lee and Nelder (J. R. Statist. Soc. B 1996; 58:619-678). We compare their performance using well-known practical examples and demonstrate that the two criteria may yield rather different results. A simulation study shows that the AIC based on the ERL is recommended, when attention is focussed on selecting the frailty structure rather than the fixed effects.
Secret Sharing of a Quantum State.
Lu, He; Zhang, Zhen; Chen, Luo-Kan; Li, Zheng-Da; Liu, Chang; Li, Li; Liu, Nai-Le; Ma, Xiongfeng; Chen, Yu-Ao; Pan, Jian-Wei
2016-07-15
Secret sharing of a quantum state, or quantum secret sharing, in which a dealer wants to share a certain amount of quantum information with a few players, has wide applications in quantum information. The critical criterion in a threshold secret sharing scheme is confidentiality: with less than the designated number of players, no information can be recovered. Furthermore, in a quantum scenario, one additional critical criterion exists: the capability of sharing entangled and unknown quantum information. Here, by employing a six-photon entangled state, we demonstrate a quantum threshold scheme, where the shared quantum secrecy can be efficiently reconstructed with a state fidelity as high as 93%. By observing that any one or two parties cannot recover the secrecy, we show that our scheme meets the confidentiality criterion. Meanwhile, we also demonstrate that entangled quantum information can be shared and recovered via our setting, which shows that our implemented scheme is fully quantum. Moreover, our experimental setup can be treated as a decoding circuit of the five-qubit quantum error-correcting code with two erasure errors.
Method for Automatic Selection of Parameters in Normal Tissue Complication Probability Modeling.
Christophides, Damianos; Appelt, Ane L; Gusnanto, Arief; Lilley, John; Sebag-Montefiore, David
2018-07-01
To present a fully automatic method to generate multiparameter normal tissue complication probability (NTCP) models and compare its results with those of a published model, using the same patient cohort. Data were analyzed from 345 rectal cancer patients treated with external radiation therapy to predict the risk of patients developing grade 1 or ≥2 cystitis. In total, 23 clinical factors were included in the analysis as candidate predictors of cystitis. Principal component analysis was used to decompose the bladder dose-volume histogram into 8 principal components, explaining more than 95% of the variance. The data set of clinical factors and principal components was divided into training (70%) and test (30%) data sets, with the training data set used by the algorithm to compute an NTCP model. The first step of the algorithm was to obtain a bootstrap sample, followed by multicollinearity reduction using the variance inflation factor and genetic algorithm optimization to determine an ordinal logistic regression model that minimizes the Bayesian information criterion. The process was repeated 100 times, and the model with the minimum Bayesian information criterion was recorded on each iteration. The most frequent model was selected as the final "automatically generated model" (AGM). The published model and AGM were fitted on the training data sets, and the risk of cystitis was calculated. The 2 models had no significant differences in predictive performance, both for the training and test data sets (P value > .05) and found similar clinical and dosimetric factors as predictors. Both models exhibited good explanatory performance on the training data set (P values > .44), which was reduced on the test data sets (P values < .05). The predictive value of the AGM is equivalent to that of the expert-derived published model. It demonstrates potential in saving time, tackling problems with a large number of parameters, and standardizing variable selection in NTCP modeling. Crown Copyright © 2018. Published by Elsevier Inc. All rights reserved.
NASA Technical Reports Server (NTRS)
Kubala, A.; Black, D.; Szebehely, V.
1993-01-01
A comparison is made between the stability criteria of Hill and that of Laplace to determine the stability of outer planetary orbits encircling binary stars. The restricted, analytically determined results of Hill's method by Szebehely and coworkers and the general, numerically integrated results of Laplace's method by Graziani and Black (1981) are compared for varying values of the mass parameter mu. For mu = 0 to 0.15, the closest orbit (lower limit of radius) an outer planet in a binary system can have and still remain stable is determined by Hill's stability criterion. For mu greater than 0.15, the critical radius is determined by Laplace's stability criterion. It appears that the Graziani-Black stability criterion describes the critical orbit within a few percent for all values of mu.
Criteria for determining a basic health services package. Recent developments in The Netherlands.
Stolk, E A; Poley, M J
2005-03-01
The criterion of medical need figures prominently in the Dutch model for reimbursement decisions as well as in many international models for health care priority setting. Nevertheless the conception of need remains too vague and general to be applied successfully in priority decisions. This contribution explores what is wrong with the proposed definitions of medical need and identifies features in the decision-making process that inhibit implementation and usefulness of this criterion. In contrast to what is commonly assumed, the problem is not so much a failure to understand the nature of the medical need criterion and the value judgments involved. Instead the problem seems to be a mismatch between the information regarding medical need and the way in which these concerns are incorporated into policy models. Criteria--medical need, as well as other criteria such as effectiveness and cost-effectiveness--are usually perceived as "hurdles," and each intervention can pass or fail assessment on the basis of each criterion and therefore be included or excluded from public funding. These models fail to understand that choices are not so much between effective and ineffective treatments, or necessary and unnecessary ones. Rather, choices are often between interventions that are somewhat effective and/or needed. Evaluation of such services requires a holistic approach and not a sequence of fail or pass judgments. To improve applicability of criteria that pertain to medical need we therefore suggest further development of these criteria beyond their original binary meaning and propose meaningful ways in which these criteria can be integrated into policy decisions.
Laurenson, Yan C S M; Kyriazakis, Ilias; Bishop, Stephen C
2013-10-18
Estimated breeding values (EBV) for faecal egg count (FEC) and genetic markers for host resistance to nematodes may be used to identify resistant animals for selective breeding programmes. Similarly, targeted selective treatment (TST) requires the ability to identify the animals that will benefit most from anthelmintic treatment. A mathematical model was used to combine the concepts and evaluate the potential of using genetic-based methods to identify animals for a TST regime. EBVs obtained by genomic prediction were predicted to be the best determinant criterion for TST in terms of the impact on average empty body weight and average FEC, whereas pedigree-based EBVs for FEC were predicted to be marginally worse than using phenotypic FEC as a determinant criterion. Whilst each method has financial implications, if the identification of host resistance is incorporated into wider genomic selection indices or selective breeding programmes, then genetic or genomic information may plausibly be included in TST regimes. Copyright © 2013 Elsevier B.V. All rights reserved.
Comparison of Nurse Staffing Measurements in Staffing-Outcomes Research.
Park, Shin Hye; Blegen, Mary A; Spetz, Joanne; Chapman, Susan A; De Groot, Holly A
2015-01-01
Investigators have used a variety of operational definitions of nursing hours of care in measuring nurse staffing for health services research. However, little is known about which approach is best for nurse staffing measurement. To examine whether various nursing hours measures yield different model estimations when predicting patient outcomes and to determine the best method to measure nurse staffing based on the model estimations. We analyzed data from the University HealthSystem Consortium for 2005. The sample comprised 208 hospital-quarter observations from 54 hospitals, representing information on 971 adult-care units and about 1 million inpatient discharges. We compared regression models using different combinations of staffing measures based on productive/nonproductive and direct-care/indirect-care hours. Akaike Information Criterion and Bayesian Information Criterion were used in the assessment of staffing measure performance. The models that included the staffing measure calculated from productive hours by direct-care providers were best, in general. However, the Akaike Information Criterion and Bayesian Information Criterion differences between models were small, indicating that distinguishing nonproductive and indirect-care hours from productive direct-care hours does not substantially affect the approximation of the relationship between nurse staffing and patient outcomes. This study is the first to explicitly evaluate various measures of nurse staffing. Productive hours by direct-care providers are the strongest measure related to patient outcomes and thus should be preferred in research on nurse staffing and patient outcomes.
Shoji, T; Sakurai, Y; Chihara, E; Nishikawa, S; Omae, K
2009-06-01
To better understand the reference values and adequate discrimination values of colour vision function with described quantitative systems for the Lanthony desaturated D-15 panel (D-15DS). A total of 1042 Japanese male officials were interviewed and underwent testing using Ishihara pseudoisochromatic plates, standard pseudoisochromatic plates part 2, and the D-15DS. The Farnsworth-Munsell (F-M) 100-hue test and the criteria of Verriest et al were used as definitive tests. Outcomes of the D-15DS were calculated using Bowman's Colour Confusion Index (CCI). The study design included two criteria. In criterion A, subjects with current or past ocular disease and a best-corrected visual acuity less than 0.7 on a decimal visual acuity chart were excluded. In criterion B, among subjects who satisfied criterion A, those who had a congenital colour sense anomaly were excluded. Overall, the 90th percentile (95th percentile) CCI values for criteria A and B in the worse eye were 1.70 (1.95) and 1.59 (1.73), respectively. In subjects satisfying criterion B, the area under the receiver operating characteristic curve was 0.951 (95% confidence interval, 0.931-0.971). The CCI discrimination values of 1.52 or 1.63 showed 90.3% sensitivity and 90% specificity, or 71.5% sensitivity and 95% specificity, respectively, for discriminating acquired colour vision impairment (ACVI). We provided the 90th and 95th percentiles in a young to middle-aged healthy population. The CCI is in good agreement with the diagnosis of ACVI. Our results could be helpful for using D-15DS for screening purposes.
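The cut-off analysis described here is straightforward to emulate: the sketch below computes the ROC AUC and then finds the threshold giving the best sensitivity at a required specificity (the variable names and the 90% specificity target are illustrative, not taken from the study data):

```python
import numpy as np
from sklearn.metrics import roc_auc_score, roc_curve

def screening_cutoff(scores, has_impairment, min_specificity=0.90):
    """ROC AUC plus the threshold giving the best sensitivity at >= the
    requested specificity (higher scores assumed to indicate impairment)."""
    auc = roc_auc_score(has_impairment, scores)
    fpr, tpr, thresholds = roc_curve(has_impairment, scores)
    eligible = np.where(1.0 - fpr >= min_specificity)[0]
    best = eligible[np.argmax(tpr[eligible])]
    return auc, thresholds[best], tpr[best], 1.0 - fpr[best]
```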
Disputes over moral status: philosophy and science in the future of bioethics.
Bortolotti, Lisa
2007-06-01
Various debates in bioethics have been focused on whether non-persons, such as marginal humans or non-human animals, deserve respectful treatment. It has been argued that, where we cannot agree on whether these individuals have moral status, we might agree that they have symbolic value and ascribe to them moral value in virtue of their symbolic significance. In the paper I resist the suggestion that symbolic value is relevant to ethical disputes in which the respect for individuals with no intrinsic moral value is in conflict with the interests of individuals with intrinsic moral value. I then turn to moral status and discuss the suitability of personhood as a criterion. There some desiderata for a criterion for moral status: it should be applicable on the basis of our current scientific knowledge; it should have a solid ethical justification; and it should be in line with some of our moral intuitions and social practices. Although it highlights an important connection between the possession of some psychological properties and eligibility for moral status, the criterion of personhood does not meet the desiderata above. I suggest that all intentional systems should be credited with moral status in virtue of having preferences and interests that are relevant to their well-being.
[Criterion Validity of the German Version of the CES-D in the General Population].
Jahn, Rebecca; Baumgartner, Josef S; van den Nest, Miriam; Friedrich, Fabian; Alexandrowicz, Rainer W; Wancata, Johannes
2018-04-17
The "Center of Epidemiologic Studies - Depression scale" (CES-D) is a well-known screening tool for depression. Until now the criterion validity of the German version of the CES-D was not investigated in a sample of the adult general population. 508 study participants of the Austrian general population completed the CES-D. ICD-10 diagnoses were established by using the Schedules for Clinical Assessment in Neuropsychiatry (SCAN). Receiver Operating Characteristics (ROC) analysis was conducted. Possible gender differences were explored. Overall discriminating performance of the CES-D was sufficient (ROC-AUC 0,836). Using the traditional cut-off values of 15/16 and 21/22 respectively the sensitivity was 43.2 % and 32.4 %, respectively. The cut-off value developed on the basis of our sample was 9/10 with a sensitivity of 81.1 % und a specificity of 74.3 %. There were no significant gender differences. This is the first study investigating the criterion validity of the German version of the CES-D in the general population. The optimal cut-off values yielded sufficient sensitivity and specificity, comparable to the values of other screening tools. © Georg Thieme Verlag KG Stuttgart · New York.
Pamukçu, Esra; Bozdogan, Hamparsum; Çalık, Sinan
2015-01-01
Gene expression data typically are large, complex, and highly noisy. Their dimension is high with several thousand genes (i.e., features) but with only a limited number of observations (i.e., samples). Although the classical principal component analysis (PCA) method is widely used as a first standard step in dimension reduction and in supervised and unsupervised classification, it suffers from several shortcomings in the case of data sets involving undersized samples, since the sample covariance matrix degenerates and becomes singular. In this paper we address these limitations within the context of probabilistic PCA (PPCA) by introducing and developing a new and novel approach using maximum entropy covariance matrix and its hybridized smoothed covariance estimators. To reduce the dimensionality of the data and to choose the number of probabilistic PCs (PPCs) to be retained, we further introduce and develop celebrated Akaike's information criterion (AIC), consistent Akaike's information criterion (CAIC), and the information theoretic measure of complexity (ICOMP) criterion of Bozdogan. Six publicly available undersized benchmark data sets were analyzed to show the utility, flexibility, and versatility of our approach with hybridized smoothed covariance matrix estimators, which do not degenerate to perform the PPCA to reduce the dimension and to carry out supervised classification of cancer groups in high dimensions. PMID:25838836
Validation of Cost-Effectiveness Criterion for Evaluating Noise Abatement Measures
DOT National Transportation Integrated Search
1999-04-01
This project will provide the Texas Department of Transportation (TxDOT)with information about the effects of the current cost-effectiveness criterion. The project has reviewed (1) the cost-effectiveness criteria used by other states, (2) the noise b...
Powell, Cormac; Carson, Brian P; Dowd, Kieran P; Donnelly, Alan E
2016-09-21
Activity monitors such as the SenseWear Pro3 (SWP3) and the activPAL3 Micro (aP3M) are regularly used by researchers and practitioners to provide estimates of the metabolic cost (METs) of activities in free-living settings. The purpose of this study is to examine the accuracy of the MET predictions from the SWP3 and the aP3M compared to the criterion standard MET values from indirect calorimetry. Fifty-six participants (mean age: 39.9 (±11.5), 25M/31F) performed eight activities (four daily living, three ambulatory and one cycling), while simultaneously wearing a SWP3, aP3M and the Cosmed K4B2 (K4B2) mobile metabolic unit. Paired samples t-tests were used to examine differences between device-predicted METs and criterion METs. Bland-Altman plots were constructed to examine the mean bias and limits of agreement for predicted METs compared to criterion METs. SWP3 predicted MET values were significantly different from the K4B2 for each activity (p ⩽ 0.004), excluding sweeping (p = 0.122). aP3M predicted MET values were significantly different (p < 0.001) from the K4B2 for each activity. When examining the activities collectively, both devices underestimated activity intensity (0.20 METs (SWP3), 0.95 METs (aP3M)). The greatest mean bias for the SWP3 was for cycling (-3.25 METs), with jogging (-5.16 METs) producing the greatest mean bias for the aP3M. All of the activities (excluding SWP3 sweeping) were significantly different from the criterion measure. Although the SWP3 predicted METs are more accurate than their aP3M equivalent, the predicted MET values from both devices are significantly different from the criterion measure for the majority of activities.
Study of DNA binding sites using the Rényi parametric entropy measure.
Krishnamachari, A; moy Mandal, Vijnan; Karmeshu
2004-04-07
Shannon's definition of uncertainty, or surprisal, has been applied extensively to measure the information content of aligned DNA sequences and to characterize DNA binding sites. In contrast to Shannon's uncertainty, this study investigates the applicability and suitability of a parametric uncertainty measure due to Rényi. It is observed that this measure also provides results in agreement with Shannon's measure, pointing to its utility in analysing DNA binding site regions. To facilitate the comparison between these uncertainty measures, a dimensionless quantity called "redundancy" has been employed. It is found that Rényi's measure at low parameter values delineates binding sites (binding regions) better than Shannon's measure. The critical value of the parameter is chosen with an outlier criterion.
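To see what the two measures look like in practice, the sketch below computes the Shannon and Rényi (order α) entropies, in bits, for one column of an aligned set of binding-site sequences, plus the Shannon redundancy relative to the 2-bit DNA maximum; the example column is invented.

```python
import numpy as np
from collections import Counter

def column_entropies(column, alpha=0.5):
    """Shannon and Rényi (order alpha != 1) entropies in bits for one alignment column."""
    counts = np.array(list(Counter(column).values()), dtype=float)
    p = counts / counts.sum()
    shannon = -np.sum(p * np.log2(p))
    renyi = np.log2(np.sum(p ** alpha)) / (1.0 - alpha)
    return shannon, renyi

h_shannon, h_renyi = column_entropies("AAAAATAAGA", alpha=0.3)
print(h_shannon, h_renyi, 1 - h_shannon / 2.0)   # last value: Shannon redundancy
```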
Mebane, C.A.
2010-01-01
Criteria to protect aquatic life are intended to protect diverse ecosystems, but in practice are usually developed from compilations of single-species toxicity tests using standard test organisms that were tested in laboratory environments. Species sensitivity distributions (SSDs) developed from these compilations are extrapolated to set aquatic ecosystem criteria. The protectiveness of the approach was critically reviewed with a chronic SSD for cadmium comprising 27 species within 21 genera. Within the data set, one genus had lower cadmium effects concentrations than the SSD fifth percentile-based criterion, so in theory this genus, the amphipod Hyalella, could be lost or at least allowed some level of harm by this criteria approach. However, population matrix modeling projected only slightly increased extinction risks for a temperate Hyalella population under scenarios similar to the SSD fifth percentile criterion. The criterion value was further compared to cadmium effects concentrations in ecosystem experiments and field studies. Generally, few adverse effects were inferred from ecosystem experiments at concentrations less than the SSD fifth percentile criterion. Exceptions were behavioral impairments in simplified food web studies. No adverse effects were apparent in field studies under conditions that seldom exceeded the criterion. At concentrations greater than the SSD fifth percentile, the magnitudes of adverse effects in the field studies were roughly proportional to the laboratory-based fraction of species with adverse effects in the SSD. Overall, the modeling and field validation comparisons of the chronic criterion values generally supported the relevance and protectiveness of the SSD fifth percentile approach with cadmium. © 2009 Society for Risk Analysis.
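The "fifth percentile of an SSD" idea can be made concrete with a tiny sketch: fit a log-normal distribution to genus-mean chronic values and read off the 5th percentile (HC5). The values below are invented, and a log-normal SSD is only one of several common distributional choices.

```python
import numpy as np
from scipy.stats import norm

def hc5_lognormal(genus_mean_chronic_values):
    """5th percentile (HC5) of a log-normal species sensitivity distribution."""
    logs = np.log(np.asarray(genus_mean_chronic_values, dtype=float))
    return float(np.exp(logs.mean() + logs.std(ddof=1) * norm.ppf(0.05)))

print(hc5_lognormal([0.5, 0.9, 1.4, 2.2, 3.5, 5.0, 8.1, 12.0]))  # hypothetical µg/L values
```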
An evaluation system for financial compensation in traditional Chinese medicine services.
Dou, Lei; Yin, Ai-Tian; Hao, Mo; Lu, Jun
2015-10-01
To describe the major factors influencing financial compensation in traditional Chinese medicine (TCM) and to prioritize what TCM services should be compensated for. Two structured questionnaires-a TCM service baseline questionnaire and a service cost questionnaire-were used to collect information from TCM public hospitals on TCM services provided in certain situations and on service cost accounting. The cross-sectional study examined 110 TCM services provided in four county TCM public hospitals in Shandong province. From the questionnaire data, a screening index system was established via expert consultation and brainstorming. Comprehensive evaluation of TCM services was performed using the analytic hierarchy process method. Weighted coefficients were used to measure the importance of each criterion, after which comprehensive evaluation scores for each service were ranked to indicate what services should receive priority for financial compensation. Economy value, social value, and efficacy value were the three main criteria for screening what TCM services should be compensated for. The economy value local weight had the highest value (0.588), of which the profit sub-criterion (0.278) was the most important for TCM financial compensation. Moxibustion had the highest comprehensive evaluation score (0.65), while Acupuncture and Massage Therapy had the second and third highest scores (0.63 and 0.58, respectively). Government and policymakers should consider offering financial compensation for Moxibustion, Acupuncture, Massage Therapy, and TCM Orthopedics as priority services. Meanwhile, it is essential to correct unreasonable pricing, explore compensation methods, objects and payment, and revise and improve the accounting system for the costs of TCM services. Copyright © 2015 Elsevier Ltd. All rights reserved.
Late detection of hazards in traffic: A matter of response bias?
Egea-Caparrós, Damián-Amaro; García-Sevilla, Julia; Pedraja, María-José; Romero-Medina, Agustín; Marco-Cramer, María; Pineda-Egea, Laura
2016-09-01
In this study, results from two different hazard perception tests are presented. The first is a classic hazard-perception test in which participants must respond - while watching real traffic video scenes - by pressing the space bar on a keyboard when they think there is a collision risk between the camera car and the vehicle ahead. In the second task we use fragments of the same scenes, but in this case they are adapted to a signal detection task - a 'yes'/'no' task. Here, participants - most of them university students - must respond, when the fragment of the video scene ends, whether or not they think the collision risk had already begun. While the first task yields a latency measure (the time necessary for the driver to respond to a hazard), the second task provides two separate measures: sensitivity and criterion. Sensitivity is the driver's ability to properly discriminate the presence vs. absence of the signal (hazard), while the criterion is the response bias a driver sets for deciding whether there is a hazard or not. The criterion can be more conservative - the participant demands many cues before responding that the signal is present - neutral, or even liberal - the participant will respond that the signal is present with very few cues. The aim of the study is to find out whether our latency measure is associated with a different sensitivity and/or criterion. The results of the present study show that drivers who had greater latencies and drivers who had very low latencies yield a very similar mean sensitivity value. Nevertheless, there was a significant difference between these two groups of drivers in criterion: those drivers who had greater latencies in the first task were also more conservative in the second task. That is, the latter responded less frequently that there was danger in the sequences. We interpret that greater latencies in our first hazard perception test could be due to a stricter or more conservative criterion, rather than a low sensitivity to perceptual information for collision risk. Drivers with a more conservative criterion need more evidence of danger, thus taking longer to respond. Copyright © 2016 Elsevier Ltd. All rights reserved.
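For readers unfamiliar with the signal-detection quantities used in the second task, a minimal sketch of how sensitivity (d′) and criterion (c) are typically computed from yes/no response counts follows; the counts and the log-linear correction are illustrative assumptions, not the study's procedure.

```python
from statistics import NormalDist

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Sensitivity d' and criterion c from a yes/no hazard-detection task."""
    z = NormalDist().inv_cdf
    # Log-linear correction avoids infinite z-scores at rates of 0 or 1 (an assumption).
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    d_prime = z(hit_rate) - z(fa_rate)
    criterion_c = -0.5 * (z(hit_rate) + z(fa_rate))   # > 0 means a conservative bias
    return d_prime, criterion_c

# Hypothetical counts for a "slow responder": moderate sensitivity, strict criterion
print(sdt_measures(hits=30, misses=20, false_alarms=5, correct_rejections=45))
```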
1978-09-01
ARI Technical Report TR-78-A31. Criterion-Referenced Measurement in the Army: Development of a Research-Based, Practical Test Construction ... conducted to develop a Criterion-Referenced Tests (CRTs) Construction Manual. Major accomplishments were the preparation of a written review of the ... survey of the literature on Criterion-Referenced Testing, conducted in order to provide an information base for development of the CRT Construction ...
Model selection criterion in survival analysis
NASA Astrophysics Data System (ADS)
Karabey, Uǧur; Tutkun, Nihal Ata
2017-07-01
Survival analysis deals with the time until occurrence of an event of interest, such as death, recurrence of an illness, the failure of equipment, or divorce. There are various survival models with semi-parametric or parametric approaches used in the medical, natural or social sciences. The decision on the most appropriate model for the data is an important point of the analysis. In the literature, the Akaike information criterion and the Bayesian information criterion are used to select among nested models. In this study, the behavior of these information criteria is discussed for a real data set.
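A minimal sketch of the AIC/BIC comparison discussed above, applied to a right-censored exponential survival model with a closed-form maximum likelihood estimate; the simulated data and censoring mechanism are assumptions for illustration only.

```python
import numpy as np

def aic(loglik, k):
    """Akaike information criterion."""
    return -2.0 * loglik + 2.0 * k

def bic(loglik, k, n):
    """Bayesian information criterion."""
    return -2.0 * loglik + k * np.log(n)

# Exponential survival model with right censoring: closed-form MLE.
# t = observed times, d = 1 for events, 0 for censored observations (synthetic data).
rng = np.random.default_rng(1)
t = rng.exponential(10.0, 200)
d = (rng.random(200) < 0.7).astype(int)           # hypothetical censoring indicator
lam = d.sum() / t.sum()                            # MLE of the constant hazard rate
loglik_exp = d.sum() * np.log(lam) - lam * t.sum() # censored exponential log-likelihood
print("exponential model:", aic(loglik_exp, 1), bic(loglik_exp, 1, len(t)))
# The same two numbers computed for an alternative model (e.g. Weibull or Cox)
# would then be compared; the model with the smaller criterion value is preferred.
```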
On the Shapley Value of Unrooted Phylogenetic Trees.
Wicke, Kristina; Fischer, Mareike
2018-01-17
The Shapley value, a solution concept from cooperative game theory, has recently been considered for both unrooted and rooted phylogenetic trees. Here, we focus on the Shapley value of unrooted trees and first revisit the so-called split counts of a phylogenetic tree and the Shapley transformation matrix that allows for the calculation of the Shapley value from the edge lengths of a tree. We show that non-isomorphic trees may have permutation-equivalent Shapley transformation matrices and permutation-equivalent null spaces. This implies that estimating the split counts associated with a tree or the Shapley values of its leaves does not suffice to reconstruct the correct tree topology. We then turn to the use of the Shapley value as a prioritization criterion in biodiversity conservation and compare it to a greedy solution concept. Here, we show that for certain phylogenetic trees, the Shapley value may fail as a prioritization criterion, meaning that the diversity spanned by the top k species (ranked by their Shapley values) cannot approximate the total diversity of all n species.
New Stopping Criteria for Segmenting DNA Sequences
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Wentian
2001-06-18
We propose a solution to the problem of choosing a stopping criterion when segmenting inhomogeneous DNA sequences with complex statistical patterns. This new stopping criterion is based on the Bayesian information criterion in the model selection framework. When this criterion is applied to the telomere of S. cerevisiae and the complete sequence of E. coli, borders of biologically meaningful units were identified, and a more reasonable number of domains was obtained. We also introduce a measure called segmentation strength which can be used to control the delineation of large domains. The relationship between the average domain size and the threshold of segmentation strength is determined for several genome sequences.
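A minimal sketch of the general idea behind a BIC-based stopping rule for segmentation, under the assumption that a candidate split is accepted only when the gain in log-likelihood from modelling two base compositions exceeds the BIC penalty for the extra parameters; the recursion and the segmentation-strength measure of the paper are omitted here.

```python
import numpy as np
from collections import Counter

def log_lik(seq):
    """Multinomial log-likelihood of a sequence under its own base composition."""
    n = len(seq)
    counts = Counter(seq)
    return sum(c * np.log(c / n) for c in counts.values())

def best_split_delta_bic(seq, alphabet_size=4):
    """Largest BIC improvement over all split points (positive => accept the split)."""
    n = len(seq)
    ll_whole = log_lik(seq)
    extra_params = alphabet_size - 1            # one extra composition vector
    best = -np.inf
    for i in range(1, n):
        ll_split = log_lik(seq[:i]) + log_lik(seq[i:])
        delta_bic = 2.0 * (ll_split - ll_whole) - extra_params * np.log(n)
        best = max(best, delta_bic)
    return best

# Toy inhomogeneous sequence: AT-rich half followed by GC-rich half
seq = "ATATATTAATAT" * 10 + "GCGCGGCCGCGC" * 10
print("accept split:", best_split_delta_bic(seq) > 0)
```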
Discriminant Validity Assessment: Use of Fornell & Larcker criterion versus HTMT Criterion
NASA Astrophysics Data System (ADS)
Hamid, M. R. Ab; Sami, W.; Mohmad Sidek, M. H.
2017-09-01
Assessment of discriminant validity is a must in any research that involves latent variables, for the prevention of multicollinearity issues. The Fornell and Larcker criterion is the most widely used method for this purpose. However, a new method has emerged for establishing discriminant validity: the heterotrait-monotrait (HTMT) ratio of correlations. Therefore, this article presents the results of discriminant validity assessment using these methods. Data from a previous study were used, involving 429 respondents, for the empirical validation of a value-based excellence model in higher education institutions (HEI) in Malaysia. From the analysis, the convergent, divergent and discriminant validity were established and found admissible using the Fornell and Larcker criterion. However, discriminant validity is an issue when employing the HTMT criterion. This shows that the latent variables under study face the issue of multicollinearity and should be looked into in further detail. This also implies that the HTMT criterion is a stringent measure that can detect a possible lack of discrimination among the latent variables. In conclusion, the instrument, which consisted of six latent variables, was still lacking in terms of discriminant validity and should be explored further.
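A minimal sketch of the HTMT ratio for two constructs, computed as the mean heterotrait (between-construct) item correlation divided by the geometric mean of the average monotrait (within-construct) item correlations; the item correlation matrix and the cut-off mentioned in the comment are illustrative assumptions.

```python
import numpy as np

def htmt(item_corr, items_i, items_j):
    """Heterotrait-monotrait ratio for two constructs from an item correlation matrix.

    items_i, items_j: index lists of the indicators of each latent variable.
    """
    R = np.asarray(item_corr, dtype=float)
    hetero = R[np.ix_(items_i, items_j)].mean()        # between-construct correlations

    def mean_within(items):
        sub = R[np.ix_(items, items)]
        iu = np.triu_indices(len(items), k=1)
        return sub[iu].mean()                           # within-construct correlations

    mono = np.sqrt(mean_within(items_i) * mean_within(items_j))
    return hetero / mono

# Hypothetical 6-item correlation matrix: items 0-2 load on A, items 3-5 on B
R = np.array([
    [1.00, 0.70, 0.65, 0.55, 0.50, 0.52],
    [0.70, 1.00, 0.68, 0.53, 0.49, 0.51],
    [0.65, 0.68, 1.00, 0.50, 0.47, 0.48],
    [0.55, 0.53, 0.50, 1.00, 0.72, 0.69],
    [0.50, 0.49, 0.47, 0.72, 1.00, 0.71],
    [0.52, 0.51, 0.48, 0.69, 0.71, 1.00],
])
print("HTMT(A, B) =", round(htmt(R, [0, 1, 2], [3, 4, 5]), 3))  # values near or above ~0.85-0.90 flag a problem
```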
2015-09-17
defined by the angle θ = θ* at which the stress component σθθ(θ) takes the maximum value, according to Erdogan and Sih [10] (Figure 2.7). Thus, to find the crack propagation direction according to the Erdogan and Sih criterion, the following equation is used: dσθθ/dθ |θ=θ* = 0 (2.27) ... becomes: Erdogan and Sih criterion: θ* = −2(KII/KI) + 4.6667(KII/KI)³ + ⋯ (2.36); Sih-Cha criterion: θ* = −2(KII/KI) + 8.7271(KII/KI)³ + ⋯
A Case for Transforming the Criterion of a Predictive Validity Study
ERIC Educational Resources Information Center
Patterson, Brian F.; Kobrin, Jennifer L.
2011-01-01
This study presents a case for applying a transformation (Box and Cox, 1964) of the criterion used in predictive validity studies. The goals of the transformation were to better meet the assumptions of the linear regression model and to reduce the residual variance of fitted (i.e., predicted) values. Using data for the 2008 cohort of first-time,…
A More Practical Pedagogical Ideal: Searching for a Criterion of Deweyan Growth
ERIC Educational Resources Information Center
Ralston, Shane Jesse
2011-01-01
When Dewey scholars and educational theorists appeal to the value of educative growth, what exactly do they mean? Is an individual's growth contingent on receiving a formal education? Is growth too abstract a goal for educators to pursue? Richard Rorty contended that the request for a "criterion of growth" is a mistake made by John Dewey's…
Some Results of Weak Anticipative Concept Applied in Simulation Based Decision Support in Enterprise
NASA Astrophysics Data System (ADS)
Kljajić, Miroljub; Kofjač, Davorin; Kljajić Borštnar, Mirjana; Škraba, Andrej
2010-11-01
Simulation models are used for decision support and learning in enterprises and in schools. Three cases of successful applications demonstrate the usefulness of weak anticipative information. Job shop scheduling of production with a makespan criterion presents a real case of customized flexible furniture production optimization; the genetic algorithm for job shop scheduling optimization is presented. Simulation-based inventory control for products with stochastic lead time and demand describes inventory optimization for such products; dynamic programming and fuzzy control algorithms reduce the total cost without producing stock-outs in most cases. The value of decision-making information based on simulation is also discussed. The cases are discussed from the optimization, modeling and learning points of view.
NASA Astrophysics Data System (ADS)
Kacalak, W.; Budniak, Z.; Majewski, M.
2018-02-01
The article presents a stability assessment method for a mobile crane handling system based on safety indicator values that were adopted as the trajectory optimization criterion. Using the mathematical model that was built and a model built in an integrated CAD/CAE environment, analyses were conducted of the displacements of the mass centre of the crane system, the reactions of the outrigger system, the stabilizing and overturning torques acting on the crane, and the safety indicator values for the given movement trajectories of the crane working elements.
Okariz, Ana; Guraya, Teresa; Iturrondobeitia, Maider; Ibarretxe, Julen
2017-12-01
A method is proposed and verified for selecting the optimum segmentation of a TEM reconstruction among the results of several segmentation algorithms. The selection criterion is the accuracy of the segmentation. To do this selection, a parameter for the comparison of the accuracies of the different segmentations has been defined. It consists of the mutual information value between the acquired TEM images of the sample and the Radon projections of the segmented volumes. In this work, it has been proved that this new mutual information parameter and the Jaccard coefficient between the segmented volume and the ideal one are correlated. In addition, the results of the new parameter are compared to the results obtained from another validated method to select the optimum segmentation. Copyright © 2017 Elsevier Ltd. All rights reserved.
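The two quantities used above to compare segmentations - the Jaccard coefficient against an ideal volume and a mutual information value between images - can be sketched as follows; the histogram-based mutual information estimate and the synthetic arrays are simplifying assumptions, not the authors' implementation.

```python
import numpy as np

def jaccard(vol_a, vol_b):
    """Jaccard coefficient between two binary segmentations."""
    a = np.asarray(vol_a, dtype=bool)
    b = np.asarray(vol_b, dtype=bool)
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 1.0

def mutual_information(img_a, img_b, bins=32):
    """Mutual information between two images from their joint grey-level histogram."""
    hist, _, _ = np.histogram2d(np.ravel(img_a), np.ravel(img_b), bins=bins)
    pxy = hist / hist.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

# Synthetic check: a noisy copy of an image shares more information than pure noise
rng = np.random.default_rng(2)
proj = rng.random((64, 64))
print(mutual_information(proj, proj + 0.05 * rng.standard_normal((64, 64))),
      mutual_information(proj, rng.random((64, 64))))
print(jaccard(proj > 0.5, proj > 0.45))   # overlap of two similar binary masks
```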
Baltussen, Rob; Jansen, Maarten Paul Maria; Bijlmakers, Leon; Grutters, Janneke; Kluytmans, Anouck; Reuzel, Rob P; Tummers, Marcia; der Wilt, Gert Jan van
2017-02-01
Priority setting in health care has been long recognized as an intrinsically complex and value-laden process. Yet, health technology assessment agencies (HTAs) presently employ value assessment frameworks that are ill fitted to capture the range and diversity of stakeholder values and thereby risk compromising the legitimacy of their recommendations. We propose "evidence-informed deliberative processes" as an alternative framework with the aim to enhance this legitimacy. This framework integrates two increasingly popular and complementary frameworks for priority setting: multicriteria decision analysis and accountability for reasonableness. Evidence-informed deliberative processes are, on one hand, based on early, continued stakeholder deliberation to learn about the importance of relevant social values. On the other hand, they are based on rational decision-making through evidence-informed evaluation of the identified values. The framework has important implications for how HTA agencies should ideally organize their processes. First, HTA agencies should take the responsibility of organizing stakeholder involvement. Second, agencies are advised to integrate their assessment and appraisal phases, allowing for the timely collection of evidence on values that are considered relevant. Third, HTA agencies should subject their decision-making criteria to public scrutiny. Fourth, agencies are advised to use a checklist of potentially relevant criteria and to provide argumentation for how each criterion affected the recommendation. Fifth, HTA agencies must publish their argumentation and install options for appeal. The framework should not be considered a blueprint for HTA agencies but rather an aspirational goal-agencies can take incremental steps toward achieving this goal. Copyright © 2017 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.
Adaptive selection and validation of models of complex systems in the presence of uncertainty
DOE Office of Scientific and Technical Information (OSTI.GOV)
Farrell-Maupin, Kathryn; Oden, J. T.
This study describes versions of OPAL, the Occam-Plausibility Algorithm in which the use of Bayesian model plausibilities is replaced with information theoretic methods, such as the Akaike Information Criterion and the Bayes Information Criterion. Applications to complex systems of coarse-grained molecular models approximating atomistic models of polyethylene materials are described. All of these model selection methods take into account uncertainties in the model, the observational data, the model parameters, and the predicted quantities of interest. A comparison of the models chosen by Bayesian model selection criteria and those chosen by the information-theoretic criteria is given.
Adaptive selection and validation of models of complex systems in the presence of uncertainty
Farrell-Maupin, Kathryn; Oden, J. T.
2017-08-01
This study describes versions of OPAL, the Occam-Plausibility Algorithm in which the use of Bayesian model plausibilities is replaced with information theoretic methods, such as the Akaike Information Criterion and the Bayes Information Criterion. Applications to complex systems of coarse-grained molecular models approximating atomistic models of polyethylene materials are described. All of these model selection methods take into account uncertainties in the model, the observational data, the model parameters, and the predicted quantities of interest. A comparison of the models chosen by Bayesian model selection criteria and those chosen by the information-theoretic criteria is given.
Fault Diagnosis Based on Chemical Sensor Data with an Active Deep Neural Network
Jiang, Peng; Hu, Zhixin; Liu, Jun; Yu, Shanen; Wu, Feng
2016-01-01
Big sensor data provide significant potential for chemical fault diagnosis, which involves the baseline values of security, stability and reliability in chemical processes. A deep neural network (DNN) with novel active learning for inducing chemical fault diagnosis is presented in this study. It is a method using a large amount of chemical sensor data that combines deep learning with an active learning criterion to target the difficulty of consecutive fault diagnosis. A DNN with deep architectures, instead of shallow ones, can be developed through deep learning to learn a suitable feature representation from raw sensor data in an unsupervised manner using a stacked denoising auto-encoder (SDAE) and work through a layer-by-layer successive learning process. The features are added to the top Softmax regression layer to construct the discriminative fault characteristics for diagnosis in a supervised manner. Considering the expensive and time-consuming labeling of sensor data in chemical applications, in contrast to the available methods, we employ a novel active learning criterion suited to the particularity of chemical processes, which is a combination of a Best vs. Second Best (BvSB) criterion and a Lowest False Positive (LFP) criterion, for further fine-tuning of the diagnosis model in an active rather than passive manner. That is, we allow models to rank the most informative sensor data to be labeled for updating the DNN parameters during the interaction phase. The effectiveness of the proposed method is validated on two well-known industrial datasets. Results indicate that the proposed method can obtain superior diagnosis accuracy and provide significant performance improvement in accuracy and false positive rate with less labeled chemical sensor data by further active learning compared with existing methods. PMID:27754386
Fault Diagnosis Based on Chemical Sensor Data with an Active Deep Neural Network.
Jiang, Peng; Hu, Zhixin; Liu, Jun; Yu, Shanen; Wu, Feng
2016-10-13
Big sensor data provide significant potential for chemical fault diagnosis, which involves the baseline values of security, stability and reliability in chemical processes. A deep neural network (DNN) with novel active learning for inducing chemical fault diagnosis is presented in this study. It is a method using a large amount of chemical sensor data that combines deep learning with an active learning criterion to target the difficulty of consecutive fault diagnosis. A DNN with deep architectures, instead of shallow ones, can be developed through deep learning to learn a suitable feature representation from raw sensor data in an unsupervised manner using a stacked denoising auto-encoder (SDAE) and work through a layer-by-layer successive learning process. The features are added to the top Softmax regression layer to construct the discriminative fault characteristics for diagnosis in a supervised manner. Considering the expensive and time-consuming labeling of sensor data in chemical applications, in contrast to the available methods, we employ a novel active learning criterion suited to the particularity of chemical processes, which is a combination of a Best vs. Second Best (BvSB) criterion and a Lowest False Positive (LFP) criterion, for further fine-tuning of the diagnosis model in an active rather than passive manner. That is, we allow models to rank the most informative sensor data to be labeled for updating the DNN parameters during the interaction phase. The effectiveness of the proposed method is validated on two well-known industrial datasets. Results indicate that the proposed method can obtain superior diagnosis accuracy and provide significant performance improvement in accuracy and false positive rate with less labeled chemical sensor data by further active learning compared with existing methods.
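A minimal sketch of the Best vs. Second Best part of the active learning criterion described above: unlabeled samples are ranked by the margin between their two most probable classes, and the smallest margins are labeled first. The Lowest False Positive component and the DNN itself are omitted, and the probability matrix is a made-up example.

```python
import numpy as np

def bvsb_ranking(class_probs):
    """Rank unlabeled samples by the Best-vs-Second-Best margin (smallest first).

    class_probs: (n_samples, n_classes) predicted probabilities from the current model.
    A small margin between the two most probable classes marks an informative sample.
    """
    sorted_probs = np.sort(class_probs, axis=1)
    margin = sorted_probs[:, -1] - sorted_probs[:, -2]
    return np.argsort(margin)          # indices of the samples to label first

# Hypothetical softmax outputs for five unlabeled sensor readings
probs = np.array([
    [0.90, 0.05, 0.05],
    [0.45, 0.44, 0.11],   # ambiguous -> high priority for labeling
    [0.60, 0.30, 0.10],
    [0.34, 0.33, 0.33],   # also highly ambiguous
    [0.80, 0.15, 0.05],
])
print(bvsb_ranking(probs))   # the ambiguous rows (1 and 3) come first
```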
Entropic criterion for model selection
NASA Astrophysics Data System (ADS)
Tseng, Chih-Yuan
2006-10-01
Model or variable selection is usually achieved by ranking models according to an increasing order of preference. One such method applies the Kullback-Leibler distance, or relative entropy, as a selection criterion. Yet this raises two questions: why use this criterion, and are there any other criteria? Besides, conventional approaches require a reference prior, which is usually difficult to obtain. Following the logic of inductive inference proposed by Caticha [Relative entropy and inductive inference, in: G. Erickson, Y. Zhai (Eds.), Bayesian Inference and Maximum Entropy Methods in Science and Engineering, AIP Conference Proceedings, vol. 707, 2004 (available from arXiv.org/abs/physics/0311093)], we show relative entropy to be a unique criterion, which requires no prior information and can be applied to different fields. We examine this criterion by considering a physical problem, simple fluids, and the results are promising.
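A minimal sketch of using relative entropy as a ranking criterion: candidate model distributions are ordered by their Kullback-Leibler distance from the data distribution, and the closest model is preferred; the discrete histograms are illustrative placeholders.

```python
import numpy as np

def kl_divergence(p, q):
    """Relative entropy D(p || q) between two discrete distributions."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

# Empirical histogram of some observable, and two candidate model predictions
p_data = np.array([0.10, 0.20, 0.40, 0.20, 0.10])
models = {
    "model_A": np.array([0.12, 0.18, 0.38, 0.22, 0.10]),
    "model_B": np.array([0.20, 0.20, 0.20, 0.20, 0.20]),
}
ranking = sorted(models, key=lambda m: kl_divergence(p_data, models[m]))
print(ranking)   # the model closest to the data in relative entropy is preferred
```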
Using MRSA Screening Tests To Predict Methicillin Resistance in Staphylococcus aureus Bacteremia.
Butler-Laporte, Guillaume; Cheng, Matthew P; Cheng, Alexandre P; McDonald, Emily G; Lee, Todd C
2016-12-01
Bloodstream infections with Staphylococcus aureus are clinically significant and are often treated with empirical methicillin resistance (MRSA, methicillin-resistant S. aureus) coverage. However, vancomycin has associated harms. We hypothesized that MRSA screening correlated with resistance in S. aureus bacteremia and could help determine the requirement for empirical vancomycin therapy. We reviewed consecutive S. aureus bacteremias over a 5-year period at two tertiary care hospitals. MRSA colonization was evaluated in three ways: as tested within 30 days of bacteremia (30-day criterion), as tested within 30 days but accounting for any prior positive results (ever-positive criterion), or as tested in known-positive patients, with patients with unknown MRSA status being labeled negative (known-positive criterion). There were 409 S. aureus bacteremias: 302 (73.8%) methicillin-susceptible S. aureus (MSSA) and 107 (26.2%) MRSA bacteremias. In the 167 patients with MSSA bacteremias, 7.2% had a positive MRSA test within 30 days. Of 107 patients with MRSA bacteremia, 68 were tested within 30 days (54 positive; 79.8%), and another 21 (19.6%) were previously positive. The 30-day criterion provided negative predictive values (NPV) exceeding 90% and 95% if the prevalence of MRSA in S. aureus bacteremia was less than 33.4% and 19.2%, respectively. The same NPVs were predicted at MRSA proportions below 39.7% and 23.8%, respectively, for the ever-positive criterion and 34.4% and 19.9%, respectively, for the known-positive criterion. In MRSA-colonized patients, positive predictive values exceeded 50% at low prevalence. MRSA screening could help avoid empirical vancomycin therapy and its complications in stable patients and settings with low-to-moderate proportions of MRSA bacteremia. Copyright © 2016, American Society for Microbiology. All Rights Reserved.
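The dependence of predictive values on prevalence that drives the conclusions above can be sketched with the standard Bayes-rule formulas below; the sensitivity and specificity figures in the example are illustrative assumptions, not the study's estimates.

```python
def npv(sensitivity, specificity, prevalence):
    """Negative predictive value of a screening test at a given MRSA prevalence."""
    true_neg = specificity * (1.0 - prevalence)
    false_neg = (1.0 - sensitivity) * prevalence
    return true_neg / (true_neg + false_neg)

def ppv(sensitivity, specificity, prevalence):
    """Positive predictive value at a given prevalence."""
    true_pos = sensitivity * prevalence
    false_pos = (1.0 - specificity) * (1.0 - prevalence)
    return true_pos / (true_pos + false_pos)

# Illustrative (not the study's) operating characteristics for a screening criterion
sens, spec = 0.80, 0.93
for prev in (0.10, 0.20, 0.33):
    print(f"prevalence {prev:.0%}: NPV = {npv(sens, spec, prev):.3f}, "
          f"PPV = {ppv(sens, spec, prev):.3f}")
```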
González-Viñas, M A; Caballero, A B; Gallego, I; García Ruiz, A
2004-08-01
The physico-chemical, rheological and sensory characteristics of different commercially available frankfurters were studied. Samples presented values of a(w) and pH from 0.954 to 0.972 and 5.88 to 6.43, respectively. Greater differences were observed in parameters such as fat and salt content, with values ranging from 10.83% to 21.92% and 1.85% to 3.01%, respectively. With regard to total nitrogen, all samples presented values close to 2%. Free-choice profiling and generalised Procrustes analysis of the sensory data permitted differentiation between samples and provided information about the attributes responsible for the observed differences. All the frankfurters scored in the moderate range for overall acceptability. Consumers identified reasons for purchasing frankfurters when evaluating the product's packaging. The most important criterion for consumers when purchasing frankfurters was the appetising appearance of the product in the packaging's illustration.
Water-sediment controversy in setting environmental standards for selenium
Hamilton, Steven J.; Lemly, A. Dennis
1999-01-01
A substantial amount of laboratory and field research on selenium effects to biota has been accomplished since the national water quality criterion was published for selenium in 1987. Many articles have documented adverse effects on biota at concentrations below the current chronic criterion of 5 μg/L. This commentary will present information to support a national water quality criterion for selenium of 2 μg/L, based on a wide array of support from federal, state, university, and international sources. Recently, two articles have argued for a sediment-based criterion and presented a model for deriving site-specific criteria. In one example, they calculate a criterion of 31 μg/L for a stream with a low sediment selenium toxicity threshold and low site-specific sediment total organic carbon content, which is substantially higher than the national criterion of 5 μg/L. Their basic premise for proposing a sediment-based method has been critically reviewed and problems in their approach are discussed.
14 CFR 255.4 - Display of information.
Code of Federal Regulations, 2011 CFR
2011-01-01
... and the weight given to each criterion and the specifications used by the system's programmers in constructing the algorithm. (c) Systems shall not use any factors directly or indirectly relating to carrier...” connecting flights; and (iv) The weight given to each criterion in paragraphs (c)(3)(ii) and (iii) of this...
Jung, Sung-Hoon; Kwon, Oh-Yun; Jeon, In-Cheol; Hwang, Ui-Jae; Weon, Jong-Hyuck
2018-01-01
The purposes of this study were to determine the intra-rater test-retest reliability of a smart phone-based measurement tool (SBMT) and a three-dimensional (3D) motion analysis system for measuring the transverse rotation angle of the pelvis during single-leg lifting (SLL) and the criterion validity of the transverse rotation angle of the pelvis measurement using SBMT compared with a 3D motion analysis system (3DMAS). Seventeen healthy volunteers performed SLL with their dominant leg without bending the knee until they reached a target placed 20 cm above the table. This study used a 3DMAS, considered the gold standard, to measure the transverse rotation angle of the pelvis to assess the criterion validity of the SBMT measurement. Intra-rater test-retest reliability was determined using the SBMT and 3DMAS using intra-class correlation coefficient (ICC) [3,1] values. The criterion validity of the SBMT was assessed with ICC [3,1] values. Both the 3DMAS (ICC = 0.77) and SBMT (ICC = 0.83) showed excellent intra-rater test-retest reliability in the measurement of the transverse rotation angle of the pelvis during SLL in a supine position. Moreover, the SBMT showed an excellent correlation with the 3DMAS (ICC = 0.99). Measurement of the transverse rotation angle of the pelvis using the SBMT showed excellent reliability and criterion validity compared with the 3DMAS.
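A minimal sketch of the ICC(3,1) computation (two-way mixed effects, consistency, single measurement) behind the reliability figures above; the angle matrix is a hypothetical test-retest example, not the study's data.

```python
import numpy as np

def icc_3_1(data):
    """ICC(3,1): two-way mixed effects, consistency, single measurement.

    data: (n_subjects, k_trials) matrix of angle measurements.
    """
    data = np.asarray(data, dtype=float)
    n, k = data.shape
    grand = data.mean()
    subj_means = data.mean(axis=1)
    trial_means = data.mean(axis=0)
    ss_subjects = k * np.sum((subj_means - grand) ** 2)
    ss_trials = n * np.sum((trial_means - grand) ** 2)
    ss_error = np.sum((data - grand) ** 2) - ss_subjects - ss_trials
    ms_subjects = ss_subjects / (n - 1)
    ms_error = ss_error / ((n - 1) * (k - 1))
    return (ms_subjects - ms_error) / (ms_subjects + (k - 1) * ms_error)

# Hypothetical test-retest pelvis rotation angles (degrees) for 6 participants
angles = np.array([[12.1, 11.8], [8.4, 8.9], [15.0, 14.2],
                   [10.3, 10.6], [6.7, 7.1], [13.5, 13.0]])
print(round(icc_3_1(angles), 3))
```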
ERIC Educational Resources Information Center
Sánchez-Rosas, Javier; Furlan, Luis Alberto
2017-01-01
Based on the control-value theory of achievement emotions and theory of achievement goals, this research provides evidence of convergent, divergent, and criterion validity of the Spanish Cognitive Test Anxiety Scale (S-CTAS). A sample of Argentinean undergraduates responded to several scales administered at three points. At time 1 and 3, the…
Jaman, Ajmery; Latif, Mahbub A H M; Bari, Wasimul; Wahed, Abdus S
2016-05-20
In generalized estimating equations (GEE), the correlation between the repeated observations on a subject is specified with a working correlation matrix. Correct specification of the working correlation structure ensures efficient estimators of the regression coefficients. Among the criteria used in practice for selecting a working correlation structure, the Rotnitzky-Jewell criterion, the Quasi Information Criterion (QIC) and the Correlation Information Criterion (CIC) are based on the fact that, if the assumed working correlation structure is correct, the model-based (naive) and the sandwich (robust) covariance estimators of the regression coefficient estimators should be close to each other. The sandwich covariance estimator, used in defining the Rotnitzky-Jewell, QIC and CIC criteria, is biased downward and has a larger variability than the corresponding model-based covariance estimator. Motivated by this fact, a new criterion is proposed in this paper based on the bias-corrected sandwich covariance estimator for selecting an appropriate working correlation structure in GEE. A comparison of the proposed and the competing criteria is shown using simulation studies with correlated binary responses. The results revealed that the proposed criterion generally performs better than the competing criteria. An example of selecting the appropriate working correlation structure has also been shown using the data from the Madras Schizophrenia Study. Copyright © 2015 John Wiley & Sons, Ltd.
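A minimal sketch of the principle stated above - that a correctly specified working correlation makes the model-based and sandwich covariance estimators agree - expressed as a simple closeness score. This is a simplified stand-in for the Rotnitzky-Jewell/QIC/CIC formulas, and the covariance matrices are made-up examples.

```python
import numpy as np

def closeness_criterion(naive_cov, robust_cov):
    """Discrepancy between model-based and sandwich covariances of the GEE estimates.

    If the working correlation is correctly specified, trace(naive^{-1} robust)
    should be close to the number of regression parameters p, so the structure
    with the smallest |trace - p| is preferred (a simplified stand-in for the
    criteria described above).
    """
    p = naive_cov.shape[0]
    t = np.trace(np.linalg.solve(naive_cov, robust_cov))
    return abs(t - p)

# Hypothetical covariance estimates for two candidate working structures
naive_exch, robust_exch = np.diag([0.20, 0.05, 0.08]), np.diag([0.22, 0.06, 0.09])
naive_ind, robust_ind = np.diag([0.15, 0.04, 0.06]), np.diag([0.30, 0.09, 0.12])
print("exchangeable:", closeness_criterion(naive_exch, robust_exch))
print("independence:", closeness_criterion(naive_ind, robust_ind))
```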
Consequences of gestational diabetes in an urban hospital in Viet Nam: a prospective cohort study.
Hirst, Jane E; Tran, Thach S; Do, My An T; Morris, Jonathan M; Jeffery, Heather E
2012-01-01
Gestational diabetes mellitus (GDM) is increasing and is a risk factor for type 2 diabetes. Evidence supporting screening comes mostly from high-income countries. We aimed to determine prevalence and outcomes in urban Viet Nam. We compared the proposed International Association of the Diabetes and Pregnancy Study Groups (IADPSG) criterion, requiring one positive value on the 75-g glucose tolerance test, to the 2010 American Diabetes Association (ADA) criterion, requiring two positive values. We conducted a prospective cohort study in Ho Chi Minh City, Viet Nam. Study participants were 2,772 women undergoing routine prenatal care who underwent a 75-g glucose tolerance test and interview around 28 (range 24-32) wk. GDM diagnosed by the ADA criterion was treated by local protocol. Women with GDM by the IADPSG criterion but not the ADA criterion were termed "borderline" and received standard care. 2,702 women (97.5% of cohort) were followed until discharge after delivery. GDM was diagnosed in 164 participants (6.1%) by the ADA criterion, 550 (20.3%) by the IADPSG criterion. Mean body mass index was 20.45 kg/m2 in women without GDM, 21.10 in women with borderline GDM, and 21.81 in women with GDM, p<0.001. Women with GDM and borderline GDM were more likely to deliver preterm, with adjusted odds ratios (aORs) of 1.49 (95% CI 1.16-1.91) and 1.52 (1.03-2.24), respectively. They were more likely to have clinical neonatal hypoglycaemia, aORs of 4.94 (3.41-7.14) and 3.34 (1.41-7.89), respectively. For large for gestational age, the aORs were 1.16 (0.93-1.45) and 1.31 (0.96-1.79), respectively. There was no significant difference in large for gestational age, death, severe birth trauma, or maternal morbidity between the groups. Women with GDM underwent more labour inductions, aOR 1.51 (1.08-2.11). Choice of criterion greatly affects GDM prevalence in Viet Nam. Women with GDM by the IADPSG criterion were at risk of preterm delivery and neonatal hypoglycaemia, although this criterion resulted in 20% of pregnant women being positive for GDM. The ability to cope with such a large number of cases and prevent associated adverse outcomes needs to be demonstrated before recommending widespread screening. Please see later in the article for the Editors' Summary.
Padilha, Alessandro Haiduck; Cobuci, Jaime Araujo; Costa, Cláudio Napolis; Neto, José Braccini
2016-01-01
The aim of this study was to compare two random regression models (RRM) fitted by fourth (RRM4) and fifth-order Legendre polynomials (RRM5) with a lactation model (LM) for evaluating Holstein cattle in Brazil. Two datasets with the same animals were prepared for this study. To apply test-day RRM and LMs, 262,426 test day records and 30,228 lactation records covering 305 days were prepared, respectively. The lowest values of Akaike’s information criterion, Bayesian information criterion, and estimates of the maximum of the likelihood function (−2LogL) were for RRM4. Heritability for 305-day milk yield (305MY) was 0.23 (RRM4), 0.24 (RRM5), and 0.21 (LM). Heritability, additive genetic and permanent environmental variances of test days on days in milk was from 0.16 to 0.27, from 3.76 to 6.88 and from 11.12 to 20.21, respectively. Additive genetic correlations between test days ranged from 0.20 to 0.99. Permanent environmental correlations between test days were between 0.07 and 0.99. Standard deviations of average estimated breeding values (EBVs) for 305MY from RRM4 and RRM5 were from 11% to 30% higher for bulls and around 28% higher for cows than that in LM. Rank correlations between RRM EBVs and LM EBVs were between 0.86 to 0.96 for bulls and 0.80 to 0.87 for cows. Average percentage of gain in reliability of EBVs for 305-day yield increased from 4% to 17% for bulls and from 23% to 24% for cows when reliability of EBVs from RRM models was compared to those from LM model. Random regression model fitted by fourth order Legendre polynomials is recommended for genetic evaluations of Brazilian Holstein cattle because of the higher reliability in the estimation of breeding values. PMID:26954176
Padilha, Alessandro Haiduck; Cobuci, Jaime Araujo; Costa, Cláudio Napolis; Neto, José Braccini
2016-06-01
The aim of this study was to compare two random regression models (RRM) fitted by fourth (RRM4) and fifth-order Legendre polynomials (RRM5) with a lactation model (LM) for evaluating Holstein cattle in Brazil. Two datasets with the same animals were prepared for this study. To apply test-day RRM and LMs, 262,426 test day records and 30,228 lactation records covering 305 days were prepared, respectively. The lowest values of Akaike's information criterion, Bayesian information criterion, and estimates of the maximum of the likelihood function (-2LogL) were for RRM4. Heritability for 305-day milk yield (305MY) was 0.23 (RRM4), 0.24 (RRM5), and 0.21 (LM). Heritability, additive genetic and permanent environmental variances of test days on days in milk was from 0.16 to 0.27, from 3.76 to 6.88 and from 11.12 to 20.21, respectively. Additive genetic correlations between test days ranged from 0.20 to 0.99. Permanent environmental correlations between test days were between 0.07 and 0.99. Standard deviations of average estimated breeding values (EBVs) for 305MY from RRM4 and RRM5 were from 11% to 30% higher for bulls and around 28% higher for cows than that in LM. Rank correlations between RRM EBVs and LM EBVs were between 0.86 to 0.96 for bulls and 0.80 to 0.87 for cows. Average percentage of gain in reliability of EBVs for 305-day yield increased from 4% to 17% for bulls and from 23% to 24% for cows when reliability of EBVs from RRM models was compared to those from LM model. Random regression model fitted by fourth order Legendre polynomials is recommended for genetic evaluations of Brazilian Holstein cattle because of the higher reliability in the estimation of breeding values.
Bürger, W; Streibelt, M
2015-02-01
Stepwise Occupational Reintegration (SOR) measures are of growing importance for the German statutory pension insurance. There is moderate evidence that patients with a poor prognosis in terms of a successful return to work profit most from SOR measures. However, it is not clear to what extent this information is utilized when recommending SOR to a patient. A questionnaire was sent to 40406 persons (up to 59 years old, excluding rehabilitation after hospital stay) before admission to a medical rehabilitation service. The survey data were matched with data from the discharge report and information on participation in a SOR measure. Initially, a single criterion was defined which describes the need for SOR measures. This criterion is based on 3 different items: patients with at least 12 weeks of sickness absence, (a) a SIBAR score > 7 and/or (b) a perceived need for SOR. The main aspect of our analyses was to describe the association between the SOR need-criterion and participation in SOR measures, as well as the predictors of SOR participation among patients fulfilling the SOR need-criterion. The analyses were based on a multiple logistic regression model. For 16408 patients full data were available. The formal prerequisites for SOR were given for 33% of the sample, of which 32% received SOR after rehabilitation and 43% fulfilled the SOR need-criterion. A negative relationship between these 2 categories was observed (phi=-0.08, p<0.01). For patients who fulfilled the need-criterion, the probability of participating in SOR decreased by 22% (RR=0.78). The probability of SOR participation increased with a decreasing SIBAR score (OR=0.56) and in patients who showed more confidence in being able to return to work. Participation in SOR measures cannot be predicted by the empirically defined SOR need-criterion: the probability even decreased when the criterion was fulfilled. Furthermore, the results of a multivariate analysis show a positive selection of the patients who participate in SOR measures. Our results point strongly to the need for an indication guideline for physicians in rehabilitation centres. Further research addressing the success of SOR measures has to show whether the information used in this case can serve as a basis for such a guideline. © Georg Thieme Verlag KG Stuttgart · New York.
Assessment of selenium effects in lotic ecosystems
Hamilton, Steven J.; Palace, Vince
2001-01-01
The selenium literature has grown substantially in recent years to encompass new information in a variety of areas. Correspondingly, several different approaches to establishing a new water quality criterion for selenium have been proposed since establishment of the national water quality criterion in 1987. Diverging viewpoints and interpretations of the selenium literature have led to opposing perspectives on issues such as establishing a national criterion based on a sediment-based model, using hydrologic units to set criteria for stream reaches, and applying lentic-derived effects to lotic environments. This Commentary presents information on the lotic versus lentic controversy. Recently, an article was published that concluded that no adverse effects were occurring in a cutthroat trout population in a coldwater river with elevated selenium concentrations (C. J. Kennedy, L. E. McDonald, R. Loveridge, and M. M. Strosher, 2000, Arch. Environ. Contam. Toxicol. 39, 46–52). This article has added to the controversy rather than provided further insight into selenium toxicology. Information, or rather missing information, in the article has been critically reviewed and problems in the interpretations are discussed.
Parrozzani, Raffaele; Clementi, Maurizio; Frizziero, Luisa; Miglionico, Giacomo; Perrini, Pierdavide; Cavarzeran, Fabiano; Kotsafti, Olympia; Comacchio, Francesco; Trevisson, Eva; Convento, Enrica; Fusetti, Stefano; Midena, Edoardo
2015-09-01
To evaluate the feasibility of near-infrared (NIR) imaging acquisition in a large sample of consecutive pediatric patients with neurofibromatosis type 1 (NF1), to evaluate the diagnostic performance of NF1-related choroidal abnormalities as a diagnostic criterion of the disease, and to compare this criterion with other standard National Institutes of Health (NIH) diagnostic criteria. A total of 140 consecutive pediatric patients (0-16 years old) affected by NF1 (at least two diagnostic criteria), 59 suspected (a single diagnostic criterion), and 42 healthy subjects (no diagnostic criterion) were consecutively included. Each patient underwent genetic, dermatologic, and ophthalmologic examination to evaluate the presence/absence of each NIH diagnostic criterion. The presence of NF1-related choroidal abnormalities was investigated using NIR confocal ophthalmoscopy. Two masked operators assessed Lisch nodules and NF1-related choroidal abnormalities. Neurofibromatosis type 1-related choroidal abnormalities were detected in 72 affected (60.5%) and 1 suspected (2.4%) child. No healthy subject had choroidal abnormalities. The feasibility rate of this sign was 82%. Sensitivity, specificity, and positive and negative predictive values of NF1-related choroidal abnormalities were 0.60, 0.97, 0.98, and 0.46, respectively. Compared with standard NIH criteria, the presence of NF1-related choroidal abnormalities was the third parameter for positive predictive value and the fourth for sensitivity, specificity, and negative predictive value. Compared with Lisch nodules, NF1-related choroidal abnormalities were characterized by higher specificity and positive predictive value. The interoperator agreement for Lisch nodules and NF1-related choroidal abnormalities was 0.67 (substantial) and 0.97 (almost perfect), respectively. The use of this sign moved one patient from the suspected to the affected group (0.5%). Neurofibromatosis type 1-related choroidal abnormalities represent a new diagnostic sign in NF1 children. The main advantage of this sign seems to be the theoretical possibility of anticipating NF1 diagnosis, whereas the main obstacle is the cooperation required of very young patients.
Parametric optimization of optical signal detectors employing the direct photodetection scheme
NASA Astrophysics Data System (ADS)
Kirakosiants, V. E.; Loginov, V. A.
1984-08-01
The problem of optimizing the parameters of an optical signal detection scheme is addressed for a receiver with direct photodetection. An expression is derived which accurately approximates the field of view (FOV) values obtained by direct computer minimization of the probability of missing a signal; optimum values of the receiver FOV were found for different atmospheric conditions characterized by the number of coherence spots and the intensity fluctuations of a plane wave. It is further pointed out that the criterion presented can possibly be used for parametric optimization of detectors operating in accordance with the Neyman-Pearson criterion.
Larios, Adam; Petersen, Mark R.; Titi, Edriss S.; ...
2017-04-29
We report the results of a computational investigation of two blow-up criteria for the 3D incompressible Euler equations. One criterion was proven in a previous work, and a related criterion is proved here. These criteria are based on an inviscid regularization of the Euler equations known as the 3D Euler-Voigt equations, which are known to be globally well-posed. Moreover, simulations of the 3D Euler-Voigt equations also require less resolution than simulations of the 3D Euler equations for fixed values of the regularization parameter α > 0. Therefore, the new blow-up criteria allow one to gain information about possible singularity formation in the 3D Euler equations indirectly; namely, by simulating the better-behaved 3D Euler-Voigt equations. The new criteria are only known to be sufficient for blow-up. Therefore, to test the robustness of the inviscid-regularization approach, we also investigate analogous criteria for blow-up of the 1D Burgers equation, where blow-up is well-known to occur.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Larios, Adam; Petersen, Mark R.; Titi, Edriss S.
We report the results of a computational investigation of two blow-up criteria for the 3D incompressible Euler equations. One criterion was proven in a previous work, and a related criterion is proved here. These criteria are based on an inviscid regularization of the Euler equations known as the 3D Euler-Voigt equations, which are known to be globally well-posed. Moreover, simulations of the 3D Euler-Voigt equations also require less resolution than simulations of the 3D Euler equations for fixed values of the regularization parameter α > 0. Therefore, the new blow-up criteria allow one to gain information about possible singularity formation in the 3D Euler equations indirectly; namely, by simulating the better-behaved 3D Euler-Voigt equations. The new criteria are only known to be sufficient for blow-up. Therefore, to test the robustness of the inviscid-regularization approach, we also investigate analogous criteria for blow-up of the 1D Burgers equation, where blow-up is well-known to occur.
Multi-Informant Assessment of Temperament in Children with Externalizing Behavior Problems
ERIC Educational Resources Information Center
Copeland, William; Landry, Kerry; Stanger, Catherine; Hudziak, James J.
2004-01-01
We examined the criterion validity of parent and self-report versions of the Junior Temperament and Character Inventory (JTCI) in children with high levels of externalizing problems. The sample included 412 children (206 participants and 206 siblings) participating in a family study of attention and aggressive behavior problems. Criterion validity…
ERIC Educational Resources Information Center
Messick, Samuel
Cognitive styles--defined as information processing habits--should be considered as a criterion variable in the evaluation of instruction. Research findings identify the characteristics of different cognitive styles. Used in educational practice and evaluation, cognitive styles would be new process variables extending the assessment of mental…
The Validity of the Instructional Reading Level.
ERIC Educational Resources Information Center
Powell, William R.
Presented is a critical inquiry about the product of the informal reading inventory (IRI) and about some of the elements used in the process of determining that product. Recent developments on this topic are briefly reviewed. Questions are raised concerning what is a suitable criterion level for word recognition. The original criterion of 95…
Determination Of Slitting Criterion Parameter During The Multi Slit Rolling Process
NASA Astrophysics Data System (ADS)
Stefanik, Andrzej; Mróz, Sebastian; Szota, Piotr; Dyja, Henryk
2007-05-01
The rolling of rods with slitting of the strip calls for the use of special mathematical models that allow for the separation of the metal. A theoretical analysis of the effect of the gap of the slitting rollers on the process of band slitting during the rolling of 20 mm and 16 mm-diameter ribbed rods rolled according to the two-strand technology was carried out within this study. For the numerical modeling of strip slitting the Forge3® computer program was applied. Strip slitting in the simulation is implemented by an algorithm that removes elements in which the critical value of the normalized Cockcroft-Latham criterion has been exceeded. To determine the value of the criterion the inverse method was applied. The distance between the point where the crack begins and the point of contact between the metal and the slitting rollers was the parameter for analysis. The power and rolling torque during slit rolling were presented, along with the distribution and change of the stress in the strand during slitting.
Heuristic algorithms for the minmax regret flow-shop problem with interval processing times.
Ćwik, Michał; Józefczyk, Jerzy
2018-01-01
An uncertain version of the permutation flow-shop problem with unlimited buffers and the makespan as a criterion is considered. The investigated parametric uncertainty is represented by given interval-valued processing times. The maximum regret is used for the evaluation of uncertainty. Consequently, the minmax regret discrete optimization problem is solved. Due to its high complexity, two relaxations are applied to simplify the optimization procedure. First of all, a greedy procedure is used for calculating the criterion's value, as such a calculation is an NP-hard problem itself. Moreover, the lower bound is used instead of solving the internal deterministic flow-shop. A constructive heuristic algorithm is applied to the relaxed optimization problem. The algorithm is compared with previously elaborated heuristic algorithms based on the evolutionary and the middle-interval approaches. The conducted computational experiments showed the advantage of the constructive heuristic algorithm with regard to both the criterion value and the computation time. The Wilcoxon paired-rank statistical test confirmed this conclusion.
Ouyang, Liwen; Apley, Daniel W; Mehrotra, Sanjay
2016-04-01
Electronic medical record (EMR) databases offer significant potential for developing clinical hypotheses and identifying disease risk associations by fitting statistical models that capture the relationship between a binary response variable and a set of predictor variables that represent clinical, phenotypical, and demographic data for the patient. However, EMR response data may be error prone for a variety of reasons. Performing a manual chart review to validate data accuracy is time consuming, which limits the number of chart reviews in a large database. The authors' objective is to develop a new design-of-experiments-based systematic chart validation and review (DSCVR) approach that is more powerful than the random validation sampling used in existing approaches. The DSCVR approach judiciously and efficiently selects the cases to validate (i.e., validate whether the response values are correct for those cases) for maximum information content, based only on their predictor variable values. The final predictive model will be fit using only the validation sample, ignoring the remainder of the unvalidated and unreliable error-prone data. A Fisher information based D-optimality criterion is used, and an algorithm for optimizing it is developed. The authors' method is tested in a simulation comparison that is based on a sudden cardiac arrest case study with 23 041 patients' records. This DSCVR approach, using the Fisher information based D-optimality criterion, results in a fitted model with much better predictive performance, as measured by the receiver operating characteristic curve and the accuracy in predicting whether a patient will experience the event, than a model fitted using a random validation sample. The simulation comparisons demonstrate that this DSCVR approach can produce predictive models that are significantly better than those produced from random validation sampling, especially when the event rate is low. © The Author 2015. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
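A minimal sketch of a greedy D-optimality selection in the spirit of the DSCVR idea: validation cases are added one at a time to maximize the determinant of the weighted Fisher information of a logistic model; the weighting, the ridge term and the data are simplifying assumptions rather than the authors' algorithm.

```python
import numpy as np

def greedy_d_optimal(X, probs, n_select):
    """Greedily pick cases to chart-review so that the Fisher information
    determinant (D-optimality) of a logistic model is maximized.

    X: (n, p) predictor matrix; probs: current predicted event probabilities,
    used for the logistic-regression weights p*(1-p).
    """
    n, p = X.shape
    w = probs * (1.0 - probs)
    chosen = []
    info = 1e-6 * np.eye(p)                 # small ridge keeps early matrices invertible
    for _ in range(n_select):
        best_i, best_logdet = None, -np.inf
        for i in range(n):
            if i in chosen:
                continue
            cand = info + w[i] * np.outer(X[i], X[i])
            _, logdet = np.linalg.slogdet(cand)
            if logdet > best_logdet:
                best_i, best_logdet = i, logdet
        chosen.append(best_i)
        info = info + w[best_i] * np.outer(X[best_i], X[best_i])
    return chosen

# Synthetic predictors and predicted probabilities for 50 hypothetical patients
rng = np.random.default_rng(3)
X = np.column_stack([np.ones(50), rng.standard_normal((50, 2))])
probs = 1.0 / (1.0 + np.exp(-X @ np.array([-2.0, 0.8, 0.5])))
print(greedy_d_optimal(X, probs, n_select=10))
```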
ERIC Educational Resources Information Center
Daly-Smith, Andy J. W.; McKenna, Jim; Radley, Duncan; Long, Jonathan
2011-01-01
Objective: To investigate the value of additional days of active commuting for meeting a criterion of 300+ minutes of moderate-to-vigorous physical activity (MVPA; 60+ mins/day x 5) during the school week. Methods: Based on seven-day diaries supported by teachers, binary logistic regression analyses were used to predict achievement of MVPA…
Translation and validation of the Canadian diabetes risk assessment questionnaire in China.
Guo, Jia; Shi, Zhengkun; Chen, Jyu-Lin; Dixon, Jane K; Wiley, James; Parry, Monica
2018-01-01
To adapt the Canadian Diabetes Risk Assessment Questionnaire for the Chinese population and to evaluate its psychometric properties. A cross-sectional study was conducted with a convenience sample of 194 individuals aged 35-74 years from October 2014 to April 2015. The Canadian Diabetes Risk Assessment Questionnaire was adapted and translated for the Chinese population. Test-retest reliability was conducted to measure stability. Criterion and convergent validity of the adapted questionnaire were assessed using 2-hr 75 g oral glucose tolerance tests and the Finnish Diabetes Risk Scores, respectively. Sensitivity and specificity were evaluated to establish its predictive validity. The test-retest reliability was 0.988. Adequate validity of the adapted questionnaire was demonstrated by positive correlations found between the scores and 2-hr 75 g oral glucose tolerance tests (r = .343, p < .001) and with the Finnish Diabetes Risk Scores (r = .738, p < .001). The area under receiver operating characteristic curve was 0.705 (95% CI .632, .778), demonstrating moderate diagnostic value at a cutoff score of 30. The sensitivity was 73%, with a positive predictive value of 57% and negative predictive value of 78%. Our results provided evidence supporting the translation consistency, content validity, convergent validity, criterion validity, sensitivity, and specificity of the translated Canadian Diabetes Risk Assessment Questionnaire with minor modifications. This paper provides clinical, practical, and methodological information on how to adapt a diabetes risk calculator between cultures for public health nurses. © 2017 Wiley Periodicals, Inc.
Accounting for uncertainty in health economic decision models by using model averaging.
Jackson, Christopher H; Thompson, Simon G; Sharples, Linda D
2009-04-01
Health economic decision models are subject to considerable uncertainty, much of which arises from choices between several plausible model structures, e.g. choices of covariates in a regression model. Such structural uncertainty is rarely accounted for formally in decision models but can be addressed by model averaging. We discuss the most common methods of averaging models and the principles underlying them. We apply them to a comparison of two surgical techniques for repairing abdominal aortic aneurysms. In model averaging, competing models are usually either weighted by using an asymptotically consistent model assessment criterion, such as the Bayesian information criterion, or a measure of predictive ability, such as Akaike's information criterion. We argue that the predictive approach is more suitable when modelling the complex underlying processes of interest in health economics, such as individual disease progression and response to treatment.
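A minimal sketch of averaging competing decision-model outputs with Akaike weights, one of the weighting schemes mentioned above; the AIC values and cost-effectiveness estimates are invented for illustration.

```python
import numpy as np

def akaike_weights(aic_values):
    """Model-averaging weights from AIC values (smaller AIC -> larger weight)."""
    aic = np.asarray(aic_values, dtype=float)
    delta = aic - aic.min()
    w = np.exp(-0.5 * delta)
    return w / w.sum()

# Hypothetical AICs and cost-effectiveness estimates from three model structures
aics = [1012.4, 1010.1, 1015.8]
incremental_cost_per_qaly = np.array([18500.0, 21200.0, 16800.0])
w = akaike_weights(aics)
print("weights:", np.round(w, 3))
print("model-averaged estimate:", round(float(w @ incremental_cost_per_qaly), 1))
```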
NASA Astrophysics Data System (ADS)
Ning, Fangkun; Jia, Weitao; Hou, Jian; Chen, Xingrui; Le, Qichi
2018-05-01
Various fracture criteria, especially the Johnson-Cook (J-C) model and the (normalized) Cockcroft-Latham (C-L) criterion, were contrasted and discussed. Based on the normalized C-L criterion adopted in this paper, FE simulations were carried out and hot rolling experiments were implemented over a temperature range of 200 °C–350 °C, rolling reduction rates of 25%–40% and rolling speeds of 7–21 r/min. The microstructure was observed by optical microscopy, and the damage values from the simulation results were compared with the length of the cracks for the various parameter combinations. The results show that the plate rolled at 350 °C, 40% reduction and 14 r/min generated fewer edge cracks, and its microstructure exhibited slight shear bands and fine dynamically recrystallized grains. An edge-crack prediction criterion model was obtained by combining the Zener-Hollomon equation with the deformation activation energy.
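A minimal sketch of the normalized Cockcroft-Latham damage accumulation used as the fracture criterion above: the ratio of the (tensile) maximum principal stress to the effective stress is integrated over equivalent strain. The stress and strain histories here are illustrative, and the critical damage value would have to be calibrated as in the paper.

```python
import numpy as np

def normalized_cockcroft_latham(max_principal_stress, effective_stress, eq_strain):
    """Accumulated normalized Cockcroft-Latham damage C = integral of (sigma1/sigma_eff) d(eq_strain).

    All three arrays describe one material point along the deformation path;
    only tensile (positive) principal stresses contribute to damage.
    """
    s1 = np.clip(np.asarray(max_principal_stress, dtype=float), 0.0, None)
    ratio = s1 / np.asarray(effective_stress, dtype=float)
    return float(np.trapz(ratio, np.asarray(eq_strain, dtype=float)))

# Hypothetical stress/strain history for an element near the plate edge
strain = np.linspace(0.0, 0.6, 50)
sigma_eff = 180.0 + 60.0 * strain            # MPa, illustrative flow stress
sigma_1 = 0.8 * sigma_eff                    # MPa, illustrative principal stress
damage = normalized_cockcroft_latham(sigma_1, sigma_eff, strain)
print("C =", round(damage, 3), "-> a crack is predicted once C exceeds the calibrated critical value")
```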
Pouwels, J Loes; Lansu, Tessa A M; Cillessen, Antonius H N
2016-01-01
This study had three goals. First, we examined the prevalence of the participant roles of bullying in middle adolescence and possible gender differences therein. Second, we examined the behavioral and status characteristics associated with the participant roles in middle adolescence. Third, we compared two sets of criteria for assigning students to the participant roles of bullying. Participants were 1,638 adolescents (50.9% boys, M(age) = 16.38 years, SD =.80) who completed the shortened participant role questionnaire and peer nominations for peer status and behavioral characteristics. Adolescents were assigned to the participant roles according to the relative criteria of Salmivalli, Lagerspetz, Björkqvist, Österman, and Kaukiainen (1996). Next, the students in each role were divided in two subgroups based on an additional absolute criterion: the Relative Only Criterion subgroup (nominated by less than 10% of their classmates) and the Absolute & Relative Criterion subgroup (nominated by at least 10% of their classmates). Adolescents who bullied or reinforced or assisted bullies were highly popular and disliked and scored high on peer-valued characteristics. Adolescents who were victimized held the weakest social position in the peer group. Adolescents who defended victims were liked and prosocial, but average in popularity and peer-valued characteristics. Outsiders held a socially weak position in the peer group, but were less disliked, less aggressive, and more prosocial than victims. The behavior and status profiles of adolescents in the participant roles were more extreme for the Absolute & Relative Criterion subgroup than for the Relative Only Criterion subgroup. © 2015 Wiley Periodicals, Inc.
NASA Astrophysics Data System (ADS)
Wu, Li; Adoko, Amoussou Coffi; Li, Bo
2018-04-01
In tunneling, quantitatively determining the rock mass strength parameters of the Hoek-Brown (HB) failure criterion is useful because it can improve the reliability of tunnel support design. In this study, a quantitative method is proposed to determine the rock mass quality parameters of the HB failure criterion, namely the Geological Strength Index (GSI) and the disturbance factor (D), based on the structure of the drilling core and the weathering condition of the rock mass, combined with acoustic wave tests, to calculate the rock mass strength. The Rock Mass Structure Index and the Rock Mass Weathering Index are used to quantify the GSI, while the longitudinal wave velocity (Vp) is employed to derive the value of D. The DK383+338 tunnel face of the Yaojia tunnel on the Shanghai-Kunming passenger dedicated line served as an illustration of how the methodology is implemented. The values of GSI and D were obtained using the HB criterion and then using the proposed method, and the measured in situ stress was used to evaluate their accuracy. To this end, the major and minor principal stresses were calculated from the GSI and D given by each method. The results indicated that both methods were close to the field observations, which suggests that the proposed method can also be used to determine the rock mass quality parameters quantitatively.
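As background to the quantities mentioned above, the following sketch evaluates the generalized Hoek-Brown relations in their widely used 2002 form, in which GSI and D enter through the constants mb, s and a; the input values are illustrative assumptions rather than the Yaojia tunnel data.

```python
import numpy as np

def hoek_brown_params(gsi, d, mi):
    """Generalized Hoek-Brown constants (2002 edition) from the Geological
    Strength Index (GSI), disturbance factor D and intact-rock constant mi."""
    mb = mi * np.exp((gsi - 100.0) / (28.0 - 14.0 * d))
    s = np.exp((gsi - 100.0) / (9.0 - 3.0 * d))
    a = 0.5 + (np.exp(-gsi / 15.0) - np.exp(-20.0 / 3.0)) / 6.0
    return mb, s, a

def hb_major_principal_stress(sigma3, sigma_ci, gsi, d, mi):
    """Major principal stress at failure for a given minor principal stress."""
    mb, s, a = hoek_brown_params(gsi, d, mi)
    return sigma3 + sigma_ci * (mb * sigma3 / sigma_ci + s) ** a

# Illustrative values only (not field data): sigma_ci in MPa, mi for the intact rock.
sigma_ci, mi = 60.0, 10.0
print(hb_major_principal_stress(sigma3=2.0, sigma_ci=sigma_ci, gsi=55.0, d=0.3, mi=mi))
```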
Modeling of cw OIL energy performance based on similarity criteria
NASA Astrophysics Data System (ADS)
Mezhenin, Andrey V.; Pichugin, Sergey Y.; Azyazov, Valeriy N.
2012-01-01
A simplified two-level generation model predicts that power extraction from a cw oxygen-iodine laser (OIL) with a stable resonator depends on three similarity criteria. Criterion τd is the ratio of the residence time of the active medium in the resonator to the O2(1Δ) reduction time at infinitely large intraresonator intensity. Criterion Π is the ratio of the small-signal gain to the threshold gain. Criterion Λ is the ratio of relaxation to excitation rates for the electronically excited iodine atoms I(2P1/2). Effective power extraction from a cw OIL is achieved when the values of the similarity criteria lie in the intervals τd = 5–8, Π = 3–8 and Λ ≤ 0.01.
ERIC Educational Resources Information Center
Meredith, Keith E.; Sabers, Darrell L.
Data required for evaluating a Criterion Referenced Measurement (CRM) is described with a matrix. The information within the matrix consists of the "pass-fail" decisions of two CRMs. By differentially defining these two CRMs, different concepts of reliability and validity can be examined. Indices suggested for analyzing the matrix are listed with…
Water-Sediment Controversy in Setting Environmental Standards for Selenium
Steven J. Hamilton; A. Dennis Lemly
1999-01-01
A substantial amount of laboratory and field research on selenium effects on biota has been accomplished since the national water quality criterion for selenium was published in 1987. Many articles have documented adverse effects on biota at concentrations below the current chronic criterion of 5 µg/L. This commentary will present information to support a national...
The Development of a Criterion Instrument for Counselor Selection.
ERIC Educational Resources Information Center
Remer, Rory; Sease, William
A measure of potential performance as a counselor is needed as an adjunct to the information presently employed in selection decisions. This article deals with one possible method of development of such a potential performance criterion and the steps taken, to date, in the attempt to validate it. It includes: the overall effectiveness of the…
ERIC Educational Resources Information Center
Tibbetts, Katherine A.; And Others
This paper describes the development of a criterion-referenced, performance-based measure of third grade reading comprehension. The primary purpose of the assessment is to contribute unique and valid information for use in the formative evaluation of a whole literacy program. A secondary purpose is to supplement other program efforts to…
ERIC Educational Resources Information Center
Hirschi, Andreas
2009-01-01
Interest differentiation and elevation are supposed to provide important information about a person's state of interest development, yet little is known about their development and criterion validity. The present study explored these constructs among a group of Swiss adolescents. Study 1 applied a cross-sectional design with 210 students in 11th…
A Humanistic Approach to Criterion Referenced Testing.
ERIC Educational Resources Information Center
Wilson, H. A.
Test construction is not the strictly logical process that we might wish it to be. This is particularly true in a large on-going project such as the National Assessment of Educational Progress (NAEP). Most of the really deep questions can only be answered by the exercise of well-informed human judgment. Criterion-referenced testing is still a term…
Changing the criterion for memory conformity in free recall and recognition.
Wright, Daniel B; Gabbert, Fiona; Memon, Amina; London, Kamala
2008-02-01
People's responses during memory studies are affected by what other people say. This memory conformity effect has been shown in both free recall and recognition. Here we examine whether accurate, inaccurate, and suggested answers are affected similarly when the response criterion is varied. In the first study, participants saw four pictures of detailed scenes and then discussed the content of these scenes with another participant who saw the same scenes, but with a couple of details changed. Participants were either told to recall everything they could and not to worry about making mistakes (lenient), or only to recall items if they were sure that they were accurate (strict). The strict instructions reduced the amount of inaccurate information reported that the other person suggested, but also reduced the number of accurate details recalled. In the second study, participants were shown a large set of faces and then their memory recognition was tested with a confederate on these and fillers. Here also, the criterion manipulation shifted both accurate and inaccurate responses, and those suggested by the confederate. The results are largely consistent with a shift in response criterion affecting accurate, inaccurate, and suggested information. In addition we varied the level of secrecy in the participants' responses. The effects of secrecy were complex and depended on the level of response criterion. Implications for interviewing eyewitnesses and line-ups are discussed.
Lindström, Martin; Axén, Elin; Lindström, Christine; Beckman, Anders; Moghaddassi, Mahnaz; Merlo, Juan
2006-12-01
The aim of this study was to investigate the influence of contextual (social capital and administrative/neo-materialist) and individual factors on lack of access to a regular doctor. The 2000 public health survey in Scania is a cross-sectional study. A total of 13,715 persons answered a postal questionnaire, which is 59% of the random sample. A multilevel logistic regression model, with individuals at the first level and municipalities at the second, was fitted. The effects (intra-class correlations, cross-level modification and odds ratios) of individual and municipality-level (social capital and health care district) factors on lack of access to a regular doctor were analysed using a simulation method. The Deviance Information Criterion (DIC) was used as the information criterion for comparing the models. The second-level (municipality) variance in lack of access to a regular doctor remained substantial even in the final models with all individual and contextual variables included. The model yielding the largest reduction in DIC was the one including age, sex and individual social participation (a network aspect of social capital), but the models including administrative and social capital second-level factors also reduced the DIC. This study suggests that both administrative health care district and social capital may partly explain individuals' self-reported lack of access to a regular doctor.
Beymer, Matthew R; Weiss, Robert E; Sugar, Catherine A; Bourque, Linda B; Gee, Gilbert C; Morisky, Donald E; Shu, Suzanne B; Javanbakht, Marjan; Bolan, Robert K
2017-01-01
Preexposure prophylaxis (PrEP) has emerged as a human immunodeficiency virus (HIV) prevention tool for populations at highest risk for HIV infection. Current US Centers for Disease Control and Prevention (CDC) guidelines for identifying PrEP candidates may not be specific enough to identify gay, bisexual, and other men who have sex with men (MSM) at the highest risk for HIV infection. We created an HIV risk score for HIV-negative MSM based on Syndemics Theory to develop a more targeted criterion for assessing PrEP candidacy. Behavioral risk assessment and HIV testing data were analyzed for HIV-negative MSM attending the Los Angeles LGBT Center between January 2009 and June 2014 (n = 9481). Syndemics Theory informed the selection of variables for a multivariable Cox proportional hazards model. Estimated coefficients were summed to create an HIV risk score, and model fit was compared between our model and CDC guidelines using the Akaike Information Criterion and Bayesian Information Criterion. Approximately 51% of MSM were above a cutpoint that we chose as an illustrative risk score to qualify for PrEP, identifying 75% of all seroconverting MSM. Our model demonstrated a better overall fit when compared with the CDC guidelines (Akaike Information Criterion Difference = 68) in addition to identifying a greater proportion of HIV infections. Current CDC PrEP guidelines should be expanded to incorporate substance use, partner-level, and other Syndemic variables that have been shown to contribute to HIV acquisition. Deployment of such personalized algorithms may better hone PrEP criteria and allow providers and their patients to make a more informed decision prior to PrEP use.
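The additive risk-score idea described above can be sketched as follows; the predictor names, coefficients, log-likelihoods and cutpoint are hypothetical placeholders, and the AIC comparison mirrors, but does not reproduce, the model comparison reported in the abstract.

```python
import numpy as np

# Hypothetical Cox-model coefficients for syndemic predictors (illustrative names only).
coefs = {"meth_use": 0.62, "multiple_partners": 0.48, "prior_sti": 0.55, "depression": 0.21}

def risk_score(person):
    """Sum the estimated log-hazard coefficients for the risk factors a
    person reports, giving an additive HIV risk score."""
    return sum(coefs[k] for k, present in person.items() if present)

person = {"meth_use": True, "multiple_partners": True, "prior_sti": False, "depression": True}
score = risk_score(person)
qualifies_for_prep = score >= 1.0   # assumed illustrative cutpoint

def aic(log_likelihood, n_params):
    """Akaike Information Criterion: smaller values indicate better fit-parsimony trade-off."""
    return -2.0 * log_likelihood + 2.0 * n_params

# Comparing two model specifications by AIC difference, analogous to comparing
# a syndemics-based model with a guideline-based model (log-likelihoods invented).
aic_syndemics = aic(log_likelihood=-4321.5, n_params=9)
aic_guideline = aic(log_likelihood=-4360.1, n_params=5)
print(score, qualifies_for_prep, aic_guideline - aic_syndemics)
```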
A new statistical framework to assess structural alignment quality using information compression
Collier, James H.; Allison, Lloyd; Lesk, Arthur M.; Garcia de la Banda, Maria; Konagurthu, Arun S.
2014-01-01
Motivation: Progress in protein biology depends on the reliability of results from a handful of computational techniques, structural alignments being one. Recent reviews have highlighted substantial inconsistencies and differences between alignment results generated by the ever-growing stock of structural alignment programs. The lack of consensus on how the quality of structural alignments must be assessed has been identified as the main cause for the observed differences. Current methods assess structural alignment quality by constructing a scoring function that attempts to balance conflicting criteria, mainly alignment coverage and fidelity of structures under superposition. This traditional approach to measuring alignment quality, the subject of considerable literature, has failed to solve the problem. Further development along the same lines is unlikely to rectify the current deficiencies in the field. Results: This paper proposes a new statistical framework to assess structural alignment quality and significance based on lossless information compression. This is a radical departure from the traditional approach of formulating scoring functions. It links the structural alignment problem to the general class of statistical inductive inference problems, solved using the information-theoretic criterion of minimum message length. Based on this, we developed an efficient and reliable measure of structural alignment quality, I-value. The performance of I-value is demonstrated in comparison with a number of popular scoring functions, on a large collection of competing alignments. Our analysis shows that I-value provides a rigorous and reliable quantification of structural alignment quality, addressing a major gap in the field. Availability: http://lcb.infotech.monash.edu.au/I-value Contact: arun.konagurthu@monash.edu Supplementary information: Online supplementary data are available at http://lcb.infotech.monash.edu.au/I-value/suppl.html PMID:25161241
Valuing information for sewer replacement decisions.
van Riel, Wouter; Langeveld, Jeroen; Herder, Paulien; Clemens, François
Decision-making for sewer asset management is partly based on intuition and often lacks explicit argumentation, which hampers decision transparency and reproducibility. This is undesirable in light of public accountability and cost-effectiveness. It is unknown to what extent each decision criterion is valued by decision-makers. Further insight into this relative importance would improve understanding of the decision-making of sewer system managers. A digital questionnaire (response rate 43%), containing pairwise comparisons between 10 relevant information sources, was therefore sent to all 407 municipalities in the Netherlands to analyse the relative importance of the sources and to assess whether a shared frame of reasoning is present. Thurstone's law of comparative judgment was used for the analysis, combined with several consistency tests. Results show that camera inspections were valued highest, while pipe age was considered least important. Respondents were consistent individually and also showed consistency as a group, indicating a common framework of reasoning. Their feedback showed, however, that they found it difficult to make general comparisons without a specific context, which indicates that decision-making in practice is more likely to be steered by other mechanisms than purely combining information sources.
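A minimal sketch of Thurstone Case V scaling of pairwise-comparison data, the analysis method named above, is given below; the 3x3 preference matrix is invented for illustration and is not the Dutch survey data.

```python
import numpy as np
from scipy.stats import norm

def thurstone_case_v(pref):
    """Thurstone Case V scaling of pairwise-comparison data.

    pref[i, j] is the proportion of respondents who judged information
    source j more important than source i. Proportions are converted to
    standard-normal deviates and averaged per column to give interval-scale
    values, anchored at mean zero."""
    p = np.clip(np.asarray(pref, float), 0.01, 0.99)   # avoid infinite z-scores
    np.fill_diagonal(p, 0.5)                           # an item compared with itself
    z = norm.ppf(p)
    scale = z.mean(axis=0)
    return scale - scale.mean()

# Hypothetical 3x3 example (e.g. camera inspection, complaints, pipe age).
pref = np.array([[0.5, 0.3, 0.1],
                 [0.7, 0.5, 0.2],
                 [0.9, 0.8, 0.5]])
print(thurstone_case_v(pref))   # highest value = most important source
```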
Accounting for uncertainty in health economic decision models by using model averaging
Jackson, Christopher H; Thompson, Simon G; Sharples, Linda D
2009-01-01
Health economic decision models are subject to considerable uncertainty, much of which arises from choices between several plausible model structures, e.g. choices of covariates in a regression model. Such structural uncertainty is rarely accounted for formally in decision models but can be addressed by model averaging. We discuss the most common methods of averaging models and the principles underlying them. We apply them to a comparison of two surgical techniques for repairing abdominal aortic aneurysms. In model averaging, competing models are usually either weighted by using an asymptotically consistent model assessment criterion, such as the Bayesian information criterion, or a measure of predictive ability, such as Akaike's information criterion. We argue that the predictive approach is more suitable when modelling the complex underlying processes of interest in health economics, such as individual disease progression and response to treatment. PMID:19381329
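As an aside on the weighting step described in this abstract, the following minimal sketch shows how information-criterion values (AIC or BIC) for competing model structures are commonly converted into normalized model-averaging weights; the criterion values and per-model estimates are hypothetical, not taken from the paper.

```python
import numpy as np

def ic_weights(ic_values):
    """Convert information-criterion values (AIC or BIC) for competing models
    into normalized model-averaging weights, using w_i proportional to
    exp(-0.5 * delta_i), where delta_i is the difference from the smallest value."""
    ic = np.asarray(ic_values, dtype=float)
    delta = ic - ic.min()
    w = np.exp(-0.5 * delta)
    return w / w.sum()

# Hypothetical criterion values for three candidate model structures.
aic = [1012.4, 1010.1, 1015.9]
weights = ic_weights(aic)

# Model-averaged estimate of a quantity predicted by each model
# (e.g. an incremental cost), weighted by the criterion-based weights.
per_model_estimates = np.array([1250.0, 1175.0, 1410.0])
print(weights, float(weights @ per_model_estimates))
```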
NASA Astrophysics Data System (ADS)
Pakpahan, N. F. D. B.
2018-01-01
Research methodology is a subject whose material must be mastered by students who will go on to write a thesis. Instruction should create conditions for learning that is active, interactive and effective, which is the aim of Team Assisted Individualization (TAI) cooperative learning. The purposes of this study were: 1) to improve student learning outcomes in the research methodology course through TAI cooperative learning; 2) to improve teaching activities; and 3) to improve learning activities. The study is classroom action research conducted at the Department of Civil Engineering, Universitas Negeri Surabaya. The research subjects were 30 students and the course lecturer. In the first cycle, 20 students (67%) achieved mastery and 10 students (33%) did not; in the second cycle, 26 students (87%) achieved mastery and 4 students (13%) did not, an improvement in learning outcomes of 20 percentage points. Teaching activities scored 3.15 (fair) in the first cycle and 4.22 (good) in the second; learning activities scored 3.05 (fair) in the first cycle and 3.95 (good) in the second.
NASA Astrophysics Data System (ADS)
Perekhodtseva, E. V.
2009-09-01
The development of a successful method for forecasting storm winds, including squalls and tornadoes, and heavy rainfall, events that often cause human and material losses, would make it possible to take proper measures to protect people and prevent the destruction of buildings. Successful forecasts issued well in advance (12 to 48 hours) make it possible to reduce these losses. Until recently, prediction of these phenomena was a very difficult problem for forecasters, and the existing graphical and calculation methods still depend on the subjective decision of an operator. At present there is no hydrodynamic model in Russia for forecasting maximum precipitation and wind velocities V > 25 m/s, so the main tools for objective forecasting are statistical methods that use the dependence of these phenomena on a number of atmospheric parameters (predictors). Statistical decision rules for the alternative and probabilistic forecast of these events were obtained in accordance with the "perfect prognosis" concept using objective analysis data. For this purpose, separate training samples for the presence and absence of storm winds and heavy rainfall were compiled automatically, each containing the values of forty physically substantiated potential predictors. An empirical statistical method was then applied that involves diagonalization of the mean correlation matrix R of the predictors and extraction of diagonal blocks of strongly correlated predictors; in this way the most informative predictors were selected without loss of information. The statistical decision rules U(X) for diagnosis and prognosis of the phenomena were calculated for the chosen informative vector of predictors, using the Mahalanobis distance criterion and the Vapnik-Chervonenkis minimum-entropy criterion for predictor selection. Successful development of hydrodynamic models for short-term forecasting and the improvement of 36-48 h forecasts of pressure, temperature and other parameters allowed us to use the prognostic fields of those models to calculate the discriminant functions at the nodes of a 150 x 150 km grid, together with the probabilities P of dangerous wind, and thus to obtain fully automated forecasts. To convert to an alternative (yes/no) forecast, the author proposes empirical threshold values specified for each phenomenon and a lead time of 36 hours. According to the Pirsey-Obukhov criterion T, the skill of these automated statistical forecasts of squalls and tornadoes at 36-48 hours and of heavy rainfall in the warm season over the territory of Italy, Spain and the Balkan countries is T = 1 - a - b = 0.54-0.78 in the author's experiments. Many examples of very successful forecasts of summer storm winds and heavy rainfall over Italy and Spain are presented in this report. The same decision rules were also applied to forecasting these phenomena during the cold period of this year, when heavy snowfalls and storm winds were observed very often in Spain and Italy, and these forecasts were also successful.
NASA Astrophysics Data System (ADS)
Tauscher, Keith; Rapetti, David; Burns, Jack O.; Switzer, Eric
2018-02-01
The sky-averaged (global) highly redshifted 21 cm spectrum from neutral hydrogen is expected to appear in the VHF range of ∼20–200 MHz and its spectral shape and strength are determined by the heating properties of the first stars and black holes, by the nature and duration of reionization, and by the presence or absence of exotic physics. Measurements of the global signal would therefore provide us with a wealth of astrophysical and cosmological knowledge. However, the signal has not yet been detected because it must be seen through strong foregrounds weighted by a large beam, instrumental calibration errors, and ionospheric, ground, and radio-frequency-interference effects, which we collectively refer to as “systematics.” Here, we present a signal extraction method for global signal experiments which uses Singular Value Decomposition of “training sets” to produce systematics basis functions specifically suited to each observation. Instead of requiring precise absolute knowledge of the systematics, our method effectively requires precise knowledge of how the systematics can vary. After calculating eigenmodes for the signal and systematics, we perform a weighted least square fit of the corresponding coefficients and select the number of modes to include by minimizing an information criterion. We compare the performance of the signal extraction when minimizing various information criteria and find that minimizing the Deviance Information Criterion most consistently yields unbiased fits. The methods used here are built into our widely applicable, publicly available Python package, pylinex, which analytically calculates constraints on signals and systematics from given data, errors, and training sets.
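The following simplified sketch illustrates the general idea of deriving basis functions from a training set by SVD, fitting their coefficients by weighted least squares and choosing the number of modes with an information criterion. It is a schematic illustration only, uses a BIC-like criterion for brevity, and is not the pylinex API.

```python
import numpy as np

def svd_modes(training_set, n_modes):
    """Basis functions from an SVD of a training set whose rows are
    simulated systematics curves (e.g. beam-weighted foregrounds)."""
    _, _, vt = np.linalg.svd(training_set, full_matrices=False)
    return vt[:n_modes]                      # shape (n_modes, n_channels)

def weighted_least_squares(basis, data, noise_std):
    """Weighted least-squares coefficients and the model they imply."""
    w = 1.0 / noise_std**2
    weighted_basis = basis * w               # weight each channel
    cov = np.linalg.inv(weighted_basis @ basis.T)
    coeffs = cov @ (weighted_basis @ data)
    return coeffs, coeffs @ basis

def pick_n_modes(training_set, data, noise_std, max_modes=20):
    """Choose the number of modes by minimizing a BIC-like criterion
    (the paper itself compares several criteria, including the DIC)."""
    n = data.size
    best = None
    for k in range(1, max_modes + 1):
        basis = svd_modes(training_set, k)
        _, model = weighted_least_squares(basis, data, noise_std)
        chi2 = np.sum(((data - model) / noise_std) ** 2)
        bic = chi2 + k * np.log(n)
        if best is None or bic < best[0]:
            best = (bic, k)
    return best[1]

# Toy training set of power-law-like curves and one noisy realization to fit.
rng = np.random.default_rng(0)
freqs = np.linspace(50, 100, 101)
train = np.array([(1000 + 20 * i) * (freqs / 75.0) ** (-2.5 + 0.01 * i) for i in range(30)])
data = train[0] + rng.normal(0, 1, freqs.size)
print(pick_n_modes(train, data, noise_std=1.0, max_modes=10))
```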
Spatiotemporal coding in the cortex: information flow-based learning in spiking neural networks.
Deco, G; Schürmann, B
1999-05-15
We introduce a learning paradigm for networks of integrate-and-fire spiking neurons that is based on an information-theoretic criterion. This criterion can be viewed as a first principle that accounts for the experimentally observed fact that cortical neurons fire synchronously for some stimuli and not for others. The principle postulates a nonparametric reconstruction method as the optimization criterion for learning the functional connectivity that justifies and explains synchronous firing as a mechanism for feature binding in spatiotemporal coding. In information-theoretic terms, this corresponds to maximizing the ability to discriminate between different sensory inputs in minimal time.
Complex symmetric matrices with strongly stable iterates
NASA Technical Reports Server (NTRS)
Tadmor, E.
1985-01-01
Complex-valued symmetric matrices are studied. A simple expression for the spectral norm of such matrices is obtained by utilizing a unitarily congruent invariant form. A sharp criterion is then provided for identifying those symmetric matrices whose spectral norm does not exceed one: such strongly stable matrices are usually sought in connection with convergent difference approximations to partial differential equations. As an example, the derived criterion is applied to establish the strong stability of a Lax-Wendroff scheme.
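A minimal numerical check of the stability condition discussed above (spectral norm not exceeding one) might look as follows; the example matrix is an arbitrary complex symmetric matrix chosen for illustration.

```python
import numpy as np

def is_strongly_stable(a, tol=1e-12):
    """A difference-scheme amplification matrix is strongly stable in the
    sense used here when its spectral norm (largest singular value) does not
    exceed one, so that powers of the matrix remain uniformly bounded."""
    return np.linalg.norm(a, 2) <= 1.0 + tol

# Complex symmetric example (symmetric, not Hermitian).
a = np.array([[0.6 + 0.2j, 0.1j],
              [0.1j, 0.5 - 0.1j]])
print(np.allclose(a, a.T), is_strongly_stable(a))
```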
Varying the valuating function and the presentable bank in computerized adaptive testing.
Barrada, Juan Ramón; Abad, Francisco José; Olea, Julio
2011-05-01
In computerized adaptive testing, the most commonly used valuating function is the Fisher information function. When the goal is to keep item bank security at a maximum, the valuating function that seems most convenient is the matching criterion, valuating the distance between the estimated trait level and the point where the maximum of the information function is located. Recently, it has been proposed not to keep the same valuating function constant for all the items in the test. In this study we expand the idea of combining the matching criterion with the Fisher information function. We also manipulate the number of strata into which the bank is divided. We find that the manipulation of the number of items administered with each function makes it possible to move from the pole of high accuracy and low security to the opposite pole. It is possible to greatly improve item bank security with much fewer losses in accuracy by selecting several items with the matching criterion. In general, it seems more appropriate not to stratify the bank.
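To make the two valuating functions concrete, the sketch below computes Fisher information and the matching criterion for a small hypothetical item bank, using a 2PL item model for simplicity (the study itself may use a different item model and bank structure).

```python
import numpy as np

def fisher_information_2pl(theta, a, b):
    """Fisher information of a 2PL item at ability theta: I(theta) = a^2 * P * (1 - P)."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return a**2 * p * (1.0 - p)

def matching_criterion_2pl(theta, b):
    """Distance between the current ability estimate and the point where a 2PL
    item is most informative (its difficulty b); smaller values are preferred."""
    return np.abs(theta - b)

# Hypothetical item bank and interim ability estimate.
a_params = np.array([1.2, 0.8, 1.5, 1.0])
b_params = np.array([-0.5, 0.2, 0.1, 1.3])
theta_hat = 0.15

best_by_information = int(np.argmax(fisher_information_2pl(theta_hat, a_params, b_params)))
best_by_matching = int(np.argmin(matching_criterion_2pl(theta_hat, b_params)))
print(best_by_information, best_by_matching)
```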
Tseng, Yi-Ju; Wu, Jung-Hsuan; Ping, Xiao-Ou; Lin, Hui-Chi; Chen, Ying-Yu; Shang, Rung-Ji; Chen, Ming-Yuan; Lai, Feipei
2012-01-01
Background The emergence and spread of multidrug-resistant organisms (MDROs) are causing a global crisis. Combating antimicrobial resistance requires prevention of transmission of resistant organisms and improved use of antimicrobials. Objectives To develop a Web-based information system for automatic integration, analysis, and interpretation of the antimicrobial susceptibility of all clinical isolates that incorporates rule-based classification and cluster analysis of MDROs and implements control chart analysis to facilitate outbreak detection. Methods Electronic microbiological data from a 2200-bed teaching hospital in Taiwan were classified according to predefined criteria of MDROs. The numbers of organisms, patients, and incident patients in each MDRO pattern were presented graphically to describe spatial and time information in a Web-based user interface. Hierarchical clustering with 7 upper control limits (UCL) was used to detect suspicious outbreaks. The system’s performance in outbreak detection was evaluated based on vancomycin-resistant enterococcal outbreaks determined by a hospital-wide prospective active surveillance database compiled by infection control personnel. Results The optimal UCL for MDRO outbreak detection was the upper 90% confidence interval (CI) using germ criterion with clustering (area under ROC curve (AUC) 0.93, 95% CI 0.91 to 0.95), upper 85% CI using patient criterion (AUC 0.87, 95% CI 0.80 to 0.93), and one standard deviation using incident patient criterion (AUC 0.84, 95% CI 0.75 to 0.92). The performance indicators of each UCL were statistically significantly higher with clustering than those without clustering in germ criterion (P < .001), patient criterion (P = .04), and incident patient criterion (P < .001). Conclusion This system automatically identifies MDROs and accurately detects suspicious outbreaks of MDROs based on the antimicrobial susceptibility of all clinical isolates. PMID:23195868
March, F.A.; Dwyer, F.J.; Augspurger, T.; Ingersoll, C.G.; Wang, N.; Mebane, C.A.
2007-01-01
The state of Oklahoma has designated several areas as freshwater mussel sanctuaries in an attempt to provide freshwater mussel species a degree of protection and to facilitate their reproduction. We evaluated the protection afforded freshwater mussels by the U.S. Environmental Protection Agency (U.S. EPA) hardness-based 1996 ambient copper water quality criteria, the 2007 U.S. EPA water quality criteria based on the biotic ligand model and the 2005 state of Oklahoma copper water quality standards. Both the criterion maximum concentration and criterion continuous concentration were evaluated. Published acute and chronic copper toxicity data that met American Society for Testing and Materials guidance for test acceptability were obtained for exposures conducted with glochidia or juvenile freshwater mussels. We tabulated toxicity data for glochidia and juveniles to calculate 20 species mean acute values for freshwater mussels. Generally, freshwater mussel species mean acute values were similar to those of the more sensitive species included in the U.S. EPA water quality derivation database. When added to the database of genus mean acute values used in deriving 1996 copper water quality criteria, 14 freshwater mussel genus mean acute values included 10 of the lowest 15 genus mean acute values, with three mussel species having the lowest values. Chronic exposure and sublethal effects freshwater mussel data available for four species and acute to chronic ratios were used to evaluate the criterion continuous concentration. On the basis of the freshwater mussel toxicity data used in this assessment, the hardness-based 1996 U.S. EPA water quality criteria, the 2005 Oklahoma water quality standards, and the 2007 U.S. EPA water quality criteria based on the biotic ligand model might need to be revised to afford protection to freshwater mussels. © 2007 SETAC.
Setting Priorities in Global Child Health Research Investments: Addressing Values of Stakeholders
Kapiriri, Lydia; Tomlinson, Mark; Gibson, Jennifer; Chopra, Mickey; El Arifeen, Shams; Black, Robert E.; Rudan, Igor
2007-01-01
Aim To identify main groups of stakeholders in the process of health research priority setting and propose strategies for addressing their systems of values. Methods In three separate exercises that took place between March and June 2006 we interviewed three different groups of stakeholders: 1) members of the global research priority setting network; 2) a diverse group of national-level stakeholders from South Africa; and 3) participants at the conference related to international child health held in Washington, DC, USA. Each of the groups was administered different version of the questionnaire in which they were asked to set weights to criteria (and also minimum required thresholds, where applicable) that were a priori defined as relevant to health research priority setting by the consultants of the Child Health and Nutrition Research initiative (CHNRI). Results At the global level, the wide and diverse group of respondents placed the greatest importance (weight) to the criterion of maximum potential for disease burden reduction, while the most stringent threshold was placed on the criterion of answerability in an ethical way. Among the stakeholders’ representatives attending the international conference, the criterion of deliverability, answerability, and sustainability of health research results was proposed as the most important one. At the national level in South Africa, the greatest weight was placed on the criterion addressing the predicted impact on equity of the proposed health research. Conclusions Involving a large group of stakeholders when setting priorities in health research investments is important because the criteria of relevance to scientists and technical experts, whose knowledge and technical expertise is usually central to the process, may not be appropriate to specific contexts and in accordance with the views and values of those who invest in health research, those who benefit from it, or wider society as a whole. PMID:17948948
Thermodynamic criteria for estimating the kinetic parameters of catalytic reactions
NASA Astrophysics Data System (ADS)
Mitrichev, I. I.; Zhensa, A. V.; Kol'tsova, E. M.
2017-01-01
Kinetic parameters are estimated using two criteria in addition to the traditional criterion that considers the consistency between experimental and modeled conversion data: thermodynamic consistency and the consistency with entropy production (i.e., the absolute rate of the change in entropy due to exchange with the environment is consistent with the rate of entropy production in the steady state). A special procedure is developed and executed on a computer to achieve the thermodynamic consistency of a set of kinetic parameters with respect to both the standard entropy of a reaction and the standard enthalpy of a reaction. A problem of multi-criterion optimization, reduced to a single-criterion problem by summing weighted values of the three criteria listed above, is solved. Using the reaction of NO reduction with CO on a platinum catalyst as an example, it is shown that the set of parameters proposed by D.B. Mantri and P. Aghalayam gives much worse agreement with experimental values than the set obtained on the basis of three criteria: the sum of the squares of deviations for conversion, the thermodynamic consistency, and the consistency with entropy production.
Large Area Crop Inventory Experiment (LACIE). YES phase 1 yield feasibility report
NASA Technical Reports Server (NTRS)
1977-01-01
The author has identified the following significant results. Each state model was separately evaluated to determine whether its performance, projected to the country level, would satisfy a 90/90 criterion. All state models, except the North Dakota and Kansas models, satisfied that criterion both for district estimates aggregated to the state level and for state estimates taken directly from the models. In addition to the tests of the 90/90 criterion, the models were examined for their ability to respond adequately to fluctuations in weather. This portion of the analysis was based on a subjective interpretation of the values of certain descriptive statistics. As a result, 10 of the 12 models were judged to respond inadequately to variation in weather-related variables.
Lee, Meng-Chih; Hsu, Chih-Cheng; Tsai, Yi-Fen; Chen, Ching-Yu; Lin, Cheng-Chieh; Wang, Ching-Yi
Current evidence suggests that grip strength and usual gait speed (UGS) are important predictors of instrumental activities of daily living (IADL) disability. Knowing the optimum cut points of these tests for discriminating people with and without IADL disability could help clinicians or researchers to better interpret the test results and make medical decisions. The purpose of this study was to determine the cutoff values of grip strength and UGS for best discriminating community-dwelling older adults with and without IADL disability, separately for men and women, and to investigate their association with IADL disability. We conducted secondary data analysis on a national dataset collected in the Sarcopenia and Translational Aging Research in Taiwan (START). The data used in this study consisted of health data of 2420 community-dwelling older adults 65 years and older with no history of stroke and with complete data. IADL disability was defined as at least 1 IADL item scored as "need help" or "unable to perform." Receiver operating characteristics analysis was used to estimate the optimum grip strength and UGS cut points for best discriminating older adults with/without IADL disability. The association between each physical performance (grip strength and UGS) and IADL disability was assessed with odds ratios (ORs). With IADL disability as the criterion, the optimal cutoff values of grip strength were 28.7 kg for men and 16.0 kg for women, and those for UGS were 0.76 m/s for men and 0.66 m/s for women. The grip strength test showed satisfactory discriminant validity (area under the curve > 0.7) in men and a strong association with IADL disability (OR > 4). Our cut points using IADL disability as the criterion were close to those indicating frailty or sarcopenia. Our reported cutoffs can serve as criterion-referenced values, along with those previously determined using different indicators, and provide important landmarks on the performance continua of older adults' grip strength and UGS. These landmarks could be useful in interpreting test results, monitoring changes in performance, and identifying individuals requiring timely intervention. For identifying older adults at risk of IADL disability, grip strength is superior to UGS.
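A sketch of the receiver operating characteristic step described above, using Youden's J to pick a cut point, is shown below; the grip-strength values and disability labels are invented, and the negation of the score reflects the assumption that lower grip strength indicates disability.

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

def optimal_cutoff(values, has_disability):
    """Cut point maximizing Youden's J = sensitivity + specificity - 1.
    Lower grip strength indicates disability, so the negated value serves
    as the 'score' for the positive (disability) class."""
    score = -np.asarray(values, float)
    fpr, tpr, thresholds = roc_curve(has_disability, score)
    j = tpr - fpr
    best = np.argmax(j)
    return -thresholds[best], roc_auc_score(has_disability, score)

# Hypothetical grip-strength data (kg) for men, with 1 = IADL disability.
grip = np.array([35.1, 30.4, 27.8, 24.9, 33.0, 22.5, 29.3, 26.1])
disabled = np.array([0, 0, 1, 1, 0, 1, 0, 1])
cutoff, auc = optimal_cutoff(grip, disabled)
print(cutoff, auc)   # values at or below the cutoff would be flagged as at risk
```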
Three-Dimensional Dynamic Rupture in Brittle Solids and the Volumetric Strain Criterion
NASA Astrophysics Data System (ADS)
Uenishi, K.; Yamachi, H.
2017-12-01
As pointed out by Uenishi (2016 AGU Fall Meeting), source dynamics of ordinary earthquakes is often studied in the framework of 3D rupture in brittle solids but our knowledge of mechanics of actual 3D rupture is limited. Typically, criteria derived from 1D frictional observations of sliding materials or post-failure behavior of solids are applied in seismic simulations, and although mode-I cracks are frequently encountered in earthquake-induced ground failures, rupture in tension is in most cases ignored. Even when it is included in analyses, the classical maximum principal tensile stress rupture criterion is repeatedly used. Our recent basic experiments of dynamic rupture of spherical or cylindrical monolithic brittle solids by applying high-voltage electric discharge impulses or impact loads have indicated generation of surprisingly simple and often flat rupture surfaces in 3D specimens even without the initial existence of planes of weakness. However, at the same time, the snapshots taken by a high-speed digital video camera have shown rather complicated histories of rupture development in these 3D solid materials, which seem to be difficult to be explained by, for example, the maximum principal stress criterion. Instead, a (tensile) volumetric strain criterion where the volumetric strain (dilatation or the first invariant of the strain tensor) is a decisive parameter for rupture seems more effective in computationally reproducing the multi-directionally propagating waves and rupture. In this study, we try to show the connection between this volumetric strain criterion and other classical rupture criteria or physical parameters employed in continuum mechanics, and indicate that the criterion has, to some degree, physical meanings. First, we mathematically illustrate that the criterion is equivalent to a criterion based on the mean normal stress, a crucial parameter in plasticity. Then, we mention the relation between the volumetric strain criterion and the failure envelope of the Mohr-Coulomb criterion that describes shear-related rupture. The critical value of the volumetric strain for rupture may be controlled by the apparent cohesion and apparent angle of internal friction of the Mohr-Coulomb criterion.
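The criterion discussed above can be stated very compactly in code: rupture is flagged when the first strain invariant (dilatation) exceeds a critical value. The strain state and the critical dilatation below are illustrative assumptions only.

```python
import numpy as np

def volumetric_strain(strain_tensor):
    """First invariant (dilatation) of the small-strain tensor:
    e_v = tr(eps) = eps_xx + eps_yy + eps_zz."""
    return float(np.trace(strain_tensor))

def tensile_volumetric_rupture(strain_tensor, critical_dilatation):
    """Volumetric strain criterion: rupture is predicted when the dilatation
    reaches a critical, material-dependent value."""
    return volumetric_strain(strain_tensor) >= critical_dilatation

# Illustrative strain state and an assumed critical dilatation.
eps = np.array([[4.0e-4, 1.0e-5, 0.0],
                [1.0e-5, 2.5e-4, 0.0],
                [0.0,    0.0,   -1.0e-4]])
print(volumetric_strain(eps), tensile_volumetric_rupture(eps, critical_dilatation=5.0e-4))
```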
Bulk hydrodynamic stability and turbulent saturation in compressing hot spots
NASA Astrophysics Data System (ADS)
Davidovits, Seth; Fisch, Nathaniel J.
2018-04-01
For hot spots compressed at constant velocity, we give a hydrodynamic stability criterion that describes the expected energy behavior of non-radial hydrodynamic motion for different classes of trajectories (in ρR-T space). For a given compression velocity, this criterion depends on ρR, T, and dT/d(ρR) (the trajectory slope) and applies point-wise so that the expected behavior can be determined instantaneously along the trajectory. Among the classes of trajectories are those where the hydromotion is guaranteed to decrease and those where the hydromotion is bounded by a saturated value. We calculate this saturated value and find the compression velocities for which hydromotion may be a substantial fraction of hot-spot energy at burn time. The Lindl [Phys. Plasmas 2, 3933 (1995)] "attractor" trajectory is shown to experience non-radial hydrodynamic energy that grows towards this saturated state. Comparing the saturation value with the available detailed 3D simulation results, we find that the fluctuating velocities in these simulations reach substantial fractions of the saturated value.
Effects of ecstasy/polydrug use on memory for associative information.
Gallagher, Denis T; Fisk, John E; Montgomery, Catharine; Judge, Jeannie; Robinson, Sarita J; Taylor, Paul J
2012-08-01
Associative learning underpins behaviours that are fundamental to the everyday functioning of the individual. Evidence pointing to learning deficits in recreational drug users merits further examination. A word pair learning task was administered to examine associative learning processes in ecstasy/polydrug users. After assignment to either single or divided attention conditions, 44 ecstasy/polydrug users and 48 non-users were presented with 80 word pairs at encoding. Following this, four types of stimuli were presented at the recognition phase: the words as originally paired (old pairs), previously presented words in different pairings (conjunction pairs), old words paired with new words, and pairs of new words (not presented previously). The task was to identify which of the stimuli were intact old pairs. Ecstasy/polydrug users produced significantly more false-positive responses overall compared to non-users. Increased long-term frequency of ecstasy use was positively associated with the propensity to produce false-positive responses. It was also associated with a more liberal signal detection theory decision criterion value. Measures of long term and recent cannabis use were also associated with these same word pair learning outcome measures. Conjunction word pairs, irrespective of drug use, generated the highest level of false-positive responses, and significantly more false-positive responses were made in the divided attention condition compared to the single attention condition. Overall, the results suggest that long-term ecstasy exposure may induce a deficit in associative learning and this may be in part a consequence of users adopting a more liberal decision criterion value.
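For reference, the decision criterion mentioned above is typically computed from hit and false-alarm rates as in the following sketch; the counts are hypothetical and the simple correction for extreme rates is one common convention, not necessarily the one used in the study.

```python
from scipy.stats import norm

def sdt_measures(hits, misses, false_alarms, correct_rejections):
    """Signal-detection sensitivity (d') and decision criterion (c) from a
    recognition confusion table, with a 0.5 count correction to avoid hit or
    false-alarm rates of exactly 0 or 1."""
    h = (hits + 0.5) / (hits + misses + 1.0)
    f = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    zh, zf = norm.ppf(h), norm.ppf(f)
    d_prime = zh - zf
    c = -0.5 * (zh + zf)   # negative c = liberal criterion, positive c = conservative
    return d_prime, c

# Hypothetical counts for intact old pairs (signal) vs. conjunction pairs (noise).
print(sdt_measures(hits=62, misses=18, false_alarms=30, correct_rejections=50))
```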
Time series ARIMA models for daily price of palm oil
NASA Astrophysics Data System (ADS)
Ariff, Noratiqah Mohd; Zamhawari, Nor Hashimah; Bakar, Mohd Aftar Abu
2015-02-01
Palm oil is deemed one of the most important commodities forming the economic backbone of Malaysia. Modeling and forecasting the daily price of palm oil is of great interest for Malaysia's economic growth. In this study, time series ARIMA models are used to fit the daily price of palm oil. The Akaike Information Criterion (AIC), the Akaike Information Criterion with a correction for finite sample sizes (AICc) and the Bayesian Information Criterion (BIC) are used to compare the different ARIMA models considered. It is found that the ARIMA(1,2,1) model is suitable for the daily price of crude palm oil in Malaysia for the years 2010 to 2012.
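A sketch of the model-comparison workflow described above, using statsmodels, could look as follows; the price series is simulated rather than the actual 2010-2012 crude palm oil data, and the candidate orders are arbitrary examples.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.arima.model import ARIMA

# Simulated stand-in for a daily crude-palm-oil price series.
rng = np.random.default_rng(0)
price = pd.Series(2500 + np.cumsum(rng.normal(0, 12, size=500)))

candidates = [(1, 2, 1), (0, 2, 1), (1, 2, 0), (2, 2, 2)]
results = {}
for order in candidates:
    fit = ARIMA(price, order=order).fit()
    results[order] = (fit.aic, fit.aicc, fit.bic)

# Pick the order with the smallest AICc (any of the three criteria could be used).
best = min(results, key=lambda o: results[o][1])
print(best, results[best])
```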
Design of dry-friction dampers for turbine blades
NASA Technical Reports Server (NTRS)
Ancona, W.; Dowell, E. H.
1983-01-01
A study is conducted of turbine blade forced response, where the blade has been modeled as a cantilever beam with a generally dry friction damper attached, and where the minimization of blade root strain as the excitation frequency is varied over a given range is the criterion for the evaluation of the effectiveness of the dry friction damper. Attempts are made to determine the location of the damper configuration best satisfying the design criterion, together with the best damping force (assuming that the damper location has been fixed). Results suggest that there need not be an optimal value for the damping force, or an optimal location for the dry friction damper, although there is a range of values which should be avoided.
ERIC Educational Resources Information Center
Shriver, Edgar L.; And Others
This document furnishes a complete copy of the Test Subject's Instructions and the Test Administrator's Handbook for a battery of criterion referenced Job Task Performance Tests (JTPT) for electronic maintenance. General information is provided on soldering, Radar Set AN/APN-147(v), Radar Set Special Equipment, Radar Set Bench Test Set-Up, and…
Bayes' Theorem: An Old Tool Applicable to Today's Classroom Measurement Needs. ERIC/AE Digest.
ERIC Educational Resources Information Center
Rudner, Lawrence M.
This digest introduces ways of responding to the call for criterion-referenced information using Bayes' Theorem, a method that was coupled with criterion-referenced testing in the early 1970s (see R. Hambleton and M. Novick, 1973). To illustrate Bayes' Theorem, an example is given in which the goal is to classify an examinee as being a master or…
Thoughts on Information Literacy and the 21st Century Workplace.
ERIC Educational Resources Information Center
Beam, Walter R.
2001-01-01
Discusses changes in society that have led to literacy skills being a criterion for employment. Topics include reading; communication skills; writing; cognitive processes; math; computers, the Internet, and the information revolution; information needs and access; information cross-linking; information literacy; and hardware and software use. (LRW)
Kletting, P; Schimmel, S; Kestler, H A; Hänscheid, H; Luster, M; Fernández, M; Bröer, J H; Nosske, D; Lassmann, M; Glatting, G
2013-10-01
Calculation of the time-integrated activity coefficient (residence time) is a crucial step in dosimetry for molecular radiotherapy. However, available software is deficient in that it is either not tailored for the use in molecular radiotherapy and/or does not include all required estimation methods. The aim of this work was therefore the development and programming of an algorithm which allows for an objective and reproducible determination of the time-integrated activity coefficient and its standard error. The algorithm includes the selection of a set of fitting functions from predefined sums of exponentials and the choice of an error model for the used data. To estimate the values of the adjustable parameters an objective function, depending on the data, the parameters of the error model, the fitting function and (if required and available) Bayesian information, is minimized. To increase reproducibility and user-friendliness the starting values are automatically determined using a combination of curve stripping and random search. Visual inspection, the coefficient of determination, the standard error of the fitted parameters, and the correlation matrix are provided to evaluate the quality of the fit. The functions which are most supported by the data are determined using the corrected Akaike information criterion. The time-integrated activity coefficient is estimated by analytically integrating the fitted functions. Its standard error is determined assuming Gaussian error propagation. The software was implemented using MATLAB. To validate the proper implementation of the objective function and the fit functions, the results of NUKFIT and SAAM numerical, a commercially available software tool, were compared. The automatic search for starting values was successfully tested for reproducibility. The quality criteria applied in conjunction with the Akaike information criterion allowed the selection of suitable functions. Function fit parameters and their standard error estimated by using SAAM numerical and NUKFIT showed differences of <1%. The differences for the time-integrated activity coefficients were also <1% (standard error between 0.4% and 3%). In general, the application of the software is user-friendly and the results are mathematically correct and reproducible. An application of NUKFIT is presented for three different clinical examples. The software tool with its underlying methodology can be employed to objectively and reproducibly estimate the time integrated activity coefficient and its standard error for most time activity data in molecular radiotherapy.
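The core steps described above (fitting candidate sums of exponentials, ranking them with the corrected AIC and integrating the chosen function analytically) can be sketched as follows; this is not the NUKFIT implementation, and the time-activity data, starting values and the least-squares form of the AICc are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def mono_exp(t, a1, l1):
    return a1 * np.exp(-l1 * t)

def bi_exp(t, a1, l1, a2, l2):
    return a1 * np.exp(-l1 * t) + a2 * np.exp(-l2 * t)

def aicc(residuals, n_params):
    """Corrected Akaike information criterion in its least-squares form."""
    n = residuals.size
    rss = np.sum(residuals**2)
    aic = n * np.log(rss / n) + 2 * n_params
    return aic + 2 * n_params * (n_params + 1) / (n - n_params - 1)

# Hypothetical time-activity data: fraction of administered activity vs. time (h).
t = np.array([0.5, 1.0, 4.0, 8.0, 24.0, 48.0, 96.0, 144.0])
a = np.array([0.47, 0.44, 0.32, 0.23, 0.13, 0.077, 0.029, 0.011])

p1, _ = curve_fit(mono_exp, t, a, p0=[0.45, 0.05])
p2, _ = curve_fit(bi_exp, t, a, p0=[0.3, 0.2, 0.2, 0.02], maxfev=10000)
scores = {"mono": aicc(a - mono_exp(t, *p1), 2), "bi": aicc(a - bi_exp(t, *p2), 4)}

# Time-integrated activity coefficient for the mono-exponential fit,
# integrating analytically from zero to infinity: a1 / l1 (in hours here).
tiac_mono = p1[0] / p1[1]
print(scores, tiac_mono)
```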
Nissensohn, Mariela; Fuentes Lugo, Daniel; Serra-Majem, Lluis
2016-07-13
Recommendations of adequate total water intake (aTWI) have been proposed by the European Food Safety Agency (EFSA) and the Institute of Medicine (IOM) of the United States of America. However, there are differences in the approach used to support them: the IOM recommendation is based on average intakes observed in NHANES III (Third National Health and Nutrition Examination Survey) and the EFSA recommendation on a combination of observed intakes from 13 different European countries. Despite these recommendations of aTWI, the currently available scientific evidence is not sufficient to establish a cut-off value that would prevent disease, reduce the risk for chronic diseases or improve health status. To compare the average daily consumption of fluids (water and other beverages) in selective samples of populations from Mexico, the US and Spain, evaluating the quantity of fluid intake and understanding the contribution of each fluid type to the total fluid intake. We also aimed to determine whether they reached adequate intake (AI) values, as defined by three different criteria: IOM, EFSA and water density. Three studies were compared: from Mexico, the National Health and Nutrition Survey conducted in 2012 (NHNS 2012); from the US, the NHANES III 2005-2010; and from Spain, the ANIBES study conducted in 2013. Different categories of beverages were used to establish the pattern of energy intake for each country. Only the adult population was selected. The TWI of each study was compared with the EFSA and IOM AI recommendations, as well as with the criterion of water density (mL/kcal). The American study obtained the highest value of total kcal/day from food and beverages (2,437 ± 13). Furthermore, the percentage of daily energy intake coming from beverages was 21% for American adults; Mexico was slightly behind with 19%, and the Spanish ANIBES study registered only 12%. ANIBES showed significantly low AI values for the overall population, and even more alarmingly so in the case of males: only 12% of men, in contrast with 21% of women, satisfy the EFSA criterion, and even fewer reach the IOM criterion, which recommends higher values for daily intake. In contrast, 60% of the American population reached the recommended intake of the IOM criterion; however, the available data did not allow calculation of the percentage reaching the EFSA criterion. Data from the Mexican study did not permit comparisons with the IOM or EFSA recommendations; however, the water density criterion (mL/kcal) was higher than 1. There is a notable difference between the three populations in terms of TWI. Furthermore, within the same population, values of adequacy of TWI changed significantly when they were assessed using different criteria. More scientific evidence is required for the production of better defined water intake recommendations in the future, as well as more studies focusing on beverage consumption patterns in different settings.
Decision Criterion Dynamics in Animals Performing an Auditory Detection Task
Mill, Robert W.; Alves-Pinto, Ana; Sumner, Christian J.
2014-01-01
Classical signal detection theory attributes bias in perceptual decisions to a threshold criterion, against which sensory excitation is compared. The optimal criterion setting depends on the signal level, which may vary over time, and about which the subject is naïve. Consequently, the subject must optimise its threshold by responding appropriately to feedback. Here a series of experiments was conducted, and a computational model applied, to determine how the decision bias of the ferret in an auditory signal detection task tracks changes in the stimulus level. The time scales of criterion dynamics were investigated by means of a yes-no signal-in-noise detection task, in which trials were grouped into blocks that alternately contained easy- and hard-to-detect signals. The responses of the ferrets implied both long- and short-term criterion dynamics. The animals exhibited a bias in favour of responding “yes” during blocks of harder trials, and vice versa. Moreover, the outcome of each single trial had a strong influence on the decision at the next trial. We demonstrate that the single-trial and block-level changes in bias are a manifestation of the same criterion update policy by fitting a model, in which the criterion is shifted by fixed amounts according to the outcome of the previous trial and decays strongly towards a resting value. The apparent block-level stabilisation of bias arises as the probabilities of outcomes and shifts on single trials mutually interact to establish equilibrium. To gain an intuition into how stable criterion distributions arise from specific parameter sets we develop a Markov model which accounts for the dynamic effects of criterion shifts. Our approach provides a framework for investigating the dynamics of decisions at different timescales in other species (e.g., humans) and in other psychological domains (e.g., vision, memory). PMID:25485733
Ternès, Nils; Rotolo, Federico; Michiels, Stefan
2016-07-10
Correct selection of prognostic biomarkers among multiple candidates is becoming increasingly challenging as the dimensionality of biological data becomes higher. Therefore, minimizing the false discovery rate (FDR) is of primary importance, while a low false negative rate (FNR) is a complementary measure. The lasso is a popular selection method in Cox regression, but its results depend heavily on the penalty parameter λ. Usually, λ is chosen using maximum cross-validated log-likelihood (max-cvl). However, this method has often a very high FDR. We review methods for a more conservative choice of λ. We propose an empirical extension of the cvl by adding a penalization term, which trades off between the goodness-of-fit and the parsimony of the model, leading to the selection of fewer biomarkers and, as we show, to the reduction of the FDR without large increase in FNR. We conducted a simulation study considering null and moderately sparse alternative scenarios and compared our approach with the standard lasso and 10 other competitors: Akaike information criterion (AIC), corrected AIC, Bayesian information criterion (BIC), extended BIC, Hannan and Quinn information criterion (HQIC), risk information criterion (RIC), one-standard-error rule, adaptive lasso, stability selection, and percentile lasso. Our extension achieved the best compromise across all the scenarios between a reduction of the FDR and a limited raise of the FNR, followed by the AIC, the RIC, and the adaptive lasso, which performed well in some settings. We illustrate the methods using gene expression data of 523 breast cancer patients. In conclusion, we propose to apply our extension to the lasso whenever a stringent FDR with a limited FNR is targeted. Copyright © 2016 John Wiley & Sons, Ltd. Copyright © 2016 John Wiley & Sons, Ltd.
Zhang, Xujun; Pang, Yuanyuan; Cui, Mengjing; Stallones, Lorann; Xiang, Huiyun
2015-02-01
Road traffic injuries have become a major public health problem in China. This study aimed to develop statistical models for predicting road traffic deaths and to analyze seasonality of deaths in China. A seasonal autoregressive integrated moving average (SARIMA) model was used to fit the data from 2000 to 2011. Akaike Information Criterion, Bayesian Information Criterion, and mean absolute percentage error were used to evaluate the constructed models. Autocorrelation function and partial autocorrelation function of residuals and Ljung-Box test were used to compare the goodness-of-fit between the different models. The SARIMA model was used to forecast monthly road traffic deaths in 2012. The seasonal pattern of road traffic mortality data was statistically significant in China. SARIMA (1, 1, 1) (0, 1, 1)12 model was the best fitting model among various candidate models; the Akaike Information Criterion, Bayesian Information Criterion, and mean absolute percentage error were -483.679, -475.053, and 4.937, respectively. Goodness-of-fit testing showed nonautocorrelations in the residuals of the model (Ljung-Box test, Q = 4.86, P = .993). The fitted deaths using the SARIMA (1, 1, 1) (0, 1, 1)12 model for years 2000 to 2011 closely followed the observed number of road traffic deaths for the same years. The predicted and observed deaths were also very close for 2012. This study suggests that accurate forecasting of road traffic death incidence is possible using SARIMA model. The SARIMA model applied to historical road traffic deaths data could provide important evidence of burden of road traffic injuries in China. Copyright © 2015 Elsevier Inc. All rights reserved.
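The following sketch shows how a SARIMA(1,1,1)(0,1,1)12 model of monthly death counts could be fitted and checked with statsmodels; the series is simulated for illustration, and the Ljung-Box check and 12-month forecast mirror, but do not reproduce, the analysis in the abstract.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX
from statsmodels.stats.diagnostic import acorr_ljungbox

# Simulated stand-in for monthly road-traffic death counts, 2000-2011 (144 months).
rng = np.random.default_rng(1)
months = pd.date_range("2000-01", periods=144, freq="MS")
seasonal = 600 + 80 * np.sin(2 * np.pi * (np.arange(144) % 12) / 12)
deaths = pd.Series(seasonal + 1.5 * np.arange(144) + rng.normal(0, 25, 144), index=months)

model = SARIMAX(deaths, order=(1, 1, 1), seasonal_order=(0, 1, 1, 12))
fit = model.fit(disp=False)

print(fit.aic, fit.bic)
print(acorr_ljungbox(fit.resid, lags=[12]))   # non-significant p-value suggests white-noise residuals
forecast_next_year = fit.forecast(steps=12)   # predicted monthly deaths for the following year
```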
De Croon, Einar M; Blonk, Roland W B; Sluiter, Judith K; Frings-Dresen, Monique H W
2005-02-01
Monitoring psychological job strain may help occupational physicians to take preventive action at the appropriate time. For this purpose, the 10-item trucker strain monitor (TSM) assessing work-related fatigue and sleeping problems in truck drivers was developed. This study examined (1) test-retest reliability, (2) criterion validity of the TSM with respect to future sickness absence due to psychological health complaints and (3) usefulness of the TSM two-scale structure. The TSM and self-administered questionnaires, providing information about stressful working conditions (job control and job demands) and sickness absence, were sent to a random sample of 2000 drivers in 1998. Of the 1123 responders, 820 returned a completed questionnaire 2 years later (response: 72%). The TSM work-related fatigue scale, the TSM sleeping problems scale and the TSM composite scale showed satisfactory 2-year test-retest reliability (coefficient r=0.62, 0.66 and 0.67, respectively). The work-related fatigue scale, sleeping problems scale and composite scale had sensitivities of 61%, 65% and 61%, respectively, in identifying drivers with future sickness absence due to psychological health complaints. The specificity and positive predictive value of the TSM composite scale were 77% and 11%, respectively. The work-related fatigue scale and the sleeping problems scale were moderately strongly correlated (r=0.62). However, stressful working conditions were differentially associated with the two scales. The results support the test-retest reliability, criterion validity and two-factor structure of the TSM. In general, the results suggest that the use of occupation-specific psychological job strain questionnaires is fruitful.
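For clarity, the screening statistics quoted above are computed from a two-by-two table as in the following sketch; the counts are hypothetical and merely chosen to roughly reproduce the reported 61% sensitivity, 77% specificity and 11% positive predictive value.

```python
def screening_stats(tp, fn, fp, tn):
    """Sensitivity, specificity and positive predictive value of a screening
    scale against a dichotomous criterion (e.g. later sickness absence due to
    psychological health complaints)."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)
    return sensitivity, specificity, ppv

# Hypothetical counts for a cohort screened with the composite scale.
print(screening_stats(tp=33, fn=21, fp=270, tn=900))
```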
Mayorga-Vega, Daniel; Merino-Marban, Rafael; Viciana, Jesús
2014-01-01
The main purpose of the present meta-analysis was to examine the scientific literature on the criterion-related validity of sit-and-reach tests for estimating hamstring and lumbar extensibility. For this purpose, relevant studies were searched from seven electronic databases dated up through December 2012. Primary outcomes of criterion-related validity were Pearson's zero-order correlation coefficients (r) between sit-and-reach tests and hamstring and/or lumbar extensibility criterion measures. Then, from the included studies, the Hunter-Schmidt psychometric meta-analysis approach was conducted to estimate the population criterion-related validity of sit-and-reach tests. Firstly, the corrected correlation mean (rp), unaffected by statistical artefacts (i.e., sampling error and measurement error), was calculated separately for each sit-and-reach test. Subsequently, the three potential moderator variables (sex of participants, age of participants, and level of hamstring extensibility) were examined by a partially hierarchical analysis. Of the 34 studies included in the present meta-analysis, 99 correlation values across eight sit-and-reach tests and 51 across seven sit-and-reach tests were retrieved for hamstring and lumbar extensibility, respectively. The overall results showed that all sit-and-reach tests had a moderate mean criterion-related validity for estimating hamstring extensibility (rp = 0.46-0.67), but a low mean validity for estimating lumbar extensibility (rp = 0.16-0.35). Generally, females, adults and participants with high levels of hamstring extensibility tended to have greater mean values of criterion-related validity for estimating hamstring extensibility. When the use of angular tests is limited, such as in a school setting or in large-scale studies, scientists and practitioners could use the sit-and-reach tests as a useful alternative for hamstring extensibility estimation, but not for estimating lumbar extensibility. Key Points: Overall, sit-and-reach tests have a moderate mean criterion-related validity for estimating hamstring extensibility, but they have a low mean validity for estimating lumbar extensibility. Among all the sit-and-reach test protocols, the Classic sit-and-reach test seems to be the best option to estimate hamstring extensibility. End scores (e.g., the Classic sit-and-reach test) are a better indicator of hamstring extensibility than the modifications that incorporate fingers-to-box distance (e.g., the Modified sit-and-reach test). When angular tests such as straight leg raise or knee extension tests cannot be used, sit-and-reach tests seem to be a useful field test alternative to estimate hamstring extensibility, but not to estimate lumbar extensibility. PMID:24570599
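A minimal numpy sketch of the Hunter-Schmidt style pooling mentioned above: each observed correlation is corrected for attenuation due to measurement error and the corrected coefficients are pooled with sample-size weights. All numbers here are invented for illustration.

    import numpy as np

    r_obs = np.array([0.55, 0.62, 0.48])      # observed sit-and-reach vs. criterion correlations
    n = np.array([40, 65, 30])                # sample sizes of the primary studies
    rxx = np.array([0.90, 0.92, 0.88])        # reliability of the sit-and-reach test
    ryy = np.array([0.85, 0.87, 0.80])        # reliability of the extensibility criterion

    r_corrected = r_obs / np.sqrt(rxx * ryy)  # correction for attenuation
    r_p = np.average(r_corrected, weights=n)  # sample-size-weighted corrected mean correlation
    print(round(r_p, 3))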
de Geus, Eveline; Aalfs, Cora M; Menko, Fred H; Sijmons, Rolf H; Verdam, Mathilde G E; de Haes, Hanneke C J M; Smets, Ellen M A
2015-08-01
Despite the use of genetic services, counselees do not always share hereditary cancer information with at-risk relatives. Reasons for not informing relatives may be categorized as a lack of knowledge, motivation, and/or self-efficacy. This study aims to develop and test the psychometric properties of the Informing Relatives Inventory, a battery of instruments intended to measure counselees' knowledge, motivation, and self-efficacy regarding the disclosure of hereditary cancer risk information to at-risk relatives. Guided by the proposed conceptual framework, existing instruments were selected and new instruments were developed. We tested the instruments' acceptability, dimensionality, reliability, and criterion-related validity in consecutive index patients visiting the Clinical Genetics department with questions regarding hereditary breast and/or ovarian cancer or colon cancer. Data from 211 index patients were included (response rate = 62%). The Informing Relatives Inventory (IRI) assesses three barriers to disclosure representing seven domains. Instruments assessing index patients' (positive) motivation and self-efficacy were acceptable and reliable and suggested good criterion-related validity. The psychometric properties of instruments assessing index patients' knowledge were disputable. These items were moderately accepted by index patients and their criterion-related validity was weaker. This study presents a first conceptual framework and associated inventory (IRI) that improves insight into index patients' barriers regarding the disclosure of genetic cancer information to at-risk relatives. Instruments assessing (positive) motivation and self-efficacy proved to be reliable measurements. Measuring index patients' knowledge appeared to be more challenging. Further research is necessary to ensure IRI's dimensionality and sensitivity to change.
NASA Technical Reports Server (NTRS)
Stothers, Richard B.; Chin, Chao-wen
1999-01-01
Interior layers of stars that have been exposed by surface mass loss reveal aspects of their chemical and convective histories that are otherwise inaccessible to observation. It must be significant that the surface hydrogen abundances of luminous blue variables (LBVs) show a remarkable uniformity, specifically X(sub surf) = 0.3 - 0.4, while those of hydrogen-poor Wolf-Rayet (WN) stars fall, almost without exception, below these values, ranging down to X(sub surf) = 0. According to our stellar model calculations, most LBVs are post-red-supergiant objects in a late blue phase of dynamical instability, and most hydrogen-poor WN stars are their immediate descendants. If this is so, stellar models constructed with the Schwarzschild (temperature-gradient) criterion for convection account well for the observed hydrogen abundances, whereas models built with the Ledoux (density-gradient) criterion fail. At the brightest luminosities, the observed hydrogen abundances of LBVs are too large to be explained by any of our highly evolved stellar models, but these LBVs may occupy transient blue loops that exist during an earlier phase of dynamical instability when the star first becomes a yellow supergiant. Independent evidence concerning the criterion for convection, which is based mostly on traditional color distributions of less massive supergiants on the Hertzsprung-Russell diagram, tends to favor the Ledoux criterion. It is quite possible that the true criterion for convection changes over from something like the Ledoux criterion to something like the Schwarzschild criterion as the stellar mass increases.
Statistically Based Approach to Broadband Liner Design and Assessment
NASA Technical Reports Server (NTRS)
Jones, Michael G. (Inventor); Nark, Douglas M. (Inventor)
2016-01-01
A broadband liner design optimization includes utilizing in-duct attenuation predictions with a statistical fan source model to obtain optimum impedance spectra over a number of flow conditions for one or more liner locations in a bypass duct. The predicted optimum impedance information is then used with acoustic liner modeling tools to design liners having impedance spectra that most closely match the predicted optimum values. Design selection is based on an acceptance criterion that provides the ability to apply increasing weighting to specific frequencies and/or operating conditions. One or more broadband design approaches are utilized to produce a broadband liner that targets a full range of frequencies and operating conditions.
Cost decomposition of linear systems with application to model reduction
NASA Technical Reports Server (NTRS)
Skelton, R. E.
1980-01-01
A means is provided to assess the value or 'cost' of each component of a large scale system, when the total cost is a quadratic function. Such a 'cost decomposition' of the system has several important uses. When the components represent physical subsystems which can fail, the 'component cost' is useful in failure mode analysis. When the components represent mathematical equations which may be truncated, the 'component cost' becomes a criterion for model truncation. In this latter event, component costs provide a mechanism by which the specific control objectives dictate which components should be retained in the model reduction process. This information can be valuable in model reduction and decentralized control problems.
Multipartite nonlocality distillation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hsu, Li-Yi; Wu, Keng-Shuo
2010-11-15
Nonlocality stronger than that allowed in quantum theory can provide an advantage in information processing and computation. Since quantum entanglement is distillable, can nonlocality be distilled under the nonsignalling condition? The answer is positive in the bipartite case. In this article the distillability of multipartite nonlocality is investigated. We propose a distillation protocol solely exploiting xor operations on output bits. The probability-distribution vectors and matrix are introduced to tackle the correlators. It is shown that only the correlators with extreme values can survive the distillation process. As the main result, the amplified nonlocality cannot maximally violate any Bell-type inequality. Accordingly, a distillability criterion in the postquantum region is proposed.
Deng, Jingyu; Liang, Han; Dong, Qiuping; Hou, Yachao; Xie, Xingming; Yu, Jun; Fan, Daiming; Hao, Xishan
2014-07-01
The methylation of B-cell CLL/lymphoma 6 member B (BCL6B) DNA promoter was detected in several malignancies. Here, we quantitatively detect the methylated status of CpG sites of BCL6B DNA promoter of 459 patients with gastric cancer (GC) by using bisulfite gene sequencing. We show that patients with three or more methylated CpG sites in the BCL6B promoter were significantly associated with poor survival. Furthermore, by using the Akaike information criterion value calculation, we show that the methylated count of BCL6B promoter was identified to be the optimal prognostic predictor of GC patients.
NASA Astrophysics Data System (ADS)
Lian, J.; Ahn, D. C.; Chae, D. C.; Münstermann, S.; Bleck, W.
2016-08-01
Experimental and numerical investigations on the characterisation and prediction of cold formability of a ferritic steel sheet are performed in this study. Tensile tests and Nakajima tests were performed for the plasticity characterisation and the forming limit diagram determination. In the numerical prediction, the modified maximum force criterion is selected as the localisation criterion. For the plasticity model, a non-associated formulation of the Hill48 model is employed. With the non-associated flow rule, the model can result in a similar predictive capability of stress and r-value directionality to the advanced non-quadratic associated models. To accurately characterise the anisotropy evolution during hardening, the anisotropic hardening is also calibrated and implemented into the model for the prediction of the formability.
Uncertainty, robustness, and the value of information in managing a population of northern bobwhites
Johnson, Fred A.; Hagan, Greg; Palmer, William E.; Kemmerer, Michael
2014-01-01
The abundance of northern bobwhites (Colinus virginianus) has decreased throughout their range. Managers often respond by considering improvements in harvest and habitat management practices, but this can be challenging if substantial uncertainty exists concerning the cause(s) of the decline. We were interested in how application of decision science could be used to help managers on a large, public management area in southwestern Florida where the bobwhite is a featured species and where abundance has severely declined. We conducted a workshop with managers and scientists to elicit management objectives, alternative hypotheses concerning population limitation in bobwhites, potential management actions, and predicted management outcomes. Using standard and robust approaches to decision making, we determined that improved water management and perhaps some changes in hunting practices would be expected to produce the best management outcomes in the face of uncertainty about what is limiting bobwhite abundance. We used a criterion called the expected value of perfect information to determine that a robust management strategy may perform nearly as well as an optimal management strategy (i.e., a strategy that is expected to perform best, given the relative importance of different management objectives) with all uncertainty resolved. We used the expected value of partial information to determine that management performance could be increased most by eliminating uncertainty over excessive-harvest and human-disturbance hypotheses. Beyond learning about the factors limiting bobwhites, adoption of a dynamic management strategy, which recognizes temporal changes in resource and environmental conditions, might produce the greatest management benefit. Our research demonstrates that robust approaches to decision making, combined with estimates of the value of information, can offer considerable insight into preferred management approaches when great uncertainty exists about system dynamics and the effects of management.
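The expected value of perfect information used above can be illustrated with a toy decision table; the actions, hypotheses, payoffs and probabilities below are hypothetical placeholders, not the workshop's elicited values.

    import numpy as np

    # Rows: candidate actions (e.g., water management, change hunting practices, status quo);
    # columns: competing hypotheses about what limits bobwhite abundance.
    payoff = np.array([[0.8, 0.3, 0.5],
                       [0.4, 0.7, 0.6],
                       [0.5, 0.5, 0.4]])
    p_hyp = np.array([0.5, 0.3, 0.2])                      # prior weights on the hypotheses

    ev_under_uncertainty = (payoff @ p_hyp).max()          # best single action chosen now
    ev_with_perfect_info = payoff.max(axis=0) @ p_hyp      # best action once the truth is known
    evpi = ev_with_perfect_info - ev_under_uncertainty     # expected value of perfect information
    print(round(evpi, 3))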
Bayesian model checking: A comparison of tests
NASA Astrophysics Data System (ADS)
Lucy, L. B.
2018-06-01
Two procedures for checking Bayesian models are compared using a simple test problem based on the local Hubble expansion. Over four orders of magnitude, p-values derived from a global goodness-of-fit criterion for posterior probability density functions agree closely with posterior predictive p-values. The former can therefore serve as an effective proxy for the difficult-to-calculate posterior predictive p-values.
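A schematic posterior predictive check of the kind compared above can be written in a few lines; this sketch uses a Gaussian-mean toy problem rather than the paper's Hubble-expansion example, and all data are simulated.

    import numpy as np

    rng = np.random.default_rng(1)
    y = rng.normal(2.0, 1.0, size=50)                      # "observed" data (simulated here)
    sigma = 1.0                                            # assumed known noise level

    # Posterior of the mean under a flat prior: Normal(ybar, sigma^2 / n).
    mu_draws = rng.normal(y.mean(), sigma / np.sqrt(len(y)), size=4000)

    def T(data):                                           # discrepancy (test) statistic
        return np.max(np.abs(data - data.mean()))

    t_obs = T(y)
    t_rep = np.array([T(rng.normal(mu, sigma, size=len(y))) for mu in mu_draws])
    p_value = np.mean(t_rep >= t_obs)                      # posterior predictive p-value
    print(p_value)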
The H2/CH4 ratio during serpentinization cannot reliably identify biological signatures
NASA Astrophysics Data System (ADS)
Huang, Ruifang; Sun, Weidong; Liu, Jinzhong; Ding, Xing; Peng, Shaobang; Zhan, Wenhuan
2016-09-01
Serpentinization potentially contributes to the origin and evolution of life during early history of the Earth. Serpentinization produces molecular hydrogen (H2) that can be utilized by microorganisms to gain metabolic energy. Methane can be formed through reactions between molecular hydrogen and oxidized carbon (e.g., carbon dioxide) or through biotic processes. A simple criterion, the H2/CH4 ratio, has been proposed to differentiate abiotic from biotic methane, with values approximately larger than 40 for abiotic methane and values of <40 for biotic methane. The definition of the criterion was based on two serpentinization experiments at 200 °C and 0.3 kbar. However, it is not clear whether the criterion is applicable at a wider range of temperatures. In this study, we performed sixteen experiments at 311-500 °C and 3.0 kbar using natural ground peridotite. Our results demonstrate that the H2/CH4 ratios strongly depend on temperature. At 311 °C and 3.0 kbar, the H2/CH4 ratios ranged from 58 to 2,120, much greater than the critical value of 40. By contrast, at 400-500 °C, the H2/CH4 ratios were much lower, ranging from 0.1 to 8.2. The results of this study suggest that the H2/CH4 ratios cannot reliably discriminate abiotic from biotic methane.
Generalized Bohm’s criterion and negative anode voltage fall in electric discharges
DOE Office of Scientific and Technical Information (OSTI.GOV)
Londer, Ya. I.; Ul’yanov, K. N., E-mail: kulyanov@vei.ru
2013-10-15
The value of the voltage fall across the anode sheath is found as a function of the current density. Analytic solutions are obtained in a wide range of the ratio of the directed velocity of plasma electrons v0 to their thermal velocity vT. It is shown that the voltage fall in a one-dimensional collisionless anode sheath is always negative. At small values of v0/vT, the obtained expression asymptotically transforms into the Langmuir formula. A generalized Bohm criterion for an electric discharge is formulated with allowance for the space charge density ρ(0), electric field E(0), ion velocity vi(0), and ratio v0/vT at the plasma-sheath interface. It is shown that the minimum value of the ion velocity vi*(0) corresponds to the vanishing of the electric field at one point inside the sheath. The dependence of vi*(0) on ρ(0), E(0), and v0/vT determines the boundary of the existence domain of stationary solutions in the sheath. Using this criterion, the maximum possible degree of contraction of the electron current at the anode is determined for a short high-current vacuum arc discharge.
NASA Astrophysics Data System (ADS)
Avercheva, O. V.; Berkovich, Yu. A.; Konovalova, I. O.; Radchenko, S. G.; Lapach, S. N.; Bassarskaya, E. M.; Kochetova, G. V.; Zhigalova, T. V.; Yakovleva, O. S.; Tarakanov, I. G.
2016-11-01
The aims of this work were to choose a quantitative optimality criterion for estimating the quality of plant LED lighting regimes inside space greenhouses and to construct regression models of crop productivity and of the optimality criterion as functions of the photosynthetic photon flux density (PPFD) level, the proportion of the red component in the light spectrum and the duration of the duty cycle (with Chinese cabbage Brassica chinensis L. as an example). The properties of the obtained models were described in the context of predicting crop dry weight and the behavior of the optimality criterion when the plant lighting parameters are varied. Results of the fractional 3-factor experiment demonstrated that the share of the PPFD level in the crop dry weight accumulation was 84.4% at almost any combination of the other lighting parameters, but when the PPFD value was increased up to 500 μmol m-2 s-1, pulsed light and supplemental light from red LEDs could additionally increase crop productivity. Analysis of the optimality criterion response to variation of the lighting parameters showed that the maximum was located at PPFD = 500 μmol m-2 s-1, a red component of about 70% of the light spectrum (PPFDLEDred/PPFDLEDwhite = 1.5) and a duty cycle with a period of 501 μs. Thus, LED crop lighting with these parameters was optimal for achieving high crop productivity and for efficient use of energy in the given range of lighting parameter values.
A new tracer‐density criterion for heterogeneous porous media
Barth, Gilbert R.; Illangasekare, Tissa H.; Hill, Mary C.; Rajaram, Harihar
2001-01-01
Tracer experiments provide information about aquifer material properties vital for accurate site characterization. Unfortunately, density-induced sinking can distort tracer movement, leading to an inaccurate assessment of material properties. Yet existing criteria for selecting appropriate tracer concentrations are based on analysis of homogeneous media instead of media with heterogeneities typical of field sites. This work introduces a hydraulic-gradient correction for heterogeneous media and applies it to a criterion previously used to indicate density-induced instabilities in homogeneous media. The modified criterion was tested using a series of two-dimensional heterogeneous intermediate-scale tracer experiments and data from several detailed field tracer tests. The intermediate-scale experimental facility (10.0×1.2×0.06 m) included both homogeneous and heterogeneous (σ²ln k = 1.22) zones. The field tracer tests were less heterogeneous (0.24 < σ²ln k < 0.37), but measurements were sufficient to detect density-induced sinking. Evaluation of the modified criterion using the experiments and field tests demonstrates that the new criterion appears to account for the change in density-induced sinking due to heterogeneity. The criterion demonstrates the importance of accounting for heterogeneity to predict density-induced sinking and differences in the onset of density-induced sinking in two- and three-dimensional systems.
Evaluation of volatile organic emissions from hazardous waste incinerators.
Sedman, R M; Esparza, J R
1991-01-01
Conventional methods of risk assessment typically employed to evaluate the impact of hazardous waste incinerators on public health must rely on somewhat speculative emissions estimates or on complicated and expensive sampling and analytical methods. The limited amount of toxicological information concerning many of the compounds detected in stack emissions also complicates the evaluation of the public health impacts of these facilities. An alternative approach aimed at evaluating the public health impacts associated with volatile organic stack emissions is presented that relies on a screening criterion to evaluate total stack hydrocarbon emissions. If the concentration of hydrocarbons in ambient air is below the screening criterion, volatile emissions from the incinerator are judged not to pose a significant threat to public health. Both the screening criterion and a conventional method of risk assessment were employed to evaluate the emissions from 20 incinerators. Use of the screening criterion always yielded a substantially greater estimate of risk than that derived by the conventional method. Since the use of the screening criterion always yielded estimates of risk that were greater than that determined by conventional methods and measuring total hydrocarbon emissions is a relatively simple analytical procedure, the use of the screening criterion would appear to facilitate the evaluation of operating hazardous waste incinerators. PMID:1954928
Contribution of criterion A2 to PTSD screening in the presence of traumatic events.
Pereda, Noemí; Forero, Carlos G
2012-10-01
Criterion A2 according to the Diagnostic and Statistical Manual of Mental Disorders (4th ed.; DSM-IV; American Psychiatric Association [APA], 1994) for posttraumatic stress disorder (PTSD) aims to assess the individual's subjective appraisal of an event, but it has been claimed that it might not be sufficiently specific for diagnostic purposes. We analyse the contribution of Criterion A2 and the DSM-IV criteria to detecting PTSD for the most distressing life events experienced by our subjects. Young adults (N = 1,033) reported their most distressing life events, together with PTSD criteria (Criteria A2, B, C, D, E, and F). PTSD prevalence and criterion specificity and agreement with probable diagnoses were estimated. Our results indicate that 80.30% of the individuals experienced traumatic events and met one or more PTSD criteria; 13.22% of cases received a positive diagnosis of PTSD. Criterion A2 showed poor agreement with the final probable PTSD diagnosis (correlation with PTSD .13, specificity = .10); excluding it from the PTSD diagnosis did not change the estimated disorder prevalence significantly. Based on these findings it appears that Criterion A2 is scarcely specific and provides little information to confirm a probable PTSD case. Copyright © 2012 International Society for Traumatic Stress Studies.
Patel, Nitin R; Ankolekar, Suresh; Antonijevic, Zoran; Rajicic, Natasa
2013-05-10
We describe a value-driven approach to optimizing pharmaceutical portfolios. Our approach incorporates inputs from research and development and commercial functions by simultaneously addressing internal and external factors. This approach differentiates itself from current practices in that it recognizes the impact of study design parameters, sample size in particular, on the portfolio value. We develop an integer programming (IP) model as the basis for Bayesian decision analysis to optimize phase 3 development portfolios using expected net present value as the criterion. We show how this framework can be used to determine optimal sample sizes and trial schedules to maximize the value of a portfolio under budget constraints. We then illustrate the remarkable flexibility of the IP model to answer a variety of 'what-if' questions that reflect situations that arise in practice. We extend the IP model to a stochastic IP model to incorporate uncertainty in the availability of drugs from earlier development phases for phase 3 development in the future. We show how to use stochastic IP to re-optimize the portfolio development strategy over time as new information accumulates and budget changes occur. Copyright © 2013 John Wiley & Sons, Ltd.
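The core selection problem can be illustrated with a tiny brute-force version of the optimization (the paper itself uses integer programming); the trials, costs and expected NPVs below are invented, and sample-size choices are folded into the per-trial cost for simplicity.

    from itertools import combinations

    # trial name -> (cost, expected net present value), all values hypothetical
    trials = {"A": (30, 120), "B": (50, 200), "C": (20, 70), "D": (40, 150)}
    budget = 90

    best_value, best_set = 0, ()
    for k in range(len(trials) + 1):
        for subset in combinations(trials, k):
            cost = sum(trials[t][0] for t in subset)
            value = sum(trials[t][1] for t in subset)
            if cost <= budget and value > best_value:      # keep the best feasible portfolio
                best_value, best_set = value, subset
    print(best_set, best_value)

For realistic portfolio sizes an integer-programming solver replaces the enumeration, and re-running the optimization as budgets or phase 2 results change gives the "what-if" analyses described above.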
NASA Astrophysics Data System (ADS)
Diamant, Idit; Shalhon, Moran; Goldberger, Jacob; Greenspan, Hayit
2016-03-01
Classification of clustered breast microcalcifications into benign and malignant categories is an extremely challenging task for computerized algorithms and expert radiologists alike. In this paper we present a novel method for feature selection based on mutual information (MI) criterion for automatic classification of microcalcifications. We explored the MI based feature selection for various texture features. The proposed method was evaluated on a standardized digital database for screening mammography (DDSM). Experimental results demonstrate the effectiveness and the advantage of using the MI-based feature selection to obtain the most relevant features for the task and thus to provide for improved performance as compared to using all features.
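A hedged sketch of mutual-information-based feature selection with scikit-learn follows; the texture-feature matrix X and the benign/malignant labels y are synthetic placeholders, not DDSM data, and the choice of k = 10 is arbitrary.

    import numpy as np
    from sklearn.feature_selection import SelectKBest, mutual_info_classif

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 40))            # 200 lesions x 40 texture features (synthetic)
    y = rng.integers(0, 2, size=200)          # 0 = benign, 1 = malignant (synthetic)

    selector = SelectKBest(score_func=mutual_info_classif, k=10)
    X_selected = selector.fit_transform(X, y)
    print(selector.get_support(indices=True))  # indices of the 10 most informative features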
Adachi, Yasumoto; Makita, Kohei
2015-09-01
Mycobacteriosis in swine is a common zoonosis found in abattoirs during meat inspections, and the veterinary authority is expected to inform the producer for corrective actions when an outbreak is detected. The expected value of the number of condemned carcasses due to mycobacteriosis therefore would be a useful threshold to detect an outbreak, and the present study aims to develop such an expected value through time series modeling. The model was developed using eight years of inspection data (2003 to 2010) obtained at 2 abattoirs of the Higashi-Mokoto Meat Inspection Center, Japan. The resulting model was validated by comparing the predicted time-dependent values for the subsequent 2 years with the actual data for 2 years between 2011 and 2012. For the modeling, at first, periodicities were checked using Fast Fourier Transformation, and the ensemble average profiles for weekly periodicities were calculated. An Auto-Regressive Integrated Moving Average (ARIMA) model was fitted to the residual of the ensemble average on the basis of minimum Akaike's information criterion (AIC). The sum of the ARIMA model and the weekly ensemble average was regarded as the time-dependent expected value. During 2011 and 2012, the number of whole or partial condemned carcasses exceeded the 95% confidence interval of the predicted values 20 times. All of these events were associated with the slaughtering of pigs from three producers with the highest rate of condemnation due to mycobacteriosis.
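The threshold idea described above can be sketched as follows; this is only a rough illustration with simulated daily counts, not the inspection records, and the model orders searched are assumptions.

    import numpy as np
    import pandas as pd
    from statsmodels.tsa.arima.model import ARIMA

    rng = np.random.default_rng(0)
    counts = pd.Series(rng.poisson(5, size=7 * 400).astype(float))   # condemned carcasses per day

    # Periodicity check (look for a spectral peak near a 7-day period) and weekly ensemble average.
    spectrum = np.abs(np.fft.rfft(counts - counts.mean()))
    peak = int(np.argmax(spectrum[1:])) + 1
    print("dominant period (days):", round(len(counts) / peak, 1))
    weekly_profile = counts.groupby(counts.index % 7).transform("mean")
    resid = counts - weekly_profile

    # Fit low-order ARIMA models to the residual and keep the one with minimum AIC.
    best = min((ARIMA(resid, order=(p, 0, q)).fit() for p in range(3) for q in range(3)),
               key=lambda r: r.aic)

    expected = best.fittedvalues + weekly_profile          # time-dependent expected value
    band = 1.96 * np.std(best.resid)                       # approximate 95% band
    outbreak_days = counts[counts > expected + band]       # days flagged for follow-up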
NASA Astrophysics Data System (ADS)
Kos, L.; Jelić, N.; Kuhn, S.; Tskhakaya, D. D.
2018-04-01
At present, identifying and characterizing the common plasma-sheath edge (PSE) in the conventional fluid approach leads to intrinsic oversimplifications, while the kinetic one results in unusable over-generalizations. In addition, none of these approaches can be justified in realistic plasmas, i.e., those which are characterized by non-negligible Debye lengths and a well-defined non-negligible ion temperature. In an attempt to resolve this problem, we propose a new formulation of the Bohm criterion [D. Bohm, The Characteristics of Electrical Discharges in Magnetic Fields (McGraw-Hill, New York, 1949)], which is here expressed in terms of fluid, kinetic, and electrostatic-pressure contributions. This "unified" Bohm criterion consists of a set of two equations for calculating the ion directional energy (i.e., the mean directional velocity) and the plasma potential at the common PSE, and is valid for arbitrary ion-to-electron temperature ratios. It turns out to be exact at any point of the quasi-neutral plasma provided that the ion differential polytropic coefficient function (DPCF) of Kuhn et al. [Phys. Plasmas 13, 013503 (2006)] is employed, with the advantage that the DPCF is an easily measurable fluid quantity. Moreover, our unified Bohm criterion holds in plasmas with finite Debye lengths, for which the famous kinetic criterion formulated by Harrison and Thompson [Proc. Phys. Soc. 74, 145 (1959)] fails. Unlike the kinetic criterion in the case of negligible Debye length, the kinetic contribution to the unified Bohm criterion, arising due to the presence of negative and zero velocities in the ion velocity distribution function, can be calculated separately from the fluid term. This kinetic contribution disappears identically at the PSE, yielding strict equality of the ion directional velocity there and the ion sound speed, provided that the latter is formulated in terms of the present definition of DPCFs. The numerical values of these velocities are found for the Tonks-Langmuir collision-free, plane-parallel discharge model [Phys. Rev. 34, 876 (1929)], however, with the ion-source temperature extended here from the original (zero) value to arbitrary high ones. In addition, it turns out, that the charge-density derivative (in the potential "space") with respect to the potential exhibits two characteristic points, i.e., potentials, namely the points of inflection and maximum of that derivative (in the potential space), which stay "fixed" at their respective potentials independent of the Debye length until it is kept fairly small. Plasma quasi-neutrality appears well satisfied up to the first characteristic point/potential, so we identify that one as the plasma edge (PE). Adopting the convention that the sheath is a region characterized by considerable electrostatic pressure (energy density), we identify the second characteristic point/potential as the sheath edge (SE). Between these points, the charge density increases from zero to a finite value. Thus, the interval between the PE and SE, with the "fixed" width (in the potential "space") of about one third of the electron temperature, will be named the plasma-sheath transition (PST). Outside the PST, the electrostatic-pressure term and its derivatives turn out to be nearly identical with each other, independent of the particular values of the ion temperature and Debye length. 
In contrast, an increase in Debye lengths from zero to finite values causes the location of the sonic point/potential (laying inside the PST) to shift from the PE (for vanishing Debye length) towards the SE, while at the same time, the absolute value of the corresponding ion-sound velocity slightly decreases. These shifts turn out to be manageable with employing the mathematical concept of the plasma-to-sheath transition (different from, but related to our natural PST concept), resulting in approximate, but sufficiently reliable semi-analytic expressions, which are functions of the ion temperature and Debye length.
Enormous knowledge base of disease diagnosis criteria.
Xiao, Z H; Xiao, Y H; Pei, J H
1995-01-01
One of the problems in the development of the medical knowledge systems is the limitations of the system's knowledge. It is a common expectation to increase the number of diseases contained in a system. Using a high density knowledge representation method designed by us, we have developed the Enormous Knowledge Base of Disease Diagnosis Criteria (EKBDDC). It contains diagnostic criteria of 1,001 diagnostic entities and describes nearly 4,000 items of diagnostic indicators. It is the core of a huge medical project--the Electronic-Brain Medical Erudite (EBME). This enormous knowledge base was implemented initially on a low-cost popular microcomputer, which can aid in the prompting of typical disease and in teaching of diagnosis. The knowledge base is easy to expand. One of the main goals of EKBDDC is to increase the number of diseases included in it as far as possible using a low-cost computer with a comparatively small storage capacity. For this, we have designed a high density knowledge representation method. Criteria of various diagnostic entities are respectively stored in different records of the knowledge base. Each diagnostic entity corresponds to a diagnostic criterion data set; each data set consists of some diagnostic criterion data values (Table 1); each data is composed of two parts: integer and decimal; the integral part is the coding number of the given diagnostic information, and the decimal part is the diagnostic value of this information to the disease indicated by corresponding record number. For example, 75.02: the integer 75 is the coding number of "hemorrhagic skin rash"; the decimal 0.02 is the diagnostic value of this manifestation for diagnosing allergic purpura. TABULAR DATA, SEE PUBLISHED ABSTRACT. The algebraic sum method, a special form of the weighted summation, is adopted as mathematical model. In EKBDDC, the diagnostic values, which represent the significance of the disease manifestations for diagnosing corresponding diseases, were determined empirically. It is of a great economical, practical, and technical significance to realize enormous knowledge bases of disease diagnosis criteria on a low-cost popular microcomputer. This is beneficial for the developing countries to popularize medical informatics. To create the enormous international computer-aided diagnosis system, one may jointly develop the unified modules of disease diagnosis criteria used to "inlay" relevant computer-aided diagnosis systems. It is just like assembling a house using prefabricated panels.
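The packed-number encoding and algebraic-sum scoring described above can be mimicked in a few lines; apart from the 75.02 example quoted in the abstract, the entries and findings below are made up.

    # Each stored entry packs a finding code in the integer part and its diagnostic weight in
    # the decimal part (e.g. "75.02": finding 75 contributes 0.02 to the disease score).
    criteria = ["75.02", "12.10", "33.05"]        # criterion data set for one disease (illustrative)

    def parse(entry):
        code, weight = entry.split(".")
        return int(code), float("0." + weight)

    def score(present_findings, criterion_entries):
        total = 0.0
        for entry in criterion_entries:
            code, weight = parse(entry)
            if code in present_findings:
                total += weight                   # algebraic sum of diagnostic values
        return total

    print(score({75, 33}, criteria))              # approximately 0.07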
Gelau, Christhard; Henning, Matthias J; Krems, Josef F
2009-03-01
In recent years considerable efforts have been spent on the development of the occlusion technique as a procedure for the assessment of the human-machine interface of in-vehicle information and communication systems (IVIS) designed to be used by the driver while driving. The importance and significance of the findings resulting from the application of this procedure depends essentially on its reliability. Because there is a lack of evidence as to whether this basic criterion of measurement is met with this procedure, and because questionable reliability can lead to doubts about their validity, our project strove to clarify this issue. This paper reports on a statistical reanalysis of data obtained from previous experiments. To summarise, the characteristic values found for internal consistency were almost all in the range of .90 for the occlusion technique, which can be considered satisfactory.
NASA Astrophysics Data System (ADS)
Shahzad, M.; Rizvi, H.; Panwar, A.; Ryu, C. M.
2017-06-01
We have re-visited the existence criterion of the reverse shear Alfven eigenmodes (RSAEs) in the presence of the parallel equilibrium current by numerically solving the eigenvalue equation using a fast eigenvalue solver code KAES. The parallel equilibrium current can bring in the kink effect and is known to be strongly unfavorable for the RSAE. We have numerically estimated the critical value of the toroidicity factor Qtor in a circular tokamak plasma, above which RSAEs can exist, and compared it to the analytical one. The difference between the numerical and analytical critical values is small for low frequency RSAEs, but it increases as the frequency of the mode increases, becoming greater for higher poloidal harmonic modes.
Parametric optimal control of uncertain systems under an optimistic value criterion
NASA Astrophysics Data System (ADS)
Li, Bo; Zhu, Yuanguo
2018-01-01
It is well known that the optimal control of a linear quadratic model is characterized by the solution of a Riccati differential equation. In many cases, the corresponding Riccati differential equation cannot be solved exactly such that the optimal feedback control may be a complex time-oriented function. In this article, a parametric optimal control problem of an uncertain linear quadratic model under an optimistic value criterion is considered for simplifying the expression of optimal control. Based on the equation of optimality for the uncertain optimal control problem, an approximation method is presented to solve it. As an application, a two-spool turbofan engine optimal control problem is given to show the utility of the proposed model and the efficiency of the presented approximation method.
Furuhama, A; Hasunuma, K; Aoki, Y
2015-01-01
In addition to molecular structure profiles, descriptors based on physicochemical properties are useful for explaining the eco-toxicities of chemicals. In a previous study we reported that a criterion based on the difference between the partition coefficient (log POW) and distribution coefficient (log D) values of chemicals enabled us to identify aromatic amines and phenols for which interspecies relationships with strong correlations could be developed for fish-daphnid and algal-daphnid toxicities. The chemicals that met the log D-based criterion were expected to have similar toxicity mechanisms (related to membrane penetration). Here, we investigated the applicability of log D-based criteria to the eco-toxicity of other kinds of chemicals, including aliphatic compounds. At pH 10, use of a log POW - log D > 0 criterion and omission of outliers resulted in the selection of more than 100 chemicals whose acute fish toxicities or algal growth inhibition toxicities were almost equal to their acute daphnid toxicities. The advantage of log D-based criteria is that they allow for simple, rapid screening and prioritizing of chemicals. However, inorganic molecules and chemicals containing certain structural elements cannot be evaluated, because calculated log D values are unavailable.
Information presentation format moderates the unconscious-thought effect: The role of recollection.
Abadie, Marlène; Waroquier, Laurent; Terrier, Patrice
2016-09-01
The unconscious-thought effect occurs when distraction improves complex decision-making. In two experiments using the unconscious-thought paradigm, we investigated the effect of presentation format of decision information (i) on memory for decision-relevant information and (ii) on the quality of decisions made after distraction, conscious deliberation or immediately. We used the process-dissociation procedure to measure recollection and familiarity. The two studies showed that presenting information blocked per criterion led participants to recollect more decision-relevant details compared to a presentation by option. Moreover, a Bayesian meta-analysis of the two studies provided strong evidence that conscious deliberation resulted in better decisions when the information was presented blocked per criterion and substantial evidence that distraction improved decision quality when the information was presented blocked per option. Finally, Study 2 revealed that the recollection of decision-relevant details mediated the effect of presentation format on decision quality in the deliberation condition. This suggests that recollection contributes to conscious deliberation efficacy.
Soft Clustering Criterion Functions for Partitional Document Clustering
2004-05-26
... in the cluster that it already belongs to. The refinement phase ends as soon as we perform an iteration in which no documents moved between clusters ... it with the one obtained by the hard criterion functions. We present a comprehensive experimental evaluation involving twelve different datasets.
A globally optimal k-anonymity method for the de-identification of health data.
El Emam, Khaled; Dankar, Fida Kamal; Issa, Romeo; Jonker, Elizabeth; Amyot, Daniel; Cogo, Elise; Corriveau, Jean-Pierre; Walker, Mark; Chowdhury, Sadrul; Vaillancourt, Regis; Roffey, Tyson; Bottomley, Jim
2009-01-01
Explicit patient consent requirements in privacy laws can have a negative impact on health research, leading to selection bias and reduced recruitment. Often legislative requirements to obtain consent are waived if the information collected or disclosed is de-identified. The authors developed and empirically evaluated a new globally optimal de-identification algorithm that satisfies the k-anonymity criterion and that is suitable for health datasets. The authors compared OLA (Optimal Lattice Anonymization) empirically to three existing k-anonymity algorithms, Datafly, Samarati, and Incognito, on six public, hospital, and registry datasets for different values of k and suppression limits. Three information loss metrics were used for the comparison: precision, discernability metric, and non-uniform entropy. Each algorithm's performance speed was also evaluated. The Datafly and Samarati algorithms had higher information loss than OLA and Incognito; OLA was consistently faster than Incognito in finding the globally optimal de-identification solution. For the de-identification of health datasets, OLA is an improvement on existing k-anonymity algorithms in terms of information loss and performance.
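For readers unfamiliar with the k-anonymity criterion itself, the following minimal check (not the OLA algorithm) shows what a released table must satisfy: every combination of quasi-identifier values must occur at least k times. The table and quasi-identifiers are invented.

    import pandas as pd

    def is_k_anonymous(df, quasi_identifiers, k):
        # smallest equivalence class over the quasi-identifiers must contain at least k records
        return df.groupby(quasi_identifiers).size().min() >= k

    data = pd.DataFrame({
        "age_band": ["30-39", "30-39", "40-49", "40-49", "40-49"],
        "postcode": ["K1A", "K1A", "K2B", "K2B", "K2B"],
        "diagnosis": ["flu", "asthma", "flu", "copd", "flu"],
    })
    print(is_k_anonymous(data, ["age_band", "postcode"], k=2))   # True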
Zeng, Qiang; Shi, Feina; Zhang, Jianmin; Ling, Chenhan; Dong, Fei; Jiang, Biao
2018-01-01
Purpose: To present a new modified tri-exponential model for diffusion-weighted imaging (DWI) to detect the strictly diffusion-limited compartment, and to compare it with the conventional bi- and tri-exponential models. Methods: Multi-b-value diffusion-weighted imaging (DWI) with 17 b-values up to 8,000 s/mm2 were performed on six volunteers. The corrected Akaike information criterions (AICc) and squared predicted errors (SPE) were calculated to compare these three models. Results: The mean f0 values were ranging 11.9–18.7% in white matter ROIs and 1.2–2.7% in gray matter ROIs. In all white matter ROIs: the AICcs of the modified tri-exponential model were the lowest (p < 0.05 for five ROIs), indicating the new model has the best fit among these models; the SPEs of the bi-exponential model were the highest (p < 0.05), suggesting the bi-exponential model is unable to predict the signal intensity at ultra-high b-value. The mean ADCvery−slow values were extremely low in white matter (1–7 × 10−6 mm2/s), but not in gray matter (251–445 × 10−6 mm2/s), indicating that the conventional tri-exponential model fails to represent a special compartment. Conclusions: The strictly diffusion-limited compartment may be an important component in white matter. The new model fits better than the other two models, and may provide additional information. PMID:29535599
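The model comparison by corrected AIC can be sketched with scipy; the b-values and signal below are synthetic, and the parameterisation of the modified tri-exponential model (a non-decaying fraction f0 for the diffusion-limited compartment) is only one plausible reading of the description above.

    import numpy as np
    from scipy.optimize import curve_fit

    def biexp(b, s0, f, d_fast, d_slow):
        return s0 * (f * np.exp(-b * d_fast) + (1 - f) * np.exp(-b * d_slow))

    def triexp_mod(b, s0, f, f0, d_fast, d_slow):
        # f0: strictly diffusion-limited (non-decaying) signal fraction
        return s0 * (f * np.exp(-b * d_fast) + (1 - f - f0) * np.exp(-b * d_slow) + f0)

    def aicc(y, yhat, k):
        n = len(y)
        aic = n * np.log(np.sum((y - yhat) ** 2) / n) + 2 * k
        return aic + 2 * k * (k + 1) / (n - k - 1)

    b = np.array([0, 50, 100, 200, 400, 800, 1500, 2500, 4000, 6000, 8000], float)
    signal = triexp_mod(b, 1.0, 0.3, 0.15, 2e-3, 3e-4) + np.random.default_rng(0).normal(0, 0.005, b.size)

    for model, p0 in [(biexp, [1, 0.3, 2e-3, 3e-4]), (triexp_mod, [1, 0.3, 0.1, 2e-3, 3e-4])]:
        popt, _ = curve_fit(model, b, signal, p0=p0, maxfev=20000)
        print(model.__name__, "AICc:", round(aicc(signal, model(b, *popt), len(popt)), 1))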
One criterion on which chlorine treatment of water may be based is the concentration (C) in mg/l multiplied by the time (t) in min of exposure or Ct values. We compared different Ct values on waterborne pathogenic bacteria by cultural assay for viability and 2 assays that mea...
Neuropathological diagnostic criteria for Alzheimer's disease.
Murayama, Shigeo; Saito, Yuko
2004-09-01
Neuropathological diagnostic criteria for Alzheimer's disease (AD) are based on tau-related pathology: NFT or neuritic plaques (NP). The Consortium to Establish a Registry for Alzheimer's disease (CERAD) criterion evaluates the highest density of neocortical NP from 0 (none) to C (abundant). Clinical documentation of dementia and NP stage A in younger cases, B in young old cases and C in older cases fulfils the criterion of AD. The CERAD criterion is most frequently used in clinical outcome studies because of its inclusion of clinical information. Braak and Braak's criterion evaluates the density and distribution of NFT and classifies them into: I/II, entorhinal; III/IV, limbic; and V/VI, neocortical stage. These three stages correspond to normal cognition, cognitive impairment and dementia, respectively. As Braak's criterion is based on morphological evaluation of the brain alone, this criterion is usually adopted in the research setting. The National Institute for Aging and Ronald and Nancy Reagan Institute of the Alzheimer's Association criterion combines these two criteria and categorizes cases into NFT V/VI and NP C, NFT III/IV and NP B, and NFT I/II and NP A, corresponding to high, middle and low probability of AD, respectively. As most AD cases in the aged population are categorized into Braak tangle stage IV and CERAD stage C, the usefulness of this criterion has not yet been determined. The combination of Braak's NFT stage equal to or above IV and Braak's senile plaque Stage C provides, arguably, the highest sensitivity and specificity. In future, the criteria should include in vivo dynamic neuropathological data, including 3D MRI, PET scan and CSF biomarkers, as well as more sensitive and specific immunohistochemical and immunochemical grading of AD.
NASA Astrophysics Data System (ADS)
Perekhodtseva, Elvira V.
2010-05-01
Development of a successful method for forecasting storm winds, including squalls and tornadoes, which often cause human and material losses, would allow proper measures to be taken against the destruction of buildings and to protect people. A successful forecast issued well in advance (12 to 48 hours) makes it possible to reduce these losses. Until recently, prediction of these phenomena has been a very difficult problem for forecasters, and the existing graphical and calculation methods still depend on the subjective decision of an operator. At present there is no hydrodynamic model in Russia for forecasting maximum wind velocities V > 25 m/s, so the main tools of objective forecasting are statistical methods that use the dependence of the phenomena on a number of atmospheric parameters (predictors). A statistical decision rule for the alternative and probabilistic forecast of these events was obtained in accordance with the "perfect prognosis" concept using objective analysis data. For this purpose, training samples with and without storm wind and heavy rainfall were assembled automatically, each including the values of forty physically substantiated potential predictors. An empirical statistical method was then used that involved diagonalization of the mean correlation matrix R of the predictors and extraction of diagonal blocks of strongly correlated predictors; in this way the most informative predictors were selected for these phenomena without loss of information. The statistical decision rules U(X) for diagnosis and prognosis of the phenomena were calculated for the chosen informative vector-predictor. The Mahalanobis distance criterion and the Vapnik-Chervonenkis minimum-entropy criterion were used for predictor selection. The successful development of hydrodynamic models for short-term forecasting and the improvement of 36-48 h forecasts of pressure, temperature and other parameters allowed the prognostic fields of those models to be used to calculate the discriminant functions at the nodes of a 75x75 km grid and the probabilities P of dangerous wind, and thus to obtain fully automated forecasts. To apply the alternative forecast to the European part of Russia and to Europe, the author proposes empirical threshold values specified for this phenomenon and a lead time of 36 hours. According to the Peirce-Obukhov criterion (T), the skill of this hydrometeorological-statistical method for forecasting storm winds and tornadoes 36-48 hours ahead in the warm season over the European part of Russia and Siberia is T = 1-a-b = 0.54-0.78, based on independent and author experiments during 2004-2009. Many examples of successful forecasts for the territory of Europe and Russia are presented in this report. The same decision rules were also applied to the forecast of these phenomena during the cold period in 2009-2010. In the first month of 2010, many cases of storm wind with heavy snowfall were observed, and were forecast, over the territory of France, Italy and Germany.
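A simplified two-class discriminant of the kind described above (classification by Mahalanobis distance to the class means with a pooled covariance) can be sketched as follows; the predictor data are synthetic placeholders, not the forty atmospheric predictors used in the study.

    import numpy as np
    from scipy.spatial.distance import mahalanobis

    rng = np.random.default_rng(0)
    X_event = rng.normal(loc=[1.0, 2.0, 0.5], scale=1.0, size=(200, 3))   # squall/tornado cases
    X_none = rng.normal(loc=[0.0, 0.0, 0.0], scale=1.0, size=(400, 3))    # no-event cases

    # Pooled covariance of the class-centred predictors and its inverse.
    pooled_cov = np.cov(np.vstack([X_event - X_event.mean(0), X_none - X_none.mean(0)]).T)
    VI = np.linalg.inv(pooled_cov)

    def predict(x):
        d_event = mahalanobis(x, X_event.mean(0), VI)
        d_none = mahalanobis(x, X_none.mean(0), VI)
        return "storm wind" if d_event < d_none else "no event"

    print(predict(np.array([0.9, 1.8, 0.4])))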
A simple criterion for determining the dynamical stability of three-body systems
NASA Technical Reports Server (NTRS)
Black, D. C.
1982-01-01
Coplanar, prograde three-body systems (TBS) are discussed, emphasizing the specification of general criteria for determining whether such systems are dynamically stable. It is shown that the Graziani-Black (1981) criteria provide a quantitatively accurate characterization of the onset of dynamic instability for values of the dimensionless mass ranging from one millionth to one million. Harrington's (1977) general criterion and the Graziani-Black criterion are compared with results from analytic work that spans a 12-orders-of-magnitude variation in the mass ratios of the TBS components. Comparison of the Graziani-Black criteria with data for eight well-studied triple-star systems indicates that the observed lower limit for the ratio of periastron distance of the tertiary orbit to the semimajor axis of the binary orbit is due to dynamical instability rather than to cosmogonic processes.
Renard, Bernhard Y.; Xu, Buote; Kirchner, Marc; Zickmann, Franziska; Winter, Dominic; Korten, Simone; Brattig, Norbert W.; Tzur, Amit; Hamprecht, Fred A.; Steen, Hanno
2012-01-01
Currently, the reliable identification of peptides and proteins is only feasible when thoroughly annotated sequence databases are available. Although sequencing capacities continue to grow, many organisms remain without reliable, fully annotated reference genomes required for proteomic analyses. Standard database search algorithms fail to identify peptides that are not exactly contained in a protein database. De novo searches are generally hindered by their restricted reliability, and current error-tolerant search strategies are limited by global, heuristic tradeoffs between database and spectral information. We propose a Bayesian information criterion-driven error-tolerant peptide search (BICEPS) and offer an open source implementation based on this statistical criterion to automatically balance the information of each single spectrum and the database, while limiting the run time. We show that BICEPS performs as well as current database search algorithms when such algorithms are applied to sequenced organisms, whereas BICEPS only uses a remotely related organism database. For instance, we use a chicken instead of a human database corresponding to an evolutionary distance of more than 300 million years (International Chicken Genome Sequencing Consortium (2004) Sequence and comparative analysis of the chicken genome provide unique perspectives on vertebrate evolution. Nature 432, 695–716). We demonstrate the successful application to cross-species proteomics with a 33% increase in the number of identified proteins for a filarial nematode sample of Litomosoides sigmodontis. PMID:22493179
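The Bayesian information criterion that drives the trade-off above is simple to state; the sketch below shows the generic formula BIC = k ln(n) - 2 ln(L) with placeholder likelihood values, not numbers produced by BICEPS.

    import math

    def bic(log_likelihood, n_observations, n_parameters):
        return n_parameters * math.log(n_observations) - 2 * log_likelihood

    # A candidate explanation with an extra sequence modification (one more parameter) must
    # improve the likelihood enough to pay the complexity penalty before it is preferred.
    exact_match = bic(log_likelihood=-120.4, n_observations=85, n_parameters=3)
    with_mutation = bic(log_likelihood=-117.9, n_observations=85, n_parameters=4)
    print(exact_match, with_mutation, with_mutation < exact_match)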
Somma, Antonella; Borroni, Serena; Maffei, Cesare; Giarolli, Laura E; Markon, Kristian E; Krueger, Robert F; Fossati, Andrea
2017-10-01
In order to assess the reliability, factorial validity, and criterion validity of the Personality Inventory for DSM-5 (PID-5) among adolescents, 1,264 Italian high school students were administered the PID-5. Participants were also administered the Questionnaire on Relationships and Substance Use as a criterion measure. In the full sample, McDonald's ω values were adequate for the PID-5 scales (median ω = .85, SD = .06), except for Suspiciousness. However, all PID-5 scales showed average inter-item correlation values in the .20-.55 range. Exploratory structural equation modeling analyses provided moderate support for the a priori model of PID-5 trait scales. Ordinal logistic regression analyses showed that selected PID-5 trait scales predicted a significant, albeit moderate (Cox & Snell R2 values ranged from .08 to .15, all ps < .001), amount of variance in Questionnaire on Relationships and Substance Use variables.
Corner-point criterion for assessing nonlinear image processing imagers
NASA Astrophysics Data System (ADS)
Landeau, Stéphane; Pigois, Laurent; Foing, Jean-Paul; Deshors, Gilles; Swiathy, Greggory
2017-10-01
Range performance modeling of optronics imagers attempts to characterize the ability to resolve details in the image. Today, digital image processing is systematically used in conjunction with the optoelectronic system to correct its defects or to exploit tiny detection signals to increase performance. In order to characterize these processing having adaptive and non-linear properties, it becomes necessary to stimulate the imagers with test patterns whose properties are similar to the actual scene image ones, in terms of dynamic range, contours, texture and singular points. This paper presents an approach based on a Corner-Point (CP) resolution criterion, derived from the Probability of Correct Resolution (PCR) of binary fractal patterns. The fundamental principle lies in the respectful perception of the CP direction of one pixel minority value among the majority value of a 2×2 pixels block. The evaluation procedure considers the actual image as its multi-resolution CP transformation, taking the role of Ground Truth (GT). After a spatial registration between the degraded image and the original one, the degradation is statistically measured by comparing the GT with the degraded image CP transformation, in terms of localized PCR at the region of interest. The paper defines this CP criterion and presents the developed evaluation techniques, such as the measurement of the number of CP resolved on the target, the transformation CP and its inverse transform that make it possible to reconstruct an image of the perceived CPs. Then, this criterion is compared with the standard Johnson criterion, in the case of a linear blur and noise degradation. The evaluation of an imaging system integrating an image display and a visual perception is considered, by proposing an analysis scheme combining two methods: a CP measurement for the highly non-linear part (imaging) with real signature test target and conventional methods for the more linear part (displaying). The application to color imaging is proposed, with a discussion about the choice of the working color space depending on the type of image enhancement processing used.
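A toy version of the 2×2 corner-point idea can be coded directly; this sketch only scans a binary image for blocks containing exactly one minority pixel and records where that pixel sits, and is not the full PCR evaluation chain described above.

    import numpy as np

    def corner_points(img):
        points = []
        for i in range(img.shape[0] - 1):
            for j in range(img.shape[1] - 1):
                block = img[i:i + 2, j:j + 2]
                s = block.sum()
                if s in (1, 3):                       # exactly one minority pixel in the 2x2 block
                    minority = 1 if s == 1 else 0
                    di, dj = np.argwhere(block == minority)[0]
                    points.append((i + di, j + dj))   # location (hence direction) of the corner point
        return points

    img = np.array([[0, 0, 1, 1],
                    [0, 1, 1, 1],
                    [0, 0, 1, 1]], dtype=np.uint8)
    print(corner_points(img))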
Differentiating the origin of outflow tract ventricular arrhythmia using a simple, novel approach.
Efimova, Elena; Dinov, Borislav; Acou, Willem-Jan; Schirripa, Valentina; Kornej, Jelena; Kosiuk, Jedrzej; Rolf, Sascha; Sommer, Philipp; Richter, Sergio; Bollmann, Andreas; Hindricks, Gerhard; Arya, Arash
2015-07-01
Numerous electrocardiographic (ECG) criteria have been proposed to identify localization of outflow tract ventricular arrhythmias (OT-VAs); however, in some cases, it is difficult to accurately localize the origin of OT-VA using the surface ECG. The purpose of this study was to assess a simple criterion for localization of OT-VAs during electrophysiology study. We measured the interval from the onset of the earliest QRS complex of premature ventricular contractions (PVCs) to the distal right ventricular apical signal (the QRS-RVA interval) in 66 patients (31 men aged 53.3 ± 14.0 years; right ventricular outflow tract [RVOT] origin in 37) referred for ablation of symptomatic outflow tract PVCs. We prospectively validated this criterion in 39 patients (22 men aged 52 ± 15 years; RVOT origin in 19). Compared with patients with RVOT PVCs, the QRS-RVA interval was significantly longer in patients with left ventricular outflow tract (LVOT) PVCs (70 ± 14 vs 33.4±10 ms, P < .001). Receiver operating characteristic analysis showed that a QRS-RVA interval ≥49 ms had sensitivity, specificity, and positive and negative predictive values of 100%, 94.6%, 93.5%, and 100%, respectively, for prediction of an LVOT origin. The same analysis in the validation cohort showed sensitivity, specificity, and positive and negative predictive values of 94.7%, 95%, 95%, and 94.7%, respectively. When these data were combined, a QRS-RVA interval ≥49 ms had sensitivity, specificity, and positive and negative predictive values of 98%, 94.6%, 94.1%, and 98.1%, respectively, for prediction of an LVOT origin. A QRS-RVA interval ≥49 ms suggests an LVOT origin. The QRS-RVA interval is a simple and accurate criterion for differentiating the origin of outflow tract arrhythmia during electrophysiology study; however, the accuracy of this criterion in identifying OT-VA from the right coronary cusp is limited. Copyright © 2015 Heart Rhythm Society. Published by Elsevier Inc. All rights reserved.
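Evaluating a fixed cut-off such as QRS-RVA ≥ 49 ms reduces to a 2×2 confusion matrix; the measurements and confirmed origins below are invented for illustration, not patient data from the study.

    import numpy as np

    qrs_rva_ms = np.array([72, 65, 55, 80, 45, 30, 28, 40, 35, 52])
    is_lvot = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0], dtype=bool)   # electrophysiology-confirmed LVOT origin

    predicted_lvot = qrs_rva_ms >= 49
    tp = np.sum(predicted_lvot & is_lvot)
    tn = np.sum(~predicted_lvot & ~is_lvot)
    fp = np.sum(predicted_lvot & ~is_lvot)
    fn = np.sum(~predicted_lvot & is_lvot)

    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    print(sensitivity, specificity, ppv, npv)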
The double high tide at Port Ellen: Doodson's criterion revisited
NASA Astrophysics Data System (ADS)
Byrne, Hannah A. M.; Mattias Green, J. A.; Bowers, David G.
2017-07-01
Doodson proposed a minimum criterion to predict the occurrence of double high (or double low) waters when a higher-frequency tidal harmonic is added to the semi-diurnal tide. If the phasing of the harmonic is optimal, the condition for a double high water can be written bn²/a > 1 where b is the amplitude of the higher harmonic, a is the amplitude of the semi-diurnal tide, and n is the ratio of their frequencies. Here we expand this criterion to allow for (i) a phase difference ϕ between the semi-diurnal tide and the harmonic and (ii) the fact that the double high water will disappear in the event that b/a becomes large enough for the higher harmonic to be the dominant component of the tide. This can happen, for example, at places or times where the semi-diurnal tide is very small. The revised parameter is br²/a, where r is a number generally less than n, although equal to n when ϕ = 0. The theory predicts that a double high tide will form when this parameter exceeds 1 and then disappear when it exceeds a value of order n² and the higher harmonic becomes dominant. We test these predictions against observations at Port Ellen in the Inner Hebrides of Scotland. For most of the data set, the largest harmonic of the semi-diurnal tide is the sixth diurnal component, for which n = 3. The principal lunar and solar semi-diurnal tides are about equal at Port Ellen and so the semi-diurnal tide becomes very small twice a month at neap tides (here defined as the smallest fortnightly tidal range). A double high water forms when br²/a first exceeds a minimum value of about 1.5 as neap tides are approached and then disappears as br²/a then exceeds a second limiting value of about 10 at neap tides in agreement with the revised criterion.
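A minimal sketch of the revised criterion as stated above: given the semi-diurnal amplitude a, the higher-harmonic amplitude b and the effective ratio r (whose full expression is in the paper, not the abstract), the parameter br²/a is compared with the two limiting values quoted for Port Ellen. The default thresholds are illustrative only.

    def double_high_water_regime(a, b, r, lower=1.5, upper=10.0):
        """Evaluate the revised parameter b*r**2/a for a semi-diurnal tide of
        amplitude a plus a higher harmonic of amplitude b; r (<= n) must be
        supplied, since its expression is given in the paper rather than the
        abstract. The default limits are the Port Ellen values quoted above."""
        p = b * r**2 / a
        if p < lower:
            return p, "single high water"
        if p > upper:
            return p, "higher harmonic dominant; double high water lost"
        return p, "double high water expected"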
Satisfying the Einstein-Podolsky-Rosen criterion with massive particles
NASA Astrophysics Data System (ADS)
Peise, J.; Kruse, I.; Lange, K.; Lücke, B.; Pezzè, L.; Arlt, J.; Ertmer, W.; Hammerer, K.; Santos, L.; Smerzi, A.; Klempt, C.
2016-03-01
In 1935, Einstein, Podolsky and Rosen (EPR) questioned the completeness of quantum mechanics by devising a quantum state of two massive particles with maximally correlated space and momentum coordinates. The EPR criterion qualifies such continuous-variable entangled states, as shown successfully with light fields. Here, we report on the production of massive particles which meet the EPR criterion for continuous phase/amplitude variables. The created quantum state of ultracold atoms shows an EPR parameter of 0.18(3), which is 2.4 standard deviations below the threshold of 1/4. Our state presents a resource for tests of quantum nonlocality with massive particles and a wide variety of applications in the field of continuous-variable quantum information and metrology.
Medical privacy protection based on granular computing.
Wang, Da-Wei; Liau, Churn-Jung; Hsu, Tsan-Sheng
2004-10-01
Based on granular computing methodology, we propose two criteria to quantitatively measure privacy invasion. The total cost criterion measures the effort needed for a data recipient to find private information. The average benefit criterion measures the benefit a data recipient obtains when he receives the released data. These two criteria remedy the inadequacy of the deterministic privacy formulation proposed in Proceedings of Asia Pacific Medical Informatics Conference, 2000; Int J Med Inform 2003;71:17-23. Granular computing methodology provides a unified framework for these quantitative measurements and the previous bin-size and logical approaches. These two new criteria are implemented in a prototype system, Cellsecu 2.0. Preliminary system performance evaluation is conducted and reviewed.
ERIC Educational Resources Information Center
Xi, Xiaoming
2008-01-01
Although the primary use of the speaking section of the Test of English as a Foreign Language™ Internet-based test (TOEFL® iBT Speaking test) is to inform admissions decisions at English medium universities, it may also be useful as an initial screening measure for international teaching assistants (ITAs). This study provides criterion-related…
Importance of Tensile Strength on the Shear Behavior of Discontinuities
NASA Astrophysics Data System (ADS)
Ghazvinian, A. H.; Azinfar, M. J.; Geranmayeh Vaneghi, R.
2012-05-01
In this study, the shear behavior of discontinuities possessing two different rock wall types with distinct compressive strengths was investigated. The designed profiles consisted of regular artificial joints molded by five types of plaster mortars, each representing a distinct uniaxial compressive strength. The compressive strengths of the plaster specimens ranged from 5.9 to 19.5 MPa. These specimens were molded considering a regular triangular asperity profile and were designed so as to achieve joint walls with different strength material combinations. The results showed that the shear behavior of discontinuities possessing different joint wall compressive strengths (DDJCS) tested under constant normal load (CNL) conditions is the same as that of discontinuities possessing identical joint wall strengths, but the shear strength of DDJCS is governed by the lower joint wall compressive strength. In addition, it was found that the values predicted by Barton's empirical criterion are greater than the experimental results. This finding indicates that there is a correlation between the joint roughness coefficient (JRC), normal stress, and mechanical strength. It was observed that the mode of failure of asperities is either pure tensile, pure shear, or a combination of both. Therefore, Barton's strength criterion, which considers the compressive strength of joint walls, was modified by substituting the compressive strength with the tensile strength. The validity of the modified criterion was examined by comparing the predicted shear values with the laboratory shear test results reported by Grasselli (Ph.D. thesis n.2404, Civil Engineering Department, EPFL, Lausanne, Switzerland, 2001). These comparisons indicate that the modified criterion can predict the shear strength of joints more precisely.
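For reference, the standard Barton criterion mentioned above can be evaluated as below; this sketch assumes the usual form τ = σn·tan(φb + JRC·log10(JCS/σn)) and mimics the proposed modification simply by passing the tensile strength in place of the compressive wall strength (the authors' exact modified formulation is given in the paper, not in the abstract).

    import math

    def barton_shear_strength(sigma_n, jrc, wall_strength, phi_b_deg):
        """Peak shear strength of a rock joint after Barton:
        tau = sigma_n * tan(phi_b + JRC * log10(JCS / sigma_n)).
        Passing the tensile strength as wall_strength mimics the substitution
        discussed above; the authors' exact modified form is in the paper."""
        angle_deg = phi_b_deg + jrc * math.log10(wall_strength / sigma_n)
        return sigma_n * math.tan(math.radians(angle_deg))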
Avercheva, O V; Berkovich, Yu A; Konovalova, I O; Radchenko, S G; Lapach, S N; Bassarskaya, E M; Kochetova, G V; Zhigalova, T V; Yakovleva, O S; Tarakanov, I G
2016-11-01
The aim of this work was to choose a quantitative optimality criterion for estimating the quality of plant LED lighting regimes inside space greenhouses and to construct regression models of crop productivity and of the optimality criterion as functions of the level of photosynthetic photon flux density (PPFD), the proportion of the red component in the light spectrum and the duration of the duty cycle (Chinese cabbage Brassica chinensis L. as an example). The properties of the obtained models were described in the context of predicting crop dry weight and the behavior of the optimality criterion when varying the plant lighting parameters. Results of the fractional 3-factor experiment demonstrated that the share of the PPFD level in crop dry weight accumulation was 84.4% at almost any combination of the other lighting parameters, but when the PPFD value was increased up to 500 µmol m⁻² s⁻¹, pulsed light and supplemental light from red LEDs could additionally increase crop productivity. Analysis of the optimality criterion response to variation of the lighting parameters showed that the maximum was located at the following coordinates: PPFD = 500 µmol m⁻² s⁻¹, about a 70% proportion of the red component in the light spectrum (PPFD_LEDred/PPFD_LEDwhite = 1.5) and a duty cycle with a period of 501 µs. Thus, LED crop lighting with these parameters was optimal for achieving high crop productivity and for efficient use of energy in the given range of lighting parameter values. Copyright © 2016 The Committee on Space Research (COSPAR). Published by Elsevier Ltd. All rights reserved.
Detection of Functional Change Using Cluster Trend Analysis in Glaucoma.
Gardiner, Stuart K; Mansberger, Steven L; Demirel, Shaban
2017-05-01
Global analyses using mean deviation (MD) assess visual field progression, but can miss localized changes. Pointwise analyses are more sensitive to localized progression, but more variable so require confirmation. This study assessed whether cluster trend analysis, averaging information across subsets of locations, could improve progression detection. A total of 133 test-retest eyes were tested 7 to 10 times. Rates of change and P values were calculated for possible re-orderings of these series to generate global analysis ("MD worsening faster than x dB/y with P < y"), pointwise and cluster analyses ("n locations [or clusters] worsening faster than x dB/y with P < y") with specificity exactly 95%. These criteria were applied to 505 eyes tested over a mean of 10.5 years, to find how soon each detected "deterioration," and compared using survival models. This was repeated including two subsequent visual fields to determine whether "deterioration" was confirmed. The best global criterion detected deterioration in 25% of eyes in 5.0 years (95% confidence interval [CI], 4.7-5.3 years), compared with 4.8 years (95% CI, 4.2-5.1) for the best cluster analysis criterion, and 4.1 years (95% CI, 4.0-4.5) for the best pointwise criterion. However, for pointwise analysis, only 38% of these changes were confirmed, compared with 61% for clusters and 76% for MD. The time until 25% of eyes showed subsequently confirmed deterioration was 6.3 years (95% CI, 6.0-7.2) for global, 6.3 years (95% CI, 6.0-7.0) for pointwise, and 6.0 years (95% CI, 5.3-6.6) for cluster analyses. Although the specificity is still suboptimal, cluster trend analysis detects subsequently confirmed deterioration sooner than either global or pointwise analyses.
NASA Astrophysics Data System (ADS)
Li, Wei-Yi; Zhang, Qi-Chang; Wang, Wei
2010-06-01
Based on the Silnikov criterion, this paper studies a chaotic system of cubic polynomial ordinary differential equations in three dimensions. Using the Cardano formula, it obtains the exact range of parameter values corresponding to chaos by means of centre manifold theory and the method of multiple scales combined with Floquet theory. By calculating the manifold near the equilibrium point, the series expression of the homoclinic orbit is also obtained. The space trajectory and Lyapunov exponent are investigated via numerical simulation, which shows that there is a route to chaos through period-doubling bifurcation and that chaotic attractors exist in the system. The results obtained here mean that chaos occurs in the exact parameter range given in this paper. Numerical simulations also verify the analytical results.
NASA Technical Reports Server (NTRS)
Lichtenstein, J. H.
1975-01-01
Power-spectral-density calculations were made of the lateral responses to atmospheric turbulence for several conventional and short take-off and landing (STOL) airplanes. The turbulence was modeled as three orthogonal velocity components, which were uncorrelated, and each was represented with a one-dimensional power spectrum. Power spectral densities were computed for displacements, rates, and accelerations in roll, yaw, and sideslip. In addition, the power spectral density of the transverse acceleration was computed. Evaluation of ride quality based on a specific ride quality criterion was also made. The results show that the STOL airplanes generally had larger values for the rate and acceleration power spectra (and, consequently, larger corresponding root-mean-square values) than the conventional airplanes. The ride quality criterion gave poorer ratings to the STOL airplanes than to the conventional airplanes.
Prediction of Fracture Initiation in Hot Compression of Burn-Resistant Ti-35V-15Cr-0.3Si-0.1C Alloy
NASA Astrophysics Data System (ADS)
Zhang, Saifei; Zeng, Weidong; Zhou, Dadi; Lai, Yunjin
2015-11-01
An important concern in hot working of metals is whether the desired deformation can be accomplished without fracture of the material. This paper builds a model to predict fracture initiation in hot compression of the burn-resistant beta-stabilized titanium alloy Ti-35V-15Cr-0.3Si-0.1C using a combined approach of upsetting experiments, theoretical failure criteria and finite element (FE) simulation techniques. A series of isothermal compression experiments on cylindrical specimens was first conducted in the temperature range of 900-1150 °C and strain rate range of 0.01-10 s⁻¹ to obtain fracture samples and primary reduction data. Based on that, a comparison of eight commonly used theoretical failure criteria was made, and the Oh criterion was selected and coded into a subroutine. FE simulation of the upsetting experiments on cylindrical specimens was then performed to determine the fracture threshold values of the Oh criterion. By building a correlation between the threshold values and the deformation parameters (temperature and strain rate, or the Zener-Hollomon parameter), a new fracture prediction model based on the Oh criterion was established. The new model shows an exponential decay relationship between the threshold values and the Zener-Hollomon parameter (Z), and the relative error of the model is less than 15%. This model was then applied successfully in the cogging of a Ti-35V-15Cr-0.3Si-0.1C billet.
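A small sketch of the Zener-Hollomon parameter used above to correlate the fracture threshold with the deformation conditions; the activation energy Q is material-specific and not given in the abstract, so it is left as an input here.

    import math

    R_GAS = 8.314  # universal gas constant, J/(mol*K)

    def zener_hollomon(strain_rate, temp_celsius, activation_energy):
        """Zener-Hollomon parameter Z = strain_rate * exp(Q / (R*T)), the
        deformation variable against which the Oh-criterion threshold is
        correlated above. The activation energy Q (J/mol) is material-specific
        and not given in the abstract, so it is treated as an input."""
        temp_K = temp_celsius + 273.15
        return strain_rate * math.exp(activation_energy / (R_GAS * temp_K))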
De Lena, S M; Gende, O A; Almirón, M A; Cingolani, H E
1994-09-01
To determine prevalence of diastolic arterial hypertension (DAH) in young individuals using different criteria. Secondly, to test the possible different blood pressure reactions to mental stress and hand grip in two groups: group A, a 'low blood pressure group', and group B, diastolic blood pressure 90 mmHg or greater in one interview and below these values in a second interview. A total of 1423 volunteer medical students was recruited at La Plata School of Medicine, average age 21 +/- 3 years. Systolic and diastolic blood pressure were measured three times on two different occasions separated by one week. With the values obtained, prevalence of arterial hypertension was determined according to the criteria suggested by The Joint National Committee 4 (JNC-4) and the World Health Organization (WHO), and to statistical bases. Mental stress and hand grip tests were performed by groups A and B. The prevalence of DAH when only the first determination of the first interview was considered was 14.7%, 6.7% (considering the WHO criterion) or 5% (using the statistical criterion). These values are reduced if repeated measurements are averaged. The greatest reduction was obtained when the JNC-4 criterion was used (1.6%). The reactivity of stressors did not show any relationship with the initial blood pressure of the subjects. In epidemiological studies, the differences among the criteria should be considered when analyzing blood pressure of populations. Stress tests (mental stress and hand grip) do not help in identifying differences between the groups studied.
The H2/CH4 ratio during serpentinization cannot reliably identify biological signatures
Huang, Ruifang; Sun, Weidong; Liu, Jinzhong; Ding, Xing; Peng, Shaobang; Zhan, Wenhuan
2016-01-01
Serpentinization potentially contributes to the origin and evolution of life during the early history of the Earth. Serpentinization produces molecular hydrogen (H2) that can be utilized by microorganisms to gain metabolic energy. Methane can be formed through reactions between molecular hydrogen and oxidized carbon (e.g., carbon dioxide) or through biotic processes. A simple criterion, the H2/CH4 ratio, has been proposed to differentiate abiotic from biotic methane, with values larger than approximately 40 for abiotic methane and values of <40 for biotic methane. The definition of the criterion was based on two serpentinization experiments at 200 °C and 0.3 kbar. However, it is not clear whether the criterion is applicable over a wider range of temperatures. In this study, we performed sixteen experiments at 311–500 °C and 3.0 kbar using natural ground peridotite. Our results demonstrate that the H2/CH4 ratios strongly depend on temperature. At 311 °C and 3.0 kbar, the H2/CH4 ratios ranged from 58 to 2,120, much greater than the critical value of 40. By contrast, at 400–500 °C, the H2/CH4 ratios were much lower, ranging from 0.1 to 8.2. The results of this study suggest that the H2/CH4 ratios cannot reliably discriminate abiotic from biotic methane. PMID:27666288
Vector autoregressive model approach for forecasting outflow cash in Central Java
NASA Astrophysics Data System (ADS)
Hoyyi, Abdul; Tarno; Maruddani, Di Asih I.; Rahmawati, Rita
2018-05-01
Multivariate time series models are widely applied to economic and business problems as well as in other fields. One such economic application is the forecasting of cash outflow. This problem can be viewed globally in the sense that there is no spatial effect between regions, so the model used is the Vector Autoregressive (VAR) model. The data used in this research are the money supply data of Bank Indonesia in Semarang, Solo, Purwokerto and Tegal. The models considered are the VAR(1), VAR(2) and VAR(3) models. Ordinary Least Squares (OLS) is used to estimate the parameters. The best model is selected using the smallest Akaike Information Criterion (AIC). The data analysis shows that the AIC value of the VAR(1) model is 42.72292, of VAR(2) is 42.69119 and of VAR(3) is 42.87662. The differences in AIC values are not significant. Based on the smallest-AIC criterion, the best model is the VAR(2) model.
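A minimal sketch of the model-selection step described above, using Python's statsmodels; the file name and column names are hypothetical placeholders.

    import pandas as pd
    from statsmodels.tsa.api import VAR

    # The file and column names below are hypothetical placeholders.
    df = pd.read_csv("outflow_cash.csv",
                     usecols=["semarang", "solo", "purwokerto", "tegal"])

    model = VAR(df)
    for p in (1, 2, 3):            # candidate orders VAR(1)..VAR(3)
        res = model.fit(p)         # OLS estimation, as in the study
        print(p, res.aic)          # choose the order with the smallest AIC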
Trial-to-trial carry-over of item- and relational-information in auditory short-term memory
Visscher, Kristina M.; Kahana, Michael J.; Sekuler, Robert
2009-01-01
Using a short-term recognition memory task we evaluated the carry-over across trials of two types of auditory information: the characteristics of individual study sounds (item information), and the relationships between the study sounds (relational information). On each trial, subjects heard two successive broadband study sounds and then decided whether a subsequently presented probe sound had been in the study set. On some trials, the probe item's similarity to stimuli presented on the preceding trial was manipulated. This item information interfered with recognition, increasing false alarms from 0.4% to 4.4%. Moreover, the interference was tuned so that only stimuli very similar to each other interfered. On other trials, the relationship among stimuli was manipulated in order to alter the criterion subjects used in making recognition judgments. The effect of this manipulation was confined to the very trial on which the criterion change was generated, and did not affect the subsequent trial. These results demonstrate the existence of a sharply-tuned carry-over of auditory item information, but no carry-over of relational information. PMID:19210080
[Evaluation and improvement of the management of informed consent in the emergency department].
del Pozo, P; García, J A; Escribano, M; Soria, V; Campillo-Soto, A; Aguayo-Albasini, J L
2009-01-01
To assess the preoperative management in our emergency surgical service and to improve the quality of the care provided to patients. In order to find the causes of non-compliance, the Ishikawa Fishbone diagram was used and eight assessment criteria were chosen. The first assessment includes 120 patients operated on from January to April 2007. Corrective measures were implemented, which consisted of meetings and conferences with doctors and nurses, insisting on the importance of the informed consent as a legal document which must be signed by patients, and the obligation of giving a copy to patients or relatives. The second assessment includes the period from July to October 2007 (n=120). We observed a high non-compliance of C1 signing of surgical consent (CRITERION 1: all patients or relatives have to sign the surgical informed consent for the operation to be performed [27.5%]) and C2 giving a copy of the surgical consent (CRITERION 2: all patients or relatives must have received a copy of the surgical informed consent for the Surgery to be performed [72.5%]) and C4 anaesthetic consent copy (CRITERION 4: all patients or relatives must have received a copy of the Anaesthesia informed consent corresponding to the operation performed [90%]). After implementing corrective measures a significant improvement was observed in the compliance of C2 and C4. In C1 there was an improvement without statistical significance. The carrying out of an improvement cycle enabled the main objective of this paper to be achieved: to improve the management of informed consent and the quality of the care and information provided to our patients.
Economic weights for genetic improvement of lactation persistency and milk yield.
Togashi, K; Lin, C Y
2009-06-01
This study aimed to establish a criterion for measuring the relative weight of lactation persistency (the ratio of yield at 280 d in milk to peak yield) in restricted selection index for the improvement of net merit comprising 3-parity total yield and total lactation persistency. The restricted selection index was compared with selection based on first-lactation total milk yield (I(1)), the first-two-lactation total yield (I(2)), and first-three-lactation total yield (I(3)). Results show that genetic response in net merit due to selection on restricted selection index could be greater than, equal to, or less than that due to the unrestricted index depending upon the relative weight of lactation persistency and the restriction level imposed. When the relative weight of total lactation persistency is equal to the criterion, the restricted selection index is equal to the selection method compared (I(1), I(2), or I(3)). The restricted selection index yielded a greater response when the relative weight of total lactation persistency was above the criterion, but a lower response when it was below the criterion. The criterion varied depending upon the restriction level (c) imposed and the selection criteria compared. A curvilinear relationship (concave curve) exists between the criterion and the restricted level. The criterion increases as the restriction level deviates in either direction from 1.5. Without prior information of the economic weight of lactation persistency, the imposition of the restriction level of 1.5 on lactation persistency would maximize change in net merit. The procedure presented allows for simultaneous modification of multi-parity lactation curves.
Iwata, Shintaro; Uehara, Kosuke; Ogura, Koichi; Akiyama, Toru; Shinoda, Yusuke; Yonemoto, Tsukasa; Kawai, Akira
2016-09-01
The Musculoskeletal Tumor Society (MSTS) scoring system is a widely used functional evaluation tool for patients treated for musculoskeletal tumors. Although the MSTS scoring system has been validated in English and Brazilian Portuguese, a Japanese version of the MSTS scoring system has not yet been validated. We sought to determine whether a Japanese-language translation of the MSTS scoring system for the lower extremity had (1) sufficient reliability and internal consistency, (2) adequate construct validity, and (3) reasonable criterion validity compared with the Toronto Extremity Salvage Score (TESS) and SF-36 using psychometric analysis. The Japanese version of the MSTS scoring system was developed using accepted guidelines, which included translation of the English version of the MSTS into Japanese by five native Japanese bilingual musculoskeletal oncology surgeons and integrated into one document. One hundred patients with a diagnosis of intermediate or malignant bone or soft tissue tumors located in the lower extremity and who had undergone tumor resection with or without reconstruction or amputation participated in this study. Reliability was evaluated by test-retest analysis, and internal consistency was established by Cronbach's alpha coefficient. Construct validity was evaluated using the principal factor analysis and Akaike information criterion network. Criterion validity was evaluated by comparing the MSTS scoring system with the TESS and SF-36. Test-retest analysis showed a high intraclass correlation coefficient (0.92; 95% CI, 0.88-0.95), indicating high reliability of the Japanese version of the MSTS scoring system, although a considerable ceiling effect was observed, with 23 patients (23%) given the maximum score. Cronbach's alpha coefficient was 0.87 (95% CI, 0.82-0.90), suggesting a high level of internal consistency. Factor analysis revealed that all items had high loading values and communalities; we identified a central role for the items "walking" and "gait" according to the Akaike information criterion network. The total MSTS score was correlated with that of the TESS (r = 0.81; 95% CI, 0.73-0.87; p < 0.001) and the physical component summary and physical functioning of the SF-36. The Japanese-language translation of the MSTS scoring system for the lower extremity has sufficient reliability and reasonable validity. Nevertheless, the observation of a ceiling effect suggests poor ability of this system to discriminate from among patients who have a high level of function.
Bayesian meta-analysis of Cronbach's coefficient alpha to evaluate informative hypotheses.
Okada, Kensuke
2015-12-01
This paper proposes a new method to evaluate informative hypotheses for meta-analysis of Cronbach's coefficient alpha using a Bayesian approach. The coefficient alpha is one of the most widely used reliability indices. In meta-analyses of reliability, researchers typically form specific informative hypotheses beforehand, such as 'alpha of this test is greater than 0.8' or 'alpha of one form of a test is greater than the others.' The proposed method enables direct evaluation of these informative hypotheses. To this end, a Bayes factor is calculated to evaluate the informative hypothesis against its complement. It allows researchers to summarize the evidence provided by previous studies in favor of their informative hypothesis. The proposed approach can be seen as a natural extension of the Bayesian meta-analysis of coefficient alpha recently proposed in this journal (Brannick and Zhang, 2013). The proposed method is illustrated through two meta-analyses of real data that evaluate different kinds of informative hypotheses on superpopulation: one is that alpha of a particular test is above the criterion value, and the other is that alphas among different test versions have ordered relationships. Informative hypotheses are supported from the data in both cases, suggesting that the proposed approach is promising for application. Copyright © 2015 John Wiley & Sons, Ltd.
Wheeler, David C.; Hickson, DeMarc A.; Waller, Lance A.
2010-01-01
Many diagnostic tools and goodness-of-fit measures, such as the Akaike information criterion (AIC) and the Bayesian deviance information criterion (DIC), are available to evaluate the overall adequacy of linear regression models. In addition, visually assessing adequacy in models has become an essential part of any regression analysis. In this paper, we focus on a spatial consideration of the local DIC measure for model selection and goodness-of-fit evaluation. We use a partitioning of the DIC into the local DIC, leverage, and deviance residuals to assess local model fit and influence for both individual observations and groups of observations in a Bayesian framework. We use visualization of the local DIC and differences in local DIC between models to assist in model selection and to visualize the global and local impacts of adding covariates or model parameters. We demonstrate the utility of the local DIC in assessing model adequacy using HIV prevalence data from pregnant women in the Butare province of Rwanda during 1989-1993 using a range of linear model specifications, from global effects only to spatially varying coefficient models, and a set of covariates related to sexual behavior. Results of applying the diagnostic visualization approach include more refined model selection and greater understanding of the models as applied to the data. PMID:21243121
Natural learning in NLDA networks.
González, Ana; Dorronsoro, José R
2007-07-01
Non Linear Discriminant Analysis (NLDA) networks combine a standard Multilayer Perceptron (MLP) transfer function with the minimization of a Fisher analysis criterion. In this work we will define natural-like gradients for NLDA network training. Instead of a more principled approach, which would require the definition of an appropriate Riemannian structure on the NLDA weight space, we will follow a simpler procedure, based on the observation that the gradient of the NLDA criterion function J can be written as the expectation ∇J(W) = E[Z(X,W)] of a certain random vector Z, and then defining I = E[Z(X,W)Z(X,W)^T] as the Fisher information matrix in this case. This definition of I formally coincides with that of the information matrix for the MLP or other square error functions; the NLDA J criterion, however, does not have this structure. Although very simple, the proposed approach shows much faster convergence than that of standard gradient descent, even when its costlier complexity is taken into account. While the faster convergence of natural MLP batch training can also be explained in terms of its relationship with the Gauss-Newton minimization method, this is not the case for NLDA training, as we will see analytically and numerically that the Hessian and information matrices are different.
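The natural-like update described above can be sketched as follows, with Z_samples holding the per-pattern vectors Z(x, W); this is an illustrative reading of the abstract, not the authors' code.

    import numpy as np

    def natural_gradient_step(W, Z_samples, lr=0.1, ridge=1e-6):
        """One natural-like gradient step: the ordinary gradient is the sample
        mean of the per-pattern vectors Z(x, W), the information matrix is
        I = E[Z Z^T], and the update preconditions the gradient with I^{-1}.
        Z_samples is an (N, d) array holding Z(x_i, W) for the current weights;
        a small ridge term keeps I invertible. Illustrative sketch only."""
        grad = Z_samples.mean(axis=0)
        info = Z_samples.T @ Z_samples / Z_samples.shape[0]
        step = np.linalg.solve(info + ridge * np.eye(grad.size), grad)
        return W - lr * step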
Williams, Marshall L.; MacCoy, Dorene E.
2016-06-30
Mercury (Hg) analyses were conducted on samples of sport fish and water collected from selected sampling sites in Brownlee Reservoir and the Boise and Snake Rivers to meet National Pollution Discharge and Elimination System (NPDES) permit requirements for the City of Boise, Idaho, between 2013 and 2015. City of Boise personnel collected water samples from six sites between October and November 2013 and 2015, with one site sampled in 2014. Total Hg concentrations in unfiltered water samples ranged from 0.48 to 8.8 nanograms per liter (ng/L), with the highest value in Brownlee Reservoir in 2013. All Hg concentrations in water samples were less than the U.S. Environmental Protection Agency (USEPA) Hg chronic aquatic life criterion of 12 ng/L.The USEPA recommended a water-quality criterion of 0.30 milligrams per kilogram (mg/kg) methylmercury (MeHg) expressed as a fish-tissue residue value (wet-weight MeHg in fish tissue). The Idaho Department of Environmental Quality adopted the USEPA’s fish-tissue criterion and established a reasonable potential to exceed (RPTE) threshold 20 percent lower than the criterion or greater than 0.24 mg/kg Hg based on an average concentration of 10 fish from a receiving waterbody. NPDES permitted discharge to waters with fish having Hg concentrations exceeding 0.24 mg/kg are said to have a reasonable potential to exceed the water-quality criterion and thus are subject to additional permit obligations, such as requirements for increased monitoring and the development of a Hg minimization plan. The Idaho Fish Consumption Advisory Program (IFCAP) issues fish advisories to protect general and sensitive populations of fish consumers and has developed an action level of 0.22 mg/kg Hg in fish tissue. Fish consumption advisories are water body- and species-specific and are used to advise allowable fish consumption from specific water bodies. The geometric mean Hg concentration of 10 fish of a single species collected from a single water body (lake or stream) in Idaho is compared to the action level to determine if a fish consumption advisory should be issued.The U.S. Geological Survey collected and analyzed individual fillets of mountain whitefish (Prosopium williamsoni), rainbow trout (Oncorhynchus mykiss), smallmouth bass (Micropterus dolomieu), and channel catfish (Ictalurus punctatus) for Hg. The 2013 average Hg concentration for small mouth bass (0.32 mg/kg) collected at Brownlee Reservoir and for channel catfish (0.33 mg/kg) collected at the Boise River mouth, exceeded the Idaho water quality criterion (>0.3 mg/kg), the Hg RPTE threshold (>0.24 mg/kg), and the IFCAP action level (>0.22 mg/kg). Average Hg concentrations in fish collected in 2014 or 2015 did not exceed evaluation criteria for any of the species assessed.Selenium (Se) analysis was conducted on one composite fish tissue sample per site to assess general concentrations and to provide information for future risk assessments. Composite concentrations of Se in fish tissue collected between 2013 and 2015 ranged from 0.07 and 0.49 mg/kg wet weight with the highest concentration collected from smallmouth bass from the Snake River near Murphy, and the lowest from mountain whitefish from the Boise River at Eckert Road.
Bai, Jing; Yang, Wei; Wang, Song; Guan, Rui-Hong; Zhang, Hui; Fu, Jing-Jing; Wu, Wei; Yan, Kun
2016-07-01
The purpose of this study was to explore the diagnostic value of the arrival time difference between lesions and surrounding lung tissue on contrast-enhanced sonography of subpleural pulmonary lesions. A total of 110 patients with subpleural pulmonary lesions who underwent both conventional and contrast-enhanced sonography and had a definite diagnosis were enrolled. After contrast agent injection, the arrival times in the lesion, lung, and chest wall were recorded. The arrival time differences between various tissues were also calculated. Statistical analysis showed a significant difference in the lesion arrival time, the arrival time difference between the lesion and lung, and the arrival time difference between the chest wall and lesion (all P < .001) for benign and malignant lesions. Receiver operating characteristic curve analysis revealed that the optimal diagnostic criterion was the arrival time difference between the lesion and lung, and that the best cutoff point was 2.5 seconds (later arrival signified malignancy). This new diagnostic criterion showed superior diagnostic accuracy (97.1%) compared to conventional diagnostic criteria. The individualized diagnostic method based on an arrival time comparison using contrast-enhanced sonography had high diagnostic accuracy (97.1%) with good feasibility and could provide useful diagnostic information for subpleural pulmonary lesions.
da Silva, Wanderson Roberto; Dias, Juliana Chioda Ribeiro; Maroco, João; Campos, Juliana Alvares Duarte Bonini
2014-09-01
This study aimed at evaluating the validity, reliability, and factorial invariance of the complete (34-item) and shortened (8-item and 16-item) versions of the Body Shape Questionnaire (BSQ) when applied to Brazilian university students. A total of 739 female students with a mean age of 20.44 (standard deviation=2.45) years participated. Confirmatory factor analysis was conducted to verify the degree to which the one-factor structure satisfies the proposal for the BSQ's expected structure. Two items of the 34-item version were excluded because they had factor weights (λ)<40. All models had adequate convergent validity (average variance extracted=.43-.58; composite reliability=.85-.97) and internal consistency (α=.85-.97). The 8-item B version was considered the best shortened BSQ version (Akaike information criterion=84.07, Bayes information criterion=157.75, Browne-Cudeck criterion=84.46), with strong invariance for independent samples (Δχ(2)λ(7)=5.06, Δχ(2)Cov(8)=5.11, Δχ(2)Res(16)=19.30). Copyright © 2014 Elsevier Ltd. All rights reserved.
Accuracy of visual estimates of joint angle and angular velocity using criterion movements.
Morrison, Craig S; Knudson, Duane; Clayburn, Colby; Haywood, Philip
2005-06-01
A descriptive study to document undergraduate physical education majors' (22.8 +/- 2.4 yr. old) estimates of sagittal plane elbow angle and angular velocity of elbow flexion visually was performed. 42 subjects rated videotape replays of 30 movements organized into three speeds of movement and two criterion elbow angles. Video images of the movements were analyzed with Peak Motus to measure actual values of elbow angles and peak angular velocity. Of the subjects 85.7% had speed ratings significantly correlated with true peak elbow angular velocity in all three angular velocity conditions. Few (16.7%) subjects' ratings of elbow angle correlated significantly with actual angles. Analysis of the subjects with good ratings showed the accuracy of visual ratings was significantly related to speed, with decreasing accuracy for slower speeds of movement. The use of criterion movements did not improve the small percentage of novice observers who could accurately estimate body angles during movement.
Lu, Dan; Ye, Ming; Meyer, Philip D.; Curtis, Gary P.; Shi, Xiaoqing; Niu, Xu-Feng; Yabusaki, Steve B.
2013-01-01
When conducting model averaging for assessing groundwater conceptual model uncertainty, the averaging weights are often evaluated using model selection criteria such as AIC, AICc, BIC, and KIC (Akaike Information Criterion, Corrected Akaike Information Criterion, Bayesian Information Criterion, and Kashyap Information Criterion, respectively). However, this method often leads to an unrealistic situation in which the best model receives overwhelmingly large averaging weight (close to 100%), which cannot be justified by available data and knowledge. It was found in this study that this problem was caused by using the covariance matrix, CE, of measurement errors for estimating the negative log likelihood function common to all the model selection criteria. This problem can be resolved by using the covariance matrix, Cek, of total errors (including model errors and measurement errors) to account for the correlation between the total errors. An iterative two-stage method was developed in the context of maximum likelihood inverse modeling to iteratively infer the unknown Cek from the residuals during model calibration. The inferred Cek was then used in the evaluation of model selection criteria and model averaging weights. While this method was limited to serial data using time series techniques in this study, it can be extended to spatial data using geostatistical techniques. The method was first evaluated in a synthetic study and then applied to an experimental study, in which alternative surface complexation models were developed to simulate column experiments of uranium reactive transport. It was found that the total errors of the alternative models were temporally correlated due to the model errors. The iterative two-stage method using Cek resolved the problem that the best model receives 100% model averaging weight, and the resulting model averaging weights were supported by the calibration results and physical understanding of the alternative models. Using Cek obtained from the iterative two-stage method also improved predictive performance of the individual models and model averaging in both synthetic and experimental studies.
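For context, the model-averaging weights referred to above are typically computed from the information-criterion values as below; with large differences between criteria the best model receives essentially all of the weight, which is the behavior the iterative two-stage method is designed to correct. This is a generic sketch, not the authors' implementation.

    import numpy as np

    def ic_model_weights(ic_values):
        """Model-averaging weights from information-criterion values (AIC, AICc,
        BIC or KIC): w_i is proportional to exp(-0.5 * (IC_i - IC_min)). Large
        IC differences push essentially all weight onto the best model, the
        behavior the iterative two-stage method above is meant to correct."""
        ic = np.asarray(ic_values, dtype=float)
        rel = np.exp(-0.5 * (ic - ic.min()))
        return rel / rel.sum()

    # e.g. ic_model_weights([230.0, 245.0, 252.0]) -> approx. [0.999, 0.0006, 0.00002]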
Validation of the Chinese Version of the Quality of Nursing Work Life Scale
Fu, Xia; Xu, Jiajia; Song, Li; Li, Hua; Wang, Jing; Wu, Xiaohua; Hu, Yani; Wei, Lijun; Gao, Lingling; Wang, Qiyi; Lin, Zhanyi; Huang, Huigen
2015-01-01
Quality of Nursing Work Life (QNWL) serves as a predictor of a nurse’s intent to leave and hospital nurse turnover. However, QNWL measurement tools that have been validated for use in China are lacking. The present study evaluated the construct validity of the QNWL scale in China. A cross-sectional study using convenience sampling was conducted from June 2012 to January 2013 at five hospitals in Guangzhou, which employ 1938 nurses. The participants were asked to complete the QNWL scale and the World Health Organization Quality of Life abbreviated version (WHOQOL-BREF). A total of 1922 nurses provided the final data used for analyses. Sixty-five nurses from the first investigated division were re-measured two weeks later to assess the test-retest reliability of the scale. The internal consistency reliability of the QNWL scale was assessed using Cronbach’s α. Test-retest reliability was assessed using the intra-class correlation coefficient (ICC). Criterion-related validity was assessed using the correlation of the total scores of the QNWL and the WHOQOL-BREF. Construct validity was assessed with the following indices: χ² statistics and degrees of freedom; root mean square error of approximation (RMSEA); the Akaike information criterion (AIC); the consistent Akaike information criterion (CAIC); the goodness-of-fit index (GFI); the adjusted goodness-of-fit index; and the comparative fit index (CFI). The findings demonstrated high internal consistency (Cronbach’s α = 0.912) and test-retest reliability (intraclass correlation coefficient = 0.74) for the QNWL scale. The chi-square test (χ² = 13879.60, df = 813, P = 0.0001) was significant. The RMSEA value was 0.091, and AIC = 1806.00, CAIC = 7730.69, CFI = 0.93, and GFI = 0.74. The correlation coefficient between the QNWL total scores and the WHOQOL-BREF total scores was 0.605 (p<0.01). The QNWL scale was reliable and valid in Chinese-speaking nurses and could be used as a clinical and research instrument for measuring work-related factors among nurses in China. PMID:25950838
Tamboer, Peter; Vorst, Harrie C M; Oort, Frans J
2014-04-01
Methods for identifying dyslexia in adults vary widely between studies. Researchers have to decide how many tests to use, which tests are considered to be the most reliable, and how to determine cut-off scores. The aim of this study was to develop an objective and powerful method for diagnosing dyslexia. We took various methodological measures, most of which are new compared to previous methods. We used a large sample of Dutch first-year psychology students, we considered several options for exclusion and inclusion criteria, we collected as many cognitive tests as possible, we used six independent sources of biographical information for a criterion of dyslexia, we compared the predictive power of discriminant analyses and logistic regression analyses, we used both sum scores and item scores as predictor variables, we used self-report questions as predictor variables, and we retested the reliability of predictions with repeated prediction analyses using an adjusted criterion. We were able to identify 74 dyslexic and 369 non-dyslexic students. For 37 students, various predictions were too inconsistent for a final classification. The most reliable predictions were acquired with item scores and self-report questions. The main conclusion is that it is possible to identify dyslexia with a high reliability, although the exact nature of dyslexia is still unknown. We therefore believe that this study yielded valuable information for future methods of identifying dyslexia in Dutch as well as in other languages, and that this would be beneficial for comparing studies across countries.
Methods for threshold determination in multiplexed assays
Tammero, Lance F. Bentley; Dzenitis, John M; Hindson, Benjamin J
2014-06-24
Methods for determination of threshold values of signatures comprised in an assay are described. Each signature enables detection of a target. The methods determine a probability density function of negative samples and a corresponding false positive rate curve. A false positive criterion is established and a threshold for that signature is determined as a point at which the false positive rate curve intersects the false positive criterion. A method for quantitative analysis and interpretation of assay results together with a method for determination of a desired limit of detection of a signature in an assay are also described.
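A minimal sketch of the thresholding scheme described above, under the simplifying (and here assumed) choice of a Gaussian model for the negative-sample signal distribution; the signature threshold is the point where the fitted false-positive-rate curve falls to the chosen false positive criterion.

    import numpy as np
    from scipy import stats

    def signature_threshold(negative_signals, fpr_criterion=0.001):
        """Threshold for one signature: fit the negative-sample signal
        distribution (assumed Gaussian here), take its survival function as the
        false-positive-rate curve, and return the signal value at which that
        curve falls to the chosen false positive criterion."""
        mu = np.mean(negative_signals)
        sigma = np.std(negative_signals, ddof=1)
        return stats.norm.isf(fpr_criterion, loc=mu, scale=sigma)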
How Can We Get the Information about Democracy? The Example of Social Studies Prospective Teachers
ERIC Educational Resources Information Center
Tonga, Deniz
2014-01-01
In this research, the information about democracy, which social studies prospective teachers have, and interpretation of the information sources are aimed. The research was planned as a survey research methodology and the participants were determined with criterion sampling method. The data were collected through developed open-ended questions…
James L. Howard; David B. McKeever; Ted Bilek
2016-01-01
Trend data through 2011 are provided on the value and volume of wood and wood products production for the United States to aid in assessing sustainability of socioeconomic benefits of forests. The volume of roundwood used to make products, the weight produced, and the value of U.S. shipments have fluctuated in recent years. But there have been decreases in the weight...
James L. Howard; Rebecca Westby; Kenneth E. Skog
2010-01-01
Trend data through 2006 are provided on the value and volume of wood and wood products production for the United States to aid in assessing sustainability of socioeconomic benefits of forests. The volume of roundwood used to make products, the weight produced, and the value of U.S. shipments have been stable to declining in recent years. But there have been increases...
NASA Astrophysics Data System (ADS)
Urano, C.; Yamazawa, K.; Kaneko, N.-H.
2017-12-01
We report on our measurement of the Boltzmann constant by Johnson noise thermometry (JNT) using an integrated quantum voltage noise source (IQVNS) that is fully implemented with superconducting integrated circuit technology. The IQVNS generates calculable pseudo white noise voltages to calibrate the JNT system. The thermal noise of a sensing resistor placed at the temperature of the triple point of water was measured precisely by the IQVNS-based JNT. We accumulated data of more than 429 200 s in total (over 6 d) and used the Akaike information criterion to estimate the fitting frequency range for the quadratic model used to calculate the Boltzmann constant. Upon detailed evaluation of the uncertainty components, the experimentally obtained Boltzmann constant was k = 1.380 6436 × 10⁻²³ J K⁻¹ with a relative combined uncertainty of 10.22 × 10⁻⁶. The value of k is lower than the CODATA 2014 value (Mohr et al 2016 Rev. Mod. Phys. 88 035009) by a relative deviation of 3.56 × 10⁻⁶.
An Improved Statistical Solution for Global Seismicity by the HIST-ETAS Approach
NASA Astrophysics Data System (ADS)
Chu, A.; Ogata, Y.; Katsura, K.
2010-12-01
For long-term global seismic model fitting, recent work by Chu et al. (2010) applied the spatial-temporal ETAS model (Ogata 1998) and analyzed global data partitioned into tectonic zones based on geophysical characteristics (Bird 2003), and it showed substantial improvements in model fitting compared with one overall global model. While the ordinary ETAS model assumes constant parameter values across the complete region analyzed, the hierarchical space-time ETAS model (HIST-ETAS, Ogata 2004) is a newly introduced approach that allows regional variation of the parameters for more accurate seismic prediction. As the HIST-ETAS model has been fit to regional data of Japan (Ogata 2010), our work applies the model to describe global seismicity. Employing Akaike's Bayesian Information Criterion (ABIC) as an assessment method, we compare the MLE results obtained with the zone divisions considered to those obtained by an overall global model. Location-dependent parameters of the model and Gutenberg-Richter b-values are optimized, and seismological interpretations are discussed.
Breakdown parameter for kinetic modeling of multiscale gas flows.
Meng, Jianping; Dongari, Nishanth; Reese, Jason M; Zhang, Yonghao
2014-06-01
Multiscale methods built purely on the kinetic theory of gases provide information about the molecular velocity distribution function. It is therefore both important and feasible to establish new breakdown parameters for assessing the appropriateness of a fluid description at the continuum level by utilizing kinetic information rather than macroscopic flow quantities alone. We propose a new kinetic criterion to indirectly assess the errors introduced by a continuum-level description of the gas flow. The analysis, which includes numerical demonstrations, focuses on the validity of the Navier-Stokes-Fourier equations and corresponding kinetic models and reveals that the new criterion can consistently indicate the validity of continuum-level modeling in both low-speed and high-speed flows at different Knudsen numbers.
Gupta, Karan; Mandlik, Dushyant; Patel, Daxesh; Patel, Purvi; Shah, Bankim; Vijay, Devanhalli G; Kothari, Jagdish M; Toprani, Rajendra B; Patel, Kaustubh D
2016-09-01
Tracheostomy is a mainstay modality for airway management in patients with head and neck cancer undergoing surgery. This study aims to define factors predicting the need for tracheostomy and to define an effective objective criterion for predicting that need. A total of 486 patients undergoing composite resections were studied. Factors analyzed were age, previous surgery, extent of surgery, trismus, extent of mandibular resection and reconstruction, etc. Factors were divided into major and minor using the clinical assessment scoring system for tracheostomy (CASST) criterion. Sixty-seven (13.7%) patients required tracheostomy for their peri-operative management. Elective tracheostomies were done during surgery in 53 cases and post-operatively in 14 patients. All patients in whom tracheostomies were anticipated had a score of seven or more. The decision on whether or not an elective tracheostomy is necessary in head and neck surgery can be facilitated using the CASST criterion, which has a sensitivity of 95.5% and a negative predictive value (NPV) of 99.3%. It may reduce post-operative complications and contribute to safer treatment. Copyright © 2016 European Association for Cranio-Maxillo-Facial Surgery. Published by Elsevier Ltd. All rights reserved.
Behavior of collisional sheath in electronegative plasma with q-nonextensive electron distribution
NASA Astrophysics Data System (ADS)
Borgohain, Dima Rani; Saharia, K.
2018-03-01
Electronegative plasma sheath is addressed in a collisional unmagnetized plasma consisting of q-nonextensive electrons, Boltzmann distributed negative ions and cold fluid positive ions. Considering the positive ion-neutral collisions and ignoring the effects of ionization and collisions between negative species and positive ions (neutrals), a modified Bohm sheath criterion and hence floating potential are derived by using multifluid model. Using the modified Bohm sheath criterion, the sheath characteristics such as spatial profiles of density, potential and net space charge density have been numerically investigated. It is found that increasing values of q-nonextensivity, electronegativity and collisionality lead to a decrease of the sheath thickness and an increase of the sheath potential and the net space charge density. With increasing values of the electron temperature to negative ion temperature ratio, the sheath thickness increases and the sheath potential as well as the net space charge density in the sheath region decreases.
Towards a new tool for the evaluation of the quality of ultrasound compressed images.
Delgorge, Cécile; Rosenberger, Christophe; Poisson, Gérard; Vieyres, Pierre
2006-11-01
This paper presents a new tool for the evaluation of ultrasound image compression. The goal is to measure the image quality as easily as with a statistical criterion, and with the same reliability as the one provided by the medical assessment. An initial experiment is proposed to medical experts and represents our reference value for the comparison of evaluation criteria. Twenty-one statistical criteria are selected from the literature. A cumulative absolute similarity measure is defined as a distance between the criterion to evaluate and the reference value. A first fusion method based on a linear combination of criteria is proposed to improve the results obtained by each of them separately. The second proposed approach combines different statistical criteria and uses the medical assessment in a training phase with a support vector machine. Some experimental results are given and show the benefit of fusion.
Palaniappan, A K
1994-12-01
A bilingual version of Shostrom's Self-actualization Value subscale of the Personal Orientation Inventory was administered to 62 Malaysian students. For the 26-item paired-opposite inventory, test-retest reliability over 6 mo. was .39 (for boys .42, for girls .37) and criterion validity was .57. Replication with other groups is recommended.
Prediction of Central Burst Defects in Copper Wire Drawing Process
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vega, G.; NEXANS France, NMC Nexans Metallurgy Centre, Boulevard du Marais, BP39, F-62301 Lens; Haddi, A.
2011-01-17
In this study, the prediction of chevron cracks (central bursts) in the copper wire drawing process is investigated using experimental and numerical approaches. The conditions for chevron crack creation along the wire axis depend on (i) the die angle and the friction coefficient between the die and the wire, (ii) the reduction in cross-sectional area of the wire, (iii) the material properties and (iv) the drawing velocity or strain rate. Under various drawing conditions, a numerical simulation for the prediction of central burst defects is presented using an axisymmetric finite element model. This model is based on the application of the Cockcroft and Latham fracture criterion. This criterion was used as the damage value to estimate if and where defects will occur during copper wire drawing. The critical damage value of the material is obtained from a uniaxial tensile test. The results show that the die angle and the reduction ratio have a significant effect on the stress distribution and the maximum damage value. Central bursts are expected to occur when the die angle and reduction ratio reach critical values. Numerical predictions are compared with experimental observations.
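The Cockcroft and Latham damage value referred to above can be accumulated along the strain path of a material point roughly as follows (a sketch only; the FE implementation details and the calibrated critical value are in the paper).

    import numpy as np

    def cockcroft_latham_damage(eq_strain, max_principal_stress):
        """Cockcroft and Latham damage value: the integral of the largest
        tensile principal stress over equivalent plastic strain, accumulated
        here by trapezoidal integration along the strain path of a material
        point. Central burst is predicted where this value reaches the critical
        damage obtained from the uniaxial tensile test."""
        sigma1 = np.clip(np.asarray(max_principal_stress), 0.0, None)  # keep tensile part
        return np.trapz(sigma1, np.asarray(eq_strain))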
Measures and Interpretations of Vigilance Performance: Evidence Against the Detection Criterion
NASA Technical Reports Server (NTRS)
Balakrishnan, J. D.
1998-01-01
Operators' performance in a vigilance task is often assumed to depend on their choice of a detection criterion. When the signal rate is low this criterion is set high, causing the hit and false alarm rates to be low. With increasing time on task the criterion presumably tends to increase even further, thereby further decreasing the hit and false alarm rates. Virtually all of the empirical evidence for this simple interpretation is based on estimates of the bias measure Beta from signal detection theory. In this article, I describe a new approach to studying decision making that does not require the technical assumptions of signal detection theory. The results of this new analysis suggest that the detection criterion is never biased toward either response, even when the signal rate is low and the time on task is long. Two modifications of the signal detection theory framework are considered to account for this seemingly paradoxical result. The first assumes that the signal rate affects the relative sizes of the variances of the information distributions; the second assumes that the signal rate affects the logic of the operator's stopping rule. Actual or potential applications of this research include the improved training and performance assessment of operators in areas such as product quality control, air traffic control, and medical and clinical diagnosis.
Event-based cluster synchronization of coupled genetic regulatory networks
NASA Astrophysics Data System (ADS)
Yue, Dandan; Guan, Zhi-Hong; Li, Tao; Liao, Rui-Quan; Liu, Feng; Lai, Qiang
2017-09-01
In this paper, the cluster synchronization of coupled genetic regulatory networks with a directed topology is studied by using the event-based strategy and pinning control. An event-triggered condition with a threshold consisting of the neighbors' discrete states at their own event time instants and a state-independent exponential decay function is proposed. The intra-cluster states information and extra-cluster states information are involved in the threshold in different ways. By using the Lyapunov function approach and the theories of matrices and inequalities, we establish the cluster synchronization criterion. It is shown that both the avoidance of continuous transmission of information and the exclusion of the Zeno behavior are ensured under the presented triggering condition. Explicit conditions on the parameters in the threshold are obtained for synchronization. The stability criterion of a single GRN is also given under the reduced triggering condition. Numerical examples are provided to validate the theoretical results.
Diagnosing Postural Tachycardia Syndrome: Comparison of Tilt Test versus Standing Hemodynamics
Plash, Walker B; Diedrich, André; Biaggioni, Italo; Garland, Emily M; Paranjape, Sachin Y; Black, Bonnie K; Dupont, William D; Raj, Satish R
2012-01-01
Postural tachycardia syndrome (POTS) is characterized by increased heart rate (ΔHR) of ≥30 bpm with symptoms related to upright posture. Active stand (STAND) and passive head-up tilt (TILT) produce different physiological responses. We hypothesized these different responses would affect the ability of individuals to achieve the POTS HR increase criterion. Patients with POTS (n=15) and healthy controls (n=15) underwent 30 min of TILT and STAND testing. ΔHR values were analyzed at 5 min intervals. Receiver Operating Characteristic (ROC) analysis was performed to determine optimal cut point values of ΔHR for both TILT and STAND. TILT produced larger ΔHR than STAND for all 5 min intervals from 5 min (38±3 bpm vs. 33±3 bpm; P=0.03) to 30 min (51±3 bpm vs. 38±3 bpm; P<0.001). Sensitivity (Sn) of the 30 bpm criterion was similar for all tests (TILT10=93%, STAND10=87%, TILT30=100%, and STAND30=93%). Specificity (Sp) of the 30 bpm criterion was less at both 10 and 30 min for TILT (TILT10=40%, TILT30=20%) than STAND (STAND10=67%, STAND30=53%). The optimal ΔHR to discriminate POTS at 10 min were 38 bpm (TILT) and 29 bpm (STAND), and at 30 min were 47 bpm (TILT) and 34 bpm (STAND). Orthostatic tachycardia was greater for TILT (with lower specificity for POTS diagnosis) than STAND at 10 and 30 min. The 30 bpm ΔHR criterion is not suitable for 30 min TILT. Diagnosis of POTS should consider orthostatic intolerance criteria and not be based solely on orthostatic tachycardia regardless of test used. PMID:22931296
Plash, Walker B; Diedrich, André; Biaggioni, Italo; Garland, Emily M; Paranjape, Sachin Y; Black, Bonnie K; Dupont, William D; Raj, Satish R
2013-01-01
POTS (postural tachycardia syndrome) is characterized by an increased heart rate (ΔHR) of ≥30 bpm (beats/min) with symptoms related to upright posture. Active stand (STAND) and passive head-up tilt (TILT) produce different physiological responses. We hypothesized these different responses would affect the ability of individuals to achieve the POTS HR increase criterion. Patients with POTS (n=15) and healthy controls (n=15) underwent 30 min of tilt and stand testing. ΔHR values were analysed at 5 min intervals. ROC (receiver operating characteristic) analysis was performed to determine optimal cut point values of ΔHR for both tilt and stand. Tilt produced larger ΔHR than stand for all 5 min intervals from 5 min (38±3 bpm compared with 33±3 bpm; P=0.03) to 30 min (51±3 bpm compared with 38±3 bpm; P<0.001). Sn (sensitivity) of the 30 bpm criterion was similar for all tests (TILT10=93%, STAND10=87%, TILT30=100%, and STAND30=93%). Sp (specificity) of the 30 bpm criterion was less at both 10 and 30 min for tilt (TILT10=40%, TILT30=20%) than stand (STAND10=67%, STAND30=53%). The optimal ΔHR to discriminate POTS at 10 min were 38 bpm (TILT) and 29 bpm (STAND), and at 30 min were 47 bpm (TILT) and 34 bpm (STAND). Orthostatic tachycardia was greater for tilt (with lower Sp for POTS diagnosis) than stand at 10 and 30 min. The 30 bpm ΔHR criterion is not suitable for 30 min tilt. Diagnosis of POTS should consider orthostatic intolerance criteria and not be based solely on orthostatic tachycardia regardless of test used.
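The ROC-based choice of an optimal ΔHR cut point can be sketched as follows. This is a hedged illustration rather than the authors' analysis: the ΔHR values and group labels are invented, and scikit-learn's roc_curve with the Youden index stands in for whatever cut-point rule the study applied.

    # Optimal ΔHR cut point from an ROC curve via the Youden index.
    import numpy as np
    from sklearn.metrics import roc_curve

    delta_hr = np.array([22, 28, 31, 35, 40, 47, 52, 26, 33, 45])  # hypothetical ΔHR (bpm)
    is_pots  = np.array([0,  0,  0,  0,  1,  1,  1,  0,  1,  1])   # 1 = POTS patient

    fpr, tpr, thresholds = roc_curve(is_pots, delta_hr)
    best = np.argmax(tpr - fpr)        # Youden's J = sensitivity + specificity - 1
    print("optimal cut point:", thresholds[best], "bpm")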
Resolution improvement in positron emission tomography using anatomical Magnetic Resonance Imaging.
Chu, Yong; Su, Min-Ying; Mandelkern, Mark; Nalcioglu, Orhan
2006-08-01
An ideal imaging system should provide information with high sensitivity and high spatial and temporal resolution. Unfortunately, it is not possible to satisfy all of these desired features in a single modality. In this paper, we discuss methods to improve the spatial resolution in positron emission tomography (PET) using a priori information from Magnetic Resonance Imaging (MRI). Our approach uses an image restoration algorithm based on the maximization of mutual information (MMI), which has found significant success for optimizing multimodal image registration. The MMI criterion is used to estimate the parameters in the Sharpness-Constrained Wiener filter. The generated filter is then applied to restore PET images of a realistic digital brain phantom. The resulting restored images show improved resolution and better signal-to-noise ratio compared to the interpolated PET images. We conclude that a Sharpness-Constrained Wiener filter having parameters optimized from an MMI criterion may be useful for restoring spatial resolution in PET based on a priori information from correlated MRI.
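As a rough illustration of the mutual information quantity that the MMI criterion maximizes, the sketch below estimates MI between two images from a joint histogram. It is an assumption-laden toy (synthetic images, simple equal-width binning) and does not reproduce the Sharpness-Constrained Wiener filter or its parameter search.

    # Histogram-based mutual information between two co-registered images.
    import numpy as np

    def mutual_information(img_a, img_b, bins=64):
        joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
        pxy = joint / joint.sum()
        px = pxy.sum(axis=1, keepdims=True)
        py = pxy.sum(axis=0, keepdims=True)
        nz = pxy > 0
        return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

    rng = np.random.default_rng(0)
    mri = rng.normal(size=(64, 64))
    pet = mri + 0.5 * rng.normal(size=(64, 64))   # correlated toy images
    print(mutual_information(pet, mri))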
Sainz de Baranda, Pilar; Rodríguez-Iniesta, María; Ayala, Francisco; Santonja, Fernando; Cejudo, Antonio
2014-07-01
To examine the criterion-related validity of the horizontal hip joint angle (H-HJA) test and vertical hip joint angle (V-HJA) test for estimating hamstring flexibility measured through the passive straight-leg raise (PSLR) test using contemporary statistical measures. Validity study. Controlled laboratory environment. One hundred thirty-eight professional trampoline gymnasts (61 women and 77 men). Hamstring flexibility. Each participant performed 2 trials of H-HJA, V-HJA, and PSLR tests in a randomized order. The criterion-related validity of H-HJA and V-HJA tests was measured through the estimation equation, typical error of the estimate (TEEST), validity correlation (β), and their respective confidence limits. The findings from this study suggest that although H-HJA and V-HJA tests showed moderate to high validity scores for estimating hamstring flexibility (standardized TEEST = 0.63; β = 0.80), the TEEST statistic reported for both tests was not narrow enough for clinical purposes (H-HJA = 10.3 degrees; V-HJA = 9.5 degrees). Subsequently, the predicted likely thresholds for the true values that were generated were too wide (H-HJA = predicted value ± 13.2 degrees; V-HJA = predicted value ± 12.2 degrees). The results suggest that although the HJA test showed moderate to high validity scores for estimating hamstring flexibility, the prediction intervals between the HJA and PSLR tests are not strong enough to suggest that clinicians and sport medicine practitioners should use the HJA and PSLR tests interchangeably as gold standard measurement tools to evaluate and detect short hamstring muscle flexibility.
NASA Astrophysics Data System (ADS)
Putra, Z. A. Z.; Sumarmin, R.; Violita, V.
2018-04-01
The guides used in animal physiology practicals need to be revised and adapted to the lecture material. In the Animal Physiology course, the existing practicum guide is still conventional, with recipe-style instructions so simple that a practical guide capable of fostering scientific work needs to be developed; one option is a guided-inquiry-based practicum guide. This study aims to describe the development process of such a guide and to reveal the validity, practicality, and effectiveness of a guided-inquiry Animal Physiology practicum guide for students of the Biology Department, State University of Padang. This is development research using the Plomp model, with the following stages: problem identification and analysis, prototype development (prototyping), and assessment. Data were analysed descriptively. Data were collected with validation and practicality questionnaires, observation of competence in the affective and psychomotor domains, and a cognitive-domain competence test. The results show that the guided-inquiry Animal Physiology practicum guide obtained a validity score of 3.23 (valid category), practicality scores of 3.30 from lecturers and 3.37 from students (practical category), an affective effectiveness of 93.00% (very effective), a psychomotor effectiveness of 89.50% (very effective), and a cognitive-domain score of 67 (pass criterion). It is concluded that the guided-inquiry Animal Physiology practicum guide is valid, practical, and effective.
Salimi, Fereshteh; Shahabi, Shahab; Talebzadeh, Hamid; Keshavarzian, Amir; Pourfakharan, Mohammad; Safaei, Mansour
2017-01-01
Fistulas are the preferred permanent hemodialysis vascular access, but a significant obstacle to increasing their prevalence is the fistula's high "failure to mature" (FTM) rate. This study aimed to identify postoperative clinical characteristics that are predictive of fistula FTM. This descriptive cross-sectional study was performed on 80 end-stage renal disease patients referred to Al Zahra Hospital, Isfahan, for brachiocephalic fistula placement. After 4 weeks, four clinical criteria (thrill, firmness, vein length, and venous engorgement) were examined, the fistula was classified as favorable or unfavorable by each criterion, and the results were compared with the ability to perform dialysis. Data were analyzed with SPSS version 21, and diagnostic indices for the clinical examination were calculated. Among the 80 cases, 25 (31.2%) were female and 55 (68.8%) were male, with a mean age of 51.9 years (standard deviation = 17; range, 18-86 years). Sixty-two (77.5%) cases had successful hemodialysis. All four clinical assessments were significantly more favorable in patients with successful dialysis (P < 0.001). The accuracy of all physical assessments was above 70%, and all criteria except vein length had a sensitivity and negative predictive value of 100%. Firmness of the vein had the highest specificity and positive predictive value (83.9% and 64.3%, respectively). Overall, the clinical criteria showed high sensitivity and relatively low specificity, meaning that an unfavorable result on any clinical criterion predicts unfavorable dialysis. Clinical evaluation of a newly created fistula 4-6 weeks after surgery should be considered mandatory.
He, Bosheng; Gu, Jinhua; Huang, Sheng; Gao, Xuesong; Fan, Jinhe; Sheng, Meihong; Wang, Lin; Gong, Shenchu
2017-02-01
This study was performed to evaluate the diagnostic performance of multi-slice CT angiography combined with enterography in determining the cause and location of obstruction as well as intestinal ischaemia in patients with small bowel obstruction (SBO). This study retrospectively summarized the image data of 57 SBO patients who received both multi-slice CT angiography and enterography examination between December 2012 and May 2013. The CT diagnoses of SBO and intestinal ischaemia were correlated with the findings at surgery or digital subtraction angiography, which were set as standard references. Multi-slice CT angiography and enterography indicated that the cause of SBO in three patients was misjudged, suggesting a diagnostic accuracy of 94.7%. In one patient the level of obstruction was incorrect, demonstrating a diagnostic accuracy of 98.2%. Based on the results of the receiver operating characteristic (ROC) curve analysis, the diagnostic criterion for ischaemic SBO was at least two of the four CT signs (circumferential bowel wall thickening, reduced enhancement of the intestinal wall, mesenteric oedema and mesenteric vascular engorgement). The criterion yielded a sensitivity of 94.4%, a specificity of 92.3%, a positive predicted value of 85.0% and a negative predicted value of 97.3%, and the area under curve (AUC) was 0.92 (95% CI, 0.85-0.99). Multi-slice CT angiography and enterography have high diagnostic value in identifying the cause and site of SBO. In addition, the suggested diagnostic criterion using CT signs is helpful for diagnosing intestinal ischaemia in SBO patients. © 2016 The Royal Australian and New Zealand College of Radiologists.
Mayorga-Vega, Daniel; Bocanegra-Parrilla, Raúl; Ornelas, Martha; Viciana, Jesús
2016-01-01
The main purpose of the present meta-analysis was to examine the criterion-related validity of the distance- and time-based walk/run tests for estimating cardiorespiratory fitness among apparently healthy children and adults. Relevant studies were searched from seven electronic bibliographic databases up to August 2015 and through other sources. The Hunter-Schmidt's psychometric meta-analysis approach was conducted to estimate the population criterion-related validity of the following walk/run tests: 5,000 m, 3 miles, 2 miles, 3,000 m, 1.5 miles, 1 mile, 1,000 m, ½ mile, 600 m, 600 yd, ¼ mile, 15 min, 12 min, 9 min, and 6 min. From the 123 included studies, a total of 200 correlation values were analyzed. The overall results showed that the criterion-related validity of the walk/run tests for estimating maximum oxygen uptake ranged from low to moderate (rp = 0.42-0.79), with the 1.5 mile (rp = 0.79, 0.73-0.85) and 12 min walk/run tests (rp = 0.78, 0.72-0.83) having the higher criterion-related validity for distance- and time-based field tests, respectively. The present meta-analysis also showed that sex, age and maximum oxygen uptake level do not seem to affect the criterion-related validity of the walk/run tests. When the evaluation of an individual's maximum oxygen uptake attained during a laboratory test is not feasible, the 1.5 mile and 12 min walk/run tests represent useful alternatives for estimating cardiorespiratory fitness. As in the assessment with any physical fitness field test, evaluators must be aware that the performance score of the walk/run field tests is simply an estimation and not a direct measure of cardiorespiratory fitness.
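The pooling step of a Hunter-Schmidt style meta-analysis can be sketched in a few lines. This is a bare-bones version with invented study correlations and sample sizes; the artifact corrections (measurement error, range restriction) used in the full psychometric approach are omitted.

    # Sample-size weighted mean correlation and observed variance (bare-bones).
    import numpy as np

    r = np.array([0.82, 0.75, 0.79, 0.70, 0.84])   # hypothetical study correlations
    n = np.array([60, 120, 45, 200, 80])           # hypothetical sample sizes

    r_bar = np.sum(n * r) / np.sum(n)              # weighted mean correlation
    var_r = np.sum(n * (r - r_bar) ** 2) / np.sum(n)
    print(f"pooled r = {r_bar:.3f}, observed variance = {var_r:.4f}")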
Weeks, James L
2006-06-01
The Mine Safety and Health Administration (MSHA) proposes to issue citations for non-compliance with the exposure limit for respirable coal mine dust when measured exposure exceeds the exposure limit with a "high degree of confidence." This criterion threshold value (CTV) is derived from the sampling and analytical error of the measurement method. This policy is based on a combination of statistical and legal reasoning: the one-tailed 95% confidence limit of the sampling method, the apparent principle of due process and a standard of proof analogous to "beyond a reasonable doubt." This policy raises the effective exposure limit, it is contrary to the precautionary principle, it is not a fair sharing of the burden of uncertainty, and it employs an inappropriate standard of proof. Its own advisory committee and NIOSH have advised against this policy. For longwall mining sections, it results in a failure to issue citations for approximately 36% of the measured values that exceed the statutory exposure limit. Citations for non-compliance with the respirable dust standard should be issued for any measured exposure that exceeds the exposure limit.
Feature combinations and the divergence criterion
NASA Technical Reports Server (NTRS)
Decell, H. P., Jr.; Mayekar, S. M.
1976-01-01
Classifying large quantities of multidimensional remotely sensed agricultural data requires efficient and effective classification techniques and the construction of certain transformations of a dimension reducing, information preserving nature. The construction of transformations that minimally degrade information (i.e., class separability) is described. Linear dimension reducing transformations for multivariate normal populations are presented. Information content is measured by divergence.
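The divergence named here, taken as the symmetric Kullback-Leibler divergence between two multivariate normal class populations, can be computed directly; the sketch below uses invented class statistics and does not reproduce the authors' dimension-reducing transformations.

    # Pairwise divergence (symmetric KL) between two multivariate normal classes.
    import numpy as np

    def divergence(mu1, cov1, mu2, cov2):
        inv1, inv2 = np.linalg.inv(cov1), np.linalg.inv(cov2)
        d = (mu1 - mu2).reshape(-1, 1)
        term_cov  = 0.5 * np.trace((cov1 - cov2) @ (inv2 - inv1))
        term_mean = 0.5 * np.trace((inv1 + inv2) @ (d @ d.T))
        return float(term_cov + term_mean)

    mu1, mu2 = np.array([0.0, 0.0]), np.array([1.0, 0.5])
    cov1 = np.array([[1.0, 0.2], [0.2, 1.0]])
    cov2 = np.array([[1.5, 0.0], [0.0, 0.8]])
    print(divergence(mu1, cov1, mu2, cov2))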
Jaw-opening force test to screen for Dysphagia: preliminary results.
Hara, Koji; Tohara, Haruka; Wada, Satoko; Iida, Takatoshi; Ueda, Koichiro; Ansai, Toshihiro
2014-05-01
To assess the jaw-opening force test (JOFT) for dysphagia screening. Criterion standard. University dental hospital. Patients complaining of dysphagia (N=95) and with symptoms of dysphagia with chronic underlying causes (mean age ± SD, 79.3±9.61y; range, 50-94y; men: n=49; mean age ± SD, 77.03±9.81y; range, 50-94y; women: n=46; mean age ± SD, 75.42±9.73y; range, 51-93y) admitted for treatment between May 2011 and December 2012 were included. None. All patients were administered the JOFT and underwent fiberoptic endoscopic evaluation of swallowing (FEES). The mean jaw-opening strength was compared with aspiration (ASP) and pharyngeal residue observations of the FEES, which was used as the criterion standard. A receiver operating characteristic (ROC) curve analysis was performed. Forces of ≤3.2kg for men and ≤4kg for women were appropriate cutoff values for predicting ASP with a sensitivity and specificity of .57 and .79 for men and .93 and .52 for women, respectively. Based on the ROC analyses for predicting pharyngeal residue, forces of ≤5.3kg in men and ≤3.9kg in women were appropriate cutoff values, with a sensitivity and specificity of .80 and .88 for men and .83 and .81 for women, respectively. The JOFT could be a useful screening tool for predicting pharyngeal residue and could provide useful information to aid in the referral of patients for further diagnostic imaging testing. However, given its low sensitivity to ASP, the JOFT should be paired with other screening tests that predict ASP. Copyright © 2014 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.
Chronic and episodic acidification of Adirondack streams from acid rain in 2003-2005
Lawrence, G.B.; Roy, K.M.; Baldigo, Barry P.; Simonin, H.A.; Capone, S.B.; Sutherland, J.W.; Nierzwicki-Bauer, S. A.; Boylen, C.W.
2008-01-01
Limited information is available on streams in the Adirondack region of New York, although streams are more prone to acidification than the more studied Adirondack lakes. A stream assessment was therefore undertaken in the Oswegatchie and Black River drainages; an area of 4585 km2 in the western part of the Adirondack region. Acidification was evaluated with the newly developed base-cation surplus (BCS) and the conventional acid-neutralizing capacity by Gran titration (ANCG). During the survey when stream water was most acidic (March 2004), 105 of 188 streams (56%) were acidified based on the criterion of BCS < 0 μeq L-1, whereas 29% were acidified based on an ANCG value < 0 μeq L-1. During the survey when stream water was least acidic (August 2003), 15 of 129 streams (12%) were acidified based on the criterion of BCS < 0 μeq L-1, whereas 5% were acidified based on ANCG value < 0 μeq L-1. The contribution of acidic deposition to stream acidification was greater than that of strongly acidic organic acids in each of the surveys by factors ranging from approximately 2 to 5, but was greatest during spring snowmelt and least during elevated base flow in August. During snowmelt, the percentage attributable to acidic deposition was 81%, whereas during the October 2003 survey, when dissolved organic carbon (DOC) concentrations were highest, this percentage was 66%. The total length of stream reaches estimated to be prone to acidification was 718 km out of a total of 1237 km of stream reaches that were assessed. Copyright © 2008 by the American Society of Agronomy, Crop Science Society of America, and Soil Science Society of America. All rights reserved.
Orbital Evasive Target Tracking and Sensor Management
2012-03-30
The goal of sensor management is to maximize the total information gain in the observer-to-target assignment. We compare the information-based approach to the game-theoretic criterion for evasive target tracking with multiple space-borne observers. The results indicate that the game-theoretic approach is more effective than the information-based approach.
NASA Astrophysics Data System (ADS)
Bai, F.; Gagar, D.; Foote, P.; Zhao, Y.
2017-02-01
Acoustic Emission (AE) monitoring can be used to detect the presence of damage as well as determine its location in Structural Health Monitoring (SHM) applications. Information on the time difference of the signal generated by the damage event arriving at different sensors in an array is essential in performing localisation. Currently, this is determined using a fixed threshold, which is particularly prone to errors when not set to optimal values. This paper presents three new methods for determining the onset of AE signals without the need for a predetermined threshold. The performance of the techniques is evaluated using AE signals generated during fatigue crack growth and compared to the established Akaike Information Criterion (AIC) and fixed threshold methods. It was found that the 1D location accuracy of the new methods was within the range of <1-7.1% of the monitored region, compared with 2.7% for the AIC method and a range of 1.8-9.4% for the conventional fixed threshold method at different threshold levels.
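For reference, the established AIC onset picker that the new methods are benchmarked against can be sketched as follows (a Maeda-style formulation is assumed); the waveform is synthetic, and the three proposed threshold-free methods are not reproduced.

    # AIC onset picker: the onset is the sample minimizing the two-segment AIC.
    import numpy as np

    def aic_onset(x):
        n = len(x)
        aic = np.full(n, np.inf)
        for k in range(2, n - 2):
            v1, v2 = np.var(x[:k]), np.var(x[k:])
            if v1 > 0 and v2 > 0:
                aic[k] = k * np.log(v1) + (n - k - 1) * np.log(v2)
        return int(np.argmin(aic))          # sample index of the estimated onset

    rng = np.random.default_rng(1)
    noise = 0.05 * rng.normal(size=300)
    burst = np.concatenate([np.zeros(200),
                            np.sin(0.3 * np.arange(100)) * np.exp(-0.02 * np.arange(100))])
    print("estimated onset sample:", aic_onset(noise + burst))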
Persoskie, Alexander; Nguyen, Anh B.; Kaufman, Annette R.; Tworek, Cindy
2017-01-01
Beliefs about the relative harmfulness of one product compared to another (perceived relative harm) are central to research and regulation concerning tobacco and nicotine-containing products, but techniques for measuring such beliefs vary widely. We compared the validity of direct and indirect measures of perceived harm of e-cigarettes and smokeless tobacco (SLT) compared to cigarettes. On direct measures, participants explicitly compare the harmfulness of each product. On indirect measures, participants rate the harmfulness of each product separately, and ratings are compared. The U.S. Health Information National Trends Survey (HINTS-FDA-2015; N=3738) included direct measures of perceived harm of e-cigarettes and SLT compared to cigarettes. Indirect measures were created by comparing ratings of harm from e-cigarettes, SLT, and cigarettes on 3-point scales. Logistic regressions tested validity by assessing whether direct and indirect measures were associated with criterion variables including: ever-trying e-cigarettes, ever-trying snus, and SLT use status. Compared to the indirect measures, the direct measures of harm were more consistently associated with criterion variables. On direct measures, 26% of adults rated e-cigarettes as less harmful than cigarettes, and 11% rated SLT as less harmful than cigarettes. Direct measures appear to provide valid information about individuals’ harm beliefs, which may be used to inform research and tobacco control policy. Further validation research is encouraged. PMID:28073035
Minimal Polynomial Method for Estimating Parameters of Signals Received by an Antenna Array
NASA Astrophysics Data System (ADS)
Ermolaev, V. T.; Flaksman, A. G.; Elokhin, A. V.; Kuptsov, V. V.
2018-01-01
The effectiveness of the projection minimal polynomial method for solving the problem of determining the number of sources of signals acting on an antenna array (AA) with an arbitrary configuration and their angular directions has been studied. The method proposes estimating the degree of the minimal polynomial of the correlation matrix (CM) of the input process in the AA on the basis of a statistically validated root-mean-square criterion. Special attention is paid to the case of the ultrashort sample of the input process when the number of samples is considerably smaller than the number of AA elements, which is important for multielement AAs. It is shown that the proposed method is more effective in this case than methods based on the AIC (Akaike's Information Criterion) or minimum description length (MDL) criterion.
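The AIC and MDL source-enumeration rules mentioned here can be sketched from the eigenvalues of the sample correlation matrix (a Wax-Kailath style formulation is assumed; the eigenvalues below are invented, and the proposed minimal polynomial method itself is not reproduced).

    # AIC/MDL estimates of the number of sources from sample CM eigenvalues.
    import numpy as np

    def aic_mdl_sources(eigvals, n_snapshots):
        lam = np.sort(eigvals)[::-1]
        m = len(lam)
        aic, mdl = [], []
        for k in range(m):
            tail = lam[k:]
            ratio = tail.mean() / np.exp(np.mean(np.log(tail)))  # arithmetic/geometric mean
            ll = n_snapshots * (m - k) * np.log(ratio)
            aic.append(2 * ll + 2 * k * (2 * m - k))
            mdl.append(ll + 0.5 * k * (2 * m - k) * np.log(n_snapshots))
        return int(np.argmin(aic)), int(np.argmin(mdl))

    # two strong sources plus a noise floor on an 8-element array, 100 snapshots
    eigvals = np.array([12.0, 7.5, 1.1, 1.05, 1.0, 0.98, 0.95, 0.9])
    print(aic_mdl_sources(eigvals, 100))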
Suboptimal Decision Criteria Are Predicted by Subjectively Weighted Probabilities and Rewards
Ackermann, John F.; Landy, Michael S.
2014-01-01
Subjects performed a visual detection task in which the probability of target occurrence at each of the two possible locations, and the rewards for correct responses for each, were varied across conditions. To maximize monetary gain, observers should bias their responses, choosing one location more often than the other in line with the varied probabilities and rewards. Typically, and in our task, observers do not bias their responses to the extent they should, and instead distribute their responses more evenly across locations, a phenomenon referred to as ‘conservatism.’ We investigated several hypotheses regarding the source of the conservatism. We measured utility and probability weighting functions under Prospect Theory for each subject in an independent economic choice task and used the weighting-function parameters to calculate each subject’s subjective utility (SU(c)) as a function of the criterion c, and the corresponding weighted optimal criteria (wcopt). Subjects’ criteria were not close to optimal relative to wcopt. The slope of SU (c) and of expected gain EG(c) at the neutral criterion corresponding to β = 1 were both predictive of subjects’ criteria. The slope of SU(c) was a better predictor of observers’ decision criteria overall. Thus, rather than behaving optimally, subjects move their criterion away from the neutral criterion by estimating how much they stand to gain by such a change based on the slope of subjective gain as a function of criterion, using inherently distorted probabilities and values. PMID:25366822
Suboptimal decision criteria are predicted by subjectively weighted probabilities and rewards.
Ackermann, John F; Landy, Michael S
2015-02-01
Subjects performed a visual detection task in which the probability of target occurrence at each of the two possible locations, and the rewards for correct responses for each, were varied across conditions. To maximize monetary gain, observers should bias their responses, choosing one location more often than the other in line with the varied probabilities and rewards. Typically, and in our task, observers do not bias their responses to the extent they should, and instead distribute their responses more evenly across locations, a phenomenon referred to as 'conservatism.' We investigated several hypotheses regarding the source of the conservatism. We measured utility and probability weighting functions under Prospect Theory for each subject in an independent economic choice task and used the weighting-function parameters to calculate each subject's subjective utility (SU(c)) as a function of the criterion c, and the corresponding weighted optimal criteria (wc_opt). Subjects' criteria were not close to optimal relative to wc_opt. The slope of SU(c) and of expected gain EG(c) at the neutral criterion corresponding to β = 1 were both predictive of the subjects' criteria. The slope of SU(c) was a better predictor of observers' decision criteria overall. Thus, rather than behaving optimally, subjects move their criterion away from the neutral criterion by estimating how much they stand to gain by such a change based on the slope of subjective gain as a function of criterion, using inherently distorted probabilities and values.
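To make the expected-gain-versus-criterion idea concrete, the sketch below computes EG(c) for an equal-variance Gaussian detection task with unequal priors and rewards and locates the gain-maximizing criterion. The sensitivity, prior, and payoffs are assumed illustrative values, and the prospect-theoretic weighting that defines SU(c) in the paper is not included.

    # Expected gain as a function of the decision criterion c.
    import numpy as np
    from scipy.stats import norm

    d_prime, p_signal = 1.5, 0.25        # sensitivity and signal prior (assumed)
    reward_hit, reward_cr = 2.0, 1.0     # payoffs for hits and correct rejections (assumed)

    c = np.linspace(-3, 4, 701)          # candidate criteria
    eg = (p_signal * reward_hit * (1 - norm.cdf(c - d_prime))
          + (1 - p_signal) * reward_cr * norm.cdf(c))
    print("gain-maximizing criterion:", c[np.argmax(eg)])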
A new criterion needed to evaluate reliability of digital protective relays
NASA Astrophysics Data System (ADS)
Gurevich, Vladimir
2012-11-01
There is a wide range of criteria and features for evaluating reliability in engineering; but as many as there are, only one of them has been chosen to evaluate reliability of Digital Protective Relays (DPR) in the technical documentation: Mean (operating) Time Between Failures (MTBF), which has gained universal currency and has been specified in technical manuals, information sheets, tender documentation as the key indicator of DPR reliability. But is the choice of this criterion indeed wise? The answer to this question is being sought by the author of this article.
Stephen R. Shifley; Francisco X. Aguilar; Nianfu Song; Susan I. Stewart; David J. Nowak; Dale D. Gormanson; W. Keith Moser; Sherri Wormstead; Eric J. Greenfield
2012-01-01
Forests provide an array of products and services that maintain and enhance benefits to our society and economy. Benefits derived from forests may be categorized into wood products, nontimber products and services, and ecosystem services. The value and volume of these products and services indicate the importance of forests for a wide variety of uses.Tracking values,...
Joseph Buongiorno; Mo Zhou; Craig Johnston
2017-01-01
Markov decision process models were extended to reflect some consequences of the risk attitude of forestry decision makers. One approach consisted of maximizing the expected value of a criterion subject to an upper bound on the variance or, symmetrically, minimizing the variance subject to a lower bound on the expected value. The other method used the certainty...
Yee, Chee-Seng; Farewell, Vernon; Isenberg, David A; Rahman, Anisur; Teh, Lee-Suan; Griffiths, Bridget; Bruce, Ian N; Ahmad, Yasmeen; Prabu, Athiveeraramapandian; Akil, Mohammed; McHugh, Neil; D'Cruz, David; Khamashta, Munther A; Maddison, Peter; Gordon, Caroline
2007-01-01
Objective To determine the construct and criterion validity of the British Isles Lupus Assessment Group 2004 (BILAG-2004) index for assessing disease activity in systemic lupus erythematosus (SLE). Methods Patients with SLE were recruited into a multicenter cross-sectional study. Data on SLE disease activity (scores on the BILAG-2004 index, Classic BILAG index, and Systemic Lupus Erythematosus Disease Activity Index 2000 [SLEDAI-2K]), investigations, and therapy were collected. Overall BILAG-2004 and overall Classic BILAG scores were determined by the highest score achieved in any of the individual systems in the respective index. Erythrocyte sedimentation rates (ESRs), C3 levels, C4 levels, anti–double-stranded DNA (anti-dsDNA) levels, and SLEDAI-2K scores were used in the analysis of construct validity, and increase in therapy was used as the criterion for active disease in the analysis of criterion validity. Statistical analyses were performed using ordinal logistic regression for construct validity and logistic regression for criterion validity. Sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) were calculated. Results Of the 369 patients with SLE, 92.7% were women, 59.9% were white, 18.4% were Afro-Caribbean and 18.4% were South Asian. Their mean ± SD age was 41.6 ± 13.2 years and mean disease duration was 8.8 ± 7.7 years. More than 1 assessment was obtained on 88.6% of the patients, and a total of 1,510 assessments were obtained. Increasing overall scores on the BILAG-2004 index were associated with increasing ESRs, decreasing C3 levels, decreasing C4 levels, elevated anti-dsDNA levels, and increasing SLEDAI-2K scores (all P < 0.01). Increase in therapy was observed more frequently in patients with overall BILAG-2004 scores reflecting higher disease activity. Scores indicating active disease (overall BILAG-2004 scores of A and B) were significantly associated with increase in therapy (odds ratio [OR] 19.3, P < 0.01). The BILAG-2004 and Classic BILAG indices had comparable sensitivity, specificity, PPV, and NPV. Conclusion These findings show that the BILAG-2004 index has construct and criterion validity. PMID:18050213
SU-E-T-20: A Correlation Study of 2D and 3D Gamma Passing Rates for Prostate IMRT Plans
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhang, D; Sun Yat-sen University Cancer Center, Guangzhou, Guangdong; Wang, B
2015-06-15
Purpose: To investigate the correlation between the two-dimensional gamma passing rate (2D %GP) and three-dimensional gamma passing rate (3D %GP) in prostate IMRT quality assurance. Methods: Eleven prostate IMRT plans were randomly selected from the clinical database and were used to obtain dose distributions in the phantom and patient. Three types of delivery errors (MLC bank sag errors, central MLC errors and monitor unit errors) were intentionally introduced to modify the clinical plans through an in-house Matlab program. This resulted in 187 modified plans. The 2D %GP and 3D %GP were analyzed using different dose-difference and distance-to-agreement criteria (1%-1mm, 2%-2mm and 3%-3mm) and a 20% dose threshold. The 2D %GP and 3D %GP were then compared not only for the whole region, but also for the PTVs and critical structures using the statistical Pearson's correlation coefficient (γ). Results: For different delivery errors, the average comparison of 2D %GP and 3D %GP showed different conclusions. The statistical correlation coefficients between 2D %GP and 3D %GP for the whole dose distribution showed that except for 3%/3mm criterion, 2D %GP and 3D %GP of 1%/1mm criterion and 2%/2mm criterion had strong correlations (Pearson's γ value >0.8). Compared with the whole region, the correlations of 2D %GP and 3D %GP for PTV were better (the γ value for 1%/1mm, 2%/2mm and 3%/3mm criterion was 0.959, 0.931 and 0.855, respectively). However, for the rectum, there was no correlation between 2D %GP and 3D %GP. Conclusion: For prostate IMRT, the correlation between 2D %GP and 3D %GP for the PTV is better than that for normal structures. The lower dose-difference and DTA criterion shows less difference between 2D %GP and 3D %GP. Other factors such as the dosimeter characteristics and TPS algorithm bias may also influence the correlation between 2D %GP and 3D %GP.
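As a minimal illustration of how a gamma passing rate is obtained, the sketch below evaluates a 1D global gamma index (Low-style) on synthetic profiles; clinical 2D/3D implementations add dose interpolation, thresholds, and local/global normalization options not shown here.

    # 1D global gamma index and passing rate for a given %/mm criterion.
    import numpy as np

    def gamma_passing_rate(dose_ref, dose_eval, positions, dd=0.03, dta=3.0):
        d_max = dose_ref.max()
        gammas = []
        for x_r, d_r in zip(positions, dose_ref):
            dist = (positions - x_r) / dta            # spatial term (mm)
            diff = (dose_eval - d_r) / (dd * d_max)   # global dose-difference term
            gammas.append(np.sqrt(dist ** 2 + diff ** 2).min())
        return 100.0 * np.mean(np.array(gammas) <= 1.0)

    x = np.linspace(0, 100, 201)                      # positions in mm
    ref = np.exp(-((x - 50) / 20) ** 2)               # reference profile
    ev  = np.exp(-((x - 51) / 20) ** 2) * 1.01        # slightly shifted, rescaled delivery
    print(f"gamma passing rate (3%/3mm): {gamma_passing_rate(ref, ev, x):.1f}%")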
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hong, X; Gao, H; Schuemann, J
2015-06-15
Purpose: The Monte Carlo (MC) method is a gold standard for dose calculation in radiotherapy. However, it is not a priori clear how many particles need to be simulated to achieve a given dose accuracy. Prior error estimate and stopping criterion are not well established for MC. This work aims to fill this gap. Methods: Due to the statistical nature of MC, our approach is based on the one-sample t-test. We design the prior error estimate method based on the t-test, and then use this t-test based error estimate for developing a simulation stopping criterion. The three major components are as follows. First, the source particles are randomized in energy, space and angle, so that the dose deposition from a particle to the voxel is independent and identically distributed (i.i.d.). Second, a sample under consideration in the t-test is the mean value of dose deposition to the voxel by a sufficiently large number of source particles. Then, according to the central limit theorem, the sample as the mean value of i.i.d. variables is normally distributed with the expectation equal to the true deposited dose. Third, the t-test is performed with the null hypothesis that the difference between the sample expectation (the same as the true deposited dose) and the on-the-fly calculated mean sample dose from MC is larger than a given error threshold, in addition to which users have the freedom to specify confidence probability and region of interest in the t-test based stopping criterion. Results: The method is validated for proton dose calculation. The difference between the MC result based on the t-test prior error estimate and the statistical result obtained by repeating numerous MC simulations is within 1%. Conclusion: The t-test based prior error estimate and stopping criterion are developed for MC and validated for proton dose calculation. Xiang Hong and Hao Gao were partially supported by the NSFC (#11405105), the 973 Program (#2015CB856000) and the Shanghai Pujiang Talent Program (#14PJ1404500).
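One way to read the proposed stopping criterion is sketched below: batches of particle histories are added until the t-based confidence half-width of the mean dose in a voxel falls below a user threshold. This is an illustrative reading under stated assumptions, not the authors' exact hypothesis test; the sampling function is synthetic.

    # t-based stopping rule for a Monte Carlo mean-dose estimate in one voxel.
    import numpy as np
    from scipy.stats import t

    def run_until_converged(sample_batch, threshold, confidence=0.95, max_batches=10000):
        doses = []
        for _ in range(max_batches):
            doses.append(sample_batch())
            n = len(doses)
            if n < 2:
                continue
            sem = np.std(doses, ddof=1) / np.sqrt(n)
            half_width = t.ppf(0.5 + confidence / 2, n - 1) * sem
            if half_width < threshold:
                break
        return np.mean(doses), len(doses)

    rng = np.random.default_rng(2)
    mean_dose, n_used = run_until_converged(lambda: rng.gamma(2.0, 0.5), threshold=0.02)
    print(f"mean dose estimate {mean_dose:.3f} after {n_used} batches")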
The Extended Erlang-Truncated Exponential distribution: Properties and application to rainfall data.
Okorie, I E; Akpanta, A C; Ohakwe, J; Chikezie, D C
2017-06-01
The Erlang-Truncated Exponential (ETE) distribution is modified and the new lifetime distribution is called the Extended Erlang-Truncated Exponential (EETE) distribution. Some statistical and reliability properties of the new distribution are given and the method of maximum likelihood was proposed for estimating the model parameters. The usefulness and flexibility of the EETE distribution was illustrated with an uncensored data set and its fit was compared with that of the ETE and three other three-parameter distributions. Results based on the minimized log-likelihood ([Formula: see text]), Akaike information criterion (AIC), Bayesian information criterion (BIC) and the generalized Cramér-von Mises [Formula: see text] statistics show that the EETE distribution provides a more reasonable fit than the other competing distributions.
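The model-ranking step can be illustrated with standard lifetime distributions available in scipy (the EETE distribution itself is not implemented there, so the exponential, gamma, and Weibull stand in); AIC and BIC are computed from the maximized log-likelihood in the usual way, on synthetic data.

    # Rank candidate lifetime distributions by -2lnL, AIC and BIC.
    import numpy as np
    from scipy import stats

    data = stats.weibull_min(c=1.6, scale=2.0).rvs(size=200, random_state=0)

    for name, dist in [("exponential", stats.expon),
                       ("gamma", stats.gamma),
                       ("weibull", stats.weibull_min)]:
        params = dist.fit(data)
        loglik = np.sum(dist.logpdf(data, *params))
        k = len(params)                       # number of fitted parameters
        aic = 2 * k - 2 * loglik
        bic = k * np.log(len(data)) - 2 * loglik
        print(f"{name:12s} -2lnL={-2 * loglik:8.2f}  AIC={aic:8.2f}  BIC={bic:8.2f}")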
NASA Astrophysics Data System (ADS)
Kellici, Tahsin F.; Ntountaniotis, Dimitrios; Vanioti, Marianna; Golic Grdadolnik, Simona; Simcic, Mihael; Michas, Georgios; Moutevelis-Minakakis, Panagiota; Mavromoustakos, Thomas
2017-02-01
During the synthesis of new pyrrolidinone analogs possessing biological activity it is intriguing to assign their absolute stereochemistry as it is well known that drug potency is influenced by the stereochemistry. The combination of J-coupling information with theoretical results was used in order to establish their total stereochemistry when the chiral center of the starting material has known absolute stereochemistry. The J-coupling can be used as a sole criterion for novel synthetic analogs to identify the right stereochemistry. This approach is extremely useful especially in the case of analogs whose 2D NOESY spectra cannot provide this information. Few synthetic examples are given to prove the significance of this approach.
Morgado, José Mário T; Sánchez-Muñoz, Laura; Teodósio, Cristina G; Jara-Acevedo, Maria; Alvarez-Twose, Iván; Matito, Almudena; Fernández-Nuñez, Elisa; García-Montero, Andrés; Orfao, Alberto; Escribano, Luís
2012-04-01
Aberrant expression of CD2 and/or CD25 by bone marrow, peripheral blood or other extracutaneous tissue mast cells is currently used as a minor World Health Organization diagnostic criterion for systemic mastocytosis. However, the diagnostic utility of CD2 versus CD25 expression by mast cells has not been prospectively evaluated in a large series of systemic mastocytosis. Here we evaluate the sensitivity and specificity of CD2 versus CD25 expression in the diagnosis of systemic mastocytosis. Mast cells from a total of 886 bone marrow and 153 other non-bone marrow extracutaneous tissue samples were analysed by multiparameter flow cytometry following the guidelines of the Spanish Network on Mastocytosis at two different laboratories. The 'CD25+ and/or CD2+ bone marrow mast cells' World Health Organization criterion showed an overall sensitivity of 100% with 99.0% specificity for the diagnosis of systemic mastocytosis whereas CD25 expression alone presented a similar sensitivity (100%) with a slightly higher specificity (99.2%). Inclusion of CD2 did not improve the sensitivity of the test and it decreased its specificity. In tissues other than bone marrow, the mast cell phenotypic criterion revealed to be less sensitive. In summary, CD2 expression does not contribute to improve the diagnosis of systemic mastocytosis when compared with aberrant CD25 expression alone, which supports the need to update and replace the minor World Health Organization 'CD25+ and/or CD2+' mast cell phenotypic diagnostic criterion by a major criterion based exclusively on CD25 expression.
Criterion for correct recalls in associative-memory neural networks
NASA Astrophysics Data System (ADS)
Ji, Han-Bing
1992-12-01
A novel weighted outer-product learning (WOPL) scheme for associative memory neural networks (AMNNs) is presented. In the scheme, each fundamental memory is allocated a learning weight to direct its correct recall. Both the Hopfield and multiple training models are instances of the WOPL model with certain sets of learning weights. A necessary condition of choosing learning weights for the convergence property of the WOPL model is obtained through neural dynamics. A criterion for choosing learning weights for correct associative recalls of the fundamental memories is proposed. In this paper, an important parameter called signal to noise ratio gain (SNRG) is devised, and it is found out empirically that SNRGs have their own threshold values which means that any fundamental memory can be correctly recalled when its corresponding SNRG is greater than or equal to its threshold value. Furthermore, a theorem is given and some theoretical results on the conditions of SNRGs and learning weights for good associative recall performance of the WOPL model are accordingly obtained. In principle, when all SNRGs or learning weights chosen satisfy the theoretically obtained conditions, the asymptotic storage capacity of the WOPL model will grow at the greatest rate under certain known stochastic meaning for AMNNs, and thus the WOPL model can achieve correct recalls for all fundamental memories. The representative computer simulations confirm the criterion and theoretical analysis.
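A minimal sketch of weighted outer-product learning follows: each fundamental memory contributes its own learning weight to the connection matrix, and recall proceeds by synchronous sign updates. The SNRG-based rule for choosing the weights is not reproduced; the patterns and weights are illustrative.

    # Weighted outer-product learning (WOPL) and recall in a Hopfield-type network.
    import numpy as np

    def wopl_matrix(patterns, learning_weights):
        n = patterns.shape[1]
        w = np.zeros((n, n))
        for p, lw in zip(patterns, learning_weights):
            w += lw * np.outer(p, p)
        np.fill_diagonal(w, 0.0)                 # no self-connections
        return w

    def recall(w, probe, steps=20):
        state = probe.astype(float).copy()
        for _ in range(steps):
            state = np.sign(w @ state)
            state[state == 0] = 1.0
        return state

    rng = np.random.default_rng(3)
    memories = rng.choice([-1, 1], size=(3, 64))          # three fundamental memories
    w = wopl_matrix(memories, learning_weights=[1.0, 1.5, 2.0])
    noisy = memories[2].copy()
    noisy[:6] *= -1                                       # flip a few bits
    print("recovered:", np.array_equal(recall(w, noisy), memories[2]))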
Rufli, Hans
2012-05-01
It has become common practice in many laboratories in Europe to introduce the criterion "moribund" to reduce suffering in fish acute lethality tests. Fish with severe sublethal symptoms might be declared moribund and are removed from the test as soon as this occurs (premature discontinuation of the experiment). Moribund fish affect main study outcomes, as the median lethal concentration (LC50) derived from fish declared as moribund may be lower than the conventional LC50. This was evaluated by a retrospective analysis of 328 fish acute toxicity tests of an industry laboratory based on five different definitions of moribund, and of 111 tests from 10 other laboratories from Europe and the United States. Using the moribund criterion, 10 to 23% of the fish were declared moribund in 49 to 79% of the studies. In 36 to 52% of the studies, the LC50(moribund) was lower than the conventional LC50, depending on the definition of moribund. Inclusion of the moribund criterion in an updated Organisation for Economic Cooperation and Development guideline for the acute fish toxicity test would reduce the period of suffering by up to 92 h, lowering the value of the main toxicity endpoint by a factor of approximately 2 and maximally by a factor of approximately 16. Copyright © 2012 SETAC.
Evaluation of cluster expansions and correlated one-body properties of nuclei
NASA Astrophysics Data System (ADS)
Moustakidis, Ch. C.; Massen, S. E.; Panos, C. P.; Grypeos, M. E.; Antonov, A. N.
2001-07-01
Three different cluster expansions for the evaluation of correlated one-body properties of s-p and s-d shell nuclei are compared. Harmonic oscillator wave functions and Jastrow-type correlations are used, while analytical expressions are obtained for the charge form factor, density distribution, and momentum distribution by truncating the expansions and using a standard Jastrow correlation function f. The harmonic oscillator parameter b and the correlation parameter β have been determined by a least-squares fit to the experimental charge form factors in each case. The information entropies of nuclei in position space (Sr) and momentum space (Sk) according to the three methods are also calculated. It is found that the larger the entropy sum, S=Sr+Sk (the net information content of the system), the smaller the values of χ2. This indicates that maximal S is a criterion of the quality of a given nuclear model, according to the maximum entropy principle. Only two exceptions to this rule, out of many cases examined, were found. Finally, an analytic expression for the so-called "healing" or "wound" integrals is derived with the function f considered, for any state of the relative two-nucleon motion, and their values in certain cases are computed and compared.
What constitutes evidence-based patient information? Overview of discussed criteria.
Bunge, Martina; Mühlhauser, Ingrid; Steckelberg, Anke
2010-03-01
To survey quality criteria for evidence-based patient information (EBPI) and to compile the evidence for the identified criteria. Databases PubMed, Cochrane Library, PsycINFO, PSYNDEX and Education Research Information Center (ERIC) were searched to update the pool of criteria for EBPI. A subsequent search aimed to identify evidence for each criterion. Only studies on health issues with cognitive outcome measures were included. Evidence for each criterion is presented using descriptive methods. 3 systematic reviews, 24 randomized-controlled studies and 1 non-systematic review were included. Presentation of numerical data, verbal presentation of risks and diagrams, graphics and charts are based on good evidence. Content of information and meta-information, loss- and gain-framing and patient-oriented outcome measures are based on ethical guidelines. There is a lack of studies on quality of evidence, pictures and drawings, patient narratives, cultural aspects, layout, language and development process. The results of this review allow specification of EBPI and may help to advance the discourse among related disciplines. Research gaps are highlighted. Findings outline the type and extent of content of EBPI, guide the presentation of information and describe the development process. Copyright 2009 Elsevier Ireland Ltd. All rights reserved.
Rogé, Joceline; Gabaude, Catherine
2009-08-01
The goal of this study was to establish whether the deterioration of the useful visual field due to sleep deprivation and age in a screen monitoring activity could be explained by a decrease in perceptual sensitivity and/or a modification of the participant's decision criterion (two indices derived from signal detection theory). In the first experiment, a comparison of three age groups (young, middle-aged, elderly) showed that perceptual sensitivity decreased with age and that the decision criterion became more conservative. In the second experiment, measurement of the useful visual field was carried out on participants who had been deprived of sleep the previous night or had a complete night of sleep. Perceptual sensitivity significantly decreased with sleep debt, and sleep deprivation provoked an increase in the participants' decision criterion. Moreover, the comparison of two age groups (young, middle-aged) indicated that sensitivity decreased with age. The value of using these two indices to explain the deterioration of useful visual field is discussed.
Analysis of significant factors for dengue fever incidence prediction.
Siriyasatien, Padet; Phumee, Atchara; Ongruk, Phatsavee; Jampachaisri, Katechan; Kesorn, Kraisak
2016-04-16
Many popular dengue forecasting techniques have been used by several researchers to extrapolate dengue incidence rates, including the K-H model, support vector machines (SVM), and artificial neural networks (ANN). The time series analysis methodology, particularly ARIMA and SARIMA, has been increasingly applied to the field of epidemiological research for dengue fever, dengue hemorrhagic fever, and other infectious diseases. The main drawback of these methods is that they do not consider other variables that are associated with the dependent variable. Additionally, new factors correlated to the disease are needed to enhance the prediction accuracy of the model when it is applied to areas of similar climates, where weather factors such as temperature, total rainfall, and humidity are not substantially different. Such drawbacks may consequently lower the predictive power for the outbreak. The predictive power of the forecasting model, assessed by Akaike's information criterion (AIC), Bayesian information criterion (BIC), and the mean absolute percentage error (MAPE), is improved by including the new parameters for dengue outbreak prediction. This study's selected model outperforms all three other competing models with the lowest AIC, the lowest BIC, and a small MAPE value. The exclusive use of climate factors from similar locations decreases a model's prediction power. The multivariate Poisson regression, however, effectively forecasts even when climate variables are slightly different. Female mosquitoes and seasons were strongly correlated with dengue cases. Therefore, the dengue incidence trends provided by this model will assist the optimization of dengue prevention. The present work demonstrates the important roles of female mosquito infection rates from the previous season and climate factors (represented as seasons) in dengue outbreaks. Incorporating these two factors in the model significantly improves the predictive power of dengue hemorrhagic fever forecasting models, as confirmed by AIC, BIC, and MAPE.
Villadiego, Faider Alberto Castaño; Camilo, Breno Soares; León, Victor Gomez; Peixoto, Thiago; Díaz, Edgar; Okano, Denise; Maitan, Paula; Lima, Daniel; Guimarães, Simone Facioni; Siqueira, Jeanne Broch; Pinho, Rogério
2018-01-01
Nonlinear mixed models were used to describe longitudinal scrotal circumference (SC) measurements of Nellore bulls. Model comparisons were based on Akaike’s information criterion, Bayesian information criterion, error sum of squares, adjusted R2 and percentage of convergence. Sequentially, the best model was used to compare the SC growth curve in bulls divergently classified according to SC at 18–21 months of age. For this, bulls were classified into five groups: SC < 28cm; 28cm ≤ SC < 30cm, 30cm ≤ SC < 32cm, 32cm ≤ SC < 34cm and SC ≥ 34cm. The Michaelis-Menten model showed the best fit according to the mentioned criteria. In this model, β1 is the asymptotic SC value and β2 represents the time to half-final growth and may be related to sexual precocity. Parameters of the individual estimated growth curves were used to create a new dataset to evaluate the effect of the classification, farms, and year of birth on the β1 and β2 parameters. Bulls of the largest SC group presented a larger predicted SC along all analyzed periods; nevertheless, the smallest SC group showed predicted SC similar to the intermediate SC groups (28cm ≤ SC < 32cm) around 1200 days of age. In this context, bulls classified as improper for reproduction at 18–21 months old can reach a condition similar to those considered in good condition. In terms of classification at 18–21 months, asymptotic SC was similar among groups, farms and years; however, β2 differed among groups, indicating that differences in growth curves are related to sexual precocity. In summary, it seems that selection based on SC at too early an age may lead to discarding bulls with suitable reproductive potential. PMID:29494597
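The Michaelis-Menten form named here, SC(t) = β1·t/(β2 + t), can be fitted to a single animal's longitudinal measurements with nonlinear least squares as sketched below; the data points are invented, and the paper's nonlinear mixed model (with animal-level random effects) is not reproduced.

    # Fit SC(t) = b1 * t / (b2 + t) by nonlinear least squares.
    import numpy as np
    from scipy.optimize import curve_fit

    def michaelis_menten(t, b1, b2):
        return b1 * t / (b2 + t)

    age_days = np.array([240, 330, 420, 510, 600, 720, 900, 1100])
    sc_cm    = np.array([18.0, 22.5, 26.0, 28.5, 30.0, 32.0, 33.5, 34.5])

    (b1, b2), _ = curve_fit(michaelis_menten, age_days, sc_cm, p0=[40.0, 300.0])
    print(f"asymptotic SC b1 = {b1:.1f} cm, time to half-final growth b2 = {b2:.0f} days")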
Medicine, methodology, and values: trade-offs in clinical science and practice.
Ho, Vincent K Y
2011-01-01
The current guidelines of evidence-based medicine (EBM) presuppose that clinical research and clinical practice should advance from rigorous scientific tests as they generate reliable, value-free knowledge. Under this presupposition, hypotheses postulated by doctors and patients in the process of their decision making are preferably tested in randomized clinical trials (RCTs), and in systematic reviews and meta-analyses summarizing outcomes from multiple RCTs. Since testing under this scheme is predominantly focused on the criteria of generality and precision achieved through methodological rigor, at the cost of the criterion of realism, translating test results to clinical practice is often problematic. Choices concerning which methodological criteria should have priority are inevitable, however, as clinical trials, and scientific research in general, cannot meet all relevant criteria at the same time. Since these choices may be informed by considerations external to science, we must acknowledge that science cannot be value-free in a strict sense, and this invites a more prominent role for value-laden considerations in evaluating clinical research. The urgency for this becomes even more apparent when we consider the important yet implicit role of scientific theories in EBM, which may also be subjected to methodological evaluation and for which selectiveness in methodological focus is likewise inevitable.
Genome-Wide Meta-Analysis of Longitudinal Alcohol Consumption Across Youth and Early Adulthood.
Adkins, Daniel E; Clark, Shaunna L; Copeland, William E; Kennedy, Martin; Conway, Kevin; Angold, Adrian; Maes, Hermine; Liu, Youfang; Kumar, Gaurav; Erkanli, Alaattin; Patkar, Ashwin A; Silberg, Judy; Brown, Tyson H; Fergusson, David M; Horwood, L John; Eaves, Lindon; van den Oord, Edwin J C G; Sullivan, Patrick F; Costello, E J
2015-08-01
The public health burden of alcohol is unevenly distributed across the life course, with levels of use, abuse, and dependence increasing across adolescence and peaking in early adulthood. Here, we leverage this temporal patterning to search for common genetic variants predicting developmental trajectories of alcohol consumption. Comparable psychiatric evaluations measuring alcohol consumption were collected in three longitudinal community samples (N=2,126, obs=12,166). Repeated consumption measurements spanning adolescence and early adulthood were analyzed using linear mixed models, estimating individual consumption trajectories, which were then tested for association with Illumina 660W-Quad genotype data (866,099 SNPs after imputation and QC). Association results were combined across samples using standard meta-analysis methods. Four meta-analysis associations satisfied our pre-determined genome-wide significance criterion (FDR<0.1) and six others met our 'suggestive' criterion (FDR<0.2). Genome-wide significant associations were highly biologically plausible, including associations within GABA transporter 1, SLC6A1 (solute carrier family 6, member 1), and exonic hits in LOC100129340 (mitofusin-1-like). Pathway analyses elaborated single marker results, indicating significantly enriched associations to intuitive biological mechanisms, including neurotransmission, xenobiotic pharmacodynamics, and nuclear hormone receptors (NHR). These findings underscore the value of combining longitudinal behavioral data and genome-wide genotype information in order to study developmental patterns and improve statistical power in genomic studies.
The ethical duty to preserve the quality of scientific information
NASA Astrophysics Data System (ADS)
Arattano, Massimo; Gatti, Albertina; Eusebio, Elisa
2016-04-01
The commitment to communicate and disseminate the knowledge acquired during his/her professional activity is certainly one of the ethical duties of the geologist. However, nowadays, in the Internet era, the spreading of knowledge involves potential risks that the geologist should be aware of. These risks require a careful analysis aimed at mitigating their effects. The Internet may in fact contribute to spreading (e.g. through websites like Wikipedia) information that is badly or even incorrectly presented. The final result could be an impediment to the diffusion of knowledge and a reduction of its effectiveness, which is precisely the opposite of the goal that a geologist should pursue. Specific criteria aimed at recognizing incorrect or inadequate information would therefore be extremely useful. Their development and application might avoid, or at least reduce, the above-mentioned risk. Ideally, such criteria could also be used to develop specific algorithms to automatically verify the quality of information available all over the Internet. A possible criterion for the quality control of knowledge and scientific information is presented here. An example of its application in the field of geology is provided, to verify and correct a piece of information available on the Internet. The proposed criterion could also be used to simplify scientific information and increase its informative efficacy.
Rogeberg, Ole; Bergsvik, Daniel; Phillips, Lawrence D; van Amsterdam, Jan; Eastwood, Niamh; Henderson, Graeme; Lynskey, Micheal; Measham, Fiona; Ponton, Rhys; Rolles, Steve; Schlag, Anne Katrin; Taylor, Polly; Nutt, David
2018-02-16
Drug policy, whether for legal or illegal substances, is a controversial field that encompasses many complex issues. Policies can have effects on a myriad of outcomes and stakeholders differ in the outcomes they consider and value, while relevant knowledge on policy effects is dispersed across multiple research disciplines making integrated judgements difficult. Experts on drug harms, addiction, criminology and drug policy were invited to a decision conference to develop a multi-criterion decision analysis (MCDA) model for appraising alternative regulatory regimes. Participants collectively defined regulatory regimes and identified outcome criteria reflecting ethical and normative concerns. For cannabis and alcohol separately, participants evaluated each regulatory regime on each criterion and weighted the criteria to provide summary scores for comparing different regimes. Four generic regulatory regimes were defined: absolute prohibition, decriminalisation, state control and free market. Participants also identified 27 relevant criteria which were organised into seven thematically related clusters. State control was the preferred regime for both alcohol and cannabis. The ranking of the regimes was robust to variations in the criterion-specific weights. The MCDA process allowed the participants to deconstruct complex drug policy issues into a set of simpler judgements that led to consensus about the results. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
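The weighted-sum aggregation at the core of an MCDA appraisal can be sketched as follows; the four regimes come from the abstract, but the criterion weights and scores are invented placeholders rather than the panel's elicited values.

    # Weighted-sum MCDA aggregation of regime scores.
    import numpy as np

    regimes = ["absolute prohibition", "decriminalisation", "state control", "free market"]
    weights = np.array([0.4, 0.35, 0.25])        # criterion weights, summing to 1 (illustrative)
    scores  = np.array([[30, 60, 80],            # rows: regimes, columns: criteria (0-100)
                        [55, 70, 60],
                        [85, 75, 70],
                        [50, 40, 90]])

    totals = scores @ weights
    for regime, total in sorted(zip(regimes, totals), key=lambda rt: -rt[1]):
        print(f"{regime:22s} {total:5.1f}")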
Health Effects Assessment for Acenaphthene
Because of the lack of data for the carcinogenicity and threshold toxicity of acenaphthene, risk assessment values cannot be derived. The ambient water quality criterion of 0.2 mg/l is based on organoleptic data, which has no known relationship to potential human health effects. A...
A Weight of Evidence Framework for Environmental Assessments: Inferring Quantities
Environmental assessments require the generation of quantitative parameters such as degradation rates and assessment products may be quantities such as criterion values or magnitudes of effects. When multiple data sets or outputs of multiple models are available, it may be appro...
Yu, Rongjie; Abdel-Aty, Mohamed
2013-07-01
The Bayesian inference method has been frequently adopted to develop safety performance functions. One advantage of Bayesian inference is that prior information for the independent variables can be included in the inference procedure. However, few studies have discussed how to formulate informative priors for the independent variables or evaluated the effects of incorporating informative priors in developing safety performance functions. This paper addresses this deficiency by introducing four approaches to developing informative priors for the independent variables based on historical data and expert experience. The merits of these informative priors have been tested along with two types of Bayesian hierarchical models (Poisson-gamma and Poisson-lognormal models). The deviance information criterion (DIC), R-square values, and coefficients of variation for the estimations were utilized as evaluation measures to select the best model(s). Comparison across the models indicated that the Poisson-gamma model is superior, with a better model fit, and is much more robust with the informative priors. Moreover, the two-stage Bayesian updating informative priors provided the best goodness-of-fit and coefficient estimation accuracies. Furthermore, informative priors for the inverse dispersion parameter have also been introduced and tested, and the effects of different types of informative priors on the model estimations and goodness-of-fit have been compared. Finally, based on the results, recommendations for future research topics and study applications are made. Copyright © 2013 Elsevier Ltd. All rights reserved.
Predictability of Seasonal Rainfall over the Greater Horn of Africa
NASA Astrophysics Data System (ADS)
Ngaina, J. N.
2016-12-01
The El Niño-Southern Oscillation (ENSO) is a primary mode of climate variability in the Greater Horn of Africa (GHA). The expected impacts of climate variability and change on water, agriculture, and food resources in GHA underscore the importance of reliable and accurate seasonal climate predictions. The study evaluated different model selection criteria, which included the coefficient of determination (R2), Akaike's Information Criterion (AIC), Bayesian Information Criterion (BIC), and the Fisher information approximation (FIA). A forecast scheme based on the optimal model was developed to predict the October-November-December (OND) and March-April-May (MAM) rainfall. The predictability of GHA rainfall based on ENSO was quantified using composite analysis, correlations and contingency tables. A test for field significance, accounting for the finiteness and interdependence of the spatial grid, was applied to avoid correlations arising by chance. The study identified FIA as the optimal model selection criterion. Overall, the more complex model selection criteria (FIA followed by BIC) performed better than the simpler approaches (R2 and AIC). Notably, operational seasonal rainfall predictions over the GHA make use of simple model selection procedures, e.g. R2. Rainfall is modestly predictable based on ENSO during the OND and MAM seasons. El Niño typically leads to wetter conditions during OND and drier conditions during MAM. The correlations of ENSO indices with rainfall are statistically significant for the OND and MAM seasons. Analysis based on contingency tables shows higher predictability of OND rainfall, with ENSO indices derived from Pacific and Indian Ocean sea surfaces yielding significant improvement during the OND season. The predictability based on ENSO is more robust on a decadal scale for OND rainfall than for MAM. An ENSO-based scheme built on an optimal model selection criterion can thus provide skillful rainfall predictions over GHA. This study concludes that the negative phase of ENSO (La Niña) leads to dry conditions, while the positive phase (El Niño) is associated with enhanced wet conditions.
ERIC Educational Resources Information Center
Just, David A.; Wircenski, Jerry L.
A study of female delinquent behavior used as data the responses from approximately 4,000 15- to 17-year-old civilian noninstitutionalized youth who participated in the 1980 New Youth Survey of the National Longitudinal Surveys of Labor Market Experience. Three criterion variables were used: work values, occupational aspirations, and labor force status.…
Performance index and meta-optimization of a direct search optimization method
NASA Astrophysics Data System (ADS)
Krus, P.; Ölvander, J.
2013-10-01
Design optimization is becoming an increasingly important tool for design, often using simulation as part of the evaluation of the objective function. A measure of the efficiency of an optimization algorithm is of great importance when comparing methods. The main contribution of this article is the introduction of a singular performance criterion, the entropy rate index based on Shannon's information theory, taking both reliability and rate of convergence into account. It can also be used to characterize the difficulty of different optimization problems. Such a performance criterion can also be used for optimization of the optimization algorithm itself. In this article the Complex-RF optimization method is described and its performance evaluated and optimized using the established performance criterion. Finally, in order to be able to predict the resources needed for optimization, an objective function temperament factor is defined that indicates the degree of difficulty of the objective function.
Development of a percutaneous penetration predictive model by SR-FTIR.
Jungman, E; Laugel, C; Rutledge, D N; Dumas, P; Baillet-Guffroy, A
2013-01-30
This work focused on developing a new evaluation criterion of percutaneous penetration, in complement to Log Pow and MW, based on high spatial resolution Fourier transformed infrared (FTIR) microspectroscopy with a synchrotron source (SR-FTIR). Classic Franz cell experiments were run and after 22 h molecule distribution in skin was determined either by HPLC or by SR-FTIR. HPLC data served as reference. HPLC and SR-FTIR results were compared and a new predictive criterion based on SR-FTIR results, named S(index), was determined using a multi-block data analysis technique (ComDim). A predictive cartography of the distribution of molecules in the skin was built and compared to the OECD predictive cartography. This new criterion S(index) and the cartography using SR-FTIR/HPLC results provide relevant information for risk analysis regarding prediction of percutaneous penetration and could be used to build a new mathematical model. Copyright © 2012 Elsevier B.V. All rights reserved.
A new perspective on trait differences between native and invasive exotic plants.
Leffler, A Joshua; James, Jeremy J; Monaco, Thomas A; Sheley, Roger L
2014-02-01
Functional differences between native and exotic species potentially constitute one factor responsible for plant invasion. Differences in trait values between native and exotic invasive species, however, should not be considered fixed and may depend on the context of the comparison. Furthermore, the magnitude of difference between native and exotic species necessary to trigger invasion is unknown. We propose a criterion that differences in trait values between a native and exotic invasive species must be greater than differences between co-occurring natives for this difference to be ecologically meaningful and a contributing factor to plant invasion. We used a meta-analysis to quantify the difference between native and exotic invasive species for various traits examined in previous studies and compared this value to differences among native species reported in the same studies. The effect size between native and exotic invasive species was similar to the effect size between co-occurring natives except for studies conducted in the field; in most instances, our criterion was not met although overall differences between native and exotic invasive species were slightly larger than differences between natives. Consequently, trait differences may be important in certain contexts, but other mechanisms of invasion are likely more important in most cases. We suggest that using trait values as predictors of invasion will be challenging.
Development and validation of a parent-report measure for detection of cognitive delay in infancy.
Schafer, Graham; Genesoni, Lucia; Boden, Greg; Doll, Helen; Jones, Rosamond A K; Gray, Ron; Adams, Eleri; Jefferson, Ros
2014-12-01
To develop a brief, parent-completed instrument (ERIC - Early Report by Infant Caregivers) for detection of cognitive delay in 10- to 24-month-olds born preterm, or of low birthweight, or with perinatal complications, and to establish ERIC's diagnostic properties. Scores for ERIC were collected from the parents of 317 children meeting at least one inclusion criterion (birthweight <1500 g, gestational age <34 completed weeks, 5-minute Apgar score <7, or presence of hypoxic-ischaemic encephalopathy) and no exclusion criteria. Children were assessed using a criterion score of below 80 on the Bayley Scales of Infant and Toddler Development-III cognitive scale. Items were retained according to their individual associations with delay. Sensitivity, specificity, and positive and negative predictive values were estimated and a truncated ERIC was developed for use in children <14 months old. ERIC correctly detected developmental delay in 17 out of 18 children in the sample, with 94.4% sensitivity, 76.9% specificity, 19.8% positive predictive value, 99.6% negative predictive value, 4.09 likelihood ratio positive, and 0.07 likelihood ratio negative. ERIC has potential value as a quickly administered diagnostic instrument for the absence of early cognitive delay in 10- to 24-month-old preterm infants and as a screen for cognitive delay. © 2014 Mac Keith Press.
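For reference, the reported diagnostic properties all follow from a single 2x2 table; a minimal sketch, with counts back-calculated from the proportions quoted above (so approximate and for illustration only), might be:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard diagnostic accuracy measures from a 2x2 table."""
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    lr_pos = sens / (1 - spec)
    lr_neg = (1 - sens) / spec
    return sens, spec, ppv, npv, lr_pos, lr_neg

# Counts inferred from the reported proportions: 17 of 18 delayed
# children detected, 317 children in total (approximate).
print(diagnostic_metrics(tp=17, fp=69, fn=1, tn=230))
# -> roughly (0.944, 0.769, 0.198, 0.996, 4.09, 0.07)
```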
Le Pen, C; Priol, G; Lilliu, H
2003-01-01
The criteria for the registration of new drugs may differ from the criteria for drug reimbursement. In 2000 the French government entrusted the French Medicines Agency with determining the "medical service rendered" (MSR) for each reimbursable drug. The goal was to determine which drugs could be classified with an "insufficient" MSR and therefore should be taken out of the scope of health insurance. We analyze the concepts and methods used for this evaluation and the kind of results that are obtained. We collected data on the result of the MSR classification and the criteria used to perform this classification (efficacy-safety, severity of the disease, place in the therapeutic strategy, existence of therapeutic alternatives, public health value) for a sample of 1453 drugs belonging to five therapeutic areas. We used statistical analysis to determine which criteria were the most influential. Only two criteria - efficacy and disease severity - suffice to very largely explain the MSR classification. The other criteria contribute little added value. Some of these criteria clearly suffer from a lack of clarification, leading to different interpretations according to therapeutic class or even according to drug or drug family. The evaluation procedure differs between therapeutic classes, at least at intermediate MSR levels. Analysis of the French experience with MSR shows that the evaluation procedure has not succeeded in completely breaking away from the traditional logic of marketing authorization and registration, as witnessed by the importance of the "efficacy/safety" criterion, the absence of an economic criterion, and the vagueness of the "public health value" criterion, which one would have thought would instead be decisive.
Wagner, Monika; Samaha, Dima; Khoury, Hanane; O'Neil, William M; Lavoie, Louis; Bennetts, Liga; Badgley, Danielle; Gabriel, Sylvie; Berthon, Anthony; Dolan, James; Kulke, Matthew H; Goetghebeur, Mireille
2018-01-01
Well- or moderately differentiated gastroenteropancreatic neuroendocrine tumors (GEP-NETs) are often slow-growing, and some patients with unresectable, asymptomatic, non-functioning tumors may face the choice between watchful waiting (WW), or somatostatin analogues (SSA) to delay progression. We developed a comprehensive multi-criteria decision analysis (MCDA) framework to help patients and physicians clarify their values and preferences, consider each decision criterion, and support communication and shared decision-making. The framework was adapted from a generic MCDA framework (EVIDEM) with patient and clinician input. During a workshop, patients and clinicians expressed their individual values and preferences (criteria weights) and, on the basis of two scenarios (treatment vs WW; SSA-1 [lanreotide] vs SSA-2 [octreotide]) with evidence from a literature review, expressed how consideration of each criterion would impact their decision in favor of either option (score), and shared their knowledge and insights verbally and in writing. The framework included benefit-risk criteria and modulating factors, such as disease severity, quality of evidence, costs, and constraints. With overall and progression-free survival being most important, criteria weights ranged widely, highlighting variations in individual values and the need to share them. Scoring and considering each criterion prompted a rich exchange of perspectives and uncovered individual assumptions and interpretations. At the group level, type of benefit, disease severity, effectiveness, and quality of evidence favored treatment; cost aspects favored WW (scenario 1). For scenario 2, most criteria did not favor either option. Patients and clinicians consider many aspects in decision-making. The MCDA framework provided a common interpretive frame to structure this complexity, support individual reflection, and share perspectives. Ipsen Pharma.
2013-01-01
Background Treatment duration varies with the type of therapy and a patient’s recovery speed. Including such variation in randomized controlled trials (RCTs) enables comparison of the actual therapeutic potential of different therapies in clinical care. An index of outcome scores, Treatment Duration Control (TDC), was developed to help decide when to end treatment and also to determine treatment outcome by a blinded assessor. In contrast to traditional Routine Outcome Monitoring, which considers raw score changes, TDC uses relative change. Methods Our theory shows that if a patient with the largest baseline scores in a sample requires a relative decrease by treatment factor T to reach a zone of low score values (functional status), any patient with smaller baselines will attain functional status with T. Furthermore, the end score values are proportional to the baseline. These characteristics concur with findings from the literature that a patient’s assessment of ‘much improved’ following treatment (related to attaining functional status) is associated with a particular relative decrease in pain intensity yielding a final pain intensity that is proportional to the baseline. Regarding the TDC procedure: those patient scores that were related to pronounced signs and symptoms were selected for adaptive testing (reference scores). A Contrast-value was determined for each reference score between its reference level and a subsequent level, and averaging all Contrast-values yielded TDC. A cut-off point related to factor T for attaining functional status was the TDC-criterion for ending a patient’s treatment as successful. The use of TDC has been illustrated with RCT data from 118 chronic pain patients with myogenous Temporomandibular Disorders, and the TDC-criterion was validated. Results The TDC-criterion of successful/unsuccessful treatment approximated the cut-off separating two patient subgroups in a bimodal post-treatment distribution of TDC-values. Pain intensity decreased to residual levels and Health-Related Quality of Life (HRQoL) increased to normal levels following successful treatment according to TDC. The post-treatment TDC-values were independent of the baseline values of pain intensity or HRQoL, and thus independent of the patient’s baseline severity of myogenous Temporomandibular Disorders. Conclusions TDC enables RCTs that have a variable, therapy- and patient-specific duration. PMID:24112821
Emitter Number Estimation by the General Information Theoretic Criterion from Pulse Trains
2002-12-01
…negative log-likelihood function plus a penalty function. The general information criteria by Yin and Krishnaiah differ from the regular…
NASA Astrophysics Data System (ADS)
Darmon, David
2018-03-01
In the absence of mechanistic or phenomenological models of real-world systems, data-driven models become necessary. The discovery of various embedding theorems in the 1980s and 1990s motivated a powerful set of tools for analyzing deterministic dynamical systems via delay-coordinate embeddings of observations of their component states. However, in many branches of science, the condition of operational determinism is not satisfied, and stochastic models must be brought to bear. For such stochastic models, the tool set developed for delay-coordinate embedding is no longer appropriate, and a new toolkit must be developed. We present an information-theoretic criterion, the negative log-predictive likelihood, for selecting the embedding dimension for a predictively optimal data-driven model of a stochastic dynamical system. We develop a nonparametric estimator for the negative log-predictive likelihood and compare its performance to a recently proposed criterion based on active information storage. Finally, we show how the output of the model selection procedure can be used to compare candidate predictors for a stochastic system to an information-theoretic lower bound.
Optimization of rainfall networks using information entropy and temporal variability analysis
NASA Astrophysics Data System (ADS)
Wang, Wenqi; Wang, Dong; Singh, Vijay P.; Wang, Yuankun; Wu, Jichun; Wang, Lachun; Zou, Xinqing; Liu, Jiufu; Zou, Ying; He, Ruimin
2018-04-01
Rainfall networks are the most direct sources of precipitation data and their optimization and evaluation are essential and important. Information entropy can not only represent the uncertainty of rainfall distribution but can also reflect the correlation and information transmission between rainfall stations. Using entropy, this study performs optimization of rainfall networks of similar size located in two big cities in China, Shanghai (in the Yangtze River basin) and Xi'an (in the Yellow River basin), with respect to temporal variability analysis. Through an easy-to-implement greedy ranking algorithm based on the criterion called Maximum Information Minimum Redundancy (MIMR), stations of the networks in the two areas (each area is further divided into two subareas) are ranked over sliding inter-annual series and under different meteorological conditions. It is found that observation series with different starting days affect the ranking, pointing to temporal variability in network evaluation. We propose a dynamic network evaluation framework for considering temporal variability, which ranks stations under different starting days with a fixed time window (1-year, 2-year, and 5-year). Therefore, we can identify rainfall stations which are temporarily important or redundant and provide some useful suggestions for decision makers. The proposed framework can serve as a supplement to the primary MIMR optimization approach. In addition, during different periods (wet season or dry season) the optimal network from MIMR exhibits differences in entropy values, and the optimal network from the wet season tends to produce higher entropy values. Differences in the spatial distribution of the optimal networks suggest that optimizing the rainfall network for changing meteorological conditions may be advisable.
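A minimal sketch of a greedy, entropy-based station ranking in the spirit of MIMR follows; this simplified version rewards marginal information and penalizes pairwise redundancy with already-selected stations, whereas the published criterion works with the joint entropy and transinformation of the full selected set. The synthetic series, bin count, and trade-off weight lam are assumptions for illustration:

```python
import numpy as np

def _discretize(x, bins=10):
    """Quantize a rainfall series into equal-width bins (simple choice)."""
    edges = np.histogram_bin_edges(x, bins=bins)
    return np.clip(np.digitize(x, edges[1:-1]), 0, bins - 1)

def entropy(labels):
    """Shannon entropy (nats) of a discrete series."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return -np.sum(p * np.log(p))

def mutual_info(a, b):
    """Mutual information between two discrete series."""
    joint = entropy(a * (b.max() + 1) + b)   # entropy of the paired codes
    return entropy(a) + entropy(b) - joint

def greedy_rank(series, lam=0.8, bins=10):
    """Greedy station ranking: reward marginal information, penalize
    redundancy with already-selected stations (pairwise stand-in for MIMR)."""
    disc = [_discretize(s, bins) for s in series]
    remaining, selected = list(range(len(series))), []
    while remaining:
        def score(i):
            red = np.mean([mutual_info(disc[i], disc[j]) for j in selected]) if selected else 0.0
            return entropy(disc[i]) - lam * red
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected

# Illustrative use with synthetic data: 6 stations, 730 daily values
rng = np.random.default_rng(0)
base = rng.gamma(2.0, 3.0, size=730)
stations = [base + rng.normal(0, s, 730) for s in (0.5, 0.6, 3.0, 3.5, 4.0, 5.0)]
print(greedy_rank(stations))
```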
Maximum likelihood-based analysis of single-molecule photon arrival trajectories
NASA Astrophysics Data System (ADS)
Hajdziona, Marta; Molski, Andrzej
2011-02-01
In this work we explore the statistical properties of the maximum likelihood-based analysis of one-color photon arrival trajectories. This approach does not involve binning and, therefore, all of the information contained in an observed photon trajectory is used. We study the accuracy and precision of parameter estimates and the efficiency of the Akaike information criterion and the Bayesian information criterion (BIC) in selecting the true kinetic model. We focus on the low excitation regime where photon trajectories can be modeled as realizations of Markov modulated Poisson processes. The number of observed photons is the key parameter in determining model selection and parameter estimation. For example, the BIC can select the true three-state model from competing two-, three-, and four-state kinetic models even for relatively short trajectories made up of 2 × 10³ photons. When the intensity levels are well-separated and 10⁴ photons are observed, the two-state model parameters can be estimated with about 10% precision and those for a three-state model with about 20% precision.
Decision making by urgency gating: theory and experimental support.
Thura, David; Beauregard-Racine, Julie; Fradet, Charles-William; Cisek, Paul
2012-12-01
It is often suggested that decisions are made when accumulated sensory information reaches a fixed accuracy criterion. This is supported by many studies showing a gradual build-up of neural activity to a threshold. However, the proposal that this build-up is caused by sensory accumulation is challenged by findings that decisions are based on information from a time window much shorter than the build-up process. Here, we propose that in natural conditions where the environment can suddenly change, the policy that maximizes reward rate is to estimate evidence by accumulating only novel information and then compare the result to a decreasing accuracy criterion. We suggest that the brain approximates this policy by multiplying an estimate of sensory evidence with a motor-related urgency signal and that the latter is primarily responsible for the neural activity build-up. We support this hypothesis using human behavioral data from a modified random-dot motion task in which motion coherence changes during each trial.
Ghisi, Gabriela Lima de Melo; Dos Santos, Rafaella Zulianello; Bonin, Christiani Batista Decker; Roussenq, Suellen; Grace, Sherry L; Oh, Paul; Benetti, Magnus
2014-01-01
To translate, culturally adapt and psychometrically validate the Information Needs in Cardiac Rehabilitation (INCR) tool to Portuguese. The identification of information needs is considered the first step to improve knowledge that ultimately could improve health outcomes. The Portuguese version generated was tested in 300 cardiac rehabilitation (CR) patients (34% women; mean age = 61.3 ± 2.1 years). Test-retest reliability was assessed using the intraclass correlation coefficient (ICC), internal consistency using Cronbach's alpha, and criterion validity was assessed with regard to patients' education and duration in CR. All 9 subscales were considered internally consistent (α > 0.7). Significant differences between mean total needs and educational level (p < 0.05) and duration in CR (p = 0.03) supported criterion validity. The overall mean (4.6 ± 0.4), as well as the means of the 9 subscales, were high (emergency/safety was the greatest need). The Portuguese INCR was demonstrated to have sufficient reliability, consistency and validity. Copyright © 2014 Elsevier Inc. All rights reserved.
42 CFR 493.923 - Syphilis serology.
Code of Federal Regulations, 2013 CFR
2013-10-01
... a laboratory's response for qualitative and quantitative syphilis tests, the program must compare... under paragraphs (b)(2) and (b)(3) of this section. (2) For quantitative syphilis tests, the program... quantitative syphilis serology tests is the target value ±1 dilution. (3) The criterion for acceptable...
42 CFR 493.923 - Syphilis serology.
Code of Federal Regulations, 2012 CFR
2012-10-01
... a laboratory's response for qualitative and quantitative syphilis tests, the program must compare... under paragraphs (b)(2) and (b)(3) of this section. (2) For quantitative syphilis tests, the program... quantitative syphilis serology tests is the target value ±1 dilution. (3) The criterion for acceptable...
42 CFR 493.923 - Syphilis serology.
Code of Federal Regulations, 2014 CFR
2014-10-01
... a laboratory's response for qualitative and quantitative syphilis tests, the program must compare... under paragraphs (b)(2) and (b)(3) of this section. (2) For quantitative syphilis tests, the program... quantitative syphilis serology tests is the target value ±1 dilution. (3) The criterion for acceptable...
42 CFR 493.923 - Syphilis serology.
Code of Federal Regulations, 2010 CFR
2010-10-01
... a laboratory's response for qualitative and quantitative syphilis tests, the program must compare... under paragraphs (b)(2) and (b)(3) of this section. (2) For quantitative syphilis tests, the program... quantitative syphilis serology tests is the target value ±1 dilution. (3) The criterion for acceptable...
42 CFR 493.923 - Syphilis serology.
Code of Federal Regulations, 2011 CFR
2011-10-01
... a laboratory's response for qualitative and quantitative syphilis tests, the program must compare... under paragraphs (b)(2) and (b)(3) of this section. (2) For quantitative syphilis tests, the program... quantitative syphilis serology tests is the target value ±1 dilution. (3) The criterion for acceptable...
A Criterion to Control Nonlinear Error in the Mixed-Mode Bending Test
NASA Technical Reports Server (NTRS)
Reeder, James R.
2002-01-01
The mixed-mode bending test has been widely used to measure delamination toughness and was recently standardized by ASTM as Standard Test Method D6671-01. This simple test is a combination of the standard Mode I (opening) test and a Mode II (sliding) test. This test uses a unidirectional composite test specimen with an artificial delamination subjected to bending loads to characterize when a delamination will extend. When the displacements become large, the linear theory used to analyze the results of the test yields errors in the calculated toughness values. The current standard places no limit on the specimen loading, and therefore test data can be created using the standard that are significantly in error. A method of limiting the error that can be incurred in the calculated toughness values is needed. In this paper, nonlinear models of the MMB test are refined. One of the nonlinear models is then used to develop a simple criterion for prescribing conditions where the nonlinear error will remain below 5%.
A Model-Free Diagnostic for Single-Peakedness of Item Responses Using Ordered Conditional Means.
Polak, Marike; de Rooij, Mark; Heiser, Willem J
2012-09-01
In this article we propose a model-free diagnostic for single-peakedness (unimodality) of item responses. Presuming a unidimensional unfolding scale and a given item ordering, we approximate item response functions of all items based on ordered conditional means (OCM). The proposed OCM methodology is based on Thurstone & Chave's (1929) criterion of irrelevance, which is a graphical, exploratory method for evaluating the "relevance" of dichotomous attitude items. We generalized this criterion to graded response items and quantified the relevance by fitting a unimodal smoother. The resulting goodness-of-fit was used to determine item fit and aggregated scale fit. Based on a simulation procedure, cutoff values were proposed for the measures of item fit. These cutoff values showed high power rates and acceptable Type I error rates. We present 2 applications of the OCM method. First, we apply the OCM method to personality data from the Developmental Profile; second, we analyze attitude data collected by Roberts and Laughlin (1996) concerning opinions of capital punishment.
Combined Optimal Control System for excavator electric drive
NASA Astrophysics Data System (ADS)
Kurochkin, N. S.; Kochetkov, V. P.; Platonova, E. V.; Glushkin, E. Y.; Dulesov, A. S.
2018-03-01
The article presents a synthesis of combined optimal control algorithms for the AC drive of an excavator's rotation mechanism. The synthesis consists in regulating the external coordinates on the basis of optimal control theory and correcting the internal electric drive coordinates using the "technical optimum" method. The research shows the advantage of optimal combined control systems for the electric rotary drive over classical systems of subordinate regulation. The paper presents a method for selecting the optimality criterion coefficients so as to find the intersection of the ranges of permissible values of the coordinates of the control object. The system can be tuned by choosing the optimality criterion coefficients, which allows one to select the required characteristics of the drive: the dynamic moment (M) and the transient process time (tpp). Due to the use of combined optimal control systems, it was possible to significantly reduce the maximum value of the dynamic moment (M) and, at the same time, to reduce the transient time (tpp).
Task-Driven Dictionary Learning Based on Mutual Information for Medical Image Classification.
Diamant, Idit; Klang, Eyal; Amitai, Michal; Konen, Eli; Goldberger, Jacob; Greenspan, Hayit
2017-06-01
We present a novel variant of the bag-of-visual-words (BoVW) method for automated medical image classification. Our approach improves the BoVW model by learning a task-driven dictionary of the most relevant visual words per task using a mutual information-based criterion. Additionally, we generate relevance maps to visualize and localize the decision of the automatic classification algorithm. These maps demonstrate how the algorithm works and show the spatial layout of the most relevant words. We applied our algorithm to three different tasks: chest x-ray pathology identification (of four pathologies: cardiomegaly, enlarged mediastinum, right consolidation, and left consolidation), liver lesion classification into four categories in computed tomography (CT) images, and classification of benign/malignant clusters of microcalcifications (MCs) in breast mammograms. Validation was conducted on three datasets: 443 chest x-rays, 118 portal phase CT images of liver lesions, and 260 mammography MCs. The proposed method improves the classical BoVW method for all tested applications. For chest x-ray, an area under the curve of 0.876 was obtained for enlarged mediastinum identification compared to 0.855 using classical BoVW (p-value 0.01). For MC classification, a significant improvement of 4% was achieved using our new approach (p-value 0.03). For liver lesion classification, an improvement of 6% in sensitivity and 2% in specificity was obtained (p-value 0.001). We demonstrated that classification based on an informative selected set of words results in significant improvement. Our new BoVW approach shows promising results in clinically important domains. Additionally, it can discover relevant parts of images for the task at hand without explicit annotations for training data. This can provide computer-aided support for medical experts in challenging image analysis tasks.
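A minimal sketch of the kind of mutual-information-based selection of visual words described above, using scikit-learn on a bag-of-visual-words count matrix; the data here are random placeholders, and the original task-driven dictionary learning is more involved than this filter-style selection:

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.poisson(1.0, size=(200, 500)).astype(float)   # BoVW counts: 200 images x 500 words
y = rng.integers(0, 2, size=200)                       # binary pathology label (placeholder)

# Rank visual words by mutual information with the class label
mi = mutual_info_classif(X, y, discrete_features=True, random_state=0)
top_words = np.argsort(mi)[::-1][:50]                  # keep the 50 most relevant words

# Classify using only the selected words
score = cross_val_score(LinearSVC(dual=False), X[:, top_words], y, cv=5).mean()
print(f"cross-validated accuracy with selected words: {score:.2f}")
```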
Development of a parameter optimization technique for the design of automatic control systems
NASA Technical Reports Server (NTRS)
Whitaker, P. H.
1977-01-01
Parameter optimization techniques for the design of linear automatic control systems that are applicable to both continuous and digital systems are described. The model performance index is used as the optimization criterion because of the physical insight that can be attached to it. The design emphasis is to start with the simplest system configuration that experience indicates would be practical. Design parameters are specified, and a digital computer program is used to select that set of parameter values which minimizes the performance index. The resulting design is examined, and complexity, through the use of more complex information processing or more feedback paths, is added only if performance fails to meet operational specifications. System performance specifications are assumed to be such that the desired step function time response of the system can be inferred.
Kinetics of Methane Production from Swine Manure and Buffalo Manure.
Sun, Chen; Cao, Weixing; Liu, Ronghou
2015-10-01
The degradation kinetics of swine and buffalo manure for methane production was investigated. Six kinetic models were employed to describe the corresponding experimental data. These models were evaluated by two statistical measures, the root mean square prediction error (RMSPE) and Akaike's information criterion (AIC). The results showed that the logistic and Fitzhugh models could predict the experimental data very well for the digestion of swine and buffalo manure, respectively. The predicted methane yield potentials for swine and buffalo manure were 487.9 and 340.4 mL CH4/g volatile solid (VS), respectively, which were close to the experimental values, when the digestion temperature was 36 ± 1 °C in the biochemical methane potential assays. In addition, the rate constants revealed that swine manure had a much faster methane production rate than buffalo manure.
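A minimal sketch of fitting one candidate kinetic model to cumulative methane data and scoring it with RMSPE and AIC, as described above; the logistic form chosen here and the data points are illustrative assumptions, not the study's values:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, P, k, t0):
    """Logistic cumulative methane yield: P = ultimate yield,
    k = rate constant, t0 = time of maximum production rate."""
    return P / (1.0 + np.exp(-k * (t - t0)))

# Illustrative digestion data (days, mL CH4/g VS) -- placeholders
t = np.array([0, 2, 5, 8, 12, 16, 20, 25, 30, 35, 40], float)
y = np.array([0, 15, 70, 160, 280, 370, 430, 465, 480, 485, 487], float)

popt, _ = curve_fit(logistic, t, y, p0=[500.0, 0.2, 10.0])
resid = y - logistic(t, *popt)
n, k_par = len(y), len(popt)

rmspe = np.sqrt(np.mean(resid**2))                    # root mean square prediction error
aic = n * np.log(np.sum(resid**2) / n) + 2 * k_par    # least-squares form of AIC
print(f"P={popt[0]:.1f}, k={popt[1]:.3f}, t0={popt[2]:.1f}, RMSPE={rmspe:.1f}, AIC={aic:.1f}")
```

In practice each candidate model would be fitted this way and the one with the lowest AIC and RMSPE retained, as done in the study.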
Observational constraints on cosmological models with Chaplygin gas and quadratic equation of state
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sharov, G.S., E-mail: german.sharov@mail.ru
Observational manifestations of accelerated expansion of the universe, in particular, recent data for Type Ia supernovae, baryon acoustic oscillations, for the Hubble parameter H(z) and cosmic microwave background constraints are described with different cosmological models. We compare the ΛCDM, the models with generalized and modified Chaplygin gas and the model with quadratic equation of state. For these models we estimate optimal model parameters and their permissible errors with different approaches to calculation of the sound horizon scale r_s(z_d). Among the considered models the best value of χ² is achieved for the model with quadratic equation of state, but it has 2 additional parameters in comparison with the ΛCDM and therefore is not favored by the Akaike information criterion.
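A minimal sketch of the Akaike-information-criterion comparison implied above: given the best-fit χ² of each model and its number of free parameters, AIC = χ²_min + 2k penalizes the extra parameters (the χ² values and parameter counts below are placeholders, not the paper's results):

```python
def aic_from_chi2(chi2_min, n_params):
    """Akaike information criterion in its chi-square form."""
    return chi2_min + 2 * n_params

# Placeholder best-fit chi-square values and parameter counts
models = {
    "LCDM":          (585.0, 3),
    "quadratic EoS": (580.0, 5),   # better fit, 2 extra parameters
}
aics = {name: aic_from_chi2(c, k) for name, (c, k) in models.items()}
best = min(aics, key=aics.get)
for name, a in aics.items():
    print(f"{name:>15}: AIC = {a:.1f} (dAIC = {a - aics[best]:.1f})")
```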
A survey of kernel-type estimators for copula and their applications
NASA Astrophysics Data System (ADS)
Sumarjaya, I. W.
2017-10-01
Copulas have been widely used to model nonlinear dependence structure. Main applications of copulas include areas such as finance, insurance, hydrology, and rainfall, to name but a few. The flexibility of copulas allows researchers to model dependence structure beyond the Gaussian distribution. Basically, a copula is a function that couples multivariate distribution functions to their one-dimensional marginal distribution functions. In general, there are three methods to estimate copulas: parametric, nonparametric, and semiparametric. In this article we survey kernel-type estimators for copulas, such as the mirror reflection kernel, beta kernel, transformation method and local likelihood transformation method. Then, we apply these kernel methods to three stock indexes in Asia. The results of our analysis suggest that, despite variation in information criterion values, the local likelihood transformation method performs better than the other kernel methods.
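A minimal sketch of one of the surveyed estimators, the beta kernel copula density estimator, applied to pseudo-observations of two return series; the data are simulated placeholders, and the bandwidth choice is simplified:

```python
import numpy as np
from scipy.stats import beta, rankdata

def pseudo_obs(x):
    """Map a sample to (0,1) pseudo-observations via ranks."""
    return rankdata(x) / (len(x) + 1.0)

def beta_kernel_copula_density(u, v, U, V, h=0.05):
    """Beta kernel estimate of the copula density at (u, v): each
    pseudo-observation is smoothed with a beta kernel whose shape
    depends on the evaluation point (avoids boundary bias on [0,1])."""
    ku = beta.pdf(U, u / h + 1.0, (1.0 - u) / h + 1.0)
    kv = beta.pdf(V, v / h + 1.0, (1.0 - v) / h + 1.0)
    return np.mean(ku * kv)

# Simulated dependent 'returns' standing in for two stock indexes
rng = np.random.default_rng(1)
z = rng.multivariate_normal([0, 0], [[1, 0.6], [0.6, 1]], size=1000)
U, V = pseudo_obs(z[:, 0]), pseudo_obs(z[:, 1])

print(beta_kernel_copula_density(0.9, 0.9, U, V))   # density in the upper tail
print(beta_kernel_copula_density(0.9, 0.1, U, V))   # density in the opposite corner
```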
Zozaya, Néboa; Martínez-Galdeano, Lucía; Alcalá, Bleric; Armario-Hita, Jose Carlos; Carmona, Concepción; Carrascosa, Jose Manuel; Herranz, Pedro; Lamas, María Jesús; Trapero-Bertran, Marta; Hidalgo-Vega, Álvaro
2018-06-01
Multi-criteria decision analysis (MCDA) is a tool that systematically considers multiple factors relevant to health decision-making. The aim of this study was to use an MCDA to assess the value of dupilumab for severe atopic dermatitis compared with secukinumab for moderate to severe plaque psoriasis in Spain. Following the EVIDEM (Evidence and Value: Impact on DEcision Making) methodology, the estimated value of both interventions was obtained by means of an additive linear model that combined the individual weighting (between 1 and 5) of each criterion with the individual scoring of each intervention on each criterion. Dupilumab was evaluated against placebo, while secukinumab was evaluated against placebo, etanercept and ustekinumab. A retest was performed to assess the reproducibility of weights, scores and value estimates. The overall MCDA value estimate for dupilumab versus placebo was 0.51 ± 0.14. This value was higher than those obtained for secukinumab: 0.48 ± 0.15 versus placebo, 0.45 ± 0.15 versus etanercept and 0.39 ± 0.18 versus ustekinumab. The highest-value contribution was reported by the patients' group, followed by the clinical professionals and the decision makers. A fundamental element that explained the difference in the scoring between pathologies was the availability of therapeutic alternatives. The retest confirmed the consistency and replicability of the analysis. Under this methodology, and assuming similar economic costs per patient for both treatments, the results indicated that the overall estimated value of dupilumab for severe atopic dermatitis was similar to, or slightly higher than, that of secukinumab for moderate to severe plaque psoriasis.
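A minimal sketch of the additive linear value model described above: normalized criterion weights multiplied by per-criterion scores and summed into a value estimate between 0 and 1. The criteria, weights, scores and score scale below are illustrative placeholders, not the EVIDEM framework's actual contents:

```python
def mcda_value(weights, scores, max_score=3):
    """Additive MCDA value estimate: normalized weights times normalized
    scores, summed over criteria, yielding a number between 0 and 1."""
    total_w = sum(weights.values())
    return sum((weights[c] / total_w) * (scores[c] / max_score) for c in weights)

# Placeholder criteria with weights on a 1-5 scale and scores on a 0-3 scale
weights = {"disease severity": 5, "comparative effectiveness": 4,
           "safety": 4, "quality of evidence": 3, "cost impact": 2}
scores_treatment = {"disease severity": 3, "comparative effectiveness": 2,
                    "safety": 2, "quality of evidence": 2, "cost impact": 1}

print(f"value estimate: {mcda_value(weights, scores_treatment):.2f}")
```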
NASA Astrophysics Data System (ADS)
Nasta, Paolo; Romano, Nunzio
2016-01-01
This study explores the feasibility of identifying the effective soil hydraulic parameterization of a layered soil profile by using a conventional unsteady drainage experiment leading to field capacity. The flux-based field capacity criterion is attained by subjecting the soil profile to a synthetic drainage process implemented numerically in the Soil-Water-Atmosphere-Plant (SWAP) model. The effective hydraulic parameterization is associated with either aggregated or equivalent parameters, the former being determined by the geometrical scaling theory while the latter is obtained through the inverse modeling approach. Outcomes from both these methods depend on information that is sometimes difficult to retrieve at the local scale and rather challenging or virtually impossible at larger scales. Knowledge of topsoil hydraulic properties alone, for example as retrieved by a near-surface field campaign or a data assimilation technique, is often exploited as a proxy to determine the effective soil hydraulic parameterization at the largest spatial scales. Comparisons of the effective soil hydraulic characterizations provided by these three methods are conducted by discussing the implications for their use and accounting for the trade-offs between required input information and model output reliability. To better highlight the epistemic errors associated with the different effective soil hydraulic properties and to provide some more practical guidance, the layered soil profiles are then grouped using the FAO textural classes. For the moderately heterogeneous soil profiles available, all three approaches guarantee generally good predictability of the actual field capacity values and provide adequate identification of the effective hydraulic parameters. Conversely, worse performances are encountered for highly variable vertical heterogeneity, especially when resorting to the "topsoil-only" information. In general, the best performances are guaranteed by the equivalent parameters, which might be considered a reference for comparisons with other techniques. As might be expected, the information content of the soil hydraulic properties pertaining only to the uppermost soil horizon is rather inefficient and also not capable of mapping out the hydrologic behavior of the real vertical soil heterogeneity, since the drainage process is significantly affected by profile layering in almost all cases.
2013-01-01
Background As fiscal constraints dominate health policy discussions across Canada and globally, priority-setting exercises are becoming more common to guide the difficult choices that must be made. In this context, it becomes highly desirable to have accurate estimates of the value of specific health care interventions. Economic evaluation is a well-accepted method to estimate the value of health care interventions. However, economic evaluation has significant limitations, which have led to an increase in the use of Multi-Criteria Decision Analysis (MCDA). One key concern with MCDA is the availability of the information necessary for implementation. In the fall of 2011, the Canadian Physiotherapy Association embarked on a project aimed at providing a valuation of physiotherapy services that is both evidence-based and relevant to resource allocation decisions. The framework selected for this project was MCDA. We report on how we addressed the challenge of obtaining some of the information necessary for MCDA implementation. Methods MCDA criteria were selected and areas of physiotherapy practice were identified. The building up of the necessary information base was a three-step process. First, there was a literature review for each practice area, on each criterion. The next step was to conduct interviews with experts in each of the practice areas to critique the results of the literature review and to fill in gaps where there was no or insufficient literature. Finally, the results of the individual interviews were validated by a national committee to ensure consistency across all practice areas and that a national-level perspective was applied. Results Despite a lack of research evidence on many of the considerations relevant to the estimation of the value of physiotherapy services (the criteria), sufficient information was obtained to facilitate MCDA implementation at the local level. Conclusions The results of this research project serve two purposes: 1) a method to obtain the information necessary to implement MCDA is described, and 2) the results, in terms of information on the benefits provided by each of the twelve areas of physiotherapy practice, can be used by decision-makers as a starting point in the implementation of MCDA at the local level. PMID:23688138
Constrained proper sampling of conformations of transition state ensemble of protein folding
Lin, Ming; Zhang, Jian; Lu, Hsiao-Mei; Chen, Rong; Liang, Jie
2011-01-01
Characterizing the conformations of protein in the transition state ensemble (TSE) is important for studying protein folding. A promising approach pioneered by Vendruscolo [Nature (London) 409, 641 (2001)] to study the TSE is to generate conformations that satisfy all constraints imposed by the experimentally measured ϕ values that provide information about the native-likeness of the transition states. Faísca [J. Chem. Phys. 129, 095108 (2008)] generated conformations of the TSE based on the criterion that, starting from a TS conformation, the probabilities of folding and unfolding are about equal in Markov chain Monte Carlo (MCMC) simulations. In this study, we use the constrained sequential Monte Carlo method [Lin, J. Chem. Phys. 129, 094101 (2008); Zhang, Proteins 66, 61 (2007)] to generate TSE conformations of acylphosphatase of 98 residues that satisfy the ϕ-value constraints, as well as the criterion that each conformation has a folding probability of 0.5 by Monte Carlo simulations. We adopt a two-stage process and first generate 5000 contact maps satisfying the ϕ-value constraints. Each contact map is then used to generate 1000 properly weighted conformations. After clustering similar conformations, we obtain a set of properly weighted samples of 4185 candidate clusters. A representative conformation of each of these clusters is then selected, and 50 runs of MCMC simulation are carried out using a regrowth move set. We then select a subset of 1501 conformations that have equal probabilities of folding and unfolding as the TSE. These 1501 samples characterize well the distribution of transition state ensemble conformations of acylphosphatase. Compared with previous studies, our approach can access a much wider conformational space and can objectively generate conformations that satisfy the ϕ-value constraints and the criterion of 0.5 folding probability without bias. In contrast to previous studies, our results show that transition state conformations are very diverse and are far from native-like when measured in Cartesian root-mean-square deviation (cRMSD): the average cRMSD between TSE conformations and the native structure is 9.4 Å for this short protein, instead of the 6 Å reported in previous studies. In addition, we found that the average fraction of native contacts in the TSE is 0.37, with enrichment in native-like β-sheets and a shortage of long-range contacts, suggesting such contacts form at a later stage of folding. We further calculate the first passage time of folding of TSE conformations by calculating the physical time associated with the regrowth moves in the MCMC simulation, mapping such moves to a Markovian state model whose transition times were obtained by Langevin dynamics simulations. Our results indicate that despite the large structural diversity of the TSE, the conformations are characterized by similar folding times. Our approach is general and can be used to study the TSE in other macromolecules. PMID:21341875
Soguero-Ruiz, Cristina; Hindberg, Kristian; Rojo-Alvarez, Jose Luis; Skrovseth, Stein Olav; Godtliebsen, Fred; Mortensen, Kim; Revhaug, Arthur; Lindsetmo, Rolv-Ole; Augestad, Knut Magne; Jenssen, Robert
2016-09-01
The free text in electronic health records (EHRs) conveys a huge amount of clinical information about health state and patient history. Despite a rapidly growing literature on the use of machine learning techniques for extracting this information, little effort has been invested toward feature selection and the features' corresponding medical interpretation. In this study, we focus on the task of early detection of anastomosis leakage (AL), a severe complication after elective colorectal cancer (CRC) surgery, using free text extracted from EHRs. We use a bag-of-words model to investigate the potential for feature selection strategies. The purpose is earlier detection of AL and prediction of AL with data generated in the EHR before the actual complication occurs. Due to the high dimensionality of the data, we derive feature selection strategies using the robust support vector machine linear maximum margin classifier, by investigating: 1) a simple statistical criterion (leave-one-out-based test); 2) a computationally intensive statistical criterion (bootstrap resampling); and 3) an advanced statistical criterion (kernel entropy). Results reveal a discriminatory power for early detection of complications after CRC surgery (sensitivity 100%; specificity 72%). These results can be used to develop prediction models, based on EHR data, that can support surgeons and patients in the preoperative decision-making phase.
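A minimal sketch of the bootstrap-resampling flavour of feature selection described above: a linear maximum-margin classifier is refit on resampled bags of words, and features are ranked by how consistently they receive large weights. The data are random placeholders and the clinical pipeline is more elaborate than this:

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
X = rng.poisson(0.3, size=(300, 1000)).astype(float)   # bag-of-words counts from EHR notes
y = rng.integers(0, 2, size=300)                        # anastomosis leakage label (placeholder)

n_boot, votes = 100, np.zeros(X.shape[1])
for _ in range(n_boot):
    idx = rng.integers(0, len(y), size=len(y))          # bootstrap resample of patients
    clf = LinearSVC(dual=False, C=1.0).fit(X[idx], y[idx])
    top = np.argsort(np.abs(clf.coef_[0]))[::-1][:50]   # 50 largest-magnitude weights
    votes[top] += 1

selected = np.argsort(votes)[::-1][:50]                 # words selected most often
print(selected[:10])
```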
Comparing hierarchical models via the marginalized deviance information criterion.
Quintero, Adrian; Lesaffre, Emmanuel
2018-07-20
Hierarchical models are extensively used in pharmacokinetics and longitudinal studies. When the estimation is performed from a Bayesian approach, model comparison is often based on the deviance information criterion (DIC). In hierarchical models with latent variables, there are several versions of this statistic: the conditional DIC (cDIC) that incorporates the latent variables in the focus of the analysis and the marginalized DIC (mDIC) that integrates them out. Despite the asymptotic and coherency difficulties of cDIC, this alternative is usually used in Markov chain Monte Carlo (MCMC) methods for hierarchical models because of practical convenience. The mDIC criterion is more appropriate in most cases but requires integration of the likelihood, which is computationally demanding and not implemented in Bayesian software. Therefore, we consider a method to compute mDIC by generating replicate samples of the latent variables that need to be integrated out. This alternative can be easily conducted from the MCMC output of Bayesian packages and is widely applicable to hierarchical models in general. Additionally, we propose some approximations in order to reduce the computational complexity for large-sample situations. The method is illustrated with simulated data sets and 2 medical studies, evidencing that cDIC may be misleading whilst mDIC appears pertinent. Copyright © 2018 John Wiley & Sons, Ltd.
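A minimal sketch of the replicate-sampling idea described above, for a random-intercept normal model: at each posterior draw of the hyperparameters, the latent intercepts are integrated out by Monte Carlo, giving a marginalized deviance from which mDIC is assembled. The "posterior draws" below are synthetic placeholders standing in for MCMC output from a Bayesian package:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)

# Toy grouped data: 20 subjects, 5 measurements each (placeholders)
G, J = 20, 5
true_b = rng.normal(0.0, 1.0, G)
y = true_b[:, None] + rng.normal(0.0, 0.5, (G, J))

# Pretend these are MCMC draws of the hyperparameters (mu, tau, sigma)
S = 400
mu_s = rng.normal(0.0, 0.1, S)
tau_s = np.abs(rng.normal(1.0, 0.1, S))
sig_s = np.abs(rng.normal(0.5, 0.05, S))

def marginal_deviance(mu, tau, sigma, R=200):
    """-2 * marginal log-likelihood, integrating the random intercepts
    out by Monte Carlo over R replicate draws of the latent variables."""
    b = rng.normal(mu, tau, size=R)                       # replicate latent intercepts
    # log p(y_i | b_r, sigma) for every group i and replicate r
    logp = norm.logpdf(y[:, None, :], loc=b[None, :, None], scale=sigma).sum(axis=2)
    # log of the Monte Carlo average over replicates, per group
    log_marg = np.logaddexp.reduce(logp, axis=1) - np.log(R)
    return -2.0 * log_marg.sum()

D = np.array([marginal_deviance(m, t, s) for m, t, s in zip(mu_s, tau_s, sig_s)])
D_at_mean = marginal_deviance(mu_s.mean(), tau_s.mean(), sig_s.mean())
pD = D.mean() - D_at_mean                                 # effective number of parameters
mDIC = D.mean() + pD
print(f"pD = {pD:.1f}, mDIC = {mDIC:.1f}")
```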
Fayyoumi, Ebaa; Oommen, B John
2009-10-01
We consider the microaggregation problem (MAP) that involves partitioning a set of individual records in a microdata file into a number of mutually exclusive and exhaustive groups. This problem, which seeks the best partition of the microdata file, is known to be NP-hard and has been tackled using many heuristic solutions. In this paper, we present the first reported fixed-structure-stochastic-automata-based solution to this problem. The newly proposed method leads to a lower value of the information loss (IL), obtains a better tradeoff between the IL and the disclosure risk (DR) when compared with state-of-the-art methods, and leads to a superior value of the scoring index, which is a criterion involving a combination of the IL and the DR. The scheme has been implemented, tested, and evaluated for different real-life and simulated data sets. The results clearly demonstrate the applicability of learning automata to the MAP and its ability to yield a solution that obtains the best tradeoff between IL and DR when compared with the state of the art.
Profile and effects of consumer involvement in fresh meat.
Verbeke, Wim; Vackier, Isabelle
2004-05-01
This study investigates the profile and effects of consumer involvement in fresh meat as a product category based on cross-sectional data collected in Belgium. Analyses confirm that involvement in meat is a multidimensional construct including four facets: pleasure value, symbolic value, risk importance and risk probability. Four involvement-based meat consumer segments are identified: straightforward, cautious, indifferent, and concerned. Socio-demographic differences between the segments relate to gender, age and presence of children. The segments differ in terms of extensiveness of the decision-making process, impact and trust in information sources, levels of concern, price consciousness, claimed meat consumption, consumption intention, and preferred place of purchase. The two segments with a strong perception of meat risks constitute two-thirds of the market. They can be typified as cautious meat lovers versus concerned meat consumers. Efforts aiming at consumer reassurance through quality improvement, traceability, labelling or communication may gain effectiveness when targeted specifically to these two segments. Whereas straightforward meat lovers focus mainly on taste as the decisive criterion, indifferent consumers are strongly price oriented.
Dorfman, David M; LaPlante, Charlotte D; Pozdnyakova, Olga; Li, Betty
2015-11-01
In our high-sensitivity flow cytometric approach for systemic mastocytosis (SM), we identified mast cell event clustering as a new diagnostic criterion for the disease. To objectively characterize mast cell gated event distributions, we performed cluster analysis using FLOCK, a computational approach to identify cell subsets in multidimensional flow cytometry data in an unbiased, automated fashion. FLOCK identified discrete mast cell populations in most cases of SM (56/75 [75%]) but only a minority of non-SM cases (17/124 [14%]). FLOCK-identified mast cell populations accounted for 2.46% of total cells on average in SM cases and 0.09% of total cells on average in non-SM cases (P < .0001) and were predictive of SM, with a sensitivity of 75%, a specificity of 86%, a positive predictive value of 76%, and a negative predictive value of 85%. FLOCK analysis provides useful diagnostic information for evaluating patients with suspected SM, and may be useful for the analysis of other hematopoietic neoplasms. Copyright© by the American Society for Clinical Pathology.
NASA Astrophysics Data System (ADS)
Abunama, Taher; Othman, Faridah
2017-06-01
Analysing the fluctuations of wastewater inflow rates in sewage treatment plants (STPs) is essential to guarantee sufficient treatment of wastewater before discharging it to the environment. The main objectives of this study are to statistically analyze and forecast the wastewater inflow rates into the Bandar Tun Razak STP in Kuala Lumpur, Malaysia. A time series analysis of three years' weekly influent data (156 weeks) has been conducted using the Auto-Regressive Integrated Moving Average (ARIMA) model. Various combinations of ARIMA orders (p, d, q) were tried to select the best-fitting model, which was utilized to forecast the wastewater inflow rates. Linear regression analysis was applied to test the correlation between the observed and predicted influents. The ARIMA(3, 1, 3) model was selected, having the highest significant R-square and the lowest normalized Bayesian Information Criterion (BIC) value, and accordingly the wastewater inflow rates were forecast for an additional 52 weeks. The linear regression analysis between the observed and predicted values of the wastewater inflow rates showed a positive linear correlation with a coefficient of 0.831.
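A minimal sketch of the order-selection step described above: a grid of ARIMA(p, d, q) candidates is fitted to a weekly inflow series and the order with the lowest BIC is kept. The series here is synthetic; in practice the observed 156-week record would be used:

```python
import itertools
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(0)
# Synthetic stand-in for 156 weekly wastewater inflow rates
t = np.arange(156)
y = 1000 + 50 * np.sin(2 * np.pi * t / 52) + rng.normal(0, 20, 156).cumsum() * 0.3

best = None
for p, d, q in itertools.product(range(4), range(2), range(4)):
    try:
        res = ARIMA(y, order=(p, d, q)).fit()
    except Exception:
        continue                          # some orders may fail to converge
    if best is None or res.bic < best[1]:
        best = ((p, d, q), res.bic, res)

order, bic, res = best
print(f"selected order {order} with BIC {bic:.1f}")
forecast = res.forecast(steps=52)         # forecast the next 52 weeks
```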
Water quality management using statistical analysis and time-series prediction model
NASA Astrophysics Data System (ADS)
Parmar, Kulwinder Singh; Bhardwaj, Rashmi
2014-12-01
This paper deals with water quality management using statistical analysis and a time-series prediction model. The monthly variation of water quality standards has been used to compare the statistical mean, median, mode, standard deviation, kurtosis, skewness, and coefficient of variation for the Yamuna River. The model was validated using R-squared, root mean square error, mean absolute percentage error, maximum absolute percentage error, mean absolute error, maximum absolute error, normalized Bayesian information criterion, Ljung-Box analysis, predicted values and confidence limits. Using an autoregressive integrated moving average (ARIMA) model, future water quality parameter values have been estimated. It is observed that the predictive model is useful at 95% confidence limits and the curve is platykurtic for potential of hydrogen (pH), free ammonia, total Kjeldahl nitrogen, dissolved oxygen and water temperature (WT), and leptokurtic for chemical oxygen demand and biochemical oxygen demand. Also, it is observed that the predicted series is close to the original series, which provides a very good fit. All parameters except pH and WT cross the prescribed limits of the World Health Organization/United States Environmental Protection Agency, and thus the water is not fit for drinking, agricultural and industrial use.
Perceptual security of encrypted images based on wavelet scaling analysis
NASA Astrophysics Data System (ADS)
Vargas-Olmos, C.; Murguía, J. S.; Ramírez-Torres, M. T.; Mejía Carlos, M.; Rosu, H. C.; González-Aguilar, H.
2016-08-01
The scaling behavior of the pixel fluctuations of encrypted images is evaluated by using the detrended fluctuation analysis (DFA) based on wavelets, a modern technique that has been successfully used recently for a wide range of natural phenomena and technological processes. As encryption algorithms, we use the Advanced Encryption Standard (AES) in RBT mode and two versions of a cryptosystem based on cellular automata, with the encryption process applied both fully and partially by selecting different bitplanes. In all cases, the results show that the encrypted images in which no understandable information can be visually appreciated and whose pixels look totally random present a persistent scaling behavior with the scaling exponent α close to 0.5, implying no correlation between pixels when the DFA with wavelets is applied. This suggests that the scaling exponents of the encrypted images can be used as a perceptual security criterion in the sense that when their values are close to 0.5 (the white noise value) the encrypted images are more secure also from the perceptual point of view.
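A minimal sketch of a standard (polynomial-detrending) DFA estimate of the scaling exponent α for a sequence of pixel values; the wavelet-based variant used in the study replaces the local polynomial fits with a wavelet decomposition, so this is a simplified stand-in, and the input is random bytes emulating a well-encrypted image:

```python
import numpy as np

def dfa_alpha(x, scales=None):
    """Estimate the DFA scaling exponent alpha of a 1-D signal
    using linear detrending in non-overlapping windows."""
    x = np.asarray(x, float)
    profile = np.cumsum(x - x.mean())               # integrated, mean-removed signal
    if scales is None:
        scales = np.unique(np.logspace(2, 9, 15, base=2).astype(int))
    flucts = []
    for s in scales:
        n_win = len(profile) // s
        segs = profile[:n_win * s].reshape(n_win, s)
        t = np.arange(s)
        rms = []
        for seg in segs:
            coef = np.polyfit(t, seg, 1)            # local linear trend
            rms.append(np.mean((seg - np.polyval(coef, t)) ** 2))
        flucts.append(np.sqrt(np.mean(rms)))
    alpha = np.polyfit(np.log(scales), np.log(flucts), 1)[0]
    return alpha

pixels = np.random.default_rng(0).integers(0, 256, 2**16)   # stand-in for encrypted pixels
print(f"alpha = {dfa_alpha(pixels):.2f}")                    # ~0.5 for uncorrelated data
```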
Development of a Methodology for the Derivation of Aquatic Plant Water Quality Criteria
Aquatic plants form the base of most aquatic food chains, comprise biodiversity-building habitats and are functionally important in carbon assimilation and oxygen evolution. The USEPA, as stated in the Clean Water Act, establishes criterion values for various pollutants found in ...
NASA Astrophysics Data System (ADS)
Krysa, Zbigniew; Pactwa, Katarzyna; Wozniak, Justyna; Dudek, Michal
2017-12-01
Geological variability is one of the main factors influencing the viability of mining investment projects and the technical risk of geology projects. To date, analyses of the economic viability of new extraction fields for the KGHM Polska Miedź S.A. underground copper mine at the Fore-Sudetic Monocline have been performed under the assumption of a constant, averaged content of useful elements. The research presented in this article aims to verify the value of production from copper and silver ore for the same economic background using variable cash flows resulting from the local variability of useful elements. Furthermore, the ore economic model is examined for a significant difference between the model value estimated using a linear correlation between useful element content and the height of the mine face, and the approach in which the correlation of model parameters is based on the copula best matched according to the information capacity criterion. The use of a copula allows the simulation to take multivariate dependencies into account simultaneously, thereby giving a better reflection of the dependency structure, which a linear correlation does not capture. Calculation results of the economic model used for deposit value estimation indicate that the correlation between copper and silver estimated with the use of a copula generates higher variation of the possible project value, as compared to modelling the correlation with a linear correlation. The average deposit value remains unchanged.
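To illustrate the copula-versus-linear-correlation point, the sketch below samples copper and silver grades joined by a Gaussian copula and propagates them into a toy per-tonne value; the marginals, correlation coefficient and prices are invented placeholders, not the KGHM data or the information-capacity-matched copula used in the study.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, rho = 100_000, 0.6            # rho is an illustrative Cu-Ag dependence, not mine data

# Gaussian copula: correlated normals -> uniforms -> arbitrary marginals
z = rng.multivariate_normal([0, 0], [[1, rho], [rho, 1]], size=n)
u = stats.norm.cdf(z)
cu = stats.lognorm(s=0.4, scale=1.8).ppf(u[:, 0])   # hypothetical Cu grade, %
ag = stats.gamma(a=2.0, scale=25.0).ppf(u[:, 1])    # hypothetical Ag grade, g/t

value = 60.0 * cu + 0.55 * ag     # toy per-tonne value; prices are placeholders
print(value.mean(), value.std())  # the dependence structure widens the value spread
```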
Mungovan, Sean F; Peralta, Paula J; Gass, Gregory C; Scanlan, Aaron T
2018-04-12
To examine the test-retest reliability and criterion validity of a high-intensity, netball-specific fitness test. Repeated measures, within-subject design. Eighteen female netball players competing in an international competition completed a trial of the Net-Test, which consists of 14 timed netball-specific movements. Players also completed a series of netball-relevant criterion fitness tests. Ten players completed an additional Net-Test trial one week later to assess test-retest reliability using intraclass correlation coefficient (ICC), typical error of measurement (TEM), and coefficient of variation (CV). The typical error of estimate expressed as CV and Pearson correlations were calculated between each criterion test and Net-Test performance to assess criterion validity. Five movements during the Net-Test displayed moderate ICC (0.84-0.90) and two movements displayed high ICC (0.91-0.93). Seven movements and heart rate taken during the Net-Test held low CV (<5%) with values ranging from 1.7 to 9.5% across measures. Total time (41.63±2.05s) during the Net-Test possessed low CV and significant (p<0.05) correlations with 10m sprint time (1.98±0.12s; CV=4.4%, r=0.72), 20m sprint time (3.38±0.19s; CV=3.9%, r=0.79), 505 Change-of-Direction time (2.47±0.08s; CV=2.0%, r=0.80), and maximum oxygen uptake (46.59±2.58 mL·kg⁻¹·min⁻¹; CV=4.5%, r=-0.66). The Net-Test possesses acceptable reliability for the assessment of netball fitness. Further, the high criterion validity for the Net-Test suggests a range of important netball-specific fitness elements are assessed in combination. Copyright © 2018 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
Mayorga-Vega, Daniel; Bocanegra-Parrilla, Raúl; Ornelas, Martha; Viciana, Jesús
2016-01-01
Objectives The main purpose of the present meta-analysis was to examine the criterion-related validity of the distance- and time-based walk/run tests for estimating cardiorespiratory fitness among apparently healthy children and adults. Materials and Methods Relevant studies were searched from seven electronic bibliographic databases up to August 2015 and through other sources. The Hunter-Schmidt psychometric meta-analysis approach was conducted to estimate the population criterion-related validity of the following walk/run tests: 5,000 m, 3 miles, 2 miles, 3,000 m, 1.5 miles, 1 mile, 1,000 m, ½ mile, 600 m, 600 yd, ¼ mile, 15 min, 12 min, 9 min, and 6 min. Results From the 123 included studies, a total of 200 correlation values were analyzed. The overall results showed that the criterion-related validity of the walk/run tests for estimating maximum oxygen uptake ranged from low to moderate (rp = 0.42–0.79), with the 1.5 mile (rp = 0.79, 0.73–0.85) and 12 min walk/run tests (rp = 0.78, 0.72–0.83) having the highest criterion-related validity among the distance- and time-based field tests, respectively. The present meta-analysis also showed that sex, age and maximum oxygen uptake level do not seem to affect the criterion-related validity of the walk/run tests. Conclusions When the evaluation of an individual's maximum oxygen uptake attained during a laboratory test is not feasible, the 1.5 mile and 12 min walk/run tests represent useful alternatives for estimating cardiorespiratory fitness. As in the assessment with any physical fitness field test, evaluators must be aware that the performance score of the walk/run field tests is simply an estimation and not a direct measure of cardiorespiratory fitness. PMID:26987118
NASA Astrophysics Data System (ADS)
Yurchenko, I.; Karakotin, I.; Kudinov, A.
2011-05-01
Minimizing the weight of the head fairing heat protection shield during the injection of a spacecraft through the dense layers of the atmosphere is a complicated task. The identification of the heat transfer coefficient on the heat protection shield surface during injection can be considered the primary task, to be solved with a certain accuracy in order to minimize the heat shield weight as well as to meet reliability requirements. The height of the roughness around the sonic point on the spherical nose tip of the head fairing has a great influence on the calculation of the heat transfer coefficient. As was found during flight tests, the roughness height makes it possible to formulate a boundary layer transition criterion for the head fairing in flight. The second task is therefore an assessment of how the roughness height influences the total incoming heat flux to the head fairing. The third task is associated with the correct implementation of the results of the first task, since boundary conditions change during a flight, for instance due to bubbles within the heat shield surface paint and ablation of the thermal protection. In this article we consider the results of flight tests carried out on launch vehicles, which allowed us to measure heat fluxes in flight and to estimate the dispersions of the heat transfer coefficient. An experimental-analytical procedure for defining the heat fluxes on LV head fairings is presented. The procedure includes: calculation of a general-purpose dimensionless heat transfer coefficient, the Nusselt number Nu_eff, based on the proposed effective temperature (T_eff) method, which allows the Nusselt number to be calculated for cylindrical surfaces along with the dispersions of the heat transfer coefficient; and a universal criterion of turbulent-laminar transition for blunted head fairings, the Reynolds number Re_ek = (ρ_e U_e k / μ_e)_TR = const, which gives the best correlation of all the flight experiment data processed per the Reda procedure for defining turbulent-laminar transition in the boundary layer. The criterion allows the time margins during which turbulent flow exists on the head fairing surfaces to be defined. It was found that when the high background disturbances of the free-stream flow present while the main LV engines are operating combine with the influence of the integrated roughness, the critical value of the Reynolds number is an order of magnitude lower than the values obtained in wind tunnels and in free flight. The influence of minimizing the height of the surface roughness near the sonic point on the head fairing nose has been estimated. It has been found that the criterion of turbulent-laminar transition for smooth head fairing elements, the Reynolds number, reaches the limit value of 200; this value is obtained from the momentum-thickness Reynolds number when the roughness height is close to zero. The turbulent-laminar flow transition therefore occurs earlier, with a decreased duration of exposure of the heat shield to high turbulent heat fluxes. This would allow the heat shield thickness to be decreased by up to 30%.
Turkish Version of Kolcaba's Immobilization Comfort Questionnaire: A Validity and Reliability Study.
Tosun, Betül; Aslan, Özlem; Tunay, Servet; Akyüz, Aygül; Özkan, Hüseyin; Bek, Doğan; Açıksöz, Semra
2015-12-01
The purpose of this study was to determine the validity and reliability of the Turkish version of the Immobilization Comfort Questionnaire (ICQ). The sample used in this methodological study consisted of 121 patients undergoing lower extremity arthroscopy in a training and research hospital. The validity study of the questionnaire assessed language validity, structural validity and criterion validity. Structural validity was evaluated via exploratory factor analysis. Criterion validity was evaluated by assessing the correlation between the visual analog scale (VAS) scores (i.e., the comfort and pain VAS scores) and the ICQ scores using Spearman's correlation test. The Kaiser-Meyer-Olkin coefficient and Bartlett's test of sphericity were used to determine the suitability of the data for factor analysis. Internal consistency was evaluated to determine reliability. The data were analyzed with SPSS version 15.00 for Windows. Descriptive statistics were presented as frequencies, percentages, means and standard deviations. A p value ≤ .05 was considered statistically significant. A moderate positive correlation was found between the ICQ scores and the VAS comfort scores; a moderate negative correlation was found between the ICQ and the VAS pain measures in the criterion validity analysis. Cronbach α values of .75 and .82 were found for the first and second measurements, respectively. The findings of this study reveal that the ICQ is a valid and reliable tool for assessing the comfort of patients in Turkey who are immobilized because of lower extremity orthopedic problems. Copyright © 2015. Published by Elsevier B.V.
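For readers unfamiliar with the reliability statistics used above, the following sketch shows how Cronbach's alpha and a Spearman criterion-validity correlation are typically computed; the item count and all scores are simulated placeholders, not the ICQ data.

```python
import numpy as np
from scipy import stats

def cronbach_alpha(items):
    """items: (n_respondents, n_items) array of questionnaire scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var_sum / total_var)

# Hypothetical questionnaire item scores and VAS comfort ratings, for illustration only
rng = np.random.default_rng(2)
icq = rng.integers(1, 7, size=(121, 28))          # 121 patients; item count assumed
vas_comfort = rng.uniform(0, 10, size=121)

print("alpha =", round(cronbach_alpha(icq), 2))
rho, p = stats.spearmanr(icq.sum(axis=1), vas_comfort)   # criterion-validity check
print("Spearman rho =", round(rho, 2))
```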
Decohesion models informed by first-principles calculations: The ab initio tensile test
NASA Astrophysics Data System (ADS)
Enrique, Raúl A.; Van der Ven, Anton
2017-10-01
Extreme deformation and homogeneous fracture can be readily studied via ab initio methods by subjecting crystals to numerical "tensile tests", where the energies of locally stable crystal configurations corresponding to elongated and fractured states are evaluated by means of density functional theory calculations. The information obtained can then be used to construct traction curves of cohesive zone models in order to address fracture at the macroscopic scale. In this work, we perform an in-depth analysis of traction curves and of how ab initio calculations must be interpreted to rigorously parameterize an atomic-scale cohesive zone model, using crystalline Ag as an example. Our analysis of traction curves reveals the existence of two qualitatively distinct decohesion criteria: (i) an energy criterion whereby the released elastic energy equals the energy cost of creating two new surfaces and (ii) an instability criterion that occurs at a higher, size-independent stress than that of the energy criterion. We find that increasing the size of the simulation cell renders parts of the traction curve inaccessible to ab initio calculations involving the uniform decohesion of the crystal. We also find that the separation distance below which a crack heals is not a material parameter, as has been proposed in the past. Finally, we show that a large energy barrier separates the uniformly stressed crystal from the decohered crystal, resolving a paradox predicted by a scaling law based on the energy criterion that implies that large crystals will decohere under vanishingly small stresses. This work clarifies confusion in the literature as to how a cohesive zone model is to be parameterized with ab initio "tensile tests" in the presence of internal relaxations.
Lecloux, André J; Atluri, Rambabu; Kolen'ko, Yury V; Deepak, Francis Leonard
2017-10-12
The first part of this study was dedicated to the modelling of the influence of particle shape, porosity and particle size distribution on the volume specific surface area (VSSA) values in order to check the applicability of this concept to the identification of nanomaterials according to the European Commission Recommendation. In this second part, experimental VSSA values are obtained for various samples from nitrogen adsorption isotherms and these values were used as a screening tool to identify and classify nanomaterials. These identification results are compared to the identification based on the 50% of particles with a size below 100 nm criterion applied to the experimental particle size distributions obtained by analysis of electron microscopy images on the same materials. It is concluded that the experimental VSSA values are able to identify nanomaterials, without false negative identification, if they have a mono-modal particle size, if the adsorption data cover the relative pressure range from 0.001 to 0.65 and if a simple, qualitative image of the particles by transmission or scanning electron microscopy is available to define their shape. The experimental conditions to obtain reliable adsorption data as well as the way to analyze the adsorption isotherms are described and discussed in some detail in order to help the reader in using the experimental VSSA criterion. To obtain the experimental VSSA values, the BET surface area can be used for non-porous particles, but for porous, nanostructured or coated nanoparticles, only the external surface of the particles, obtained by a modified t-plot approach, should be considered to determine the experimental VSSA and to avoid false positive identification of nanomaterials, only the external surface area being related to the particle size. Finally, the availability of experimental VSSA values together with particle size distributions obtained by electron microscopy gave the opportunity to check the representativeness of the two models described in the first part of this study. They were also used to calculate the VSSA values and these calculated values were compared to the experimental results. For narrow particle size distributions, both models give similar VSSA values quite comparable to the experimental ones. But when the particle size distribution broadens or is of multi-bimodal shape, as theoretically predicted, one model leads to VSSA values higher than the experimental ones while the other most often leads to VSSA values lower than the experimental ones. The experimental VSSA approach then appears as a reliable, simple screening tool to identify nano and non-nano-materials. The modelling approach cannot be used as a formal identification tool but could be useful to screen for potential effects of shape, polydispersity and size, for example to compare various possible nanoforms.
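As an illustration of the screening rule discussed above, a VSSA estimate can be formed from the external (t-plot) surface area and the skeletal density and compared with the 60 m2/cm3 threshold of the European Commission Recommendation; the numbers below are placeholders, not the paper's samples, and the density/area inputs are assumptions for the sketch.

```python
def vssa_from_external_area(external_ssa_m2_per_g, skeletal_density_g_per_cm3):
    """Volume-specific surface area in m^2/cm^3 from the external (t-plot) surface area."""
    return external_ssa_m2_per_g * skeletal_density_g_per_cm3

# Illustrative silica-like powder; values are placeholders, not measured samples.
vssa = vssa_from_external_area(external_ssa_m2_per_g=35.0, skeletal_density_g_per_cm3=2.2)
print(vssa, "m2/cm3 ->", "nanomaterial" if vssa > 60.0 else "not identified as nano")
```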
Paul V. Ellefson; Calder M. Hibbard; Michael A. Kilgore; James E. Granskog
2005-01-01
This review looks at the Nation's legal, institutional, and economic capacity to promote forest conservation and sustainable resource management. It focuses on 20 indicators of Criterion Seven of the so-called Montreal Process and involves an extensive search and synthesis of information from a variety of sources. It identifies ways to fill information gaps and improve...
NASA Technical Reports Server (NTRS)
Panontin, Tina L.; Sheppard, Sheri D.
1994-01-01
The use of small laboratory specimens to predict the integrity of large, complex structures relies on the validity of single parameter fracture mechanics. Unfortunately, the constraint loss associated with large scale yielding, whether in a laboratory specimen because of its small size or in a structure because it contains shallow flaws loaded in tension, can cause the breakdown of classical fracture mechanics and the loss of transferability of critical, global fracture parameters. Although the issue of constraint loss can be eliminated by testing actual structural configurations, such an approach can be prohibitively costly. Hence, a methodology that can correct global fracture parameters for constraint effects is desirable. This research uses micromechanical analyses to define the relationship between global, ductile fracture initiation parameters and constraint in two specimen geometries (SECT and SECB with varying a/w ratios) and one structural geometry (circumferentially cracked pipe). Two local fracture criteria corresponding to ductile fracture micromechanisms are evaluated: a constraint-modified, critical strain criterion for void coalescence proposed by Hancock and Cowling and a critical void ratio criterion for void growth based on the Rice and Tracey model. Crack initiation is assumed to occur when the critical value in each case is reached over some critical length. The primary material of interest is A516-70, a high-hardening pressure vessel steel sensitive to constraint; however, a low-hardening structural steel that is less sensitive to constraint is also being studied. Critical values of local fracture parameters are obtained by numerical analysis and experimental testing of circumferentially notched tensile specimens of varying constraint (e.g., notch radius). These parameters are then used in conjunction with large strain, large deformation, two- and three-dimensional finite element analyses of the geometries listed above to predict crack initiation loads and to calculate the associated (critical) global fracture parameters. The loads are verified experimentally, and microscopy is used to measure pre-crack length, crack tip opening displacement (CTOD), and the amount of stable crack growth. Results for A516-70 steel indicate that the constraint-modified, critical strain criterion with a critical length approximately equal to the grain size (0.0025 inch) provides accurate predictions of crack initiation. The critical void growth criterion is shown to considerably underpredict crack initiation loads with the same critical length. The relationship between the critical value of the J-integral for ductile crack initiation and crack depth for SECT and SECB specimens has been determined using the constraint-modified, critical strain criterion, demonstrating that this micromechanical model can be used to correct in-plane constraint effects due to crack depth and bending vs. tension loading. Finally, the relationship developed for the SECT specimens is used to predict the behavior of circumferentially cracked pipe specimens.
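For reference, the two local criteria named above are commonly written as follows (a textbook form, not necessarily the exact calibration used in this work). The Rice-Tracey void-growth criterion assumes coalescence once the accumulated void growth reaches a critical value,

```latex
\ln\!\left(\frac{R}{R_0}\right) \;=\; 0.283 \int_0^{\bar{\varepsilon}_p} \exp\!\left(\frac{3\,\sigma_m}{2\,\sigma_{eq}}\right) d\bar{\varepsilon}_p \;\ge\; \ln\!\left(\frac{R}{R_0}\right)_{\!c},
```

while the constraint-modified critical strain criterion, in the Hancock-Mackenzie/Cowling form, lets the failure strain decay with stress triaxiality,

```latex
\bar{\varepsilon}_f \;=\; \alpha \,\exp\!\left(-\frac{3\,\sigma_m}{2\,\sigma_{eq}}\right),
```

where σ_m is the mean (hydrostatic) stress and σ_eq the equivalent stress; in either case crack initiation is assumed once the criterion is met over the critical length (reported above as approximately the grain size).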
Criterion-Focused Approach to Reducing Adverse Impact in College Admissions
ERIC Educational Resources Information Center
Sinha, Ruchi; Oswald, Frederick; Imus, Anna; Schmitt, Neal
2011-01-01
The current study examines how using a multidimensional battery of predictors (high-school grade point average (GPA), SAT/ACT, and biodata), and weighting the predictors based on the different values institutions place on various student performance dimensions (college GPA, organizational citizenship behaviors (OCBs), and behaviorally anchored…
Introducing the Accounting Equation with M&M's®
ERIC Educational Resources Information Center
Scofield, Barbara W.; Dye, Wilma
2009-01-01
On the first day of Principles of Accounting classes, students learn the fundamental accounting equation from which all financial accounting practice emerges. The accounting equation is the criterion by which companies are valued and by which company performance is measured. This activity simplifies assets, liabilities, and owners' equity to the…
On Measuring Quantitative Interpretations of Reasonable Doubt
ERIC Educational Resources Information Center
Dhami, Mandeep K.
2008-01-01
Beyond reasonable doubt represents a probability value that acts as the criterion for conviction in criminal trials. I introduce the membership function (MF) method as a new tool for measuring quantitative interpretations of reasonable doubt. Experiment 1 demonstrated that three different methods (i.e., direct rating, decision theory based, and…
[Occupational risk as a criterion determining economic responsibility of employers].
Subbotin, V V; Tkachev, V V
2003-01-01
The authors suggested a new method for calculating discounts and surcharges and the amount of the insurance contribution, based on differentiation among individual insurers rather than among economic branches. The occupational risk class should be set according to previous results, taking into account the work safety parameters described in the article.
Criterion 8: Urban and community forests
Stephen R. Shifley; Francisco X. Aguilar; Nianfu Song; Susan I. Stewart; David J. Nowak; Dale D. Gormanson; W. Keith Moser; Sherri Wormstead; Eric J. Greenfield
2012-01-01
Urban and community forests are the trees and forests found in cities, towns, villages, and communities. This category of forest includes both forested stands and trees along streets, in residential lots, and parks. These trees within cities and communities provide many ecosystem services and values to both urban and rural populations.
Derivative Free Gradient Projection Algorithms for Rotation
ERIC Educational Resources Information Center
Jennrich, Robert I.
2004-01-01
A simple modification substantially simplifies the use of the gradient projection (GP) rotation algorithms of Jennrich (2001, 2002). These algorithms require subroutines to compute the value and gradient of any specific rotation criterion of interest. The gradient can be difficult to derive and program. It is shown that using numerical gradients…
We evaluated several lines of evidence to identify bedded fine sediment levels that should protect and maintain self-sustaining populations of native sediment-sensitive aquatic species in the western US. To identify these potential criterion values for streambed sediments ≤0.06 ...
A CRITERION PAPER ON PARAMETERS OF EDUCATION. FINAL REVISION.
ERIC Educational Resources Information Center
MEIERHENRY, W. C.
This position paper defines aspects of innovation in education. The appropriateness of planned change and the legitimacy of function of planned change are discussed. Primary elements of innovation include the substitution of one material or process for another, the restructuring of teacher assignments, value changes with respect to teaching…
Andrews, Arthur R.; Bridges, Ana J.; Gomez, Debbie
2014-01-01
Purpose The aims of the study were to evaluate the orthogonality of acculturation for Latinos. Design Regression analyses were used to examine acculturation in two Latino samples (N = 77; N = 40). In a third study (N = 673), confirmatory factor analyses compared unidimensional and bidimensional models. Method Acculturation was assessed with the ARSMA-II (Studies 1 and 2), and language proficiency items from the Children of Immigrants Longitudinal Study (Study 3). Results In Studies 1 and 2, the bidimensional model accounted for slightly more variance (R² = .11 and .21, respectively) than the unidimensional model (R² = .10 and .19, respectively). In Study 3, the bidimensional model evidenced better fit (Akaike information criterion = 167.36) than the unidimensional model (Akaike information criterion = 1204.92). Discussion/Conclusions Acculturation is multidimensional. Implications for Practice Care providers should examine acculturation as a bidimensional construct. PMID:23361579
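For context, the Akaike information criterion compared above is computed from the maximized log-likelihood and the number of free parameters; a minimal sketch with placeholder values (not the study's fit results):

```python
def aic(log_likelihood, n_params):
    """Akaike information criterion: lower values indicate a better fit/complexity trade-off."""
    return 2 * n_params - 2 * log_likelihood

# Hypothetical log-likelihoods and parameter counts for two competing factor models
aic_unidimensional = aic(log_likelihood=-600.0, n_params=4)
aic_bidimensional = aic(log_likelihood=-80.0, n_params=8)
print(aic_unidimensional, aic_bidimensional, "-> the lower AIC is preferred")
```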
NASA Astrophysics Data System (ADS)
Liu, Sijia; Sa, Ruhan; Maguire, Orla; Minderman, Hans; Chaudhary, Vipin
2015-03-01
Cytogenetic abnormalities are important diagnostic and prognostic criteria for acute myeloid leukemia (AML). A flow cytometry-based imaging approach for FISH in suspension (FISH-IS) was established that enables the automated analysis of a several-log-magnitude higher number of cells compared to microscopy-based approaches. Rotational positioning of the imaged cells can cause fluorescent spots to overlap, leading to discordance in spot counts. As a solution to the counting error arising from overlapping spots, a Gaussian Mixture Model (GMM) based classification method is proposed in this study. The Akaike information criterion (AIC) and Bayesian information criterion (BIC) of the GMM are used as global image features for this classification method. Using a Random Forest classifier, the results show that the proposed method is able to detect closely overlapping spots which cannot be separated by existing image-segmentation-based spot detection methods. The experimental results show that the proposed method yields a significant improvement in spot counting accuracy.
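A minimal sketch of the feature-extraction idea described above, assuming per-cell spot-intensity samples: Gaussian mixtures with increasing numbers of components are fitted and their AIC/BIC values collected as a feature vector for a downstream classifier (scikit-learn API; the data are simulated, not FISH-IS images).

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def gmm_spot_features(intensities, max_components=4):
    """AIC/BIC of 1..max_components Gaussian mixtures fitted to per-cell intensity samples.
    The returned vector can feed a downstream classifier such as RandomForestClassifier."""
    X = np.asarray(intensities, dtype=float).reshape(-1, 1)
    feats = []
    for k in range(1, max_components + 1):
        gm = GaussianMixture(n_components=k, random_state=0).fit(X)
        feats += [gm.aic(X), gm.bic(X)]
    return np.array(feats)

rng = np.random.default_rng(3)
# toy intensity sample mimicking two partially overlapping spots
sample = np.concatenate([rng.normal(100, 8, 300), rng.normal(118, 8, 300)])
print(gmm_spot_features(sample))
```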
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jesus, J.F.; Valentim, R.; Andrade-Oliveira, F., E-mail: jfjesus@itapeva.unesp.br, E-mail: valentim.rodolfo@unifesp.br, E-mail: felipe.oliveira@port.ac.uk
Creation of Cold Dark Matter (CCDM), in the context of Einstein Field Equations, produces a negative pressure term which can be used to explain the accelerated expansion of the Universe. In this work we tested six different spatially flat models for matter creation using statistical criteria, in light of SNe Ia data: Akaike Information Criterion (AIC), Bayesian Information Criterion (BIC) and Bayesian Evidence (BE). These criteria allow comparison of models considering goodness of fit and number of free parameters, penalizing excess complexity. We find that the JO model is slightly favoured over the LJO/ΛCDM model; however, neither of these, nor the Γ = 3αH0 model, can be discarded by the current analysis. Three other scenarios are discarded either because of poor fitting or because of an excess of free parameters. A method of increasing the Bayesian evidence through reparameterization, in order to reduce parameter degeneracy, is also developed.
Bayesian analysis of CCDM models
NASA Astrophysics Data System (ADS)
Jesus, J. F.; Valentim, R.; Andrade-Oliveira, F.
2017-09-01
Creation of Cold Dark Matter (CCDM), in the context of Einstein Field Equations, produces a negative pressure term which can be used to explain the accelerated expansion of the Universe. In this work we tested six different spatially flat models for matter creation using statistical criteria, in light of SNe Ia data: Akaike Information Criterion (AIC), Bayesian Information Criterion (BIC) and Bayesian Evidence (BE). These criteria allow comparison of models considering goodness of fit and number of free parameters, penalizing excess complexity. We find that the JO model is slightly favoured over the LJO/ΛCDM model; however, neither of these, nor the Γ = 3αH0 model, can be discarded by the current analysis. Three other scenarios are discarded either because of poor fitting or because of an excess of free parameters. A method of increasing the Bayesian evidence through reparameterization, in order to reduce parameter degeneracy, is also developed.
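In SNe Ia model comparison of this kind, the information criteria are usually evaluated from the minimum chi-square under a Gaussian-likelihood assumption; a short sketch with placeholder numbers (not the paper's fits):

```python
import numpy as np

def aic_bic_from_chi2(chi2_min, n_params, n_data):
    """Information criteria as commonly used in cosmological model selection,
    assuming Gaussian errors so that -2 ln L_max = chi2_min."""
    aic = chi2_min + 2 * n_params
    bic = chi2_min + n_params * np.log(n_data)
    return aic, bic

# Placeholder numbers for two hypothetical matter-creation models fitted to the same sample
print(aic_bic_from_chi2(chi2_min=562.2, n_params=2, n_data=580))
print(aic_bic_from_chi2(chi2_min=561.9, n_params=3, n_data=580))
```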
Spectra of empirical autocorrelation matrices: A random-matrix-theory-inspired perspective
NASA Astrophysics Data System (ADS)
Jamali, Tayeb; Jafari, G. R.
2015-07-01
We construct an autocorrelation matrix of a time series and analyze it based on the random-matrix theory (RMT) approach. The autocorrelation matrix is capable of extracting information which is not easily accessible by direct analysis of the autocorrelation function. In order to draw precise conclusions from the information extracted from the autocorrelation matrix, the results must first be evaluated; in other words, they need to be compared with some criterion that provides a basis for the most suitable and applicable conclusions. In the context of the present study, the criterion is chosen to be the well-known fractional Gaussian noise (fGn). We illustrate the applicability of our method in the context of stock markets: despite the non-Gaussianity of stock-market returns, a remarkable agreement with the fGn is achieved.
Hierarchical semi-numeric method for pairwise fuzzy group decision making.
Marimin, M; Umano, M; Hatono, I; Tamura, H
2002-01-01
Gradual improvements to a single-level semi-numeric method, i.e., the representation of linguistic label preferences by fuzzy-set computation for pairwise fuzzy group decision making, are summarized. The method is extended to solve multiple-criteria, hierarchically structured pairwise fuzzy group decision-making problems. The problems are hierarchically structured into focus, criteria, and alternatives. Decision makers express their evaluations of criteria and alternatives based on each criterion by using linguistic labels. The labels are converted into and processed as triangular fuzzy numbers (TFNs). Evaluations of criteria yield relative criteria weights. Evaluations of the alternatives, based on each criterion, yield a degree of preference for each alternative or a degree of satisfaction for each preference value. By using a neat ordered weighted average (OWA) or a fuzzy weighted average operator, solutions obtained based on each criterion are aggregated into final solutions. The hierarchical semi-numeric method is suitable for solving larger and more complex pairwise fuzzy group decision-making problems. The proposed method has been verified and applied to solve some real cases and is compared to Saaty's (1996) analytic hierarchy process (AHP) method.
NASA Astrophysics Data System (ADS)
Lin, Yi-Kuei; Yeh, Cheng-Ta
2013-05-01
From the perspective of supply chain management, the selected carrier plays an important role in freight delivery. This article proposes a new criterion of multi-commodity reliability and optimises the carrier selection based on such a criterion for logistics networks with routes and nodes, over which multiple commodities are delivered. Carrier selection concerns the selection of exactly one carrier to deliver freight on each route. The capacity of each carrier has several available values associated with a probability distribution, since some of a carrier's capacity may be reserved for various orders. Therefore, the logistics network, given any carrier selection, is a multi-commodity multi-state logistics network. Multi-commodity reliability is defined as a probability that the logistics network can satisfy a customer's demand for various commodities, and is a performance indicator for freight delivery. To solve this problem, this study proposes an optimisation algorithm that integrates genetic algorithm, minimal paths and Recursive Sum of Disjoint Products. A practical example in which multi-sized LCD monitors are delivered from China to Germany is considered to illustrate the solution procedure.
De Cocker, K; Cardon, G; De Bourdeaudhuij, I
2006-01-01
Objectives To evaluate if inexpensive Stepping Meters are valid in counting steps in adults in free living conditions. Methods For six days, 35 healthy volunteers wore a criterion Yamax Digiwalker and five Stepping Meters every day until all 973 pedometers had been tested. Steps were recorded daily, and the differences between counts from the Digiwalker and the Stepping Meter were expressed as a percentage of the valid value of the Digiwalker step counts. The criterion used to determine if a Stepping Meter was valid was a maximum deviation of 10% from the Digiwalker step counts. Results A total of 252 (25.9%) Stepping Meters met the criterion, whereas 74.1% made an overestimation or underestimation of more than 10%. In more than one third (36.6%) of the invalid Stepping Meters, the deviation was greater than 50%. Most (64.8%) of the invalid pedometers overestimated the actual steps taken. Conclusions Inexpensive Stepping Meters cannot be used in community interventions as they will give participants the wrong message. PMID:16790485
Thermal or nonthermal? That is the question for ultrafast spin switching in GdFeCo.
Zhang, G P; George, Thomas F
2013-09-11
GdFeCo is among the most interesting magnets for producing laser-induced femtosecond magnetism, where light can switch its spin moment from one direction to another. This paper aims to set a criterion for the thermal/nonthermal mechanism: we propose to use the Fermi-Dirac distribution function as a reliable criterion. A precise value for the thermalization time is needed, and through a two-level model, we show that since there is no direct connection between the laser helicity and the definition of thermal/nonthermal processes, the helicity is a poor criterion for differentiating a thermal from a nonthermal process. In addition, we propose a four-site model system (Gd2Fe2) for investigating the transient ferromagnetic ordering between Gd and Fe ions. We find that states of two different kinds can allow such an ordering. One state is a pure ferromagnetic state with ferromagnetic ordering among all the ions, and the other is the short-ranged ferromagnetic ordering of a pair of Gd and Fe ions.
Shuhaimi-Othman, M.; Nadzifah, Y.; Nur-Amalina, R.; Umirah, N. S.
2012-01-01
Freshwater quality criteria for iron (Fe), lead (Pb), nickel (Ni), and zinc (Zn) were developed with particular reference to aquatic biota in Malaysia, and based on USEPA's guidelines. Acute toxicity tests were performed on eight different freshwater domestic species in Malaysia which were Macrobrachium lanchesteri (prawn), two fish: Poecilia reticulata and Rasbora sumatrana, Melanoides tuberculata (snail), Stenocypris major (ostracod), Chironomus javanus (midge larvae), Nais elinguis (annelid), and Duttaphrynus melanostictus (tadpole) to determine 96 h LC50 values for Fe, Pb, Ni, and Zn. The final acute value (FAV) for Fe, Pb, Ni, and Zn were 74.5, 17.0, 165, and 304.9 μg L−1, respectively. Using an estimated acute-to-chronic ratio (ACR) of 8.3, the value for final chronic value (FCV) was derived. Based on FAV and FCV, a criterion maximum concentration (CMC) and a criterion continuous concentration (CCC) for Fe, Pb, Ni, and Zn that are 37.2, 8.5, 82.5, and 152.4 μg L−1 and 9.0, 2.0, 19.9, and 36.7 μg L−1, respectively, were derived. The results of this study provide useful data for deriving national or local water quality criteria for Fe, Pb, Ni, and Zn based on aquatic biota in Malaysia. Based on LC50 values, this study indicated that N. elinguis, M. lanchesteri, N. elinguis, and R. sumatrana were the most sensitive to Fe, Pb, Ni, and Zn, respectively. PMID:22919358
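The derivation chain reported above (FAV → CMC, and via the ACR → FCV = CCC) can be reproduced in a few lines; the relations CMC = FAV/2 and CCC = FAV/ACR follow the USEPA-style procedure described in the abstract, and they reproduce the quoted values up to rounding.

```python
def acute_chronic_criteria(fav_ug_per_l, acr=8.3):
    """CMC = FAV / 2 and CCC = FCV = FAV / ACR, as described in the abstract."""
    cmc = fav_ug_per_l / 2.0
    ccc = fav_ug_per_l / acr
    return round(cmc, 1), round(ccc, 1)

for metal, fav in {"Fe": 74.5, "Pb": 17.0, "Ni": 165.0, "Zn": 304.9}.items():
    print(metal, acute_chronic_criteria(fav))   # (CMC, CCC) in ug/L
```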
Bus, Sicco A.; Haspels, Rob; Busch-Westbroek, Tessa E.
2011-01-01
OBJECTIVE Therapeutic footwear for diabetic foot patients aims to reduce the risk of ulceration by relieving mechanical pressure on the foot. However, footwear efficacy is generally not assessed in clinical practice. The purpose of this study was to assess the value of in-shoe plantar pressure analysis to evaluate and optimize the pressure-reducing effects of diabetic therapeutic footwear. RESEARCH DESIGN AND METHODS Dynamic in-shoe plantar pressure distribution was measured in 23 neuropathic diabetic foot patients wearing fully customized footwear. Regions of interest (with peak pressure >200 kPa) were selected and targeted for pressure optimization by modifying the shoe or insole. After each of a maximum of three rounds of modifications, the effect on in-shoe plantar pressure was measured. Successful optimization was achieved with a peak pressure reduction of >25% (criterion A) or below an absolute level of 200 kPa (criterion B). RESULTS In 35 defined regions, mean peak pressure was significantly reduced from 303 (SD 77) to 208 (46) kPa after an average 1.6 rounds of footwear modifications (P < 0.001). This result constitutes a 30.2% pressure relief (range 18–50% across regions). All regions were successfully optimized: 16 according to criterion A, 7 to criterion B, and 12 to criterion A and B. Footwear optimization lasted on average 53 min. CONCLUSIONS These findings suggest that in-shoe plantar pressure analysis is an effective and efficient tool to evaluate and guide footwear modifications that significantly reduce pressure in the neuropathic diabetic foot. This result provides an objective approach to instantly improve footwear quality, which should reduce the risk for pressure-related plantar foot ulcers. PMID:21610125
Forecasting approaches to the Mekong River
NASA Astrophysics Data System (ADS)
Plate, E. J.
2009-04-01
Hydrologists distinguish between flood forecasts, which are concerned with events of the immediate future, and flood predictions, which are concerned with events that are possible, but whose date of occurrence is not determined. Although in principle both involve the determination of runoff from rainfall, the analytical approaches differ because of different objectives. The differences between the two approaches will be discussed, starting with an analysis of the forecasting process. The Mekong River in south-east Asia is used as an example. Prediction is defined as forecast for a hypothetical event, such as the 100-year flood, which is usually sufficiently specified by its magnitude and its probability of occurrence. It forms the basis for designing flood protection structures and risk management activities. The method for determining these quantities is hydrological modeling combined with extreme value statistics, today usually applied both to rainfall events and to observed river discharges. A rainfall-runoff model converts extreme rainfall events into extreme discharges, which at certain gage points along a river are calibrated against observed discharges. The quality of the model output is assessed against the mean value by means of the Nash-Sutcliffe quality criterion. The result of this procedure is a design hydrograph (or a family of design hydrographs) which are used as inputs into a hydraulic model, which converts the hydrograph into design water levels according to the hydraulic situation of the location. The accuracy of making a prediction in this sense is not particularly high: hydrologists know that the 100-year flood is a statistical quantity which can be estimated only within comparatively wide error bounds, and the hydraulics of a river site, in particular under conditions of heavy sediment loads has many uncertainties. Safety margins, such as additional freeboards are arranged to compensate for the uncertainty of the prediction. Forecasts, on the other hand, have as objective to obtain an accurate hydrograph of the near future. The method by means of which this is done is not as important as the accuracy of the forecast. A mathematical rainfall-runoff model is not necessarily a good forecast model. It has to be very carefully designed, and in many cases statistical models are found to give better results than mathematical models. Forecasters have the advantage of knowing the course of the hydrographs up to the point in time where forecasts have to be made. Therefore, models can be calibrated on line against the hydrograph of the immediate past. To assess the quality of a forecast, the quality criterion should not be based on the mean value, as does the Nash-Sutcliffe criterion, but should be based on the best forecast given the information up to the forecast time. Without any additional information, the best forecast when only the present day value is known is to assume a no-change scenario, i.e. to assume that the present value does not change in the immediate future. For the Mekong there exists a forecasting system which is based on a rainfall-runoff model operated by the Mekong River Commission. This model is found not to be adequate for forecasting for periods longer than one or two days ahead. Improvements are sought through two approaches: a strictly deterministic rainfall-runoff model, and a strictly statistical model based on regression with upstream stations. 
The two approaches are compared, and suggestions are made on how best to combine the advantages of both. This requires that due consideration be given to critical hydraulic conditions of the river at and between the gauging stations. Critical situations occur in two ways: when the river overtops its banks, in which case the rainfall-runoff model is incomplete unless overflow losses are considered, and at the confluences with tributaries. Of particular importance is the role of the large Tonle Sap Lake, which dampens the hydrograph downstream of Phnom Penh. The effect of these components of river hydraulics on forecasting accuracy will be assessed.
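To make the benchmark argument concrete, here is a minimal sketch contrasting the Nash-Sutcliffe criterion (which scores a forecast against the observed mean) with a persistence-based criterion that scores it against the no-change forecast, the reference the text argues is appropriate for short forecast lead times; the function names are illustrative, not part of the Mekong forecasting system.

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """Skill relative to the mean of the observations (the classical criterion)."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def persistence_index(obs, sim, lead=1):
    """Same skill score, but benchmarked against the no-change forecast obs[t - lead]."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    naive = obs[:-lead]
    return 1 - np.sum((obs[lead:] - sim[lead:]) ** 2) / np.sum((obs[lead:] - naive) ** 2)
```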
A damage mechanics based general purpose interface/contact element
NASA Astrophysics Data System (ADS)
Yan, Chengyong
Most microelectronics packaging structures consist of layered substrates connected with bonding materials, such as solder or epoxy. Predicting the thermomechanical behavior of these multilayered structures is a challenging task in electronic packaging engineering. In a layered structure the most complex parts are always the interfaces between the substrates, and simulating the thermo-mechanical behavior of such interfaces is the main theme of this dissertation. The most commonly used solder material, the Pb-Sn alloy, has a very low melting temperature (180°C), so the material demonstrates highly viscous behavior and creep usually dominates the failure mechanism. Hence, the theory of viscoplasticity is adapted to describe the constitutive behavior. In a multilayered assembly each layer has a different coefficient of thermal expansion; under thermal cycling, due to the heat dissipated from circuits, interfaces and interconnects experience low-cycle fatigue. Presently, the state-of-the-art damage mechanics model used for fatigue life prediction is based on the Kachanov (1986) continuum damage model. This model uses plastic strain as the damage criterion. Since plastic strain is a stress-path-dependent quantity, the criterion does not yield unique damage values for the same state of stress. In this dissertation a new damage evolution equation based on the second law of thermodynamics is proposed. The new criterion is based on the entropy of the system and yields unique damage values for all stress paths to the final state of stress. In the electronics industry, there is a strong desire to develop fatigue-free interconnections. The proposed interface/contact element can also simulate the behavior of fatigue-free Z-direction thin-film interconnections as well as traditional layered interconnects. The proposed interface element can simulate the behavior of a bonded interface or of an unbonded sliding interface, also called a contact element. The proposed element was verified against laboratory test data presented in the literature, and the results demonstrate that the proposed element and the damage law perform very well. The most important scientific contribution of this dissertation is the proposed damage criterion based on the second law of thermodynamics and the entropy of the system. The proposed general-purpose interface/contact element is another contribution of this research. Compared to previous ad hoc interface elements proposed in the literature, the new one is much more powerful and includes creep, plastic deformation, sliding, temperature, damage, cyclic behavior and fatigue life in a unified formulation.
Hernandez, J E; Epstein, L D; Rodriguez, M H; Rodriguez, A D; Rejmankova, E; Roberts, D R
1997-03-01
We propose the use of generalized tree models (GTMs) to analyze data from entomological field studies. Generalized tree models can be used to characterize environments with different mosquito breeding capacity. A GTM simultaneously analyzes a set of predictor variables (e.g., vegetation coverage) in relation to a response variable (e.g., counts of Anopheles albimanus larvae), and how it varies with respect to a set of criterion variables (e.g., presence of predators). The algorithm produces a treelike graphical display with its root at the top and 2 branches stemming down from each node. At each node, conditions on the value of predictors partition the observations into subgroups (environments) in which the relation between response and criterion variables is most homogeneous.
Ganina, K P; Petunin, Iu I; Timoshenko, Ia G
1989-01-01
A method for quantitative analysis of epithelial cell nuclear polymorphism was suggested, viz., identification of the general statistical population using Petunin's criterion. This criterion was employed to assess the heterogeneity of the visible surface of interphase epithelial cell nuclei and to assay the nuclear DNA level in fibroadenomatous hyperplasia and cancer of the breast. The heterogeneity index (h), alongside other parameters, appeared useful for quantitative assessment of the disease: heterogeneity index values in the range 0.1-0.4 point to pronounced heterogeneity of the epithelial cell nucleus surface and DNA level and are suggestive of malignant transformation of the tissue, whereas benign proliferation of the epithelium is usually characterized by 0.4 < h ≤ 0.9.
NASA Technical Reports Server (NTRS)
Tewari, Surendra N.; Trivedi, Rohit
1991-01-01
The development of steady-state periodic cellular arrays is one of the critical problems in the study of nonlinear pattern formation during directional solidification of binary alloys. The criterion which establishes the values of cell tip radius and spacing under given growth conditions is not known. Theoretical models, such as marginal stability and microscopic solvability, have been developed for the purely diffusive regime. However, the experimental conditions where cellular structures are stable are precisely the ones where convection effects are predominant. Thus, the critical data for a meaningful evaluation of cellular array growth models can only be obtained by partial directional solidification and quenching experiments carried out in the low-gravity environment of space.
Optimization design of hydroturbine rotors according to the efficiency-strength criteria
NASA Astrophysics Data System (ADS)
Bannikov, D. V.; Yesipov, D. V.; Cherny, S. G.; Chirkov, D. V.
2010-12-01
The design of hydroturbine runners [1] is optimized using efficient methods for calculating the head loss in the entire flow passage of the turbine and the deformation state of the blade. Energy losses are found by modelling the spatial turbulent flow and by engineering semi-empirical formulae. The state of deformation is determined by solving the linear elasticity problem for an isolated blade under hydrodynamic pressure using the boundary element method. With the proposed system, the design problem for a turbine runner with a capacity of 640 MW providing a preset dependence of efficiency on the turbine operating mode (the efficiency criterion) is solved. The arising stresses do not exceed the critical value (the strength criterion).
Volcano plots in analyzing differential expressions with mRNA microarrays.
Li, Wentian
2012-12-01
A volcano plot displays an unstandardized signal (e.g. the log fold change) against a noise-adjusted/standardized signal (e.g. the t-statistic or −log10(p-value) from the t-test). We review the basic and interactive use of the volcano plot and its crucial role in understanding the regularized t-statistic. The joint filtering gene selection criterion based on regularized statistics has a curved discriminant line in the volcano plot, as compared to the two perpendicular lines for the "double filtering" criterion. This review attempts to provide a unifying framework for discussions on alternative measures of differential expression, improved methods for estimating variance, and the visual display of a microarray analysis result. We also discuss the possibility of applying volcano plots to other fields beyond microarrays.
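A minimal volcano-plot sketch with simulated expression data; the group sizes, effect sizes and the ±1 / p = 0.05 cut-off lines are arbitrary illustrations of the "double filtering" criterion, not the review's examples.

```python
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

rng = np.random.default_rng(4)
group_a = rng.normal(0.0, 1.0, size=(5000, 6))        # hypothetical log2 expression matrix
group_b = rng.normal(0.05, 1.0, size=(5000, 6))

log_fc = group_b.mean(axis=1) - group_a.mean(axis=1)  # already on a log2 scale here
t, p = stats.ttest_ind(group_b, group_a, axis=1)

plt.scatter(log_fc, -np.log10(p), s=4)
plt.axvline(1, ls="--"); plt.axvline(-1, ls="--")      # fold-change cut-offs
plt.axhline(-np.log10(0.05), ls="--")                  # significance cut-off
plt.xlabel("log2 fold change"); plt.ylabel("-log10(p)")
plt.show()
```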
Buckling of Low Arches or Curved Beams of Small Curvature
NASA Technical Reports Server (NTRS)
Fung, Y C; Kaplan, A
1952-01-01
A general solution, based on the classical buckling criterion, is given for the problem of buckling of low arches under a lateral loading acting toward the center of curvature. For a sinusoidal arch under sinusoidal loading, the critical load can be expressed exactly as a simple function of the beam dimension parameters. For other arch shapes and load distributions, approximate values of the critical load can be obtained by summing a few terms of a rapidly converging Fourier series. The effects of initial end thrust and axial and lateral elastic support are discussed. The buckling load based on energy criterion of Karman and Tsien is also calculated. Results for both the classical and the energy criteria are compared with experimental results.
Carabin, Hélène; Escalona, Marisela; Marshall, Clare; Vivas-Martínez, Sarai; Botto, Carlos; Joseph, Lawrence; Basáñez, María-Gloria
2003-01-01
OBJECTIVE: To develop a Bayesian hierarchical model for human onchocerciasis with which to explore the factors that influence prevalence of microfilariae in the Amazonian focus of onchocerciasis and predict the probability of any community being at least mesoendemic (>20% prevalence of microfilariae), and thus in need of priority ivermectin treatment. METHODS: Models were developed with data from 732 individuals aged ≥15 years who lived in 29 Yanomami communities along four rivers of the south Venezuelan Orinoco basin. The models' abilities to predict prevalences of microfilariae in communities were compared. The deviance information criterion, Bayesian P-values, and residual values were used to select the best model with an approximate cross-validation procedure. FINDINGS: A three-level model that acknowledged clustering of infection within communities performed best, with host age and sex included at the individual level, a river-dependent altitude effect at the community level, and additional clustering of communities along rivers. This model correctly classified 25/29 (86%) villages with respect to their need for priority ivermectin treatment. CONCLUSION: Bayesian methods are a flexible and useful approach for public health research and control planning. Our model acknowledges the clustering of infection within communities, allows investigation of links between individual- or community-specific characteristics and infection, incorporates additional uncertainty due to missing covariate data, and informs policy decisions by predicting the probability that a new community is at least mesoendemic. PMID:12973640
Carabin, Hélène; Escalona, Marisela; Marshall, Clare; Vivas-Martínez, Sarai; Botto, Carlos; Joseph, Lawrence; Basáñez, María-Gloria
2003-01-01
To develop a Bayesian hierarchical model for human onchocerciasis with which to explore the factors that influence prevalence of microfilariae in the Amazonian focus of onchocerciasis and predict the probability of any community being at least mesoendemic (>20% prevalence of microfilariae), and thus in need of priority ivermectin treatment. Models were developed with data from 732 individuals aged ≥15 years who lived in 29 Yanomami communities along four rivers of the south Venezuelan Orinoco basin. The models' abilities to predict prevalences of microfilariae in communities were compared. The deviance information criterion, Bayesian P-values, and residual values were used to select the best model with an approximate cross-validation procedure. A three-level model that acknowledged clustering of infection within communities performed best, with host age and sex included at the individual level, a river-dependent altitude effect at the community level, and additional clustering of communities along rivers. This model correctly classified 25/29 (86%) villages with respect to their need for priority ivermectin treatment. Bayesian methods are a flexible and useful approach for public health research and control planning. Our model acknowledges the clustering of infection within communities, allows investigation of links between individual- or community-specific characteristics and infection, incorporates additional uncertainty due to missing covariate data, and informs policy decisions by predicting the probability that a new community is at least mesoendemic.
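For reference, the deviance information criterion used to compare these hierarchical models is conventionally defined (following Spiegelhalter et al.) as

```latex
\mathrm{DIC} = \bar{D} + p_D, \qquad p_D = \bar{D} - D(\bar{\theta}), \qquad D(\theta) = -2 \log p(y \mid \theta),
```

where D̄ is the posterior mean deviance and p_D the effective number of parameters; a smaller DIC indicates a better trade-off between fit and model complexity.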
NASA Astrophysics Data System (ADS)
Dimitropoulou, M.; Isliker, H.; Vlahos, L.; Georgoulis, M.; Anastasiadis, A.; Toutountzi, A.
2013-09-01
We treat flaring solar active regions as physical systems that have reached the self-organized critical state. Their evolving magnetic configurations in the low corona may satisfy an instability criterion related to the exceeding of a specific threshold in the curl of the magnetic field. This imposed instability criterion implies an almost zero resistivity everywhere in the solar corona, except in regions where magnetic-field discontinuities and, hence, local currents reach the critical value. In these areas, current-driven instabilities enhance the resistivity by many orders of magnitude, forming structures which efficiently accelerate charged particles. Simulating the formation of such structures (thought of as current sheets) via a refined SOC cellular-automaton model provides interesting information regarding their statistical properties. It is shown that the current density in such unstable regions follows power-law scaling. Furthermore, the size distribution of the produced current sheets is best fitted by power laws, whereas their formation probability is investigated against the photospheric magnetic configuration (e.g. polarity inversion lines, plage). The average fractal dimension of the produced current sheets is deduced, depending on the selected critical threshold. The above-mentioned statistical description of intermittent electric field structures can be used by collisional relativistic test-particle simulations aiming to interpret particle acceleration in flaring active regions and in strongly turbulent media in astrophysical plasmas. This work is supported by the Hellenic National Space Weather Research Network (HNSWRN) via the THALIS Programme.
A Topological Criterion for Filtering Information in Complex Brain Networks
Latora, Vito; Chavez, Mario
2017-01-01
In many biological systems, the network of interactions between the elements can only be inferred from experimental measurements. In neuroscience, non-invasive imaging tools are extensively used to derive either structural or functional brain networks in-vivo. As a result of the inference process, we obtain a matrix of values corresponding to a fully connected and weighted network. To turn this into a useful sparse network, thresholding is typically adopted to cancel a percentage of the weakest connections. The structural properties of the resulting network depend on how much of the inferred connectivity is eventually retained. However, how to objectively fix this threshold is still an open issue. We introduce a criterion, the efficiency cost optimization (ECO), to select a threshold based on the optimization of the trade-off between the efficiency of a network and its wiring cost. We prove analytically and we confirm through numerical simulations that the connection density maximizing this trade-off emphasizes the intrinsic properties of a given network, while preserving its sparsity. Moreover, this density threshold can be determined a-priori, since the number of connections to filter only depends on the network size according to a power-law. We validate this result on several brain networks, from micro- to macro-scales, obtained with different imaging modalities. Finally, we test the potential of ECO in discriminating brain states with respect to alternative filtering methods. ECO advances our ability to analyze and compare biological networks, inferred from experimental data, in a fast and principled way. PMID:28076353
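As an illustration of density-based filtering in the spirit of ECO, the sketch below retains the strongest connections of a weighted matrix until a target mean node degree is reached; the value of 3 used here is purely illustrative, since ECO derives the retained density from the network size itself.

```python
import numpy as np

def density_threshold(weights, avg_degree=3):
    """Keep the strongest links so that the mean node degree equals avg_degree."""
    W = np.array(weights, dtype=float)
    np.fill_diagonal(W, 0)
    n = W.shape[0]
    n_keep = int(round(avg_degree * n / 2))          # undirected edge count to retain
    iu = np.triu_indices(n, k=1)
    order = np.argsort(W[iu])[::-1][:n_keep]         # indices of the strongest edges
    A = np.zeros_like(W)
    A[iu[0][order], iu[1][order]] = W[iu][order]
    return A + A.T                                   # sparse, symmetric network

rng = np.random.default_rng(5)
corr = np.abs(np.corrcoef(rng.standard_normal((64, 200))))   # toy connectivity matrix
sparse = density_threshold(corr)
```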
Optimization of multi-environment trials for genomic selection based on crop models.
Rincent, R; Kuhn, E; Monod, H; Oury, F-X; Rousset, M; Allard, V; Le Gouis, J
2017-08-01
We propose a statistical criterion to optimize multi-environment trials to predict genotype × environment interactions more efficiently, by combining crop growth models and genomic selection models. Genotype × environment interactions (GEI) are common in plant multi-environment trials (METs). In this context, models developed for genomic selection (GS), which refers to the use of genome-wide information for predicting the breeding values of selection candidates, need to be adapted. One promising way to increase prediction accuracy across environments is to combine ecophysiological and genetic modelling through crop growth models (CGM) incorporating genetic parameters. The efficiency of this approach relies on the quality of the parameter estimates, which depends on the environments composing the MET used for calibration. The objective of this study was to determine a method to optimize the set of environments composing the MET for estimating genetic parameters in this context. A criterion called OptiMET was defined for this purpose and was evaluated on simulated and real data, with wheat phenology as the example. The MET defined with OptiMET allowed the genetic parameters to be estimated with lower error, leading to higher QTL detection power and higher prediction accuracies. METs defined with OptiMET were on average more efficient than random METs composed of twice as many environments, in terms of the quality of the parameter estimates. OptiMET is thus a valuable tool to determine optimal experimental conditions to best exploit METs and the phenotyping tools that are currently being developed.
DOE Office of Scientific and Technical Information (OSTI.GOV)
J. S. Schroeder; R. W. Youngblood
The Risk-Informed Safety Margin Characterization (RISMC) pathway of the Light Water Reactor Sustainability Program is developing simulation-based methods and tools for analyzing safety margin from a modern perspective. [1] There are multiple definitions of 'margin.' One class of definitions defines margin in terms of the distance between a point estimate of a given performance parameter (such as peak clad temperature), and a point-value acceptance criterion defined for that parameter (such as 2200 F). The present perspective on margin is that it relates to the probability of failure, and not just the distance between a nominal operating point and a criterion. In this work, margin is characterized through a probabilistic analysis of the 'loads' imposed on systems, structures, and components, and their 'capacity' to resist those loads without failing. Given the probabilistic load and capacity spectra, one can assess the probability that load exceeds capacity, leading to component failure. Within the project, we refer to a plot of these probabilistic spectra as 'the logo.' Refer to Figure 1 for a notional illustration. The implications of referring to 'the logo' are (1) RISMC is focused on being able to analyze loads and spectra probabilistically, and (2) calling it 'the logo' tacitly acknowledges that it is a highly simplified picture: meaningful analysis of a given component failure mode may require development of probabilistic spectra for multiple physical parameters, and in many practical cases, 'load' and 'capacity' will not vary independently.
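A minimal Monte Carlo sketch of the load-versus-capacity view of margin described above; the distributions and numbers are invented for illustration, not taken from the RISMC analyses:

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative (made-up) probabilistic spectra: "load" could be a computed peak
# clad temperature and "capacity" the temperature the component can withstand.
load     = rng.normal(loc=1900.0, scale=150.0, size=1_000_000)   # deg F
capacity = rng.normal(loc=2200.0, scale=100.0, size=1_000_000)   # deg F

# Margin characterized as a probability rather than a point-value distance.
p_fail = np.mean(load > capacity)
print(f"P(load exceeds capacity) ~ {p_fail:.4f}")

# Contrast with the point-estimate view: the distance between the nominal load
# and a fixed acceptance criterion (e.g., 2200 F) says nothing about this probability.
print("point-estimate margin:", 2200.0 - load.mean(), "deg F")
```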
Satisfying the Einstein–Podolsky–Rosen criterion with massive particles
Peise, J.; Kruse, I.; Lange, K.; Lücke, B.; Pezzè, L.; Arlt, J.; Ertmer, W.; Hammerer, K.; Santos, L.; Smerzi, A.; Klempt, C.
2015-01-01
In 1935, Einstein, Podolsky and Rosen (EPR) questioned the completeness of quantum mechanics by devising a quantum state of two massive particles with maximally correlated space and momentum coordinates. The EPR criterion qualifies such continuous-variable entangled states, where a measurement of one subsystem seemingly allows for a prediction of the second subsystem beyond the Heisenberg uncertainty relation. Up to now, continuous-variable EPR correlations have only been created with photons, while the demonstration of such strongly correlated states with massive particles is still outstanding. Here we report on the creation of an EPR-correlated two-mode squeezed state in an ultracold atomic ensemble. The state shows an EPR entanglement parameter of 0.18(3), which is 2.4 s.d. below the threshold 1/4 of the EPR criterion. We also present a full tomographic reconstruction of the underlying many-particle quantum state. The state presents a resource for tests of quantum nonlocality and a wide variety of applications in the field of continuous-variable quantum information and metrology. PMID:26612105
Classification VIA Information-Theoretic Fusion of Vector-Magnetic and Acoustic Sensor Data
2007-04-01
The operation may be viewed as a vector matched filter used to estimate the magnetic signature B(t). Methods for choosing features that maximize the classification information in Y are described in Section 3.2, which reviews several desirable properties of features that maximize a mutual information (MMI) criterion and then a particular selection algorithm.
Gary D. Grossman; Robert E Ratajczak; J. Todd Petty; Mark D. Hunter; James T. Peterson; Gael Grenouillet
2006-01-01
We used strong inference with Akaike's Information Criterion (AIC) to assess the processes capable of explaining long-term (1984-1995) variation in the per capita rate of change of mottled sculpin (Cottus bairdi) populations in the Coweeta Creek drainage (USA). We sampled two fourth- and one fifth-order sites (BCA [uppermost], BCB, and CC [lowermost])...
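As a generic illustration of how AIC ranks competing models in this kind of strong-inference analysis (the candidate models and data below are invented, not the sculpin models), AIC can be computed directly from least-squares fits under an assumption of Gaussian errors:

```python
import numpy as np

def aic_least_squares(y, y_hat, k):
    """AIC for a model fitted by least squares with k estimated parameters
    (including the residual variance), assuming Gaussian errors."""
    n = len(y)
    rss = np.sum((y - y_hat) ** 2)
    log_lik = -0.5 * n * (np.log(2 * np.pi * rss / n) + 1)
    return 2 * k - 2 * log_lik

# Hypothetical data: per capita rate of change vs. an environmental covariate.
rng = np.random.default_rng(1)
x = rng.uniform(0, 1, 40)
y = 0.5 - 1.2 * x + rng.normal(0, 0.2, 40)

models = {
    "intercept only": (np.full_like(y, y.mean()), 2),
    "linear in x":    (np.poly1d(np.polyfit(x, y, 1))(x), 3),
    "quadratic in x": (np.poly1d(np.polyfit(x, y, 2))(x), 4),
}
for name, (y_hat, k) in models.items():
    print(f"{name:15s} AIC = {aic_least_squares(y, y_hat, k):7.2f}")
```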
Criterion Validity Evidence for the easyCBM© CCSS Math Measures: Grades 6-8. Technical Report #1402
ERIC Educational Resources Information Center
Anderson, Daniel; Rowley, Brock; Alonzo, Julie; Tindal, Gerald
2012-01-01
The easyCBM© CCSS Math tests were developed to help inform teachers' instructional decisions by providing relevant information on students' mathematical skills, relative to the Common Core State Standards (CCSS). This technical report describes a study to explore the validity of the easyCBM© CCSS Math tests by evaluating the relation between…
Delineating riparian zones for entire river networks using geomorphological criteria
NASA Astrophysics Data System (ADS)
Fernández, D.; Barquín, J.; Álvarez-Cabria, M.; Peñas, F. J.
2012-03-01
Riparian zone delineation is a central issue for riparian and river ecosystem management; however, the criteria used to delineate riparian zones are still under debate. The area inundated by a 50-yr flood has been indicated as an optimal hydrological descriptor for riparian areas. This detailed hydrological information is, however, not usually available for entire river corridors, and is only available for populated areas at risk of flooding. One of the requirements for catchment planning is to establish the most appropriate location of zones to conserve or restore riparian buffer strips for whole river networks. This issue could be solved by using geomorphological criteria extracted from Digital Elevation Models. In this work we have explored the adjustment of surfaces developed under two different geomorphological criteria with respect to the flooded area covered by the 50-yr flood, in an attempt to rapidly delineate hydrologically meaningful riparian zones for entire river networks. The first geomorphological criterion is based on the surface that intersects valley walls at a given number of bankfull depths above the channel (BFDAC), while the second is based on the surface defined by a threshold value indicating the relative cost of moving from the stream up to the valley, accounting for slope and elevation change (path distance). As the relationship between local geomorphology and the 50-yr flood has been suggested to be river-type dependent, we have performed our analyses distinguishing between three river types corresponding to three valley morphologies: open, shallow vee and deep vee valleys (in increasing degree of valley confinement). Adjustment between the surfaces derived from geomorphological and hydrological criteria has been evaluated using two different methods: one based on exceeding areas (minimum exceeding score) and the other on the similarity among total area values. Both methods have pointed out the same surfaces when looking for those that best match the 50-yr flood. Results have shown that the BFDAC approach obtains an adjustment slightly better than that of path distance. However, BFDAC requires bankfull depth regional regressions along the considered river network. Results have also confirmed that unconstrained valleys require lower threshold values than constrained valleys when deriving surfaces using geomorphological criteria. Moreover, this study provides: (i) guidance on the selection of the proper geomorphological criterion and associated threshold values, and (ii) an easy calibration framework to evaluate the adjustment with respect to hydrologically meaningful surfaces.
A preliminary study on drought events in Peninsular Malaysia
NASA Astrophysics Data System (ADS)
Zin, Wan Zawiah Wan; Nahrawi, Siti Aishah; Jemain, Abdul Aziz; Zahari, Marina
2014-06-01
In this research, the Standardized Precipitation Index (SPI) is used to represent the dry condition in Peninsular Malaysia. To do this, monthly rainfall data from 75 stations in Peninsular Malaysia are used to obtain the SPI values at scale one. From the SPI values, two drought characteristics that are commonly used to represent the dry condition in an area, namely the duration and severity of a drought period, are identified and their respective values calculated for every station. Spatial mappings are then used to identify areas which are more likely to be affected by longer and more severe drought conditions. As the two drought characteristics may be correlated with each other, the joint distribution of severity and duration of dry conditions is considered. A bivariate copula model is used and five copula models were tested, namely the Gumbel-Hougaard, Clayton, Frank, Joe and Galambos copulas. The copula model which best represents the relationship between severity and duration is determined using the Akaike information criterion. The results showed that the Joe and Clayton copulas provide the best fit at close to 60% of the stations under study. Based on the results on the most appropriate copula-based joint distribution for each station, some bivariate probabilistic properties of droughts can then be calculated, which will be continued in future research.
Merk, Josef; Schlotz, Wolff; Falter, Thomas
2017-01-01
This study presents a new measure of value systems, the Motivational Value Systems Questionnaire (MVSQ), which is based on a theory of value systems by psychologist Clare W. Graves. The purpose of the instrument is to help people identify their personal hierarchies of value systems and thus become more aware of what motivates and demotivates them in work-related contexts. The MVSQ is a forced-choice (FC) measure, making it quicker to complete and more difficult to intentionally distort, but also more difficult to assess its psychometric properties due to ipsativity of FC data compared to rating scales. To overcome limitations of ipsative data, a Thurstonian IRT (TIRT) model was fitted to the questionnaire data, based on a broad sample of N = 1,217 professionals and students. Comparison of normative (IRT) scale scores and ipsative scores suggested that MVSQ IRT scores are largely freed from restrictions due to ipsativity and thus allow interindividual comparison of scale scores. Empirical reliability was estimated using a sample-based simulation approach which showed acceptable and good estimates and, on average, slightly higher test-retest reliabilities. Further, validation studies provided evidence on both construct validity and criterion-related validity. Scale score correlations and associations of scores with both age and gender were largely in line with theoretically- and empirically-based expectations, and results of a multitrait-multimethod analysis supports convergent and discriminant construct validity. Criterion validity was assessed by examining the relation of value system preferences to departmental affiliation which revealed significant relations in line with prior hypothesizing. These findings demonstrate the good psychometric properties of the MVSQ and support its application in the assessment of value systems in work-related contexts. PMID:28979228
Kopczak, Anna; Krewer, Carmen; Schneider, Manfred; Kreitschmann-Andermahr, Ilonka; Schneider, Harald Jörn; Stalla, Günter Karl
2015-01-01
Previous reports suggest that neuroendocrine disturbances in patients with traumatic brain injury (TBI) or aneurysmal subarachnoid hemorrhage (SAH) may still develop or resolve months or even years after the trauma. We investigated a cohort of n = 168 patients (81 patients after TBI and 87 patients after SAH) in whom hormone levels had been determined at various time points to assess the course and pattern of hormonal insufficiencies. Data were analyzed using three different criteria: (1) patients with lowered basal laboratory values; (2) patients with lowered basal laboratory values or the need for hormone replacement therapy; (3) diagnosis of the treating physician. The first hormonal assessment after a median time of three months after the injury showed lowered hormone laboratory test results in 35% of cases. Lowered testosterone (23.1% of male patients), lowered estradiol (14.3% of female patients) and lowered insulin-like growth factor I (IGF-I) values (12.1%) were most common. Using Criterion 2, a higher prevalence rate of 55.6% of cases was determined, which correlated well with the prevalence rate of 54% of cases using the physicians’ diagnosis as the criterion. Intraindividual changes (new onset insufficiency or recovery) were predominantly observed for the somatotropic axis (12.5%), the gonadotropic axis in women (11.1%) and the corticotropic axis (10.6%). Patients after TBI showed more often lowered IGF-I values at first testing, but normal values at follow-up (p < 0.0004). In general, most patients remained stable. Stable hormone results at follow-up were obtained in 78% (free thyroxine (fT4) values) to 94.6% (prolactin values). PMID:26703585
NASA Astrophysics Data System (ADS)
Zhou, Jialing; He, Honghui; Wang, Ye; Ma, Hui
2017-02-01
Fiber structure changes in various pathological processes, such as the increase of fibrosis in liver diseases and the derangement of fibers in cervical cancer. Currently, clinical pathologic diagnosis is regarded as the gold-standard criterion, but doctors who differ in knowledge and experience may reach different conclusions. To some extent, quantitative evaluation of the fiber structure in pathological tissue can therefore support quantitative diagnosis. Mueller matrix measurement is capable of probing comprehensive microstructural information of samples, and different illumination wavelengths can provide additional information. In this paper, we use a Mueller matrix microscope with light sources at six different wavelengths. We use unstained, dewaxed liver tissue slices at four fibrosis stages and pathological biopsies of the filtration channels from rabbit eyes as samples. We apply the Mueller matrix polar decomposition (MMPD) parameter δ, which corresponds to retardance, to the liver slices. The mean value in the abnormal region increases as the level of fibrosis increases, and light at shorter wavelengths is more sensitive to the fiber microstructure. In addition, we use the Mueller matrix transformation (MMT) parameter Φ, which is associated with the angle of the fast axis, in the analysis of the slices of the filtration channels from rabbit eyes. The kurtosis and skewness values show clear differences between newly formed and normal regions and can reveal the arrangement of the fibers. These results indicate that the Mueller matrix microscope has great potential in auxiliary diagnosis.
Luminaire layout: Design and implementation
NASA Technical Reports Server (NTRS)
Both, A. J.
1994-01-01
The information contained in this report was presented during the discussion regarding guidelines for PAR uniformity in greenhouses. The data show a lighting uniformity analysis in a research greenhouse for rose production at the Cornell University campus. The luminaire layout was designed using the computer program Lumen-Micro. After implementation of the design, accurate measurements were taken in the greenhouse and the uniformity analyses for the design and the implementation were compared. A study of several supplemental lighting installations resulted in the following recommendations: include only the actual growing area in the lighting uniformity analysis; for growing areas up to 20 square meters, take four measurements per square meter; for growing areas above 20 square meters, take one measurement per square meter; use one of the uniformity criteria and frequency graphs to compare lighting uniformity among designs; and design for a uniformity criterion of at least 0.75 and a fraction within +/- 15% of the average PAR value close to one.
Habitat use affects morphological diversification in dragon lizards
COLLAR, D C; SCHULTE, J A; O’MEARA, B C; LOSOS, J B
2010-01-01
Habitat use may lead to variation in diversity among evolutionary lineages because habitats differ in the variety of ways they allow for species to make a living. Here, we show that structural habitats contribute to differential diversification of limb and body form in dragon lizards (Agamidae). Based on phylogenetic analysis and ancestral state reconstructions for 90 species, we find that multiple lineages have independently adopted each of four habitat use types: rock-dwelling, terrestriality, semi-arboreality and arboreality. Given these reconstructions, we fit models of evolution to species’ morphological trait values and find that rock-dwelling and arboreality limit diversification relative to terrestriality and semi-arboreality. Models preferred by Akaike information criterion infer slower rates of size and shape evolution in lineages inferred to occupy rocks and trees, and model-averaged rate estimates are slowest for these habitat types. These results suggest that ground-dwelling facilitates ecomorphological differentiation and that use of trees or rocks impedes diversification. PMID:20345808
One-way ANOVA based on interval information
NASA Astrophysics Data System (ADS)
Hesamian, Gholamreza
2016-08-01
This paper deals with extending the one-way analysis of variance (ANOVA) to the case where the observed data are represented by closed intervals rather than real numbers. In this approach, first a notion of interval random variable is introduced. In particular, a normal distribution with interval parameters is introduced to investigate hypotheses about the equality of interval means or to test the assumption of homogeneity of interval variances. Moreover, the least significant difference (LSD) method for multiple comparisons of interval means is developed for the case when the null hypothesis about the equality of means is rejected. Then, at a given interval significance level, an index is applied to compare the interval test statistic and the related interval critical value as a criterion to accept or reject the null interval hypothesis of interest. Finally, the decision-making method yields degrees of acceptance or rejection of the interval hypotheses. An applied example is used to show the performance of this method.
Risley, John C.; Granato, Gregory E.
2014-01-01
An analysis of the use of grab sampling and nonstochastic upstream modeling methods was done to evaluate the potential effects on modeling outcomes. Additional analyses using surrogate water-quality datasets for the upstream basin and highway catchment were provided for six Oregon study sites to illustrate the risk-based information that SELDM will produce. These analyses show that the potential effects of highway runoff on receiving-water quality downstream of the outfall depend on the ratio of drainage areas (dilution), the quality of the receiving water upstream of the highway, and the criterion concentration for the constituent of interest. These analyses also show that the probability of exceeding a water-quality criterion may depend on the input statistics used, so careful selection of representative values is important.
Probing dark energy in the scope of a Bianchi type I spacetime
NASA Astrophysics Data System (ADS)
Amirhashchi, Hassan
2018-03-01
It is well known that the flat Friedmann-Robertson-Walker metric is a special case of Bianchi type I spacetime. In this paper, we use 38 Hubble parameter, H(z), measurements at intermediate redshifts 0.07 ≤ z ≤ 2.36 and their joint combination with the latest "joint light curves" (JLA) sample, comprising 740 type Ia supernovae in the redshift range z ∈ [0.01, 1.30], to constrain the parameters of the Bianchi type I dark energy model. We also use the same datasets to constrain a flat ΛCDM model. In both cases, we specifically address the determination of the expansion rate H0 as well as the transition redshift zt from these measurements. In both models, we found that using the joint combination of datasets gives rise to lower values for the model parameters. Also, to compare the considered cosmologies, we have performed Akaike information criterion and Bayes factor (Ψ) tests.
Methods of comparing associative models and an application to retrospective revaluation.
Witnauer, James E; Hutchings, Ryan; Miller, Ralph R
2017-11-01
Contemporary theories of associative learning are increasingly complex, which necessitates the use of computational methods to reveal predictions of these models. We argue that comparisons across multiple models in terms of goodness of fit to empirical data from experiments often reveal more about the actual mechanisms of learning and behavior than do simulations of only a single model. Such comparisons are best made when the values of free parameters are discovered through some optimization procedure based on the specific data being fit (e.g., hill climbing), so that the comparisons hinge on the psychological mechanisms assumed by each model rather than being biased by using parameters that differ in quality across models with respect to the data being fit. Statistics like the Bayesian information criterion facilitate comparisons among models that have different numbers of free parameters. These issues are examined using retrospective revaluation data. Copyright © 2017 Elsevier B.V. All rights reserved.
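A hedged sketch of the workflow described above: each candidate model's free parameters are found by numerical optimization against the same data, and BIC then penalizes the extra parameters. The two toy "models" below are placeholders, not published associative learning models:

```python
import numpy as np
from scipy.optimize import minimize

# Synthetic stand-in for the situation described above: two candidate models
# with different numbers of free parameters are fitted to the same responses
# by minimizing squared error, and BIC trades fit against complexity.
rng = np.random.default_rng(7)
trial = np.arange(30)
responding = 1.0 - np.exp(-0.15 * trial) + rng.normal(0, 0.05, trial.size)

def model_one_param(p, t):          # single learning-rate parameter
    return 1.0 - np.exp(-p[0] * t)

def model_two_param(p, t):          # learning rate plus asymptote
    return p[1] * (1.0 - np.exp(-p[0] * t))

def bic_for(model, n_params, x0):
    fit = minimize(lambda p: np.sum((responding - model(p, trial)) ** 2), x0)
    n, rss = trial.size, fit.fun
    log_lik = -0.5 * n * (np.log(2 * np.pi * rss / n) + 1)   # Gaussian errors
    return n_params * np.log(n) - 2 * log_lik

print("BIC, 1-parameter model:", round(bic_for(model_one_param, 2, [0.1]), 2))
print("BIC, 2-parameter model:", round(bic_for(model_two_param, 3, [0.1, 1.0]), 2))
```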
Vidal, Fernando
2018-03-01
Argument "Deficit model" designates an outlook on the public understanding and communication of science that emphasizes scientific illiteracy and the need to educate the public. Though criticized, it is still widespread, especially among scientists. Its persistence is due not only to factors ranging from scientists' training to policy design, but also to the continuance of realism as an aesthetic criterion. This article examines the link between realism and the deficit model through discussions of neurology and psychiatry in fiction film, as well as through debates about historical movies and the cinematic adaptation of literature. It shows that different values and criteria tend to dominate the realist stance in different domains: accuracy for movies concerning neurology and psychiatry, authenticity for the historical film, and fidelity for adaptations of literature. Finally, contrary to the deficit model, it argues that the cinema is better characterized by a surplus of meaning than by informational shortcomings.
Mixture Model and MDSDCA for Textual Data
NASA Astrophysics Data System (ADS)
Allouti, Faryel; Nadif, Mohamed; Hoai An, Le Thi; Otjacques, Benoît
E-mailing has become an essential component of cooperation in business. Consequently, the large number of messages manually produced or automatically generated can rapidly cause information overflow for users. Many research projects have examined this issue but surprisingly few have tackled the problem of the files attached to e-mails that, in many cases, contain a substantial part of the semantics of the message. This paper considers this specific topic and focuses on the problem of clustering and visualization of attached files. Relying on the multinomial mixture model, we used the Classification EM algorithm (CEM) to cluster the set of files, and MDSDCA to visualize the obtained classes of documents. Like the Multidimensional Scaling method, the aim of the MDSDCA algorithm based on the Difference of Convex functions is to optimize the stress criterion. As MDSDCA is iterative, we propose an initialization approach to avoid starting with random values. Experiments are investigated using simulations and textual data.
Killiches, Matthias; Czado, Claudia
2018-03-22
We propose a model for unbalanced longitudinal data, where the univariate margins can be selected arbitrarily and the dependence structure is described with the help of a D-vine copula. We show that our approach is an extremely flexible extension of the widely used linear mixed model if the correlation is homogeneous over the considered individuals. As an alternative to joint maximum-likelihood a sequential estimation approach for the D-vine copula is provided and validated in a simulation study. The model can handle missing values without being forced to discard data. Since conditional distributions are known analytically, we easily make predictions for future events. For model selection, we adjust the Bayesian information criterion to our situation. In an application to heart surgery data our model performs clearly better than competing linear mixed models. © 2018, The International Biometric Society.
A testable model of earthquake probability based on changes in mean event size
NASA Astrophysics Data System (ADS)
Imoto, Masajiro
2003-02-01
We studied changes in mean event size using data on microearthquakes obtained from a local network in Kanto, central Japan, from a viewpoint that a mean event size tends to increase as the critical point is approached. A parameter describing changes was defined using a simple weighting average procedure. In order to obtain the distribution of the parameter in the background, we surveyed values of the parameter from 1982 to 1999 in a 160 × 160 × 80 km volume. The 16 events of M5.5 or larger in this volume were selected as target events. The conditional distribution of the parameter was estimated from the 16 values, each of which referred to the value immediately prior to each target event. The distribution of the background becomes a function of symmetry, the center of which corresponds to no change in b value. In contrast, the conditional distribution exhibits an asymmetric feature, which tends to decrease the b value. The difference in the distributions between the two groups was significant and provided us a hazard function for estimating earthquake probabilities. Comparing the hazard function with a Poisson process, we obtained an Akaike Information Criterion (AIC) reduction of 24. This reduction agreed closely with the probability gains of a retrospective study in a range of 2-4. A successful example of the proposed model can be seen in the earthquake of 3 June 2000, which is the only event during the period of prospective testing.
Shen, Chung-Wei; Chen, Yi-Hau
2018-03-13
We propose a model selection criterion for semiparametric marginal mean regression based on generalized estimating equations. The work is motivated by a longitudinal study on the physical frailty outcome in the elderly, where the cluster size, that is, the number of the observed outcomes in each subject, is "informative" in the sense that it is related to the frailty outcome itself. The new proposal, called Resampling Cluster Information Criterion (RCIC), is based on the resampling idea utilized in the within-cluster resampling method (Hoffman, Sen, and Weinberg, 2001, Biometrika 88, 1121-1134) and accommodates informative cluster size. The implementation of RCIC, however, is free of performing actual resampling of the data and hence is computationally convenient. Compared with the existing model selection methods for marginal mean regression, the RCIC method incorporates an additional component accounting for variability of the model over within-cluster subsampling, and leads to remarkable improvements in selecting the correct model, regardless of whether the cluster size is informative or not. Applying the RCIC method to the longitudinal frailty study, we identify being female, old age, low income and life satisfaction, and chronic health conditions as significant risk factors for physical frailty in the elderly. © 2018, The International Biometric Society.
Saha, Tulshi D; Chou, S Patricia; Grant, Bridget F
2006-07-01
Item response theory (IRT) was used to determine whether the DSM-IV diagnostic criteria for alcohol abuse and dependence are arrayed along a continuum of severity. Data came from a large nationally representative sample of the US population, 18 years and older. A two-parameter logistic IRT model was used to determine the severity and discrimination of each DSM-IV criterion. Differential criterion functioning (DCF) was also assessed across subgroups of the population defined by sex, age and race-ethnicity. All DSM-IV alcohol abuse and dependence criteria, except alcohol-related legal problems, formed a continuum of alcohol use disorder severity. Abuse and dependence criteria did not consistently tap the mildest or most severe ends of the continuum, respectively, and several criteria were identified as potentially redundant. The drinking in larger amounts or for longer than intended dependence criterion had the greatest discrimination and the lowest severity of any criterion. Although several criteria were found to function differentially between subgroups defined in terms of sex and age, there was evidence that the generalizability and validity of the criteria forming the continuum remained intact at the test score level. DSM-IV diagnostic criteria for alcohol abuse and dependence form a continuum of severity, calling into question the abuse-dependence distinction in the DSM-IV and the interpretation of abuse as a milder disorder than dependence. The criteria tapped the more severe end of the alcohol use disorder continuum, highlighting the need to identify other criteria capturing the mild to intermediate range of severity. The drinking in larger amounts or for longer than intended dependence criterion may be a bridging criterion between drinking patterns that incur risk of alcohol use disorder at the milder end of the continuum, and tolerance, withdrawal, impaired control and serious social and occupational dysfunction at the more severe end of the alcohol use disorder continuum. Future IRT and other dimensional analyses hold great promise in informing revisions to categorical classifications and constructing new dimensional classifications of alcohol use disorders based on the DSM and the ICD.
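For readers unfamiliar with the two-parameter logistic IRT model referred to above, the sketch below shows how discrimination and severity parameters shape the probability of endorsing a criterion; the parameter values are illustrative, not the study's estimates:

```python
import numpy as np

def p_endorse(theta, a, b):
    """Two-parameter logistic IRT model: probability that a person with latent
    severity `theta` endorses a criterion with discrimination `a` and
    severity (difficulty) `b`."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

# Hypothetical item parameters: a high-discrimination, low-severity criterion
# versus a low-discrimination, high-severity one.
theta = np.linspace(-3, 3, 7)            # latent alcohol-use-disorder severity
print("theta:          ", theta)
print("larger/longer:  ", p_endorse(theta, a=2.5, b=-0.5).round(2))
print("legal problems: ", p_endorse(theta, a=0.8, b=2.0).round(2))
```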
Maximum likelihood-based analysis of single-molecule photon arrival trajectories.
Hajdziona, Marta; Molski, Andrzej
2011-02-07
In this work we explore the statistical properties of the maximum likelihood-based analysis of one-color photon arrival trajectories. This approach does not involve binning and, therefore, all of the information contained in an observed photon trajectory is used. We study the accuracy and precision of parameter estimates and the efficiency of the Akaike information criterion and the Bayesian information criterion (BIC) in selecting the true kinetic model. We focus on the low excitation regime where photon trajectories can be modeled as realizations of Markov modulated Poisson processes. The number of observed photons is the key parameter in determining model selection and parameter estimation. For example, the BIC can select the true three-state model from competing two-, three-, and four-state kinetic models even for relatively short trajectories made up of 2 × 10(3) photons. When the intensity levels are well-separated and 10(4) photons are observed, the two-state model parameters can be estimated with about 10% precision and those for a three-state model with about 20% precision.
NASA Astrophysics Data System (ADS)
Narukawa, Takafumi; Yamaguchi, Akira; Jang, Sunghyon; Amaya, Masaki
2018-02-01
To estimate the fracture probability of fuel cladding tubes under loss-of-coolant accident conditions in light-water reactors, laboratory-scale integral thermal shock tests were conducted on non-irradiated Zircaloy-4 cladding tube specimens. The obtained binary data on fracture or non-fracture of the cladding tube specimens were then analyzed statistically. A method to obtain the fracture probability curve as a function of equivalent cladding reacted (ECR) was proposed using Bayesian inference for generalized linear models: probit, logit, and log-probit models. Model selection was then performed in terms of physical characteristics and information criteria, namely the widely applicable information criterion and the widely applicable Bayesian information criterion. As a result, the log-probit model was found to be the best of the three models for estimating the fracture probability, in terms of prediction accuracy both for future data and for the true model. Using the log-probit model, it was shown that 20% ECR corresponded to a 5% fracture probability level, at 95% confidence, for the cladding tube specimens.
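A rough sketch, on invented data, of fitting the three link functions mentioned above with a standard (non-Bayesian) GLM and reading off the ECR value at a 5% fracture probability; it is not the authors' Bayesian procedure, and the binary data below are placeholders:

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

# Hypothetical fracture/no-fracture outcomes vs. equivalent cladding reacted
# (ECR, %); these numbers are illustrative only, not the reported test results.
ecr = np.array([6, 8, 10, 12, 14, 16, 18, 20, 22, 24, 26, 28, 30, 32], float)
fracture = np.array([0, 0, 0, 0, 0, 1, 0, 1, 1, 1, 1, 1, 1, 1])

def fit(link, x):
    X = sm.add_constant(x)
    return sm.GLM(fracture, X, family=sm.families.Binomial(link=link)).fit()

probit     = fit(sm.families.links.Probit(), ecr)
logit      = fit(sm.families.links.Logit(), ecr)
log_probit = fit(sm.families.links.Probit(), np.log(ecr))   # probit on log(ECR)

# ECR giving a 5% fracture probability under the log-probit model
# (cf. the 20% ECR / 5% level statement in the abstract).
b0, b1 = log_probit.params
ecr_5pct = np.exp((norm.ppf(0.05) - b0) / b1)
print("log-probit 5% fracture level: ECR ~", round(ecr_5pct, 1), "%")
```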
Information theoretic methods for image processing algorithm optimization
NASA Astrophysics Data System (ADS)
Prokushkin, Sergey F.; Galil, Erez
2015-01-01
Modern image processing pipelines (e.g., those used in digital cameras) are full of advanced, highly adaptive filters that often have a large number of tunable parameters (sometimes > 100). This makes the calibration procedure for these filters very complex, and the optimal results barely achievable in the manual calibration; thus an automated approach is a must. We will discuss an information theory based metric for evaluation of algorithm adaptive characteristics ("adaptivity criterion") using noise reduction algorithms as an example. The method allows finding an "orthogonal decomposition" of the filter parameter space into the "filter adaptivity" and "filter strength" directions. This metric can be used as a cost function in automatic filter optimization. Since it is a measure of a physical "information restoration" rather than perceived image quality, it helps to reduce the set of the filter parameters to a smaller subset that is easier for a human operator to tune and achieve a better subjective image quality. With appropriate adjustments, the criterion can be used for assessment of the whole imaging system (sensor plus post-processing).
NASA Astrophysics Data System (ADS)
Ma, Yuanxu; Huang, He Qing
2016-07-01
Accurate estimation of flow resistance is crucial for flood routing, flow discharge and velocity estimation, and engineering design. Various empirical and semiempirical flow resistance models have been developed during the past century; however, a universal flow resistance model for varying types of rivers has remained difficult to be achieved to date. In this study, hydrometric data sets from six stations in the lower Yellow River during 1958-1959 are used to calibrate three empirical flow resistance models (Eqs. (5)-(7)) and evaluate their predictability. A group of statistical measures have been used to evaluate the goodness of fit of these models, including root mean square error (RMSE), coefficient of determination (CD), the Nash coefficient (NA), mean relative error (MRE), mean symmetry error (MSE), percentage of data with a relative error ≤ 50% and 25% (P50, P25), and percentage of data with overestimated error (POE). Three model selection criterions are also employed to assess the model predictability: Akaike information criterion (AIC), Bayesian information criterion (BIC), and a modified model selection criterion (MSC). The results show that mean flow depth (d) and water surface slope (S) can only explain a small proportion of variance in flow resistance. When channel width (w) and suspended sediment concentration (SSC) are involved, the new model (7) achieves a better performance than the previous ones. The MRE of model (7) is generally < 20%, which is apparently better than that reported by previous studies. This model is validated using the data sets from the corresponding stations during 1965-1966, and the results show larger uncertainties than the calibrating model. This probably resulted from the temporal shift of dominant controls caused by channel change resulting from varying flow regime. With the advancements of earth observation techniques, information about channel width, mean flow depth, and suspended sediment concentration can be effectively extracted from multisource satellite images. We expect that the empirical methods developed in this study can be used as an effective surrogate in estimation of flow resistance in the large sand-bed rivers like the lower Yellow River.
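The goodness-of-fit measures listed above are straightforward to compute; the sketch below assumes Gaussian errors for the AIC/BIC terms and uses synthetic stand-in values for observed versus modeled flow resistance:

```python
import numpy as np

def fit_metrics(obs, pred, n_params):
    """A few of the measures listed above for a fitted flow-resistance model:
    RMSE, Nash coefficient, mean relative error, P50, and Gaussian-error AIC/BIC."""
    n = len(obs)
    resid = obs - pred
    rmse = np.sqrt(np.mean(resid ** 2))
    nash = 1.0 - np.sum(resid ** 2) / np.sum((obs - obs.mean()) ** 2)
    mre = np.mean(np.abs(resid / obs)) * 100.0          # mean relative error, %
    p50 = np.mean(np.abs(resid / obs) <= 0.50) * 100.0  # % of data within 50%
    log_lik = -0.5 * n * (np.log(2 * np.pi * np.sum(resid ** 2) / n) + 1)
    aic = 2 * n_params - 2 * log_lik
    bic = n_params * np.log(n) - 2 * log_lik
    return dict(RMSE=rmse, NA=nash, MRE=mre, P50=p50, AIC=aic, BIC=bic)

# Illustrative numbers standing in for observed vs. modeled flow resistance.
rng = np.random.default_rng(3)
obs = rng.lognormal(mean=-3.5, sigma=0.3, size=50)
pred = obs * np.exp(rng.normal(0, 0.15, size=50))
print(fit_metrics(obs, pred, n_params=4))
```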
Phase object imaging inside the airy disc
NASA Astrophysics Data System (ADS)
Tychinsky, Vladimir P.
1991-03-01
The possibility of superresolution imaging of phase objects is theoretically justified. Measurements with the CPM "AIRYSCAN" showed that such structures can be observed when the Airy disc diameter is 0.86 μm. SUMMARY: It has been known that the amount of information contained in the image of any object is mostly determined by the number of independently measured points, that is, by the spatial resolution of the system. From the classical theory of optical systems it follows that for incoherent sources the spatial resolution is limited by the aperture, d = 0.61 λ/NA (the Rayleigh criterion, where λ is the wavelength and NA the numerical aperture). The use of this criterion is equivalent to the statement that any object inside the Airy disc of radius d, which is the diffraction image of a point, is practically unresolved. However, under coherent illumination the intensity distribution in the image plane also depends on the phase φ(r) of the wave scattered by the object, and this is the basis of the Zernike method of phase-contrast microscopy, differential interference contrast (DIC), and computer phase microscopy (CPM). In the theoretical foundation of these methods there was no doubt about the correctness of the Rayleigh criterion, since the phase information is derived from the intensity distribution, and there were no experiments that disproved this
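For reference, the Rayleigh limit alluded to above in its usual form, assuming the garbled expression in the source was the standard one:

```latex
% Rayleigh resolution limit for incoherent illumination: minimum resolvable
% separation d for wavelength \lambda and numerical aperture NA, and the
% corresponding Airy disc diameter D.
\[
  d \;=\; 0.61\,\frac{\lambda}{\mathrm{NA}},
  \qquad
  D \;=\; 2d \;=\; 1.22\,\frac{\lambda}{\mathrm{NA}}.
\]
```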
The Influence of Specimen Type on Tensile Fracture Toughness of Rock Materials
NASA Astrophysics Data System (ADS)
Aliha, Mohammad Reza Mohammad; Mahdavi, Eqlima; Ayatollahi, Majid Reza
2017-03-01
Up to now, several methods have been proposed to determine the mode I fracture toughness of rocks. In this research, different cylindrical and disc shape samples, namely: chevron bend (CB), short rod (SR), cracked chevron notched Brazilian disc (CCNBD), and semi-circular bend (SCB) specimens were considered for investigating mode I fracture behavior of a marble rock. It is shown experimentally that the fracture toughness values of the tested rock material obtained from different test specimens are not consistent. Indeed, depending on the geometry and loading type of the specimen, noticeable discrepancies can be observed for the fracture toughness of a same rock material. The difference between the experimental mode I fracture resistance results is related to the magnitude and sign of T-stress that is dependent on the geometry and loading configuration of the specimen. For the chevron-notched samples, the critical value of T-stress corresponding to the critical crack length was determined using the finite element method. The CCNBD and SR specimens had the most negative and positive T-stress values, respectively. The dependency of mode I fracture resistance to the T-stress was shown using the extended maximum tangential strain (EMTSN) criterion and the obtained experimental rock fracture toughness data were predicted successfully with this criterion.
An operational definition of a statistically meaningful trend.
Bryhn, Andreas C; Dimberg, Peter H
2011-04-28
Linear trend analysis of time series is standard procedure in many scientific disciplines. If the number of data is large, a trend may be statistically significant even if data are scattered far from the trend line. This study introduces and tests a quality criterion for time trends referred to as statistical meaningfulness, which is a stricter quality criterion for trends than high statistical significance. The time series is divided into intervals and interval mean values are calculated. Thereafter, r(2) and p values are calculated from regressions concerning time and interval mean values. If r(2) ≥ 0.65 at p ≤ 0.05 in any of these regressions, then the trend is regarded as statistically meaningful. Out of ten investigated time series from different scientific disciplines, five displayed statistically meaningful trends. A Microsoft Excel application (add-in) was developed which can perform statistical meaningfulness tests and which may increase the operationality of the test. The presented method for distinguishing statistically meaningful trends should be reasonably uncomplicated for researchers with basic statistics skills and may thus be useful for determining which trends are worth analysing further, for instance with respect to causal factors. The method can also be used for determining which segments of a time trend may be particularly worthwhile to focus on.
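A minimal implementation of the proposed test, assuming a single division of the series into intervals (the study examines several interval divisions); the thresholds follow the stated r² ≥ 0.65 and p ≤ 0.05 rule:

```python
import numpy as np
from scipy import stats

def statistically_meaningful(t, y, n_intervals=5, r2_min=0.65, p_max=0.05):
    """Test described above: split the series into intervals, regress the
    interval means on the interval mean times, and require r^2 >= 0.65 at p <= 0.05."""
    groups = np.array_split(np.argsort(t), n_intervals)
    t_mean = np.array([t[idx].mean() for idx in groups])
    y_mean = np.array([y[idx].mean() for idx in groups])
    res = stats.linregress(t_mean, y_mean)
    return (res.rvalue ** 2 >= r2_min) and (res.pvalue <= p_max), res

# Example: a noisy but genuine upward trend.
rng = np.random.default_rng(0)
t = np.arange(100, dtype=float)
y = 0.03 * t + rng.normal(0, 1.0, t.size)
meaningful, res = statistically_meaningful(t, y)
print("statistically meaningful:", meaningful, "| r2 =", round(res.rvalue ** 2, 2))
```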
The distribution over time of costs and social net benefits for pertussis immunization programs.
Girard, Dorota Zdanowska
2010-03-01
The cost of a six-dose pertussis immunization program for children and adolescents is investigated in relation to estimators of the price of acellular vaccine, the value of a child's life, levels of vaccination rate and discount rates. We compare the cost of the program maintained over time at 90% with three alternative strategies, each involving a decrease in vaccination coverage. Data from England and Wales, 1966-2005, are used to formalize a delay in occurrence of pertussis cases as a result of a fall in coverage. We first apply the criterion of minimization of the total social cost of pertussis to identify the best cost-saving immunization strategy. The results are also discussed in the form of the discounted present value of the total social net benefits. We find that the discounted present value of the total social net benefit is maximized when a stable vaccination program at 90% is compared to a gradual decrease in vaccination coverage leading to the lowest vaccination rate. The benefits to society of providing a sustained immunization strategy, vaccinating the highest proportion of children and adolescents, are systematically proved on the basis of the second optimisation criterion, independently of the level of estimators applied during economic evaluation for the cost variables.
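A minimal sketch of the discounted-present-value criterion used in the comparison; all benefit, cost, and discount-rate numbers are placeholders, not values from the study:

```python
# Discounted present value of total social net benefits for two hypothetical
# vaccination strategies (sustained 90% coverage vs. declining coverage).
def discounted_net_benefit(benefits, costs, rate):
    """Present value of the (benefit - cost) stream, discounted at `rate`."""
    return sum((b - c) / (1.0 + rate) ** t
               for t, (b, c) in enumerate(zip(benefits, costs)))

benefits_sustained = [120.0] * 10           # e.g. averted pertussis costs per year
costs_sustained    = [40.0] * 10            # vaccine purchase and delivery
benefits_declining = [120 - 8 * t for t in range(10)]
costs_declining    = [40 - 2 * t for t in range(10)]

for r in (0.0, 0.03, 0.06):
    keep = discounted_net_benefit(benefits_sustained, costs_sustained, r)
    drop = discounted_net_benefit(benefits_declining, costs_declining, r)
    print(f"discount rate {r:.2f}: sustained {keep:7.1f}  vs declining {drop:7.1f}")
```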
A Study of the Congruency of Competencies and Criterion-Referenced Measures.
ERIC Educational Resources Information Center
Jones, John Wilbur, Jr.
The job of the 4-H extension agent involves fairly complex levels of performance. The curriculum for the extension agent program should produce youth workers who have the ability to perform competently and who possess the basic concepts and values required to function effectively. Performance objectives were written for each competency considered…
Organizational Culture in Educational Institutions
ERIC Educational Resources Information Center
Efeoglu, I. Efe; Ulum, Ömer Gökhan
2017-01-01
The concept of culture closely refers to a wide scope of effects on how individuals act in a group, an institution, or a public place. Chiefly, it covers a range of universal ideas, beliefs, values, behaviors, criteria, and measures which may be both explicit and implicit. The study on organizational culture has gained much attention among…
Equivalent Ear Canal Volumes in Children Pre- and Post-Tympanostomy Tube Insertion.
ERIC Educational Resources Information Center
Shanks, Janet E.; And Others
1992-01-01
Evaluation of preoperative and postoperative equivalent ear canal volume measures on 334 children (ages 6 weeks to 6.7 years) with chronic otitis media with effusion found that the determination could be made very accurately for children 4 years and older. Criterion values for tympanic membrane perforation and preoperative and postoperative…
Diagnostic Efficiency of "DSM-IV" Indicators for Binge Eating Episodes
ERIC Educational Resources Information Center
White, Marney A.; Grilo, Carlos M.
2011-01-01
Positive and negative predictive powers (PPPs, NPPs) were computed for each indicator criterion in separate analyses comparing BED, BN, and combined BED + BN groups relative to controls. Results: PPPs and NPPs suggest all of the indicators have predictive value, with "eating alone because embarrassed" (PPP = 0.80) and "feeling disgusted" (NPP = 0.93) performing as the best inclusion and exclusion criteria,…
There has been an ongoing dilemma for agencies who set criteria for safe recreational waters in how to provide for a seasonal assessment of a beach site versus guidance for day-to-day management. Typically an overall 'safe' criterion level is derived from epidemiologic studies o...
Economic efficiency of fire management programs at six National Forests
Dennis L. Schweitzer; Ernest V. Andersen; Thomas J. Mills
1982-01-01
Two components of fire management programs were analyzed at these Forests: Francis Marion (South Carolina), Huron-Manistee (Michigan), San Bernardino (California), Tonto (Arizona), and Deschutes and Willamette (Oregon). Initial attack and aviation operations were evaluated by the criterion of minimizing the program cost plus the net value change of resource outputs and...
Study of natural radioactivity in Mansehra granite, Pakistan: environmental concerns.
Qureshi, Aziz Ahmed; Jadoon, Ishtiaq Ahmed Khan; Wajid, Ali Abbas; Attique, Ahsan; Masood, Adil; Anees, Muhammad; Manzoor, Shahid; Waheed, Abdul; Tubassam, Aneela
2014-03-01
A part of Mansehra Granite was selected for the assessment of radiological hazards. The average activity concentrations of (226)Ra, (232)Th and (40)K were found to be 27.32, 50.07 and 953.10 Bq kg(-1), respectively. These values are in the median range when compared with the granites around the world. Radiological hazard indices and annual effective doses were estimated. All of these indices were found to be within the criterion limits except outdoor external dose (82.38 nGy h(-1)) and indoor external dose (156.04 nGy h(-1)), which are higher than the world's average background levels of 51 and 55 nGy h(-1), respectively. These values correspond to an average annual effective dose of 0.867 mSv y(-1), which is less than the criterion limit of 1 mSv y(-1) (ICRP-103). Some localities in the Mansehra city have annual effective dose higher than the limit of 1 mSv y(-1). Overall, the Mansehra Granite does not pose any significant radiological health hazard in the outdoor or indoor.
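The quoted dose figures can essentially be reproduced from the reported average activity concentrations using the standard UNSCEAR (2000) conversion coefficients; the outdoor/indoor occupancy factors below are the usual defaults and are an assumption, since the abstract does not state them. The result is close to the reported total of 0.867 mSv per year:

```python
# Absorbed dose rates in air and annual effective dose from the average
# activity concentrations quoted above (UNSCEAR 2000 coefficients assumed).
c_ra, c_th, c_k = 27.32, 50.07, 953.10                # Bq/kg for 226Ra, 232Th, 40K

d_out = 0.462 * c_ra + 0.604 * c_th + 0.0417 * c_k    # outdoor dose rate, nGy/h
d_in  = 0.92  * c_ra + 1.1   * c_th + 0.081  * c_k    # indoor dose rate, nGy/h

hours_per_year = 8760
sv_per_gy = 0.7                                        # effective dose conversion
e_out = d_out * hours_per_year * 0.2 * sv_per_gy * 1e-6   # mSv/y, 0.2 occupancy
e_in  = d_in  * hours_per_year * 0.8 * sv_per_gy * 1e-6   # mSv/y, 0.8 occupancy
print(f"outdoor {d_out:.1f} nGy/h, indoor {d_in:.1f} nGy/h, "
      f"annual effective dose {e_out + e_in:.3f} mSv")
```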
Perrier, E T; Bottin, J H; Vecchio, M; Lemetais, G
2017-04-01
Growing evidence suggests a distinction between water intake necessary for maintaining a euhydrated state, and water intake considered to be adequate from a perspective of long-term health. Previously, we have proposed that maintaining a 24-h urine osmolality (U Osm) of ⩽500 mOsm/kg is a desirable target for urine concentration to ensure sufficient urinary output to reduce renal health risk and circulating vasopressin. In clinical practice and field monitoring, the measurement of U Osm is not practical. In this analysis, we calculate criterion values for urine-specific gravity (U SG) and urine color (U Col), two measures which have broad applicability in clinical and field settings. A receiver operating characteristic curve analysis performed on 817 urine samples demonstrates that a U SG ⩾1.013 detects U Osm >500 mOsm/kg with very high accuracy (AUC 0.984), whereas a subject-assessed U Col ⩾4 offers high sensitivity and moderate specificity (AUC 0.831) for detecting U Osm >500 mOsm/kg.
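A sketch of the receiver operating characteristic workflow described above on synthetic paired measurements; the derived threshold is for illustration only and will not reproduce the published criterion value of U SG ⩾ 1.013:

```python
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

# Synthetic stand-in for paired specific-gravity / osmolality measurements;
# the real criterion value came from 817 clinical samples, not from this.
rng = np.random.default_rng(0)
u_osm = rng.normal(500, 180, 800).clip(100, 1100)
u_sg = 1.000 + 3.3e-5 * u_osm + rng.normal(0, 0.0015, u_osm.size)

label = (u_osm > 500).astype(int)                 # condition to detect
fpr, tpr, thresholds = roc_curve(label, u_sg)
best = thresholds[np.argmax(tpr - fpr)]           # threshold maximizing Youden's J
print("AUC:", round(roc_auc_score(label, u_sg), 3))
print("USG threshold maximizing Youden's J:", round(best, 4))
```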
Mull, Hillary J; Borzecki, Ann M; Loveland, Susan; Hickson, Kathleen; Chen, Qi; MacDonald, Sally; Shin, Marlena H; Cevasco, Marisa; Itani, Kamal M F; Rosen, Amy K
2014-04-01
The Patient Safety Indicators (PSIs) use administrative data to screen for select adverse events (AEs). In this study, VA Surgical Quality Improvement Program (VASQIP) chart review data were used as the gold standard to measure the criterion validity of 5 surgical PSIs. Independent chart review was also used to determine reasons for PSI errors. The sensitivity, specificity, and positive predictive value of PSI software version 4.1a were calculated among Veterans Health Administration hospitalizations (2003-2007) reviewed by VASQIP (n = 268,771). Nurses re-reviewed a sample of hospitalizations for which PSI and VASQIP AE detection disagreed. Sensitivities ranged from 31% to 68%, specificities from 99.1% to 99.8%, and positive predictive values from 31% to 72%. Reviewers found that coding errors accounted for some PSI-VASQIP disagreement; some disagreement was also the result of differences in AE definitions. These results suggest that the PSIs have moderate criterion validity; however, some surgical PSIs detect different AEs than VASQIP. Future research should explore using both methods to evaluate surgical quality. Published by Elsevier Inc.
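The summary measures reported above follow directly from a 2 × 2 cross-classification of PSI flags against chart review; the counts below are invented for illustration, not the VASQIP data:

```python
# Criterion-validity summary measures from hypothetical true/false positive and
# negative counts (PSI flag vs. gold-standard chart review).
tp, fp, fn, tn = 120, 110, 150, 268_000

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
ppv = tp / (tp + fp)
print(f"sensitivity {sensitivity:.1%}, specificity {specificity:.2%}, PPV {ppv:.1%}")
```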
Abajian, Aaron; Murali, Nikitha; Savic, Lynn Jeanette; Laage-Gaupp, Fabian Max; Nezami, Nariman; Duncan, James S; Schlachter, Todd; Lin, MingDe; Geschwind, Jean-François; Chapiro, Julius
2018-06-01
To use magnetic resonance (MR) imaging and clinical patient data to create an artificial intelligence (AI) framework for the prediction of therapeutic outcomes of transarterial chemoembolization by applying machine learning (ML) techniques. This study included 36 patients with hepatocellular carcinoma (HCC) treated with transarterial chemoembolization. The cohort (age 62 ± 8.9 years; 31 men; 13 white; 24 Eastern Cooperative Oncology Group performance status 0, 10 status 1, 2 status 2; 31 Child-Pugh stage A, 4 stage B, 1 stage C; 1 Barcelona Clinic Liver Cancer stage 0, 12 stage A, 10 stage B, 13 stage C; tumor size 5.2 ± 3.0 cm; number of tumors 2.6 ± 1.1; and 30 conventional transarterial chemoembolization, 6 with drug-eluting embolic agents). MR imaging was obtained before and 1 month after transarterial chemoembolization. Image-based tumor response to transarterial chemoembolization was assessed with the use of the 3D quantitative European Association for the Study of the Liver (qEASL) criterion. Clinical information, baseline imaging, and therapeutic features were used to train logistic regression (LR) and random forest (RF) models to predict patients as treatment responders or nonresponders under the qEASL response criterion. The performance of each model was validated using leave-one-out cross-validation. Both LR and RF models predicted transarterial chemoembolization treatment response with an overall accuracy of 78% (sensitivity 62.5%, specificity 82.1%, positive predictive value 50.0%, negative predictive value 88.5%). The strongest predictors of treatment response included a clinical variable (presence of cirrhosis) and an imaging variable (relative tumor signal intensity >27.0). Transarterial chemoembolization outcomes in patients with HCC may be predicted before procedures by combining clinical patient data and baseline MR imaging with the use of AI and ML techniques. Copyright © 2018 SIR. Published by Elsevier Inc. All rights reserved.
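A schematic reconstruction, on synthetic data, of the leave-one-out validation of logistic regression and random forest classifiers described above; the features and labels are invented, so the accuracies are not the study's:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

# 36 synthetic "patients" with a handful of clinical/imaging features and a
# binary responder label standing in for the qEASL response outcome.
rng = np.random.default_rng(0)
X = rng.normal(size=(36, 5))        # e.g. cirrhosis flag, tumor signal, size, ...
y = (X[:, 0] + 0.8 * X[:, 1] + rng.normal(0, 1, 36) > 0).astype(int)

loo = LeaveOneOut()
for name, model in [("logistic regression", LogisticRegression(max_iter=1000)),
                    ("random forest", RandomForestClassifier(n_estimators=200,
                                                             random_state=0))]:
    pred = cross_val_predict(model, X, y, cv=loo)
    print(f"{name}: leave-one-out accuracy = {np.mean(pred == y):.2f}")
```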
Dowd, Kieran P.; Harrington, Deirdre M.; Donnelly, Alan E.
2012-01-01
Background The activPAL has been identified as an accurate and reliable measure of sedentary behaviour. However, only limited information is available on the accuracy of the activPAL activity count function as a measure of physical activity, while no unit calibration of the activPAL has been completed to date. This study aimed to investigate the criterion validity of the activPAL, examine the concurrent validity of the activPAL, and perform and validate a value calibration of the activPAL in an adolescent female population. The performance of the activPAL in estimating posture was also compared with sedentary thresholds used with the ActiGraph accelerometer. Methodologies Thirty adolescent females (15 developmental; 15 cross-validation) aged 15–18 years performed 5 activities while wearing the activPAL, ActiGraph GT3X, and the Cosmed K4B2. A random coefficient statistics model examined the relationship between metabolic equivalent (MET) values and activPAL counts. Receiver operating characteristic analysis was used to determine activity thresholds and for cross-validation. The random coefficient statistics model showed a concordance correlation coefficient of 0.93 (standard error of the estimate = 1.13). An optimal moderate threshold of 2997 was determined using mixed regression, while an optimal vigorous threshold of 8229 was determined using receiver operating statistics. The activPAL count function demonstrated very high concurrent validity (r = 0.96, p<0.01) with the ActiGraph count function. Levels of agreement for sitting, standing, and stepping between direct observation and the activPAL and ActiGraph were 100%, 98.1%, 99.2% and 100%, 0%, 100%, respectively. Conclusions These findings suggest that the activPAL is a valid, objective measurement tool that can be used for both the measurement of physical activity and sedentary behaviours in an adolescent female population. PMID:23094069
NASA Astrophysics Data System (ADS)
Li, Yue; Bai, Xiao Yong; Jie Wang, Shi; Qin, Luo Yi; Chao Tian, Yi; Jie Luo, Guang
2017-05-01
Soil loss tolerance (T value) is one of the criteria used to determine the necessity of erosion control measures and ecological restoration strategies. However, the validity of this criterion in subtropical karst regions is strongly disputed. In this study, the T value is calculated from the soil formation rate by using a digital distribution map of carbonate rock assemblage types. Results indicated spatial heterogeneity and diversity in soil loss tolerance. Instead of only one criterion, a minimum of three criteria should be considered when investigating the carbonate areas of southern China, because the "one region, one T value"
concept may not be applicable to this region. The T value is proportional to the amount of argillaceous material, which determines the surface soil thickness of the formations in homogeneous carbonate rock areas. Homogeneous carbonate rock areas, carbonate rock intercalated with clastic rock areas, and carbonate/clastic rock alternation areas have T values of 20, 50 and 100 t/(km2 a), respectively, and are correspondingly extremely, severely and moderately sensitive to soil erosion. Karst rocky desertification (KRD) is defined as extreme soil erosion and reflects the risk of erosion. Thus, the relationship between T value and erosion risk is determined using KRD as a parameter. The existence of KRD land is unrelated to the T value, although this parameter indicates erosion sensitivity. Erosion risk depends strongly on the relationship between real soil loss (RL) and the T value rather than on either erosion intensity or the T value itself. If RL >> T, the erosion risk is high even if RL is low. Conversely, if T >> RL, the soil is safe even though RL is high. Overall, these findings may clarify the heterogeneity of the T value and its effect on erosion risk in a karst environment.
Diffusion theory of decision making in continuous report.
Smith, Philip L
2016-07-01
I present a diffusion model for decision making in continuous report tasks, in which a continuous, circularly distributed, stimulus attribute in working memory is matched to a representation of the attribute in the stimulus display. Memory retrieval is modeled as a 2-dimensional diffusion process with vector-valued drift on a disk, whose bounding circle represents the decision criterion. The direction and magnitude of the drift vector describe the identity of the stimulus and the quality of its representation in memory, respectively. The point at which the diffusion exits the disk determines the reported value of the attribute and the time to exit the disk determines the decision time. Expressions for the joint distribution of decision times and report outcomes are obtained by means of the Girsanov change-of-measure theorem, which allows the properties of the nonzero-drift diffusion process to be characterized as a function of a Euclidean-distance Bessel process. Predicted report precision is equal to the product of the decision criterion and the drift magnitude and follows a von Mises distribution, in agreement with the treatment of precision in the working memory literature. Trial-to-trial variability in criterion and drift rate leads, respectively, to direct and inverse relationships between report accuracy and decision times, in agreement with, and generalizing, the standard diffusion model of 2-choice decisions. The 2-dimensional model provides a process account of working memory precision and its relationship with the diffusion model, and a new way to investigate the properties of working memory, via the distributions of decision times. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
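A hedged Monte Carlo sketch of the process described above: a two-dimensional random walk with vector drift that terminates when it crosses the bounding circle (the decision criterion), returning the exit angle as the reported value and the exit time as the decision time. It does not reproduce the analytic Girsanov/Bessel results, and all parameter values are assumptions.

```python
# Monte Carlo sketch of 2-D diffusion with vector drift on a disk:
# the exit point gives the reported feature value, the exit time the decision time.
import numpy as np

def simulate_trial(drift_angle, drift_mag, criterion=1.0, sigma=1.0, dt=0.001, rng=None):
    rng = rng or np.random.default_rng()
    v = drift_mag * np.array([np.cos(drift_angle), np.sin(drift_angle)])
    x = np.zeros(2)
    t = 0.0
    while np.linalg.norm(x) < criterion:
        x += v * dt + sigma * np.sqrt(dt) * rng.standard_normal(2)
        t += dt
    return np.arctan2(x[1], x[0]), t   # reported angle, decision time

rng = np.random.default_rng(2)
reports, rts = zip(*(simulate_trial(0.0, 2.0, rng=rng) for _ in range(500)))
print("report dispersion (SD of angles):", np.std(reports), "mean decision time:", np.mean(rts))
```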
Cuesta-Vargas, Antonio Ignacio; González-Sánchez, Manuel
2014-10-29
Spanish is one of the five most spoken languages in the world, yet there is currently no published Spanish version of the Örebro Musculoskeletal Pain Questionnaire (OMPQ). The aim of the present study is to describe the process of translating the OMPQ into Spanish and to analyse its reliability, internal structure, internal consistency, and concurrent criterion-related validity. Translation and psychometric testing. Two independent translators translated the OMPQ into Spanish, and a consensus version was produced from both translations. A backward translation was made to verify and resolve any semantic or conceptual problems. A total of 104 patients (67 men/37 women) with a mean age of 53.48 (±11.63) years, suffering from chronic musculoskeletal disorders, completed a Spanish version of the OMPQ twice. Statistical analysis was performed to evaluate reliability, internal structure, internal consistency, and concurrent criterion-related validity with reference to the gold standard questionnaire SF-12v2. All variables except "Coping" showed reliability above 0.85. Exploratory factor analysis of the internal structure indicated that 75.2% of the variance could be explained by six components with an eigenvalue higher than 1, and 52.1% by only the three components that each explained more than 10% of the variance. For concurrent criterion-related validity, several significant correlations close to 0.6 were observed, and the correlation between general health and the OMPQ total score exceeded that value. The Spanish version of the screening questionnaire OMPQ can be used to identify Spanish patients with musculoskeletal pain at risk of developing chronic disability.
Integration of an EEG biomarker with a clinician's ADHD evaluation
Snyder, Steven M; Rugino, Thomas A; Hornig, Mady; Stein, Mark A
2015-01-01
Background This study is the first to evaluate an assessment aid for attention-deficit/hyperactivity disorder (ADHD) according to both Class-I evidence standards of American Academy of Neurology and De Novo requirements of US Food and Drug Administration. The assessment aid involves a method to integrate an electroencephalographic (EEG) biomarker, theta/beta ratio (TBR), with a clinician's ADHD evaluation. The integration method is intended as a step to help improve certainty with criterion E (i.e., whether symptoms are better explained by another condition). Methods To evaluate the assessment aid, investigators conducted a prospective, triple-blinded, 13-site, clinical cohort study. Comprehensive clinical evaluation data were obtained from 275 children and adolescents presenting with attentional and behavioral concerns. A qualified clinician at each site performed differential diagnosis. EEG was collected by separate teams. The reference standard was consensus diagnosis by an independent, multidisciplinary team (psychiatrist, psychologist, and neurodevelopmental pediatrician), which is well-suited to evaluate criterion E in a complex clinical population. Results Of 209 patients meeting ADHD criteria per a site clinician's judgment, 93 were separately found by the multidisciplinary team to be less likely to meet criterion E, implying possible overdiagnosis by clinicians in 34% of the total clinical sample (93/275). Of those 93, 91% were also identified by EEG, showing a relatively lower TBR (85/93). Further, the integration method was in 97% agreement with the multidisciplinary team in the resolution of a clinician's uncertain cases (35/36). TBR showed statistical power specific to supporting certainty of criterion E per the multidisciplinary team (Cohen's d, 1.53). Patients with relatively lower TBR were more likely to have other conditions that could affect criterion E certainty (10 significant results; P ≤ 0.05). Integration of this information with a clinician's ADHD evaluation could help improve diagnostic accuracy from 61% to 88%. Conclusions The EEG-based assessment aid may help improve accuracy of ADHD diagnosis by supporting greater criterion E certainty. PMID:25798338
The stressor criterion for posttraumatic stress disorder: Does it matter?
Roberts, Andrea L.; Dohrenwend, Bruce P.; Aiello, Allison; Wright, Rosalind J.; Maercker, Andreas; Galea, Sandro; Koenen, Karestan C.
2013-01-01
Objective The definition of the stressor criterion for posttraumatic stress disorder (“Criterion A1”) is hotly debated with major revisions being considered for DSM-V. We examine whether symptoms, course, and consequences of PTSD vary predictably with the type of stressful event that precipitates symptoms. Method We used data from the 2009 PTSD diagnostic subsample (N=3,013) of the Nurses Health Study II. We asked respondents about exposure to stressful events qualifying under 1) DSM-III, 2) DSM-IV, or 3) not qualifying under DSM Criterion A1. Respondents selected the event they considered worst and reported subsequent PTSD symptoms. Among participants who met all other DSM-IV PTSD criteria, we compared distress, symptom severity, duration, impairment, receipt of professional help, and nine physical, behavioral, and psychiatric sequelae (e.g. physical functioning, unemployment, depression) by precipitating event group. Various assessment tools were used to determine fulfillment of PTSD Criteria B through F and to assess these 14 outcomes. Results Participants with PTSD from DSM-III events reported on average 1 more symptom (DSM-III mean=11.8 symptoms, DSM-IV=10.7, non-DSM=10.9) and more often reported symptoms lasted one year or longer compared to participants with PTSD from other groups. However, sequelae of PTSD did not vary systematically with precipitating event type. Conclusions Results indicate the stressor criterion as defined by the DSM may not be informative in characterizing PTSD symptoms and sequelae. In the context of ongoing DSM-V revision, these results suggest that Criterion A1 could be expanded in DSM-V without much consequence for our understanding of PTSD phenomenology. Events not considered qualifying stressors under the DSM produced PTSD as consequential as PTSD following DSM-III events, suggesting PTSD may be an aberrantly severe but nonspecific stress response syndrome. PMID:22401487
Ward, S.; Augspurger, T.; Dwyer, F.J.; Kane, C.; Ingersoll, C.G.
2007-01-01
Water quality data were collected from three drainages supporting the endangered Carolina heelsplitter (Lasmigona decorata) and dwarf wedgemussel (Alasmidonta heterodon) to determine the potential for impaired water quality to limit the recovery of these freshwater mussels in North Carolina, USA. Total recoverable copper, total residual chlorine, and total ammonia nitrogen were measured every two months for approximately a year at sites bracketing wastewater sources and mussel habitat. These data and state monitoring datasets were compared with ecological screening values, including estimates of chemical concentrations likely to be protective of mussels, and federal ambient water quality criteria to assess site risks following a hazard quotient approach. In one drainage, the site-specific ammonia ecological screening value for acute exposures was exceeded in 6% of the samples, and 15% of samples exceeded the chronic ecological screening value; however, ammonia concentrations were generally below levels of concern in other drainages. In all drainages, copper concentrations were higher than ecological screening values most frequently (exceeding the ecological screening values for acute exposures in 65-94% of the samples). Chlorine concentrations exceeding the acute water quality criterion were observed in 14 and 35% of samples in two of three drainages. The ecological screening values were exceeded most frequently in Goose Creek and the Upper Tar River drainages; concentrations rarely exceeded ecological screening values in the Swift Creek drainage except for copper. The site-specific risk assessment approach provides valuable information (including site-specific risk estimates and ecological screening values for protection) that can be applied through regulatory and nonregulatory means to improve water quality for mussels where risks are indicated and pollutant threats persist. © 2007 SETAC.
NASA Astrophysics Data System (ADS)
Wormanns, Dag; Beyer, Florian; Hoffknecht, Petra; Dicken, Volker; Kuhnigk, Jan-Martin; Lange, Tobias; Thomas, Michael; Heindel, Walter
2005-04-01
This study aimed to evaluate a morphology-based approach for predicting postoperative forced expiratory volume in one second (FEV1) after lung resection from preoperative CT scans. Fifteen patients with surgically treated (lobectomy or pneumonectomy) bronchogenic carcinoma were enrolled in the study. A preoperative chest CT and pulmonary function tests before and after surgery were performed. CT scans were analyzed by prototype software: automated segmentation and volumetry of the lung lobes were performed with minimal user interaction. The determined volumes of the different lung lobes were used to predict postoperative FEV1 as a percentage of the preoperative value. Predicted FEV1 values were compared with the observed postoperative values as the standard of reference. Patients underwent lobectomy in twelve cases (6 upper lobes; 1 middle lobe; 5 lower lobes; 6 right side; 6 left side) and pneumonectomy in three cases. Automated calculation of the predicted postoperative lung function was successful in all cases. Predicted FEV1 ranged from 54% to 95% (mean 75% ± 11%) of the preoperative values. Two cases with obviously erroneous lung function tests were excluded from the analysis. The mean error of the predicted FEV1 was 20 ± 160 ml, indicating the absence of systematic error; the mean absolute error was 7.4% ± 3.3%, or 137 ± 77 ml, respectively. The 200 ml reproducibility criterion for FEV1 was met in 11 of 13 cases (85%). In conclusion, software-assisted prediction of postoperative lung function yielded clinically acceptable agreement with the observed postoperative values. This method might add useful information for evaluating the functional operability of patients with lung cancer.
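A minimal sketch of the volume-based prediction implied above, in which predicted postoperative FEV1 is the preoperative value scaled by the fraction of lung volume remaining after resection; the lobe names, volumes, and function name are hypothetical.

```python
# Sketch of a volume-based prediction of postoperative FEV1:
# ppoFEV1 = preoperative FEV1 x (1 - resected lobe volume / total lung volume).
def predicted_postop_fev1(fev1_preop_l, lobe_volumes_ml, resected_lobes):
    total = sum(lobe_volumes_ml.values())
    resected = sum(lobe_volumes_ml[lobe] for lobe in resected_lobes)
    return fev1_preop_l * (1.0 - resected / total)

# Hypothetical lobe volumes from CT volumetry (ml)
lobes = {"RUL": 900.0, "RML": 400.0, "RLL": 1100.0, "LUL": 1000.0, "LLL": 1050.0}
print("ppoFEV1 = %.2f L" % predicted_postop_fev1(2.4, lobes, ["RUL"]))
```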
A multidimensional anisotropic strength criterion based on Kelvin modes
NASA Astrophysics Data System (ADS)
Arramon, Yves Pierre
A new theory for the prediction of the multiaxial strength of anisotropic elastic materials was proposed by Biegler and Mehrabadi (1993). This theory is based on the premise that the total elastic strain energy of an anisotropic material subjected to multiaxial stress can be decomposed into dilatational and deviatoric modes. A multidimensional strength criterion may thus be formulated by postulating that failure occurs when the energy stored in one of these modes reaches a critical value. However, the logic employed by these authors to formulate a failure criterion based on this theory could not be extended to multiaxial stress. In this thesis, an alternate criterion is presented which redresses the biaxial restriction by reformulating the surfaces of constant modal energy as surfaces of constant eigenstress magnitude. The resulting failure envelope, in a multidimensional stress space, is piecewise smooth. Each facet of the envelope is expected to represent the locus of failure data for a particular Kelvin mode. It is further shown that the Kelvin mode theory alone provides an incomplete description of the failure of some materials, but that this weakness can be addressed by the introduction of a set of complementary modes. A revised theory which combines both Kelvin and complementary modes is thus proposed and applied to seven example materials: an isotropic concrete, a tetragonal paperboard, two orthotropic softwoods, two orthotropic hardwoods, and an orthotropic cortical bone. The resulting failure envelopes for these examples were plotted and, with the exception of concrete, shown to produce intuitively correct failure predictions.
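A hedged LaTeX sketch of the eigenstress reformulation described above; the notation and normalization are assumptions, not taken verbatim from the thesis.

```latex
% Sketch of the Kelvin-mode (eigenstress) criterion; notation assumed.
% The stiffness tensor admits the spectral (Kelvin) decomposition
\[
  \mathbb{C} \;=\; \sum_{i=1}^{m} \lambda_i\, \mathbf{B}_i \otimes \mathbf{B}_i,
  \qquad
  \boldsymbol{\sigma} \;=\; \sum_{i=1}^{m} \sigma_i\, \mathbf{B}_i,
  \qquad
  \sigma_i \;=\; \boldsymbol{\sigma} : \mathbf{B}_i,
\]
% where the \mathbf{B}_i are orthonormal eigentensors (Kelvin modes) and the
% \sigma_i are eigenstress magnitudes.  Failure is postulated when any
% eigenstress magnitude reaches its critical value,
\[
  \max_i \; \frac{\lvert \sigma_i \rvert}{\sigma_i^{\mathrm{crit}}} \;=\; 1,
\]
% which yields a piecewise-smooth envelope whose facets correspond to individual modes.
```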
Organizational Productivity Measurement: The Development and Evaluation of an Integrated Approach.
1987-07-01
The basic measurement and aggregation strategy also has applications in management information systems, performance appraisal, and other situations where multiple... larger organizational units. ...criterion... much has been written on the subject of organizational productivity, there is little consensus concerning its definition (Tuttle, 1983). Such a lack
1982-01-01
apparent coincidence that the same normalization should do for time and uncertainty with Kenneth Arrow, Michael Boskin, Frank Hahn, Hugh Rose, Amartya Sen, and John Wise at various times, and the possible relationship between the structure of a criterion function and an information tree such as that
Using macroinvertebrate response to inform sediment criteria development in mountain streams
The phrase biologically-based sediment criterion indicates that biological data is used to develop regional sediment criteria that will protect and maintain self-sustaining populations of native sediment-sensitive biota. To develop biologically-based sediment criteria we must qua...
Code of Federal Regulations, 2012 CFR
2012-07-01
... education community. (2) [Reserved] (Authority: 20 U.S.C. 1124(b)) [47 FR 14122, Apr. 1, 1982, as amended at... language at the undergraduate level. (b) The Secretary reviews each application for information that shows...
Code of Federal Regulations, 2013 CFR
2013-07-01
... education community. (2) [Reserved] (Authority: 20 U.S.C. 1124(b)) [47 FR 14122, Apr. 1, 1982, as amended at... language at the undergraduate level. (b) The Secretary reviews each application for information that shows...
Code of Federal Regulations, 2014 CFR
2014-07-01
... education community. (2) [Reserved] (Authority: 20 U.S.C. 1124(b)) [47 FR 14122, Apr. 1, 1982, as amended at... language at the undergraduate level. (b) The Secretary reviews each application for information that shows...
Code of Federal Regulations, 2011 CFR
2011-07-01
... education community. (2) [Reserved] (Authority: 20 U.S.C. 1124(b)) [47 FR 14122, Apr. 1, 1982, as amended at... language at the undergraduate level. (b) The Secretary reviews each application for information that shows...
Li, Pengxiang; Kim, Michelle M; Doshi, Jalpa A
2010-08-20
The Centers for Medicare and Medicaid Services (CMS) has implemented the CMS-Hierarchical Condition Category (CMS-HCC) model to risk adjust Medicare capitation payments. This study intends to assess the performance of the CMS-HCC risk adjustment method and to compare it to the Charlson and Elixhauser comorbidity measures in predicting in-hospital and six-month mortality in Medicare beneficiaries. The study used the 2005-2006 Chronic Condition Data Warehouse (CCW) 5% Medicare files. The primary study sample included all community-dwelling fee-for-service Medicare beneficiaries with a hospital admission between January 1st, 2006 and June 30th, 2006. Additionally, four disease-specific samples consisting of subgroups of patients with principal diagnoses of congestive heart failure (CHF), stroke, diabetes mellitus (DM), and acute myocardial infarction (AMI) were also selected. Four analytic files were generated for each sample by extracting inpatient and/or outpatient claims for each patient. Logistic regressions were used to compare the methods. Model performance was assessed using the c-statistic, the Akaike's information criterion (AIC), the Bayesian information criterion (BIC) and their 95% confidence intervals estimated using bootstrapping. The CMS-HCC had statistically significant higher c-statistic and lower AIC and BIC values than the Charlson and Elixhauser methods in predicting in-hospital and six-month mortality across all samples in analytic files that included claims from the index hospitalization. Exclusion of claims for the index hospitalization generally led to drops in model performance across all methods with the highest drops for the CMS-HCC method. However, the CMS-HCC still performed as well or better than the other two methods. The CMS-HCC method demonstrated better performance relative to the Charlson and Elixhauser methods in predicting in-hospital and six-month mortality. The CMS-HCC model is preferred over the Charlson and Elixhauser methods if information about the patient's diagnoses prior to the index hospitalization is available and used to code the risk adjusters. However, caution should be exercised in studies evaluating inpatient processes of care and where data on pre-index admission diagnoses are unavailable.
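A hedged sketch of the kind of model comparison described above: a logistic model's AIC and BIC from statsmodels plus a bootstrapped confidence interval for the c-statistic; the synthetic comorbidity flags and outcome are placeholders, not the CCW claims data.

```python
# Sketch: compare risk-adjustment specifications by c-statistic, AIC and BIC,
# with bootstrapped confidence intervals (synthetic data; not the CCW files).
import numpy as np
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(3)
n = 2000
X = rng.integers(0, 2, size=(n, 10)).astype(float)    # comorbidity indicator flags
logit_p = -2.0 + X @ rng.normal(0.4, 0.2, size=10)
y = rng.binomial(1, 1 / (1 + np.exp(-logit_p)))        # in-hospital mortality (synthetic)

fit = sm.Logit(y, sm.add_constant(X)).fit(disp=0)
print("AIC=%.1f  BIC=%.1f" % (fit.aic, fit.bic))

p_hat = fit.predict(sm.add_constant(X))
aucs = []
for _ in range(200):                                   # bootstrap the c-statistic
    idx = rng.integers(0, n, size=n)
    aucs.append(roc_auc_score(y[idx], p_hat[idx]))
print("c-statistic %.3f (95%% CI %.3f-%.3f)" %
      (roc_auc_score(y, p_hat), np.percentile(aucs, 2.5), np.percentile(aucs, 97.5)))
```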
Liao, S; Mei, J; Song, W; Liu, Y; Tan, Y-D; Chi, S; Li, P; Chen, X; Deng, S
2014-03-01
The International Association of Diabetes and Pregnancy Study Groups (IADPSG) proposed that a one-time value of fasting plasma glucose of 5.1 mmol/l or over at any time of the pregnancy is sufficient to diagnose gestational diabetes. We evaluated the repercussions of the application of this threshold in pregnant Han Chinese women. This is a retrospective study of 5360 (72.3% of total) consecutively recruited pregnant Han Chinese women in one centre from 2008 to 2011. These women underwent a two-step gestational diabetes diagnostic protocol according to the previous American Diabetes Association criteria. The IADPSG fasting plasma glucose criterion was used to reclassify these 5360 women. The prevalence, clinical characteristics and obstetric outcomes were compared among the women classified as having gestational diabetes by the previous American Diabetes Association criteria (approximately 90% were treated), those reclassified as having gestational diabetes by the single IADPSG fasting plasma glucose criterion (untreated), but not as having gestational diabetes by the previous American Diabetes Association criteria, and those with normal glucose tolerance. There were 626 cases of gestational diabetes defined by the previous American Diabetes Association criteria (11.7%) and these cases were associated with increased risks of maternal and neonatal outcomes when compared with the women with normal glucose tolerance. With the IADPSG fasting plasma glucose criterion, another 1314 (24.5%) women were reclassified as having gestational diabetes. Gestational diabetes classified by the IADPSG fasting plasma glucose criterion was associated with gestational hypertension (P = 0.0094) and neonatal admission to nursery (P = 0.035) prior to adjustment for maternal age and BMI, but was no longer a predictor for adverse pregnancy outcomes after adjustment. The simple IADPSG fasting plasma glucose criterion increased the Chinese population with gestational diabetes by 200%. The increased population with gestational diabetes was not significantly associated with excess obstetric and neonatal morbidity. © 2013 The Authors. Diabetic Medicine © 2013 Diabetes UK.
Guo, Lei; Li, Zhengyan; Gao, Pei; Hu, Hong; Gibson, Mark
2015-11-01
Bisphenol A (BPA) occurs widely in natural waters and exerts both traditional and reproductive toxicity on various aquatic species. Water quality criteria (WQC) for BPA, however, have not been established in China, which hinders ecological risk assessment for the pollutant. This study therefore aims to derive water quality criteria for BPA based on both acute and chronic toxicity endpoints and to assess the ecological risk in surface waters of China. A total of 15 acute toxicity values tested with aquatic species resident in China were found in the published literature and were fitted with the species sensitivity distribution (SSD) model to derive the criterion maximum concentration (CMC). Eighteen chronic toxicity values with traditional endpoints were fitted to derive the traditional criterion continuous concentration (CCC), and 12 chronic toxicity values with reproductive endpoints were used for the reproductive CCC. Based on the derived WQC, the ecological risk of BPA in surface waters of China was assessed with the risk quotient (RQ) method. The results showed that the CMC, traditional CCC, and reproductive CCC were 1518 μg/L, 2.19 μg/L, and 0.86 μg/L, respectively. The acute risk of BPA was negligible, with RQ values much lower than 0.1. The chronic risk was however much higher, with RQ values of 0.01-3.76 and 0.03-9.57 based on traditional and reproductive CCC, respectively. The chronic RQ values based on reproductive endpoints were about three times as high as those based on traditional endpoints, indicating that ecological risk assessment based on traditional effects may not guarantee the safety of aquatic biota. Copyright © 2015 Elsevier Ltd. All rights reserved.
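A minimal sketch of a log-normal species sensitivity distribution fit and the resulting hazardous concentration for 5% of species (HC5), one common basis for deriving a criterion value; the toxicity values are placeholders, and the study's exact SSD form and assessment factors are not reproduced.

```python
# Sketch of a log-normal species sensitivity distribution (SSD) fit and HC5
# (toxicity values are hypothetical, not the study's dataset).
import numpy as np
from scipy import stats

tox_ug_per_L = np.array([1200.0, 2500.0, 3400.0, 4800.0, 6100.0,
                         7500.0, 9800.0, 15000.0, 22000.0, 35000.0])  # acute values, one per species
log_tox = np.log10(tox_ug_per_L)
mu, sigma = log_tox.mean(), log_tox.std(ddof=1)

hc5 = 10 ** stats.norm.ppf(0.05, loc=mu, scale=sigma)   # concentration protecting 95% of species
print("HC5 = %.1f ug/L" % hc5)
```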
Ranking streamflow model performance based on Information theory metrics
NASA Astrophysics Data System (ADS)
Martinez, Gonzalo; Pachepsky, Yakov; Pan, Feng; Wagener, Thorsten; Nicholson, Thomas
2016-04-01
Accuracy-based model performance metrics do not necessarily reflect the qualitative correspondence between simulated and measured streamflow time series. The objective of this work was to determine whether information theory-based metrics can be used as a complementary tool for hydrologic model evaluation and selection. We simulated 10-year streamflow time series in five watersheds located in Texas, North Carolina, Mississippi, and West Virginia. Eight models of different complexity were applied. The information theory-based metrics were obtained after representing the time series as strings of symbols, where different symbols corresponded to different quantiles of the probability distribution of streamflow. Three metrics were computed for those strings: mean information gain, which measures the randomness of the signal; effective measure complexity, which characterizes predictability; and fluctuation complexity, which characterizes the presence of a pattern in the signal. The observed streamflow time series had smaller information content and larger complexity metrics than the precipitation time series. Watersheds served as information filters: streamflow time series were less random and more complex than those of precipitation, reflecting the fact that the watershed acts as an information filter in the hydrologic conversion of precipitation to streamflow. The Nash-Sutcliffe efficiency metric increased as the complexity of the models increased, but in many cases several models had efficiency values that were not statistically different from each other. In such cases, ranking models by the closeness of the information theory-based metrics of simulated and measured streamflow time series can provide an additional criterion for evaluating hydrologic model performance.
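A hedged sketch of the symbolization step and a simple block-entropy estimate of mean information gain; the exact definitions of effective measure complexity and fluctuation complexity used in the study are not reproduced, and the synthetic series, alphabet size, and block length are assumptions.

```python
# Sketch: encode a streamflow series as quantile symbols and estimate
# mean information gain as H(L+1 blocks) - H(L blocks) (definitions simplified).
import numpy as np
from collections import Counter

def symbolize(x, n_symbols=4):
    edges = np.quantile(x, np.linspace(0, 1, n_symbols + 1)[1:-1])
    return np.digitize(x, edges)                  # symbol index per time step

def block_entropy(symbols, L):
    blocks = [tuple(symbols[i:i + L]) for i in range(len(symbols) - L + 1)]
    counts = np.array(list(Counter(blocks).values()), dtype=float)
    p = counts / counts.sum()
    return -(p * np.log2(p)).sum()

rng = np.random.default_rng(4)
flow = np.cumsum(rng.normal(size=3650)) + 100     # synthetic daily streamflow
s = symbolize(flow)
mig = block_entropy(s, 2) - block_entropy(s, 1)   # mean information gain (bits/symbol)
print("mean information gain ~ %.3f bits" % mig)
```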
Quantifying Human Movement Using the Movn Smartphone App: Validation and Field Study
2017-01-01
Background The use of embedded smartphone sensors offers opportunities to measure physical activity (PA) and human movement. Big data—which includes billions of digital traces—offers scientists a new lens to examine PA in fine-grained detail and allows us to track people’s geocoded movement patterns to determine their interaction with the environment. Objective The objective of this study was to examine the validity of the Movn smartphone app (Moving Analytics) for collecting PA and human movement data. Methods The criterion and convergent validity of the Movn smartphone app for estimating energy expenditure (EE) were assessed in both laboratory and free-living settings, compared with indirect calorimetry (criterion reference) and a stand-alone accelerometer that is commonly used in PA research (GT1m, ActiGraph Corp, convergent reference). A supporting cross-validation study assessed the consistency of activity data when collected across different smartphone devices. Global positioning system (GPS) and accelerometer data were integrated with geographical information software to demonstrate the feasibility of geospatial analysis of human movement. Results A total of 21 participants contributed to linear regression analysis to estimate EE from Movn activity counts (standard error of estimation [SEE]=1.94 kcal/min). The equation was cross-validated in an independent sample (N=42, SEE=1.10 kcal/min). During laboratory-based treadmill exercise, EE from Movn was comparable to calorimetry (bias=0.36 [−0.07 to 0.78] kcal/min, t82=1.66, P=.10) but overestimated as compared with the ActiGraph accelerometer (bias=0.93 [0.58-1.29] kcal/min, t89=5.27, P<.001). The absolute magnitude of criterion biases increased as a function of locomotive speed (F1,4=7.54, P<.001) but was relatively consistent for the convergent comparison (F1,4=1.26, P<.29). Furthermore, 95% limits of agreement were consistent for criterion and convergent biases, and EE from Movn was strongly correlated with both reference measures (criterion r=.91, convergent r=.92, both P<.001). Movn overestimated EE during free-living activities (bias=1.00 [0.98-1.02] kcal/min, t6123=101.49, P<.001), and biases were larger during high-intensity activities (F3,6120=1550.51, P<.001). In addition, 95% limits of agreement for convergent biases were heterogeneous across free-living activity intensity levels, but Movn and ActiGraph measures were strongly correlated (r=.87, P<.001). Integration of GPS and accelerometer data within a geographic information system (GIS) enabled creation of individual temporospatial maps. Conclusions The Movn smartphone app can provide valid passive measurement of EE and can enrich these data with contextualizing temporospatial information. Although enhanced understanding of geographic and temporal variation in human movement patterns could inform intervention development, it also presents challenges for data processing and analytics. PMID:28818819
Methods for automatic trigger threshold adjustment
Welch, Benjamin J; Partridge, Michael E
2014-03-18
Methods are presented for adjusting trigger threshold values to compensate for drift in the quiescent level of a signal monitored for initiating a data recording event, thereby avoiding false triggering conditions. Initial threshold values are periodically adjusted by re-measuring the quiescent signal level, and adjusting the threshold values by an offset computation based upon the measured quiescent signal level drift. Re-computation of the trigger threshold values can be implemented on time based or counter based criteria. Additionally, a qualification width counter can be utilized to implement a requirement that a trigger threshold criterion be met a given number of times prior to initiating a data recording event, further reducing the possibility of a false triggering situation.
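A hedged sketch of the general idea (not the patented implementation): thresholds are periodically re-baselined to a re-measured quiescent level, and a qualification counter requires the criterion to be met several consecutive samples before a recording event is declared; all class and parameter names are hypothetical.

```python
# Sketch of drift-compensated triggering (illustrative, not the patented method):
# the threshold tracks the re-measured quiescent level, and a qualification counter
# requires the criterion to be met several consecutive samples before triggering.
class DriftCompensatedTrigger:
    def __init__(self, offset, recal_every=1000, qual_width=3):
        self.offset = offset            # trigger offset above the quiescent level
        self.recal_every = recal_every  # samples between threshold re-computations
        self.qual_width = qual_width    # consecutive exceedances required
        self.baseline = 0.0
        self._recent = []
        self._hits = 0

    def update(self, sample):
        self._recent.append(sample)
        if len(self._recent) >= self.recal_every:
            # Re-measure the quiescent level and shift the threshold accordingly.
            self.baseline = sum(self._recent) / len(self._recent)
            self._recent.clear()
        if sample > self.baseline + self.offset:
            self._hits += 1
        else:
            self._hits = 0
        return self._hits >= self.qual_width    # True -> start a data recording event
```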
[Examination of the criterion validity of the MMPI-2 Depression, Anxiety, and Anger Content scales].
Uluç, Sait
2008-01-01
Examination of the psychometric properties and content areas of the revised MMPI's (MMPI-2 [Minnesota Multiphasic Personality Inventory-2]) content scales is required. In this study the criterion-related validity of the MMPI-2 Depression, Anxiety, and Anger Content scales was examined using the following conceptually relevant scales: The Beck Depression Inventory (BDI), Beck Anxiety Inventory (BAI), and State Triad Anger Scale (STAS). MMPI-2 Depression, Anxiety, and Anger Content scales, and BDI, BAI, and STAS were administered to a sample of 196 students at Middle East Technical University (n= 196; 122 female, 74 male). Regression analyses were performed to determine if these conceptually relevant scales contributed significantly beyond the content scales. The MMPI-2 Depression Content Scale was compared to BDI, the MMPI-2 Anxiety Scale was compared to BAI, and the MMPI-2 Anger Content Scale was compared to STAS. The internal consistency of the MMPI-2 Depression Content Scale (alpha = 0.82), the MMPI-2 Anxiety Content Scale (alpha = 0.73), and the MMPI-2 Anger Content Scale (alpha = 0.72) was obtained. Criterion validity of the 3 analyzed content scales was demonstrated for both males and females. The findings indicated that (1) the MMPI-2 Depression Content Scale provides information about the general level of depression, (2) the MMPI-2 Anxiety Content Scale assesses subjective anxiety rather than somatic anxiety, and (3) the MMPI-2 Anger Content Scale may provide information about the potential to act out. The findings also provide further evidence that the 3 conceptually relevant scales aid in the interpretation of MMPI-2 scores by contributing additional information beyond the clinical scales.
Increasing reconstruction quality of diffractive optical elements displayed with LC SLM
NASA Astrophysics Data System (ADS)
Cheremkhin, Pavel A.; Evtikhiev, Nikolay N.; Krasnov, Vitaly V.; Rodin, Vladislav G.; Starikov, Sergey N.
2015-03-01
Phase liquid crystal (LC) spatial light modulators (SLM) are actively used in various applications. However, the majority of scientific applications require stable phase modulation, which can be hard to achieve with a commercially available SLM because of its consumer origin. The use of a digital voltage addressing scheme leads to temporal phase fluctuations, which result in lower diffraction efficiency and lower reconstruction quality of displayed diffractive optical elements (DOE). Because the fluctuations are highly periodic, knowledge of them can be used during DOE synthesis to minimize their negative effect. We synthesized DOE using accurately measured phase fluctuations of the phase LC SLM "HoloEye PLUTO VIS" to minimize their negative impact on the reconstruction of displayed DOE. Synthesis was conducted with the versatile direct search with random trajectory (DSRT) method in the following way. Before DOE synthesis began, the two-dimensional dependence of the SLM phase shift on the addressed signal level and on the time from frame start was obtained. Then synthesis begins. First, an initial phase distribution is created. Second, a random trajectory for consecutive processing of all DOE elements is generated. Then an iterative process begins: each DOE element in turn has its value changed to the one that provides a better value of the objective criterion, e.g., lower deviation of the reconstructed image from the original one; if the current element value already provides the best objective criterion value, it is left unchanged. After all elements are processed, the iteration repeats until stagnation is reached. It is demonstrated that applying knowledge of SLM phase fluctuations in DOE synthesis with the DSRT method leads to a noticeable increase in DOE reconstruction quality.
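A hedged sketch of a direct search with random trajectory loop for a phase-only DOE, with the far-field (FFT) deviation from a target image as the objective; the measured SLM phase-fluctuation map central to the paper is omitted, and the array size, level count, and function names are assumptions.

```python
# Sketch of direct search with random trajectory (DSRT) for a phase-only DOE.
# The SLM phase-fluctuation model used in the paper is omitted; the objective
# is simply the deviation of the far-field (FFT) reconstruction from a target.
import numpy as np

def reconstruction_error(phase, target):
    recon = np.abs(np.fft.fft2(np.exp(1j * phase))) ** 2
    recon /= recon.sum()
    return np.sum((recon - target) ** 2)

def dsrt(target, levels=16, iters=3, rng=None):
    rng = rng or np.random.default_rng()
    shape = target.shape
    phase_levels = 2 * np.pi * np.arange(levels) / levels
    phase = rng.choice(phase_levels, size=shape)        # initial phase distribution
    for _ in range(iters):
        order = rng.permutation(phase.size)             # random trajectory over elements
        for flat_idx in order:
            idx = np.unravel_index(flat_idx, shape)
            errs = []
            for lv in phase_levels:                      # keep the level minimizing the error
                phase[idx] = lv
                errs.append(reconstruction_error(phase, target))
            phase[idx] = phase_levels[int(np.argmin(errs))]
    return phase

target = np.zeros((8, 8)); target[2:5, 2:5] = 1.0; target /= target.sum()
phase = dsrt(target, levels=8, iters=1, rng=np.random.default_rng(5))
print("final error:", reconstruction_error(phase, target))
```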
Optimization of equivalent uniform dose using the L-curve criterion.
Chvetsov, Alexei V; Dempsey, James F; Palta, Jatinder R
2007-10-07
Optimization of equivalent uniform dose (EUD) in inverse planning for intensity-modulated radiation therapy (IMRT) prevents variation in radiobiological effect between different radiotherapy treatment plans, which is due to variation in the pattern of dose nonuniformity. For instance, the survival fraction of clonogens would be consistent with the prescription when the optimized EUD is equal to the prescribed EUD. One of the problems in the practical implementation of this approach is that the spatial dose distribution in EUD-based inverse planning would be underdetermined because an unlimited number of nonuniform dose distributions can be computed for a prescribed value of EUD. Together with ill-posedness of the underlying integral equation, this may significantly increase the dose nonuniformity. To optimize EUD and keep dose nonuniformity within reasonable limits, we implemented into an EUD-based objective function an additional criterion which ensures the smoothness of beam intensity functions. This approach is similar to the variational regularization technique which was previously studied for the dose-based least-squares optimization. We show that the variational regularization together with the L-curve criterion for the regularization parameter can significantly reduce dose nonuniformity in EUD-based inverse planning.
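A hedged LaTeX sketch of a regularized EUD objective and the L-curve idea described above; the exact functional form and weights used in the paper are assumptions.

```latex
% Sketch of EUD-based optimization with variational (smoothness) regularization.
% The beam intensity functions x minimize a data term plus a smoothness penalty:
\[
  F_\beta(x) \;=\; \sum_{s} w_s \bigl(\mathrm{EUD}_s(x) - \mathrm{EUD}_s^{\mathrm{presc}}\bigr)^2
  \;+\; \beta\,\lVert \nabla x \rVert^2 ,
\]
% and the L-curve criterion selects the regularization parameter \beta at the corner
% (point of maximum curvature) of the log-log plot of the smoothness norm
% \lVert \nabla x \rVert against the residual of the EUD data term.
```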
Color filter array design based on a human visual model
NASA Astrophysics Data System (ADS)
Parmar, Manu; Reeves, Stanley J.
2004-05-01
To reduce cost and complexity associated with registering multiple color sensors, most consumer digital color cameras employ a single sensor. A mosaic of color filters is overlaid on a sensor array such that only one color channel is sampled per pixel location. The missing color values must be reconstructed from available data before the image is displayed. The quality of the reconstructed image depends fundamentally on the array pattern and the reconstruction technique. We present a design method for color filter array patterns that use red, green, and blue color channels in an RGB array. A model of the human visual response for luminance and opponent chrominance channels is used to characterize the perceptual error between a fully sampled and a reconstructed sparsely-sampled image. Demosaicking is accomplished using Wiener reconstruction. To ensure that the error criterion reflects perceptual effects, reconstruction is done in a perceptually uniform color space. A sequential backward selection algorithm is used to optimize the error criterion to obtain the sampling arrangement. Two different types of array patterns are designed: non-periodic and periodic arrays. The resulting array patterns outperform commonly used color filter arrays in terms of the error criterion.
Heidarkhan Tehrani, Ashkan; Zadhoush, Ali; Karbasi, Saeed; Sadeghi-Aliabadi, Hojjat
2010-11-01
Fibrous scaffolds with engineered structures can be chosen as promising porous environments when an approved criterion validates their applicability for a specific medical purpose. For such biomaterials, this paper sought to investigate various structural characteristics in order to determine whether they are appropriate descriptors. A number of poly(3-hydroxybutyrate) scaffolds were electrospun, each possessing a distinct architecture obtained by altering the material and processing conditions. Mouse fibroblast cells (L929) were subsequently cultured to evaluate cell viability on each scaffold after attachment for 24 h and proliferation for 48 and 72 h. The scaffolds' porosity, pore number, pore size, and pore distribution were quantified, and none could be related to the viability results. Virtual reconstruction of the mats introduced an authentic criterion, "Scaffold Percolative Efficiency" (SPE), with which the above descriptors were addressed collectively. It was hypothesized to quantify the efficacy of fibrous scaffolds by considering the integration of porosity and interconnectivity of the pores. A correlation of 80% between the SPE values and the spectrophotometer absorbance of viable cells indicated good agreement; viability was more than 350% of that of the controls.
Azeez, Adeboye; Obaromi, Davies; Odeyemi, Akinwumi; Ndege, James; Muntabayi, Ruffin
2016-07-26
Tuberculosis (TB) is a deadly infectious disease caused by Mycobacterium tuberculosis. As a chronic and highly infectious disease, TB is prevalent in almost every part of the globe. More than 95% of TB mortality occurs in low/middle-income countries. In 2014, approximately 10 million people were diagnosed with active TB and two million died from the disease. In this study, our aim was to compare the predictive power of the seasonal autoregressive integrated moving average (SARIMA) model and a combined SARIMA-neural network autoregression (SARIMA-NNAR) model for TB incidence and to analyse its seasonality in South Africa. TB incidence case data from January 2010 to December 2015 were extracted from the Eastern Cape Health facility report of the electronic Tuberculosis Register (ERT.Net). A SARIMA model and a combined SARIMA-NNAR model were used to analyse and predict the TB data from 2010 to 2015. The performance measures mean square error (MSE), root mean square error (RMSE), mean absolute error (MAE), mean percent error (MPE), mean absolute scaled error (MASE), and mean absolute percentage error (MAPE) were applied to assess which model gave the better predictions. Although both models could predict TB incidence in practice, the combined model displayed better performance. For the combined model, the Akaike information criterion (AIC), second-order AIC (AICc), and Bayesian information criterion (BIC) were 288.56, 308.31, and 299.09, respectively, lower than the corresponding SARIMA values of 329.02, 327.20, and 341.99. The SARIMA-NNAR model forecast a slightly higher seasonal TB incidence trend than the single model. The combined model thus provided a better TB incidence forecast, with a lower AICc. The results also indicate the need for resolute intervention to reduce transmission of the disease, particularly where there is co-infection with HIV and other concomitant diseases, and at festival peak periods.
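A hedged sketch of the SARIMA half of the comparison, fitting a seasonal ARIMA to a synthetic monthly series and reporting AIC and BIC with statsmodels; the NNAR component, the model orders, and the data are not the study's.

```python
# Sketch: fit a seasonal ARIMA to monthly incidence counts and report
# information criteria (synthetic series; orders and data are illustrative).
import numpy as np
from statsmodels.tsa.statespace.sarimax import SARIMAX

rng = np.random.default_rng(6)
months = np.arange(72)                                   # Jan 2010 - Dec 2015
season = 10 * np.sin(2 * np.pi * months / 12)
y = 200 + season + rng.normal(0, 5, size=72)             # synthetic monthly incidence

res = SARIMAX(y, order=(1, 0, 1), seasonal_order=(1, 0, 1, 12)).fit(disp=False)
print("AIC=%.2f  BIC=%.2f" % (res.aic, res.bic))
forecast = res.forecast(steps=12)                        # one-year-ahead forecast
print("forecast for next 12 months:", np.round(forecast, 1))
```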
Bakhshandeh, Mohsen; Hashemi, Bijan; Mahdavi, Seied Rabi Mehdi; Nikoofar, Alireza; Vasheghani, Maryam; Kazemnejad, Anoshirvan
2013-02-01
To determine the dose-response relationship of the thyroid for radiation-induced hypothyroidism in head-and-neck radiation therapy, according to 6 normal tissue complication probability models, and to find the best-fit parameters of the models. Sixty-five patients treated with primary or postoperative radiation therapy for various cancers in the head-and-neck region were prospectively evaluated. Patient serum samples (tri-iodothyronine, thyroxine, thyroid-stimulating hormone [TSH], free tri-iodothyronine, and free thyroxine) were measured before and at regular time intervals until 1 year after the completion of radiation therapy. Dose-volume histograms (DVHs) of the patients' thyroid gland were derived from their computed tomography (CT)-based treatment planning data. Hypothyroidism was defined as increased TSH (subclinical hypothyroidism) or increased TSH in combination with decreased free thyroxine and thyroxine (clinical hypothyroidism). Thyroid DVHs were converted to 2 Gy/fraction equivalent doses using the linear-quadratic formula with α/β = 3 Gy. The evaluated models included the following: Lyman with the DVH reduced to the equivalent uniform dose (EUD), known as LEUD; Logit-EUD; mean dose; relative seriality; individual critical volume; and population critical volume models. The parameters of the models were obtained by fitting the patients' data using a maximum likelihood analysis method. The goodness of fit of the models was determined by the 2-sample Kolmogorov-Smirnov test. Ranking of the models was made according to Akaike's information criterion. Twenty-nine patients (44.6%) experienced hypothyroidism. None of the models was rejected according to the evaluation of the goodness of fit. The mean dose model was ranked as the best model on the basis of its Akaike's information criterion value. The D(50) estimated from the models was approximately 44 Gy. The implemented normal tissue complication probability models showed a parallel architecture for the thyroid. The mean dose model can be used as the best model to describe the dose-response relationship for hypothyroidism complication. Copyright © 2013 Elsevier Inc. All rights reserved.
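A hedged sketch of the Lyman-EUD (LKB) NTCP form named above, with TD50 set near the roughly 44 Gy reported; the volume parameter n and slope m are placeholder values, and the DVH is illustrative.

```python
# Sketch of the Lyman-EUD (LKB) NTCP model for hypothyroidism:
# EUD = (sum_i v_i * d_i**(1/n))**n,  NTCP = Phi((EUD - TD50) / (m * TD50)).
# TD50 is set near the ~44 Gy reported above; n and m are placeholder values.
import numpy as np
from scipy.stats import norm

def lyman_eud_ntcp(doses_gy, volumes, n=1.0, m=0.25, td50=44.0):
    v = np.asarray(volumes, float)
    v = v / v.sum()                                  # fractional volumes of the DVH bins
    eud = (v * np.asarray(doses_gy, float) ** (1.0 / n)).sum() ** n
    return norm.cdf((eud - td50) / (m * td50))

dvh_doses = [10, 20, 30, 40, 50, 60]                 # bin doses (Gy), illustrative thyroid DVH
dvh_vols  = [5, 10, 20, 30, 20, 15]                  # relative volumes per bin
print("NTCP = %.2f" % lyman_eud_ntcp(dvh_doses, dvh_vols))
```

With n = 1 the EUD reduces to the mean dose, consistent with the mean dose model being ranked best above.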
Rahman, Md Rejaur; Shi, Z H; Chongfa, Cai
2014-11-01
This study was an attempt to analyse regional environmental quality with the application of remote sensing, geographical information systems (GIS), and spatial multiple criteria decision analysis, and to propose a quantitative method for identifying the status of the regional environment of the study area. Using a spatial multi-criteria evaluation (SMCE) approach with expert knowledge, an integrated regional environmental quality index (REQI) was computed and classified into five levels of regional environmental quality, viz. worse, poor, moderate, good, and very good. In the process, a set of spatial criteria (here, 15 criteria) was selected together with the degree of importance of each criterion for the sustainability of the regional environment. Integrated remote sensing and GIS techniques and models were applied to generate the necessary factor (criterion) maps for the SMCE approach. The ranking method, along with the expected value method, was used to standardize the factors, while an analytical hierarchy process (AHP) was applied to calculate factor weights. The entire process was executed in the integrated land and water information system (ILWIS) software tool, which supports SMCE. The analysis showed that the overall regional environmental quality of the area was at a moderate level and was partly determined by elevation. Areas with worse and poor environmental quality indicated that the regional environmental status has declined in these parts of the county. The study also revealed that human activities, vegetation condition, soil erosion, topography, climate, and soil conditions have a serious influence on the regional environmental condition of the area. Considering the regional characteristics of environmental quality, priority, and practical needs for environmental restoration, the study area was further regionalized into four priority areas, which may serve as a basis for decision making on the recovery, rebuilding, and protection of the environment.
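A minimal sketch of AHP factor weighting as described above: the principal eigenvector of a pairwise comparison matrix gives the criterion weights, with a consistency ratio check; the 3 x 3 matrix is illustrative, not the study's 15-criterion comparison.

```python
# Sketch of AHP factor weighting: principal eigenvector of a pairwise
# comparison matrix plus a consistency check (3 criteria shown for brevity).
import numpy as np

A = np.array([[1.0,  3.0, 5.0],        # e.g. vegetation vs. soil erosion vs. slope
              [1/3., 1.0, 2.0],
              [1/5., 1/2., 1.0]])

eigvals, eigvecs = np.linalg.eig(A)
k = np.argmax(eigvals.real)
weights = np.abs(eigvecs[:, k].real)
weights /= weights.sum()                               # normalized criterion weights

n = A.shape[0]
ci = (eigvals.real[k] - n) / (n - 1)                   # consistency index
ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]                    # Saaty's random index
print("weights:", np.round(weights, 3), " CR = %.3f" % (ci / ri))
```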
Saucedo-Reyes, Daniela; Carrillo-Salazar, José A; Román-Padilla, Lizbeth; Saucedo-Veloz, Crescenciano; Reyes-Santamaría, María I; Ramírez-Gilly, Mariana; Tecante, Alberto
2018-03-01
High hydrostatic pressure inactivation kinetics of Escherichia coli ATCC 25922 and Salmonella enterica subsp. enterica serovar Typhimurium ATCC 14028 (S. Typhimurium) in a low-acid mamey pulp were obtained at four pressure levels (300, 350, 400, and 450 MPa), different exposure times (0-8 min), and a temperature of 25 ± 2 °C. Survival curves showed deviations from linearity in the form of a tail (upward concavity). The primary models tested were the Weibull model, the modified Gompertz equation, and the biphasic model. The Weibull model gave the best goodness of fit (adjusted R² > 0.956, root mean square error < 0.290) and the lowest Akaike information criterion value. Exponential-logistic and exponential decay models, and Bigelow-type and empirical models, were tested as alternative secondary models for the b'(P) and n(P) parameters, respectively. The process validation considered two- and one-step nonlinear regressions for predicting the survival fraction; both regression types provided adequate goodness of fit, and the one-step nonlinear regression clearly reduced fitting errors. The best candidate model according to the Akaike information theory, with better accuracy and more reliable predictions, was the Weibull model combined with the exponential-logistic and exponential decay secondary models as functions of time and pressure (two-step procedure) or incorporated as one equation (one-step procedure). Both mathematical expressions were used to determine the t_d parameter, where the desired reductions (taking d = 5, i.e. t_5, as the criterion of a 5-log10 (5D) reduction) in both microorganisms are attainable at 400 MPa for 5.487 ± 0.488 or 5.950 ± 0.329 min, respectively, for the one- or two-step nonlinear procedure.
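A hedged sketch of fitting the Weibull survival model log10(N/N0) = -b*t^n at a single pressure level and solving for the 5-log10 reduction time; the survival data and starting values are placeholders, not the study's measurements.

```python
# Sketch: fit the Weibull inactivation model log10(N/N0) = -b * t**n at one
# pressure level and solve for the 5-log10 reduction time (synthetic data).
import numpy as np
from scipy.optimize import curve_fit

def weibull_log_survival(t, b, n):
    return -b * t ** n

t_min = np.array([0.5, 1, 2, 3, 4, 5, 6, 8], dtype=float)
log_s = np.array([-0.8, -1.4, -2.2, -2.8, -3.3, -3.7, -4.0, -4.5])  # tailing (upward-concave) curve

(b, n), _ = curve_fit(weibull_log_survival, t_min, log_s, p0=(1.0, 1.0))
t5 = (5.0 / b) ** (1.0 / n)        # time to a 5-log10 (5D) reduction at this pressure
print("b=%.2f  n=%.2f  t(5D)=%.2f min" % (b, n, t5))
```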
Thongseiratch, Therdpong; Worachotekamjorn, Juthamas
2016-10-01
This study compared the number of attention deficit hyperactivity disorder (ADHD) cases defined by Diagnostic and Statistical Manual (DSM)-IV versus DSM-V criteria in children with high IQ who have learning or behavioral problems. The medical records of children ≤15 years of age who presented with learning or behavioral problems and underwent a Wechsler Intelligence Scale for Children (WISC)-III IQ test at the Pediatric Outpatient Clinic unit between 2010 and 2015 were reviewed. Information on DSM-IV and DSM-V criteria for ADHD was derived from computer-based medical records. Twenty-eight children who had learning or behavioral problems were identified as having a full-scale IQ ≥120. Sixteen of these high-IQ children met the DSM-IV diagnostic criteria for ADHD. Applying the extension of the age-of-onset criterion from 7 to 12 years in DSM-V led to an increase of three cases, all of which were the inattentive type of ADHD. Including the pervasive developmental disorder criterion led to an increase of one case. The total number of ADHD cases in this group thus increased from 16 to 20. The data supported the hypothesis that extending the age-of-onset ADHD criterion and enabling the diagnosis of children with pervasive developmental disorders will increase the number of ADHD diagnoses among children with high IQ. © The Author(s) 2016.
Error Consistency in Acquired Apraxia of Speech with Aphasia: Effects of the Analysis Unit
ERIC Educational Resources Information Center
Haley, Katarina L.; Cunningham, Kevin T.; Eaton, Catherine Torrington; Jacks, Adam
2018-01-01
Purpose: Diagnostic recommendations for acquired apraxia of speech (AOS) have been contradictory concerning whether speech sound errors are consistent or variable. Studies have reported divergent findings that, on face value, could argue either for or against error consistency as a diagnostic criterion. The purpose of this study was to explain…
Swedish PE Teachers' Understandings of Legitimate Movement in a Criterion-Referenced Grading System
ERIC Educational Resources Information Center
Svennberg, Lena
2017-01-01
Background: Physical Education (PE) has been associated with a multi-activity model in which movement is related to sport discourses and sport techniques. However, as in many international contexts, the Swedish national PE syllabus calls for a wider and more inclusive concept of movement. Complex movement adapted to different settings is valued,…
NASA Technical Reports Server (NTRS)
Geering, H. P.; Athans, M.
1973-01-01
A complete theory of necessary and sufficient conditions is discussed for a control to be superior with respect to a nonscalar-valued performance criterion. The latter maps into a finite dimensional, integrally closed directed, partially ordered linear space. The applicability of the theory to the analysis of dynamic vector estimation problems and to a class of uncertain optimal control problems is demonstrated.
ERIC Educational Resources Information Center
Schimmel, Tammy; Johnston, Pattie C.; Stasio, Mike
2013-01-01
The professoriate has been debating the value of adding collegiality as a fourth criterion in faculty evaluations. Collegiality is considered to be any extra-role behavior that represents individuals' behavior that is discretionary, not recognized by the formal reward system and that, in the aggregate, promotes the effective functioning of the…
Susceptibility Breakpoint for Enrofloxacin against Swine Salmonella spp.
Hao, Haihong; Pan, Huafang; Ahmad, Ijaz; Cheng, Guyue; Wang, Yulian; Dai, Menghong; Tao, Yanfei; Chen, Dongmei; Peng, Dapeng; Liu, Zhenli
2013-01-01
Susceptibility breakpoints are crucial for prudent use of antimicrobials. This study has developed the first susceptibility breakpoint (MIC ≤ 0.25 μg/ml) for enrofloxacin against swine Salmonella spp. based on wild-type cutoff (CO_WT) and pharmacokinetic-pharmacodynamic (PK-PD) cutoff (CO_PD) values, consequently providing a criterion for susceptibility testing and clinical usage of enrofloxacin. PMID:23784134
Convergent, discriminant, and criterion validity of DSM-5 traits.
Yalch, Matthew M; Hopwood, Christopher J
2016-10-01
Section III of the Diagnostic and Statistical Manual of Mental Disorders (5th ed.; DSM-5; American Psychiatric Association, 2013) contains a system for diagnosing personality disorder based in part on assessing 25 maladaptive traits. Initial research suggests that this aspect of the system improves the validity and clinical utility of the Section II model. The Computer Adaptive Test of Personality Disorder (CAT-PD; Simms et al., 2011) contains many traits similar to those of the DSM-5, as well as several additional traits seemingly not covered in the DSM-5. In this study we evaluate the convergent and discriminant validity between the DSM-5 traits, as assessed by the Personality Inventory for DSM-5 (PID-5; Krueger et al., 2012), and the CAT-PD in an undergraduate sample, and test whether traits included in the CAT-PD but not the DSM-5 provide incremental validity in association with clinically relevant criterion variables. Results supported the convergent and discriminant validity of the PID-5 and CAT-PD scales in their assessment of 23 of 25 DSM-5 traits. DSM-5 traits were consistently associated with 11 criterion variables, despite our having intentionally selected clinically relevant criterion constructs not directly assessed by DSM-5 traits. However, the additional CAT-PD traits provided incremental information above and beyond the DSM-5 traits for all criterion variables examined. These findings support the validity of pathological trait models in general and the DSM-5 and CAT-PD models in particular, while also suggesting that the CAT-PD may include additional traits for consideration in future iterations of the DSM-5 system. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Why noise is useful in functional and neural mechanisms of interval timing?
2013-01-01
Background The ability to estimate durations in the seconds-to-minutes range - interval timing - is essential for survival and adaptation, and its impairment leads to severe cognitive and/or motor dysfunctions. The response rate near a memorized duration has a Gaussian shape centered on the to-be-timed interval (criterion time). The width of the Gaussian-like distribution of responses increases linearly with the criterion time, i.e., interval timing obeys the scalar property. Results We presented analytical and numerical results based on the striatal beat frequency (SBF) model showing that parameter variability (noise) mimics behavioral data. A key functional block of the SBF model is the set of oscillators that provide the time base for the entire timing network. Implementing the oscillator block as simplified phase (cosine) oscillators has the additional advantage of being analytically tractable. We also checked numerically that the scalar property emerges in the presence of memory variability by using biophysically realistic Morris-Lecar oscillators. First, we predicted analytically and tested numerically that in a noise-free SBF model the output function can be approximated by a Gaussian. However, in a noise-free SBF model the width of the Gaussian envelope is independent of the criterion time, which violates the scalar property. We showed analytically and verified numerically that small fluctuations of the memorized criterion time lead to the scalar property of interval timing. Conclusions Noise is ubiquitous, in the form of small fluctuations of the intrinsic frequencies of the neural oscillators, errors in recording/retrieving stored information related to the criterion time, fluctuations in neurotransmitter concentrations, etc. Our model suggests that biological noise plays an essential functional role in SBF interval timing. PMID:23924391
Entropic no-disturbance as a physical principle
NASA Astrophysics Data System (ADS)
Jia, Zhih-Ahn; Zhai, Rui; Yu, Bai-Chu; Wu, Yu-Chun; Guo, Guang-Can
2018-05-01
The celebrated Bell-Kochen-Specker no-go theorem asserts that quantum mechanics does not admit the property of realism; the essence of the theorem is the lack of a joint probability distribution for some experimental settings. We exploit the information-theoretic form of the theorem, using information measures instead of probability measures, and show that quantum mechanics does not admit this kind of entropic realism either. The entropic form of Gleason's no-disturbance principle is developed and characterized by the intersection of several entropic cones. Entropic contextuality and entropic nonlocality are investigated in depth in this framework as well. We show how one can construct monogamy relations using the entropic cone and basic Shannon-type inequalities. The general criterion for several entropic tests to be monogamous is also developed; using this criterion, we demonstrate that entropic nonlocal correlations and entropic contextuality tests, as well as entropic nonlocality and entropic contextuality, are monogamous. Finally, we analyze the entropic monogamy relations for the multiparty and many-test case, which may play a crucial role in quantum network communication.
Testing the Distance-Duality Relation in the Rh = ct Universe
NASA Astrophysics Data System (ADS)
Hu, J.; Wang, F. Y.
2018-04-01
In this paper, we test the cosmic distance duality (CDD) relation using the luminosity distances from the joint light-curve analysis (JLA) type Ia supernovae (SNe Ia) sample and the angular diameter distance sample from galaxy clusters. The Rh = ct and ΛCDM models are considered. In order to compare the two models, we constrain the CDD relation and the SNe Ia light-curve parameters simultaneously. Considering the effects of the Hubble constant, we find that η ≡ D_A(1 + z)²/D_L = 1 is valid at the 2σ confidence level in both models with H0 = 67.8 ± 0.9 km/s/Mpc. However, the CDD relation is valid at the 3σ confidence level with H0 = 73.45 ± 1.66 km/s/Mpc. Using the Akaike Information Criterion (AIC) and the Bayesian Information Criterion (BIC), we find that the ΛCDM model is very strongly preferred over the Rh = ct model with these data sets for the CDD relation test.
Testing the distance-duality relation in the Rh = ct universe
NASA Astrophysics Data System (ADS)
Hu, J.; Wang, F. Y.
2018-07-01
In this paper, we test the cosmic distance-duality (CDD) relation using luminosity distances from the joint light-curve analysis Type Ia supernovae (SNe Ia) sample and angular diameter distances from galaxy clusters. The Rh = ct and Λ cold dark matter (CDM) models are considered. In order to compare the two models, we constrain the CDD relation and the SNe Ia light-curve parameters simultaneously. Considering the effect of the Hubble constant, we find that η ≡ DA(1 + z)²/DL = 1 is valid at the 2σ confidence level in both models with H0 = 67.8 ± 0.9 km s⁻¹ Mpc⁻¹. However, the CDD relation is only valid at the 3σ confidence level with H0 = 73.45 ± 1.66 km s⁻¹ Mpc⁻¹. Using the Akaike Information Criterion and the Bayesian Information Criterion, we find that the ΛCDM model is very strongly preferred over the Rh = ct model with these data sets for the CDD relation test.
Tefferi, Ayalew; Gangat, Naseema; Mudireddy, Mythri; Lasho, Terra L; Finke, Christy; Begna, Kebede H; Elliott, Michelle A; Al-Kali, Aref; Litzow, Mark R; Hook, C Christopher; Wolanskyj, Alexandra P; Hogan, William J; Patnaik, Mrinal M; Pardanani, Animesh; Zblewski, Darci L; He, Rong; Viswanatha, David; Hanson, Curtis A; Ketterling, Rhett P; Tang, Jih-Luh; Chou, Wen-Chien; Lin, Chien-Chin; Tsai, Cheng-Hong; Tien, Hwei-Fang; Hou, Hsin-An
2018-06-01
To develop a new risk model for primary myelodysplastic syndromes (MDS) that integrates information on mutations, karyotype, and clinical variables. Patients with World Health Organization-defined primary MDS seen at Mayo Clinic (MC) from December 28, 1994, through December 19, 2017, constituted the core study group. The National Taiwan University Hospital (NTUH) provided the validation cohort. Model performance, compared with the revised International Prognostic Scoring System, was assessed by Akaike information criterion and area under the curve estimates. The study group consisted of 685 molecularly annotated patients from MC (357) and NTUH (328). Multivariate analysis of the MC cohort identified monosomal karyotype (hazard ratio [HR], 5.2; 95% CI, 3.1-8.6), "non-MK abnormalities other than single/double del(5q)" (HR, 1.8; 95% CI, 1.3-2.6), RUNX1 (HR, 2.0; 95% CI, 1.2-3.1) and ASXL1 (HR, 1.7; 95% CI, 1.2-2.3) mutations, absence of SF3B1 mutations (HR, 1.6; 95% CI, 1.1-2.4), age greater than 70 years (HR, 2.2; 95% CI, 1.6-3.1), hemoglobin level less than 8 g/dL in women or less than 9 g/dL in men (HR, 2.3; 95% CI, 1.7-3.1), platelet count less than 75 × 10⁹/L (HR, 1.5; 95% CI, 1.1-2.1), and 10% or more bone marrow blasts (HR, 1.7; 95% CI, 1.1-2.8) as predictors of inferior overall survival. Based on HR-weighted risk scores, a 4-tiered Mayo alliance prognostic model for MDS was devised: low (89 patients), intermediate-1 (104), intermediate-2 (95), and high (69); respective median survivals (5-year overall survival rates) were 85 (73%), 42 (34%), 22 (7%), and 9 months (0%). The Mayo alliance model was subsequently validated by using the external NTUH cohort and, compared with the revised International Prognostic Scoring System, displayed favorable Akaike information criterion (1865 vs 1943) and area under the curve (0.87 vs 0.76) values. We propose a simple and contemporary risk model for MDS that is based on a limited set of genetic and clinical variables. Copyright © 2018. Published by Elsevier Inc.
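This record describes an HR-weighted risk score mapped to four tiers. The sketch below shows the general pattern only; the point values and tier cut-offs are hypothetical assumptions, loosely guided by the reported hazard ratios, and are not the published Mayo alliance weights.

```python
from dataclasses import dataclass

@dataclass
class MdsPatient:
    monosomal_karyotype: bool
    other_abnormal_karyotype: bool   # non-MK, other than isolated del(5q)
    runx1_mutated: bool
    asxl1_mutated: bool
    sf3b1_mutated: bool
    age_gt_70: bool
    low_hemoglobin: bool             # <8 g/dL (women) or <9 g/dL (men)
    platelets_lt_75: bool            # <75 x 10^9/L
    blasts_ge_10pct: bool

# Hypothetical points, roughly proportional to the reported hazard ratios.
POINTS = {
    "monosomal_karyotype": 5, "other_abnormal_karyotype": 2,
    "runx1_mutated": 2, "asxl1_mutated": 2, "age_gt_70": 2,
    "low_hemoglobin": 2, "platelets_lt_75": 1, "blasts_ge_10pct": 2,
}

def risk_score(p: MdsPatient) -> int:
    score = sum(v for k, v in POINTS.items() if getattr(p, k))
    if not p.sf3b1_mutated:          # absence of SF3B1 is the adverse feature
        score += 2
    return score

def risk_tier(score: int) -> str:
    # Illustrative cut-offs for a 4-tier scheme (not the published thresholds).
    if score <= 2:
        return "low"
    if score <= 5:
        return "intermediate-1"
    if score <= 8:
        return "intermediate-2"
    return "high"

example = MdsPatient(False, True, False, True, False, True, False, True, False)
print(risk_tier(risk_score(example)))   # "high" under these assumed weights
```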
Gary D. Grossman; Robert E. Ratajczak; C. Michael Wagner; J. Todd Petty
2010-01-01
1. We used information theoretic statistics [Akaike's Information Criterion (AIC)] and regression analysis in a multiple hypothesis testing approach to assess the processes capable of explaining long-term demographic variation in a lightly exploited brook trout population in Ball Creek, NC. We sampled a 100-m-long second-order site during both spring and autumn 1991–…
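The multiple-hypothesis approach cited here is typically summarised by ranking candidate regression models with AIC differences and Akaike weights. A minimal sketch with invented data and candidate models (the predictors and sample are placeholders, not the study's variables):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 40
density = rng.uniform(0, 1, n)                     # placeholder predictor
flow = rng.uniform(0, 1, n)                        # placeholder predictor
recruit = 0.8 * flow + 0.1 * rng.standard_normal(n)  # simulated response

def aic_ols(y, predictors):
    """AIC of an ordinary least-squares fit with Gaussian errors."""
    X = np.column_stack([np.ones(len(y)), *predictors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(((y - X @ beta) ** 2).sum())
    k = X.shape[1] + 1                              # coefficients + error variance
    return len(y) * np.log(rss / len(y)) + 2 * k

candidates = {
    "intercept only": [],
    "flow": [flow],
    "density": [density],
    "flow + density": [flow, density],
}
aics = {name: aic_ols(recruit, X) for name, X in candidates.items()}
best = min(aics.values())
delta = {name: a - best for name, a in aics.items()}
weights = {name: np.exp(-0.5 * d) for name, d in delta.items()}
total = sum(weights.values())
for name in candidates:
    print(f"{name:15s}  dAIC {delta[name]:6.2f}  Akaike weight {weights[name] / total:5.2f}")
```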
ERIC Educational Resources Information Center
Jones, Tom; Di Salvo, Vince
A computerized content analysis of the "theory input" for a basic speech course was conducted. The questions to be answered were (1) What does the inexperienced basic speech student hold as a conceptual perspective of the "speech to inform" prior to his being subjected to a college speech class? and (2) How does that inexperienced student's…