ERIC Educational Resources Information Center
Wyse, Adam E.; Seo, Dong Gi
2014-01-01
This article provides a brief overview and comparison of three conditional growth percentile methods: student growth percentiles, percentile rank residuals, and a nonparametric matching method. These approaches seek to describe student growth in terms of a student's relative percentile ranking among students who had the same…
Tutorial: Calculating Percentile Rank and Percentile Norms Using SPSS
ERIC Educational Resources Information Center
Baumgartner, Ted A.
2009-01-01
Practitioners can benefit from using norms, but they often have to develop their own percentile rank and percentile norms. This article is a tutorial on how to quickly and easily calculate percentile rank and percentile norms using SPSS, and this information is presented for a data set. Some issues in calculating percentile rank and percentile…
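The SPSS steps are in the tutorial itself, but the underlying computation is standard and easy to sketch. A minimal plain-Python version, using the common half-tie-weight ("mean rank") convention for percentile rank — which may or may not match the exact definition SPSS applies — with hypothetical scores:

```python
def percentile_rank(scores, x):
    """Percentile rank of x within scores: the percentage of scores
    below x, counting half of any ties (the 'mean rank' convention)."""
    below = sum(s < x for s in scores)
    equal = sum(s == x for s in scores)
    return 100.0 * (below + 0.5 * equal) / len(scores)

# Hypothetical test scores, not data from the article.
data = [55, 60, 60, 70, 80, 85, 90, 95, 95, 100]
print(percentile_rank(data, 80))  # 45.0
print(percentile_rank(data, 60))  # 20.0
```

Percentile norms then follow by tabulating this value for each distinct score in the reference data set.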
Percentile Ranking and Citation Impact of a Large Cohort of NHLBI-funded Cardiovascular R01 Grants
Danthi, Narasimhan; Wu, Colin O.; Shi, Peibei; Lauer, Michael
2014-01-01
Rationale Funding decisions for cardiovascular R01 grant applications at NHLBI largely hinge on percentile rankings. It is not known whether this approach enables the highest impact science. Objective To conduct an observational analysis of percentile rankings and bibliometric outcomes for a contemporary set of funded NHLBI cardiovascular R01 grants. Methods and results We identified 1492 investigator-initiated de novo R01 grant applications that were funded between 2001 and 2008, and followed their progress for linked publications and citations to those publications. Our co-primary endpoints were citations received per million dollars of funding, citations obtained within 2 years of publication, and 2-year citations for each grant’s maximally cited paper. In 7654 grant-years of funding that generated $3004 million of total NIH awards, the portfolio yielded 16,793 publications that appeared between 2001 and 2012 (median per grant 8, 25th and 75th percentiles 4 and 14, range 0–123), which received 2,224,255 citations (median per grant 1048, 25th and 75th percentiles 492 and 1,932, range 0–16,295). We found no association between percentile ranking and citation metrics; the absence of association persisted even after accounting for calendar time, grant duration, number of grants acknowledged per paper, number of authors per paper, early investigator status, human versus non-human focus, and institutional funding. An exploratory machine-learning analysis suggested that grants with the very best percentile rankings did yield more maximally cited papers. Conclusions In a large cohort of NHLBI-funded cardiovascular R01 grants, we were unable to find a monotonic association between better percentile ranking and higher scientific impact as assessed by citation metrics. PMID:24406983
ERIC Educational Resources Information Center
Wang, Tianyou
2009-01-01
Holland and colleagues derived a formula for analytical standard error of equating using the delta-method for the kernel equating method. Extending their derivation, this article derives an analytical standard error of equating procedure for the conventional percentile rank-based equipercentile equating with log-linear smoothing. This procedure is…
Recurrent fuzzy ranking methods
NASA Astrophysics Data System (ADS)
Hajjari, Tayebeh
2012-11-01
With the increasing development of fuzzy set theory in various scientific fields, the need to compare fuzzy numbers arises in many different areas. Ranking of fuzzy numbers therefore plays a very important role in linguistic decision-making, engineering, business, and other fuzzy application systems. Several strategies have been proposed for ranking fuzzy numbers, yet each of these techniques has been shown to produce non-intuitive results in certain cases. In this paper, we review some recent ranking methods, which will be useful for researchers who are interested in this area.
Attempt for percentile analysis of food colorants with photoacoustic method
NASA Astrophysics Data System (ADS)
Coelho, T. M.; Vidotti, E. C.; Rollemberg, M. C. E.; Baesso, M. L.; Bento, A. C.
2005-06-01
In this work the photoacoustic (PAS) method is applied in polyester-type polyurethane foam (PUF) doped with food colorants. Aiming to resolve binary mixtures of synthetic colorants such as Sunset Yellow, Tartrazine, Brilliant Blue and Amaranth, a single spectroscopic method is described. Based upon individual spectra, a Gaussian deconvolution is used and the fraction of each colorant is found.
Statewide Analysis of the Drainage-Area Ratio Method for 34 Streamflow Percentile Ranges in Texas
Asquith, William H.; Roussel, Meghan C.; Vrabel, Joseph
2006-01-01
The drainage-area ratio method commonly is used to estimate streamflow for sites where no streamflow data are available using data from one or more nearby streamflow-gaging stations. The method is intuitive and straightforward to implement and is in widespread use by analysts and managers of surface-water resources. The method equates the ratio of streamflow at two stream locations to the ratio of the respective drainage areas. In practice, unity often is assumed as the exponent on the drainage-area ratio, and unity also is assumed as a multiplicative bias correction. These two assumptions are evaluated in this investigation through statewide analysis of daily mean streamflow in Texas. The investigation was made by the U.S. Geological Survey in cooperation with the Texas Commission on Environmental Quality. More than 7.8 million values of daily mean streamflow for 712 U.S. Geological Survey streamflow-gaging stations in Texas were analyzed. To account for the influence of streamflow probability on the drainage-area ratio method, 34 percentile ranges were considered. The 34 ranges are the 4 quartiles (0-25, 25-50, 50-75, and 75-100 percent), the 5 intervals of the lower tail of the streamflow distribution (0-1, 1-2, 2-3, 3-4, and 4-5 percent), the 20 quintiles of the 4 quartiles (0-5, 5-10, 10-15, 15-20, 20-25, 25-30, 30-35, 35-40, 40-45, 45-50, 50-55, 55-60, 60-65, 65-70, 70-75, 75-80, 80-85, 85-90, 90-95, and 95-100 percent), and the 5 intervals of the upper tail of the streamflow distribution (95-96, 96-97, 97-98, 98-99, and 99-100 percent). For each of the 253,116 (712 × 711 / 2) unique pairings of stations and for each of the 34 percentile ranges, the concurrent daily mean streamflow values available for the two stations provided for station-pair application of the drainage-area ratio method. For each station pair, specific statistical summarization (median, mean, and standard deviation) of both the exponent and bias-correction components of the drainage-area ratio
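The method the report evaluates fits in one line. A minimal sketch with hypothetical streamflow and drainage-area values (the function name, units, and defaults are illustrative, not from the report):

```python
def dar_streamflow(q_gaged, area_gaged, area_ungaged, exponent=1.0, bias=1.0):
    """Drainage-area ratio estimate:
        Q_ungaged = bias * Q_gaged * (A_ungaged / A_gaged) ** exponent
    The report evaluates the common assumptions exponent = 1 and bias = 1."""
    return bias * q_gaged * (area_ungaged / area_gaged) ** exponent

# Hypothetical values: a gage draining 300 mi^2 records 120 ft^3/s;
# estimate flow at an ungaged site draining 150 mi^2.
print(dar_streamflow(120.0, 300.0, 150.0))  # 60.0
```

The report's percentile-range analysis amounts to estimating `exponent` and `bias` separately within each streamflow-probability range rather than fixing both at unity.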
Doyle, J M; Quinn, K; Bodenstein, Y A; Wu, C O; Danthi, N; Lauer, M S
2015-09-01
Previous reports from the National Institutes of Health and the National Science Foundation have suggested that peer review scores of funded grants bear no association with grant citation impact and productivity. This lack of association, if true, may be particularly concerning during times of increasing competition for increasingly limited funds. We analyzed the citation impact and productivity for 1755 de novo investigator-initiated R01 grants funded for at least 2 years by the National Institute of Mental Health between 2000 and 2009. Consistent with previous reports, we found no association between grant percentile ranking and subsequent productivity and citation impact, even after accounting for subject categories, years of publication, duration and amounts of funding, as well as a number of investigator-specific measures. Prior investigator funding and academic productivity were moderately strong predictors of grant citation impact. PMID:26033238
A Rational Method for Ranking Engineering Programs.
ERIC Educational Resources Information Center
Glower, Donald D.
1980-01-01
Compares two methods for ranking academic programs, the opinion poll v examination of career successes of the program's alumni. For the latter, "Who's Who in Engineering" and levels of research funding provided data. Tables display resulting data and compare rankings by the two methods for chemical engineering and civil engineering. (CS)
Augmenting the Deliberative Method for Ranking Risks.
Susel, Irving; Lasley, Trace; Montezemolo, Mark; Piper, Joel
2016-01-01
The Department of Homeland Security (DHS) characterized and prioritized the physical cross-border threats and hazards to the nation stemming from terrorism, market-driven illicit flows of people and goods (illegal immigration, narcotics, funds, counterfeits, and weaponry), and other nonmarket concerns (movement of diseases, pests, and invasive species). These threats and hazards pose a wide diversity of consequences with very different combinations of magnitudes and likelihoods, making it very challenging to prioritize them. This article presents the approach that was used at DHS to arrive at a consensus regarding the threats and hazards that stand out from the rest based on the overall risk they pose. Due to time constraints for the decision analysis, it was not feasible to apply multiattribute methodologies like multiattribute utility theory or the analytic hierarchy process. Using a holistic approach was considered, such as the deliberative method for ranking risks first published in this journal. However, an ordinal ranking alone does not indicate relative or absolute magnitude differences among the risks. Therefore, the use of the deliberative method for ranking risks is not sufficient for deciding whether there is a material difference between the top-ranked and bottom-ranked risks, let alone deciding what the stand-out risks are. To address this limitation of ordinal rankings, the deliberative method for ranking risks was augmented by adding an additional step to transform the ordinal ranking into a ratio scale ranking. This additional step enabled the selection of stand-out risks to help prioritize further analysis. PMID:26224206
Fuzzy Multicriteria Ranking of Aluminium Coating Methods
NASA Astrophysics Data System (ADS)
Batzias, A. F.
2007-12-01
This work deals with multicriteria ranking of aluminium coating methods. The alternatives used are: sulfuric acid anodization, A1; oxalic acid anodization, A2; chromic acid anodization, A3; phosphoric acid anodization, A4; integral color anodizing, A5; chemical conversion coating, A6; electrostatic powder deposition, A7. The criteria used are: cost of production, f1; environmental friendliness of production process, f2; appearance (texture), f3; reflectivity, f4; response to coloring, f5; corrosion resistance, f6; abrasion resistance, f7; fatigue resistance, f8. Five experts coming from relevant industrial units set grades to the criteria vector and the preference matrix according to a properly modified Delphi method. Sensitivity analysis of the first-ranked alternative A1 against the 'second best' (A3 at low and A7 at high resolution levels) proved that the solution is robust. The dependence of anodized-product quality on upstream processes is presented, and the impact of energy price increases on industrial cost is discussed.
Image Quality Ranking Method for Microscopy
Koho, Sami; Fazeli, Elnaz; Eriksson, John E.; Hänninen, Pekka E.
2016-01-01
Automated analysis of microscope images is necessitated by the increased need for high-resolution follow-up of events in time. Manually finding the right images to be analyzed, or eliminating them from data analysis, is a common day-to-day problem in microscopy research today, and the constantly growing size of image datasets does not help the matter. We propose a simple method and a software tool for sorting images within a dataset, according to their relative quality. We demonstrate the applicability of our method in finding good quality images in a STED microscope sample preparation optimization image dataset. The results are validated by comparisons to subjective opinion scores, as well as five state-of-the-art blind image quality assessment methods. We also show how our method can be applied to eliminate useless out-of-focus images in a High-Content-Screening experiment. We further evaluate the ability of our image quality ranking method to detect out-of-focus images, by extensive simulations, and by comparing its performance against previously published, well-established microscopy autofocus metrics. PMID:27364703
A Ranking Method for Evaluating Constructed Responses
ERIC Educational Resources Information Center
Attali, Yigal
2014-01-01
This article presents a comparative judgment approach for holistically scored constructed response tasks. In this approach, the grader rank-orders (rather than rates) the quality of a small set of responses. A prior automated evaluation of responses guides both set formation and scaling of rankings. Sets are formed to have similar prior scores and…
Vavalle, Nicholas A; Schoell, Samantha L; Weaver, Ashley A; Stitzel, Joel D; Gayzik, F Scott
2014-11-01
Human body finite element models (FEMs) are a valuable tool in the study of injury biomechanics. However, the traditional model development process can be time-consuming. Scaling and morphing an existing FEM is an attractive alternative for generating morphologically distinct models for further study. The objective of this work is to use a radial basis function to morph the Global Human Body Models Consortium (GHBMC) average male model (M50) to the body habitus of a 95th percentile male (M95) and to perform validation tests on the resulting model. The GHBMC M50 model (v. 4.3) was created using anthropometric and imaging data from a living subject representing a 50th percentile male. A similar dataset was collected from a 95th percentile male (22,067 total images) and was used in the morphing process. Homologous landmarks on the reference (M50) and target (M95) geometries, with the existing FE node locations (M50 model), were inputs to the morphing algorithm. The radial basis function was applied to morph the FE model. The model represented a mass of 103.3 kg and contained 2.2 million elements with 1.3 million nodes. Simulations of the M95 in seven loading scenarios were presented ranging from a chest pendulum impact to a lateral sled test. The morphed model matched anthropometric data to within a root-mean-square difference of 4.4% while maintaining element quality commensurate to the M50 model and matching other anatomical ranges and targets. The simulation validation data matched experimental data well in most cases. PMID:26192960
Can Percentiles Replace Raw Scores in the Statistical Analysis of Test Data?
ERIC Educational Resources Information Center
Zimmerman, Donald W.; Zumbo, Bruno D.
2005-01-01
Educational and psychological testing textbooks typically warn of the inappropriateness of performing arithmetic operations and statistical analysis on percentiles instead of raw scores. This seems inconsistent with the well-established finding that transforming scores to ranks and using nonparametric methods often improves the validity and power…
Bayes method for low rank tensor estimation
NASA Astrophysics Data System (ADS)
Suzuki, Taiji; Kanagawa, Heishiro
2016-03-01
We investigate the statistical convergence rate of a Bayesian low-rank tensor estimator, and construct a Bayesian nonlinear tensor estimator. The problem setting is the regression problem where the regression coefficient forms a tensor structure. This problem setting occurs in many practical applications, such as collaborative filtering, multi-task learning, and spatio-temporal data analysis. The convergence rate of the Bayes tensor estimator is analyzed in terms of both in-sample and out-of-sample predictive accuracies. It is shown that a fast learning rate is achieved without any strong convexity of the observation. Moreover, we extend the tensor estimator to a nonlinear function estimator so that we estimate a function that is a tensor product of several functions.
Alternative Statistical Frameworks for Student Growth Percentile Estimation
ERIC Educational Resources Information Center
Lockwood, J. R.; Castellano, Katherine E.
2015-01-01
This article suggests two alternative statistical approaches for estimating student growth percentiles (SGP). The first is to estimate percentile ranks of current test scores conditional on past test scores directly, by modeling the conditional cumulative distribution functions, rather than indirectly through quantile regressions. This would…
Diagrammatic perturbation methods in networks and sports ranking combinatorics
NASA Astrophysics Data System (ADS)
Park, Juyong
2010-04-01
Analytic and computational tools developed in statistical physics are being increasingly applied to the study of complex networks. Here we present recent developments in the diagrammatic perturbation methods for the exponential random graph models, and apply them to the combinatoric problem of determining the ranking of nodes in directed networks that represent pairwise competitions.
Network Selection: A Method for Ranked Lists Selection
Figini, Silvia
2012-01-01
We consider the problem of finding the set of rankings that best represents a given group of orderings on the same collection of elements (preference lists). This problem arises from social choice and voting theory, in which each voter gives a preference on a set of alternatives, and a system outputs a single preference order based on the observed voters’ preferences. In this paper, we observe that, if the given set of preference lists is not homogeneous, a unique true underlying ranking might not exist. Moreover, only the lists that share the highest amount of information should be aggregated, so multiple rankings might provide a more feasible solution to the problem. In this light, we propose Network Selection, an algorithm that, given a heterogeneous group of rankings, first discovers the different communities of homogeneous rankings and then combines only the rank orderings belonging to the same community into a single final ordering. Our novel approach is inspired by graph theory; indeed, our set of lists can be loosely read as the nodes of a network. As a consequence, only the lists populating the same community in the network are aggregated. To highlight the strength of our proposal, we show applications both on simulated data and on two real datasets, namely a financial and a biological dataset. Experimental results on simulated data show that Network Selection can significantly outperform existing related methods. In turn, the empirical evidence on real financial data reveals that Network Selection is also able to select the most relevant variables in data-mining predictive models, yielding clearly superior predictive power in the models built. Furthermore, we show the potential of our proposal in the bioinformatics field, providing an application to a biological microarray dataset. PMID:22937075
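The general idea behind Network Selection can be illustrated with a toy sketch (this is not the authors' algorithm): link lists whose Kendall rank correlation exceeds a threshold, take connected components as communities, and aggregate each community with a Borda-style mean rank. The threshold `tau_min` and the aggregation rule are assumptions for the sketch:

```python
from itertools import combinations

def kendall_tau(r1, r2):
    """Kendall rank correlation between two rankings, each given as an
    item -> position mapping over the same items."""
    items = list(r1)
    conc = disc = 0
    for a, b in combinations(items, 2):
        prod = (r1[a] - r1[b]) * (r2[a] - r2[b])
        if prod > 0:
            conc += 1
        elif prod < 0:
            disc += 1
    return (conc - disc) / (len(items) * (len(items) - 1) / 2)

def communities(lists, tau_min=0.5):
    """Link lists whose tau >= tau_min; communities = connected components."""
    adj = {i: set() for i in range(len(lists))}
    for i, j in combinations(range(len(lists)), 2):
        if kendall_tau(lists[i], lists[j]) >= tau_min:
            adj[i].add(j)
            adj[j].add(i)
    seen, comps = set(), []
    for i in adj:
        if i in seen:
            continue
        stack, comp = [i], set()
        while stack:
            v = stack.pop()
            if v not in comp:
                comp.add(v)
                stack.extend(adj[v])
        seen |= comp
        comps.append(sorted(comp))
    return comps

def borda(lists):
    """Aggregate one community's rankings by mean position (Borda-like)."""
    return sorted(lists[0], key=lambda x: sum(r[x] for r in lists))

# Two agreeing voters and one reversed voter -> two communities.
l1 = {"x": 0, "y": 1, "z": 2}
l2 = {"x": 0, "y": 1, "z": 2}
l3 = {"z": 0, "y": 1, "x": 2}
print(communities([l1, l2, l3]))  # [[0, 1], [2]]
print(borda([l1, l2]))            # ['x', 'y', 'z']
```

The paper uses a proper community detection step on the list network rather than simple thresholded components, but the flow — build network, find communities, aggregate within each — is the same.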
The Consistency and Ranking Method Based on Comparison Linguistic Variable
NASA Astrophysics Data System (ADS)
Zhao, Qisheng; Wei, Fajie; Zhou, Shenghan
The study developed a consistency approximation and ranking method based on comparison linguistic variables. The method constructs a consistent fuzzy complementary judgment matrix from the judgment matrix of linguistic variables, where the judgment matrix is defined by the fuzzy set or vague set of comparison linguistic variables. The method obtains the VPIS and VNIS based on the TOPSIS method, and defines relative similarity degrees from the distances between alternatives and the VPIS or VNIS. The study then analyzes the impact on evaluation quality caused by the evaluation method, index weights, and appraisers. Finally, improving methods are discussed, and an example is presented to illustrate the proposed method.
NASA Astrophysics Data System (ADS)
Bruno, Giovanni; Bobbo, Luigi; Vessia, Giovanna
2014-05-01
Is50 and RL indices are commonly used to indirectly estimate the compressive strength of a rock deposit with in situ and laboratory devices. The widespread use of point load and Schmidt hammer tests is due to the simplicity and speed of their execution. Their indices can be related to the UCS by means of ordinary least squares regression analyses. Several researchers suggest taking the lithology into account to build highly correlated empirical expressions (R2 > 0.8) to draw UCS from Is50 or RL values. Nevertheless, the lower and upper bounds of the UCS ranges of values that can be estimated by means of the two indirect indices are not clearly defined yet. Aydin (2009) stated that the Schmidt hammer test shall be used to assess the compressive resistance of rocks characterized by UCS > 12-20 MPa. On the other hand, point load measures can be performed on weak rocks, but upper-bound values for UCS are not suggested. In this paper, the empirical relationships between UCS, RL and Is50 are sought by means of the percentile method (Bruno et al. 2013). This method is based on looking for the best regression function, between measured data of UCS and one of the indirect indices, drawn from a subset sample of the couples of measures that are the percentile values. These values are taken from the original dataset of both measures by calculating the cumulative function. No hypothesis on the probability distribution of the sample is needed, and the procedure proves robust with respect to odd values or outliers. In this study, carbonate sedimentary rocks are investigated. According to the rock mass classification of Dobereiner and De Freitas (1986), the UCS values for the studied rocks range from 'extremely weak' to 'strong'. For the analyzed data, UCS varies between 1.18 and 270.70 MPa. Thus, through the percentile method the best empirical relationships UCS-Is50 and UCS-RL are plotted. Relationships between Is50 and RL are drawn, too.
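The percentile pairing at the heart of the method can be sketched in a few lines. A rough plain-Python illustration with synthetic data; the exact percentile set, quantile estimator, and regression form used by Bruno et al. (2013) may differ:

```python
def percentile_pairs(xs, ys, percentiles=range(5, 100, 5)):
    """Pair the p-th percentile of xs with the p-th percentile of ys,
    using simple nearest-rank percentiles from the cumulative ordering."""
    xs, ys = sorted(xs), sorted(ys)
    def pct(v, p):
        k = max(0, min(len(v) - 1, round(p / 100 * (len(v) - 1))))
        return v[k]
    return [(pct(xs, p), pct(ys, p)) for p in percentiles]

def ols(pairs):
    """Least-squares slope and intercept fitted over the percentile pairs."""
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    b = (sum((x - mx) * (y - my) for x, y in pairs)
         / sum((x - mx) ** 2 for x, _ in pairs))
    return b, my - b * mx

# Synthetic Is50 values and a noise-free UCS = 20 * Is50 relationship
# (illustrative numbers only, not data from the paper).
is50 = [0.5 * i for i in range(1, 41)]
ucs = [20.0 * x for x in is50]
slope, intercept = ols(percentile_pairs(is50, ucs))
print(round(slope, 3))  # slope recovered, approximately 20
```

Because the regression sees only the percentile values rather than every raw pair, a few outliers in either measurement set barely move the fit, which is the robustness property the abstract claims.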
Applications of fuzzy ranking methods to risk-management decisions
NASA Astrophysics Data System (ADS)
Mitchell, Harold A.; Carter, James C., III
1993-12-01
The Department of Energy is making significant improvements to its nuclear facilities as a result of more stringent regulation, internal audits, and recommendations from external review groups. A large backlog of upgrades has resulted. Currently, a prioritization method is being utilized which relies on a matrix of potential consequence and probability of occurrence. The attributes of the potential consequences considered include likelihood, exposure, public health and safety, environmental impact, site personnel safety, public relations, legal liability, and business loss. This paper describes an improved method which utilizes fuzzy multiple attribute decision methods to rank proposed improvement projects.
Risk-based methods applicable to ranking conceptual designs
Breeding, R.J.; Ortiz, K.; Ringland, J.T.; Lim, J.J.
1993-11-01
In Genichi Taguchi's latest book on quality engineering, an emphasis is placed on robust design processes in which quality engineering techniques are brought "upstream," that is, they are utilized as early as possible, preferably in the conceptual design stage. This approach was used in a study of possible future safety system designs for weapons. As an experiment, a method was developed for using probabilistic risk analysis (PRA) techniques to rank conceptual designs for performance against a safety metric for ultimate incorporation into a Pugh matrix evaluation. This represents a high-level UW application of PRA methods to weapons. As with most conceptual designs, details of the implementation were not yet developed; many of the components had never been built, let alone tested. Therefore, our application of risk assessment methods was forced to be at such a high level that the entire evaluation could be performed on a spreadsheet. Nonetheless, the method produced numerical estimates of safety in a manner that was consistent, reproducible, and scrutable. The results enabled us to rank designs to identify areas where returns on research efforts would be the greatest. The numerical estimates were calibrated against what is achievable by current weapon safety systems. The use of expert judgement is inescapable, but these judgements are explicit and the method is easily implemented in a spreadsheet computer program.
Punching Wholes into Parts, or Beating the Percentile Averages.
ERIC Educational Resources Information Center
Carwile, Nancy R.
1990-01-01
Presents a facetious, ingenious resolution to the percentile dilemma concerning above- and below-average test scores. If schools enrolled the same number of pigs as students and tested both groups, the pigs would fill up the bottom half and all children would rank in the top 50 percent. However, some wrinkles need to be ironed out! (MLH)
Consistent linguistic fuzzy preference relations method with ranking fuzzy numbers
NASA Astrophysics Data System (ADS)
Ridzuan, Siti Amnah Mohd; Mohamad, Daud; Kamis, Nor Hanimah
2014-12-01
Multi-Criteria Decision Making (MCDM) methods have been developed to help decision makers in selecting the best criteria or alternatives from the options given. One of the well-known methods in MCDM is the Consistent Fuzzy Preference Relation (CFPR) method, which essentially utilizes a pairwise comparison approach. This method was later improved to cater for subjectivity in the data by using fuzzy sets, becoming known as Consistent Linguistic Fuzzy Preference Relations (CLFPR). The CLFPR method uses the additive transitivity property in the evaluation of pairwise comparison matrices. However, the calculation involved is lengthy and cumbersome. To overcome this problem, a method of defuzzification was introduced by researchers. Nevertheless, the defuzzification process has a major setback: some information may be lost during the simplification process. In this paper, we propose a method of CLFPR that preserves the fuzzy-number form throughout the process. In obtaining the desired ordering result, a method of ranking fuzzy numbers is utilized in the procedure. This improved procedure for CLFPR is applied to a case study to verify its effectiveness. The method is useful for solving decision-making problems and can be applied in many areas.
Method and apparatus for second-rank tensor generation
NASA Technical Reports Server (NTRS)
Liu, Hua-Kuang (Inventor)
1991-01-01
A method and apparatus are disclosed for generation of second-rank tensors using a photorefractive crystal to perform the outer product between two vectors via four-wave mixing, thereby mapping 2n input data points to n-squared output data points. Two orthogonal amplitude-modulated coherent vector beams x and y are expanded and then directed onto parallel sides of the photorefractive crystal in exact opposition. A beamsplitter is used to direct a coherent pumping beam onto the crystal at an appropriate angle so as to produce a conjugate beam that is the matrix product of the vector beams and that propagates in the exact opposite direction from the pumping beam. The conjugate beam thus separated is the tensor output xy^T.
An efficient community detection method based on rank centrality
NASA Astrophysics Data System (ADS)
Jiang, Yawen; Jia, Caiyan; Yu, Jian
2013-05-01
Community detection is a very important problem in social network analysis. The classical clustering approach, K-means, has been shown to be very efficient for detecting communities in networks. However, K-means is quite sensitive to the initial centroids or seeds, especially when it is used to detect communities. To solve this problem, in this study we propose an efficient algorithm, K-rank, which selects the top-K nodes with the highest rank centrality as the initial seeds and updates these seeds using an iterative technique like K-means. We then extend K-rank to partition directed, weighted networks and to detect overlapping communities. The empirical study on synthetic and real networks shows that K-rank is robust and performs better than state-of-the-art algorithms including K-means, BGLL, LPA, Infomap and OSLOM.
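The seeding idea can be sketched on a toy graph. In this illustration plain degree stands in for the paper's rank centrality (an assumption for the sketch), and one K-means-style assignment step uses neighborhood overlap as the similarity:

```python
def top_k_seeds(adj, k):
    """Pick the k nodes with the highest centrality as initial seeds.
    Degree is used here as a stand-in for the paper's rank centrality."""
    return sorted(adj, key=lambda v: len(adj[v]), reverse=True)[:k]

def assign(adj, seeds):
    """One K-means-style assignment step on the graph: each node joins
    the seed whose closed neighborhood it overlaps most."""
    labels = {}
    for v in adj:
        nb = adj[v] | {v}
        labels[v] = max(seeds, key=lambda s: len(nb & (adj[s] | {s})))
    return labels

# Two triangles joined by the single edge (2, 3).
edges = [(0, 1), (0, 2), (1, 2), (3, 4), (3, 5), (4, 5), (2, 3)]
adj = {v: set() for v in range(6)}
for a, b in edges:
    adj[a].add(b)
    adj[b].add(a)

seeds = top_k_seeds(adj, 2)
print(seeds)               # [2, 3] -- the bridge nodes have degree 3
print(assign(adj, seeds))  # {0: 2, 1: 2, 2: 2, 3: 3, 4: 3, 5: 3}
```

K-rank then alternates such assignment and seed-update steps until the labels stabilize, exactly as K-means alternates assignment and centroid updates.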
Methods for evaluating and ranking transportation energy conservation programs
NASA Astrophysics Data System (ADS)
Santone, L. C.
1981-04-01
The energy conservation programs are assessed in terms of petroleum savings, incremental costs to consumers, probability of technical and market success, and external impacts due to environmental, economic, and social factors. Three ranking functions and a policy matrix are used to evaluate the programs. The net present value measure, which computes the present worth of petroleum savings less the present worth of costs, is modified by dividing by the present value of DOE funding to obtain a net present value per program dollar. The comprehensive ranking function takes external impacts into account. Procedures are described for making computations of the ranking functions and the attributes that require computation. Computations are made for the electric vehicle, Stirling engine, gas turbine, and MPG mileage guide programs.
Research on B Cell Algorithm for Learning to Rank Method Based on Parallel Strategy
Tian, Yuling; Zhang, Hongxian
2016-01-01
For the purposes of information retrieval, users must find highly relevant documents from within a system (and often a quite large one comprised of many individual documents) based on input query. Ranking the documents according to their relevance within the system to meet user needs is a challenging endeavor, and a hot research topic–there already exist several rank-learning methods based on machine learning techniques which can generate ranking functions automatically. This paper proposes a parallel B cell algorithm, RankBCA, for rank learning which utilizes a clonal selection mechanism based on biological immunity. The novel algorithm is compared with traditional rank-learning algorithms through experimentation and shown to outperform the others in respect to accuracy, learning time, and convergence rate; taken together, the experimental results show that the proposed algorithm indeed effectively and rapidly identifies optimal ranking functions. PMID:27487242
Stabilized thermally beneficiated low rank coal and method of manufacture
Viall, A.J.; Richards, J.M.
2000-07-18
A process is described for reducing the spontaneous combustion tendencies of thermally beneficiated low rank coals employing heat, air or an oxygen containing gas followed by an optional moisture addition. Specific reaction conditions are supplied along with knowledge of equipment types that may be employed on a commercial scale to complete the process.
Stabilized thermally beneficiated low rank coal and method of manufacture
Viall, Arthur J.; Richards, Jeff M.
2000-01-01
A process for reducing the spontaneous combustion tendencies of thermally beneficiated low rank coals employing heat, air or an oxygen containing gas followed by an optional moisture addition. Specific reaction conditions are supplied along with knowledge of equipment types that may be employed on a commercial scale to complete the process.
Stabilized thermally beneficiated low rank coal and method of manufacture
Viall, A.J.; Richards, J.M.
1999-01-26
A process is described for reducing the spontaneous combustion tendencies of thermally beneficiated low rank coals employing heat, air or an oxygen containing gas followed by an optional moisture addition. Specific reaction conditions are supplied along with knowledge of equipment types that may be employed on a commercial scale to complete the process. 3 figs.
Stabilized thermally beneficiated low rank coal and method of manufacture
Viall, Arthur J.; Richards, Jeff M.
1999-01-01
A process for reducing the spontaneous combustion tendencies of thermally beneficiated low rank coals employing heat, air or an oxygen containing gas followed by an optional moisture addition. Specific reaction conditions are supplied along with knowledge of equipment types that may be employed on a commercial scale to complete the process.
Efficient implementation of minimal polynomial and reduced rank extrapolation methods
NASA Technical Reports Server (NTRS)
Sidi, Avram
1990-01-01
The minimal polynomial extrapolation (MPE) and reduced rank extrapolation (RRE) are two effective techniques that have been used to accelerate the convergence of vector sequences, such as those obtained from the iterative solution of linear and nonlinear systems of equations. Their definitions involve some linear least squares problems, and this causes difficulties in their numerical implementation. Time-efficient and numerically stable implementations of MPE and RRE are developed. A computer program written in FORTRAN 77 is also appended and applied to some model problems.
Andersen, Erlend K.F.; Hole, Knut Hakon; Lund, Kjersti V.; Sundfor, Kolbein; Kristensen, Gunnar B.; Lyng, Heidi; Malinen, Eirik
2012-03-01
Purpose: To systematically screen the tumor contrast enhancement of locally advanced cervical cancers to assess the prognostic value of two descriptive parameters derived from dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI). Methods and Materials: This study included a prospectively collected cohort of 81 patients who underwent DCE-MRI with gadopentetate dimeglumine before chemoradiotherapy. The following descriptive DCE-MRI parameters were extracted voxel by voxel and presented as histograms for each time point in the dynamic series: normalized relative signal increase (nRSI) and normalized area under the curve (nAUC). The first to 100th percentiles of the histograms were included in a log-rank survival test, resulting in p value and relative risk maps of all percentile-time intervals for each DCE-MRI parameter. The maps were used to evaluate the robustness of the individual percentile-time pairs and to construct prognostic parameters. Clinical endpoints were locoregional control and progression-free survival. The study was approved by the institutional ethics committee. Results: The p value maps of nRSI and nAUC showed a large continuous region of percentile-time pairs that were significantly associated with locoregional control (p < 0.05). These parameters had prognostic impact independent of tumor stage, volume, and lymph node status on multivariate analysis. Only a small percentile-time interval of nRSI was associated with progression-free survival. Conclusions: The percentile-time screening identified DCE-MRI parameters that predict long-term locoregional control after chemoradiotherapy of cervical cancer.
Solutions of interval type-2 fuzzy polynomials using a new ranking method
NASA Astrophysics Data System (ADS)
Rahman, Nurhakimah Ab.; Abdullah, Lazim; Ghani, Ahmad Termimi Ab.; Ahmad, Noor'Ani
2015-10-01
A few years ago, a ranking method was introduced for fuzzy polynomial equations. The concept of the ranking method is to find the actual roots of fuzzy polynomials (if they exist). Fuzzy polynomials are transformed into a system of crisp polynomials using a ranking method based on three parameters, namely Value, Ambiguity and Fuzziness. However, solutions based on these three parameters proved rather inefficient at producing answers. Therefore, in this study a new ranking method has been developed with the aim of overcoming this inherent weakness. The new ranking method, which has four parameters, is then applied to interval type-2 fuzzy polynomials, covering the interval type-2 fuzzy polynomial equation, dual fuzzy polynomial equations and systems of fuzzy polynomials. The efficiency of the new ranking method is then assessed numerically with triangular and trapezoidal fuzzy numbers. Finally, the approximate solutions produced in the numerical examples indicate that the new ranking method successfully produces actual roots for interval type-2 fuzzy polynomials.
Establishing percentiles for junior tennis players based on physical fitness testing results.
Roetert, E P; Piorkowski, P A; Woods, R B; Brown, S W
1995-01-01
An important aspect of this study was the establishment of a database. A broad database allows data on certain parameters to be greatly expanded and also enhances the use and interpretation of statistical methods. A longitudinal study of these variables may also assist in monitoring players' progress over a period of time, and can provide a useful supplement to subjective coaching appraisals. The means and standard deviations for each test were calculated according to the USTA age and gender groups, that is, 12s, 14s, and 16s for each separate gender. Additionally, the means and standard deviations for the ages, heights, and weights of each grouping were also calculated. Once the means and standard deviations were calculated, percentile tables were developed for each of the USTA groupings (by age and gender). The percentiles for each USTA test are presented in Appendix 1. A percentile is defined as the point on the distribution below which a given percentage of the scores is found. Percentiles can provide a norm-referenced interpretation of an individual score within a distribution that often consists of scores from a comparable group of individuals. Using the USTA protocol, players and coaches now have a set of normative data by which an individual player's fitness scores may be compared with those of participants in the USTA Area Training Centers (see Appendix 1). From the test results, coaches and players can determine which fitness areas need to be improved for each athlete individually. Specific training programs can then be designed based on an athlete's fitness testing results. Proper interpretation of the USTA fitness testing database results provides an easy way to determine the relative position of a given fitness score in the distribution, recognizing weaker areas for the purposes of injury prevention and performance enhancement. Each player can be given a profile detailing their percentile rank relative to other Area Training Center players.
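The percentile definition quoted above ("the point on the distribution below which a given percentage of the scores is found") can be sketched in a few lines. The reference scores below are invented for illustration, not USTA data.

```python
import numpy as np

def percentile_rank(reference, value):
    """Percent of reference scores strictly below `value`, i.e. the point on
    the distribution below which that percentage of scores is found."""
    reference = np.asarray(reference)
    return 100.0 * np.mean(reference < value)

# Hypothetical reference scores for one fitness test in one age/gender group
reference = [18, 22, 25, 25, 27, 30, 31, 33, 35, 40]
print(percentile_rank(reference, 30))  # 50.0: half the reference scores fall below 30
```

A norm table is then just this function evaluated over a grid of candidate scores for each age/gender grouping.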
Group-based ranking method for online rating systems with spamming attacks
NASA Astrophysics Data System (ADS)
Gao, Jian; Dong, Yu-Wei; Shang, Ming-Sheng; Cai, Shi-Min; Zhou, Tao
2015-04-01
The ranking problem has attracted much attention in real systems. How to design a robust ranking method is especially significant for online rating systems under the threat of spamming attacks. Many well-performing ranking methods that build reputation systems for users have been applied to address this issue. In this letter, we propose a group-based ranking method that evaluates users' reputations based on their grouping behaviors. More specifically, users are assigned high reputation scores if they always fall into large rating groups. Results on three real data sets indicate that the present method is more accurate and robust than the correlation-based method in the presence of spamming attacks.
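The grouping idea in the abstract above can be illustrated with a toy reputation score: here a user's reputation is taken as the mean fraction of co-raters who gave the same rating to the same item, so users who consistently fall into large rating groups score high. This is a deliberate simplification of the authors' method, and the data are invented.

```python
from collections import Counter, defaultdict

def group_based_reputation(ratings):
    """ratings: list of (user, item, rating) triples.
    Reputation = mean, over a user's ratings, of the fraction of that item's
    raters who gave the same rating (the relative size of the user's group)."""
    by_item = defaultdict(list)
    for user, item, r in ratings:
        by_item[item].append((user, r))
    scores = defaultdict(list)
    for item, pairs in by_item.items():
        counts = Counter(r for _, r in pairs)
        n = len(pairs)
        for user, r in pairs:
            scores[user].append(counts[r] / n)
    return {u: sum(v) / len(v) for u, v in scores.items()}

# Two honest raters agree on both items; one spammer always rates against them
ratings = [("a", "x", 5), ("b", "x", 5), ("spam", "x", 1),
           ("a", "y", 4), ("b", "y", 4), ("spam", "y", 1)]
rep = group_based_reputation(ratings)
```

In this toy data the spamming user, who always falls in the minority group, ends up with the lowest reputation.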
Solving the interval type-2 fuzzy polynomial equation using the ranking method
NASA Astrophysics Data System (ADS)
Rahman, Nurhakimah Ab.; Abdullah, Lazim
2014-07-01
Polynomial equations with trapezoidal and triangular fuzzy numbers have attracted some interest among researchers in mathematics, engineering and social sciences. Several methods have been developed to solve these equations. In this study we introduce the interval type-2 fuzzy polynomial equation and solve it using the ranking method of fuzzy numbers. The ranking method concept was first proposed to find the real roots of fuzzy polynomial equations; the ranking method is therefore applied here to find the real roots of the interval type-2 fuzzy polynomial equation. We transform the interval type-2 fuzzy polynomial equation into a system of crisp polynomial equations. This transformation is performed using the ranking method of fuzzy numbers based on three parameters, namely value, ambiguity and fuzziness. Finally, we illustrate our approach with a numerical example.
Methods of computing vocabulary size for the two-parameter rank distribution
NASA Technical Reports Server (NTRS)
Edmundson, H. P.; Fostel, G.; Tung, I.; Underwood, W.
1972-01-01
A summation method is described for computing the vocabulary size for given parameter values in the 1- and 2-parameter rank distributions. Two methods of determining the asymptotes for the family of 2-parameter rank-distribution curves are also described. Tables are computed and graphs are drawn relating pairs of parameter values to the vocabulary size. The partial product formula for the Riemann zeta function is investigated as an approximation to the partial sum formula for the Riemann zeta function. An error bound is established indicating that the partial product should not be used to approximate the partial sum when calculating the vocabulary size for the 2-parameter rank distribution.
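The comparison the abstract warns about, the partial Euler product versus the partial sum of the Riemann zeta series, is easy to reproduce numerically; the exponent and truncation point below are arbitrary choices for illustration.

```python
def partial_sum_zeta(s, n):
    """Partial sum: sum of k^(-s) for k = 1..n."""
    return sum(k ** -s for k in range(1, n + 1))

def partial_product_zeta(s, n):
    """Euler product over primes p <= n: prod of 1 / (1 - p^(-s))."""
    primes = [p for p in range(2, n + 1)
              if all(p % q for q in range(2, int(p ** 0.5) + 1))]
    prod = 1.0
    for p in primes:
        prod *= 1.0 / (1.0 - p ** -s)
    return prod

# Expanding the truncated product sums k^(-s) over ALL n-smooth k, not just
# k <= n, so it strictly exceeds the partial sum; that gap is the kind of
# approximation error the abstract's bound addresses.
s, n = 1.5, 50
print(partial_sum_zeta(s, n), partial_product_zeta(s, n))
```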
Percentile curves for skinfold thickness for Canadian children and youth.
Kuhle, Stefan; Ashley-Martin, Jillian; Maguire, Bryan; Hamilton, David C
2016-01-01
Background. Skinfold thickness (SFT) measurements are a reliable and feasible method for assessing body fat in children, but their use and interpretation are hindered by the scarcity of reference values in representative populations of children. The objective of the present study was to develop age- and sex-specific percentile curves for five SFT measures (biceps, triceps, subscapular, suprailiac, medial calf) in a representative population of Canadian children and youth. Methods. We analyzed data from 3,938 children and adolescents between 6 and 19 years of age who participated in the Canadian Health Measures Survey cycles 1 (2007/2009) and 2 (2009/2011). Standardized procedures were used to measure SFT. Age- and sex-specific centiles for SFT were calculated using the GAMLSS method. Results. Percentile curves were materially different in absolute value and shape for boys and girls. Percentile curves in girls steadily increased with age, whereas percentile curves in boys were characterized by a pubertal-centered peak. Conclusions. The current study has presented for the first time percentile curves for five SFT measures in a representative sample of Canadian children and youth. PMID:27547554
Systematic comparison of hedonic ranking and rating methods demonstrates few practical differences.
Kozak, Marcin; Cliff, Margaret A
2013-08-01
Hedonic ranking is one of the commonly used methods to evaluate consumer preferences. Some authors suggest that it is the best methodology for discriminating among products, while others recommend hedonic rating. These mixed findings suggest that the statistical outcomes depend on the experimental conditions or on a user's expectation of "what is" and "what is not" desirable for evaluating consumer preferences. Therefore, sensory and industry professionals may be uncertain or confused regarding the appropriate application of hedonic tests. This paper aims to put this controversy to rest by evaluating 3 data sets (3 yogurts, 79 consumers; 6 yogurts, 109 consumers; 4 apple cultivars, 70 consumers) collected using the same consumers and by calculating nontied ranks from hedonic scores. Consumer responses were evaluated by comparing bivariate associations between the methods (nontied ranks, tied ranks, hedonic rating scores) using trellis displays, determining the number of consumers with discrepancies in their responses between the methods, and comparing mean values using conventional statistical analyses. Spearman's rank correlations (0.33-0.84) revealed significant differences between the methods for all products, whether or not means separation tests differentiated the products. The work illustrated the inherent biases associated with hedonic ranking and recommended alternate hedonic methodologies. PMID:23815796
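The core comparison in this study, nontied ranks forced from hedonic scores versus the scores themselves, can be sketched without a statistics package; the four ratings below are invented.

```python
def ranks_desc(scores):
    """Nontied (ordinal) ranks from hedonic scores: 1 = best liked;
    ties broken by presentation order, as when forcing ranks from ratings."""
    order = sorted(range(len(scores)), key=lambda i: (-scores[i], i))
    r = [0] * len(scores)
    for rank, i in enumerate(order, start=1):
        r[i] = rank
    return r

def spearman(x, y):
    """Spearman's rho: Pearson correlation of the midrank vectors."""
    def midranks(v):
        sv = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        i = 0
        while i < len(sv):
            j = i
            while j + 1 < len(sv) and v[sv[j + 1]] == v[sv[i]]:
                j += 1
            avg = (i + j) / 2 + 1  # average rank for a tied run
            for k in range(i, j + 1):
                r[sv[k]] = avg
            i = j + 1
        return r
    rx, ry = midranks(x), midranks(y)
    mx, my = sum(rx) / len(rx), sum(ry) / len(ry)
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) *
           sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

scores = [8, 5, 3, 6]           # one consumer's hedonic ratings of 4 products
ranks = ranks_desc(scores)      # forced nontied ranks: [1, 3, 4, 2]
print(spearman(scores, ranks))  # -1.0: without ties the two formats agree exactly
```

With tied ratings (e.g. [7, 7, 2]) the forced ranks break the tie arbitrarily and the correlation magnitude drops below 1, which is the kind of discrepancy between methods the paper quantifies.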
Simultaneous denoising and reconstruction of 5D seismic data via damped rank-reduction method
NASA Astrophysics Data System (ADS)
Chen, Yangkang; Zhang, Dong; Jin, Zhaoyu; Chen, Xiaohong; Zu, Shaohuan; Huang, Weilin; Gan, Shuwei
2016-06-01
The Cadzow rank-reduction method can be effectively utilized in simultaneously denoising and reconstructing 5D seismic data that depends on four spatial dimensions. The classic version of Cadzow rank-reduction method arranges the 4D spatial data into a level-four block Hankel/Toeplitz matrix and then applies truncated singular value decomposition (TSVD) for rank-reduction. When the observed data is extremely noisy, which is often the feature of real seismic data, traditional TSVD cannot be adequate for attenuating the noise and reconstructing the signals. The reconstructed data tends to contain a significant amount of residual noise using the traditional TSVD method, which can be explained by the fact that the reconstructed data space is a mixture of both signal subspace and noise subspace. In order to better decompose the block Hankel matrix into signal and noise components, we introduced a damping operator into the traditional TSVD formula, which we call the damped rank-reduction method. The damped rank-reduction method can obtain a perfect reconstruction performance even when the observed data has extremely low signal-to-noise ratio (SNR). The feasibility of the improved 5D seismic data reconstruction method was validated via both 5D synthetic and field data examples. We presented comprehensive analysis of the data examples and obtained valuable experience and guidelines in better utilizing the proposed method in practice. Since the proposed method is convenient to implement and can achieve immediate improvement, we suggest its wide application in the industry.
Simultaneous denoising and reconstruction of 5-D seismic data via damped rank-reduction method
NASA Astrophysics Data System (ADS)
Chen, Yangkang; Zhang, Dong; Jin, Zhaoyu; Chen, Xiaohong; Zu, Shaohuan; Huang, Weilin; Gan, Shuwei
2016-09-01
The Cadzow rank-reduction method can be effectively utilized in simultaneously denoising and reconstructing 5-D seismic data that depend on four spatial dimensions. The classic version of Cadzow rank-reduction method arranges the 4-D spatial data into a level-four block Hankel/Toeplitz matrix and then applies truncated singular value decomposition (TSVD) for rank reduction. When the observed data are extremely noisy, which is often the feature of real seismic data, traditional TSVD cannot be adequate for attenuating the noise and reconstructing the signals. The reconstructed data tend to contain a significant amount of residual noise using the traditional TSVD method, which can be explained by the fact that the reconstructed data space is a mixture of both signal subspace and noise subspace. In order to better decompose the block Hankel matrix into signal and noise components, we introduced a damping operator into the traditional TSVD formula, which we call the damped rank-reduction method. The damped rank-reduction method can obtain a perfect reconstruction performance even when the observed data have extremely low signal-to-noise ratio. The feasibility of the improved 5-D seismic data reconstruction method was validated via both 5-D synthetic and field data examples. We presented comprehensive analysis of the data examples and obtained valuable experience and guidelines in better utilizing the proposed method in practice. Since the proposed method is convenient to implement and can achieve immediate improvement, we suggest its wide application in the industry.
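The damping step described above can be sketched as a shrinkage applied to the singular values that plain TSVD would keep unchanged. The exact damping operator is given in the paper; the form below, which scales each kept singular value by 1 - (s[rank]/s[i])**K with the first discarded singular value standing in for the noise level, is an assumed illustrative choice, and a plain random low-rank matrix stands in for the block Hankel matrix.

```python
import numpy as np

def tsvd(M, rank):
    """Classic truncated SVD: keep the `rank` largest singular triplets."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U[:, :rank] * s[:rank]) @ Vt[:rank]

def damped_tsvd(M, rank, K=4):
    """Damped rank reduction (sketch): additionally shrink each retained
    singular value toward zero based on an estimated noise level, here the
    first discarded singular value. Larger K means weaker damping."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    noise = s[rank] if rank < len(s) else 0.0
    damped = s[:rank] * (1.0 - (noise / s[:rank]) ** K)
    return (U[:, :rank] * damped) @ Vt[:rank]

# Toy example: rank-2 signal plus noise (a stand-in for the Hankel matrix)
rng = np.random.default_rng(0)
signal = rng.standard_normal((40, 2)) @ rng.standard_normal((2, 30))
noisy = signal + 0.5 * rng.standard_normal((40, 30))
err_tsvd = np.linalg.norm(tsvd(noisy, 2) - signal)
err_damped = np.linalg.norm(damped_tsvd(noisy, 2) - signal)
```

The intent of the shrinkage is to suppress the residual noise that plain TSVD leaves in the mixed signal/noise subspace when the input SNR is very low.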
Low-Rank Incremental Methods for Computing Dominant Singular Subspaces
Baker, Christopher G; Gallivan, Dr. Kyle A; Van Dooren, Dr. Paul
2012-01-01
Computing the singular values and vectors of a matrix is a crucial kernel in numerous scientific and industrial applications. As such, numerous methods have been proposed to handle this problem in a computationally efficient way. This paper considers a family of methods for incrementally computing the dominant SVD of a large matrix A. Specifically, we describe a unification of a number of previously disparate methods for approximating the dominant SVD via a single pass through A. We tie the behavior of these methods to that of a class of optimization-based iterative eigensolvers on A'*A. An iterative procedure is proposed which allows the computation of an accurate dominant SVD via multiple passes through A. We present an analysis of the convergence of this iteration, and provide empirical demonstration of the proposed method on both synthetic and benchmark data.
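The single-pass pattern that the paper unifies (expand the current low-rank factorization with a new block of columns, then re-truncate) can be sketched as follows. This is a generic version of the idea, not any one of the surveyed methods.

```python
import numpy as np

def incremental_svd(A, k, block=10):
    """One-pass sketch of incremental dominant-SVD: stream column blocks of A,
    append each to the running rank-k factorization, and re-truncate."""
    U = np.zeros((A.shape[0], 0))
    s = np.zeros(0)
    for j in range(0, A.shape[1], block):
        W = np.hstack([U * s, A[:, j:j + block]])  # expand with the new block
        U, s, _ = np.linalg.svd(W, full_matrices=False)
        U, s = U[:, :k], s[:k]                     # contract back to rank k
    return U, s

# When rank(A) <= k the pass is lossless: left Gram matrices are preserved,
# so the streamed singular values match the exact dominant ones.
rng = np.random.default_rng(1)
A = rng.standard_normal((50, 8)) @ rng.standard_normal((8, 200))
U, s = incremental_svd(A, k=8)
exact = np.linalg.svd(A, compute_uv=False)[:8]
```

When the true rank exceeds k, each truncation discards energy, which is the error the paper's multi-pass iteration is designed to recover.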
Solving fuzzy polynomial equation and the dual fuzzy polynomial equation using the ranking method
NASA Astrophysics Data System (ADS)
Rahman, Nurhakimah Ab.; Abdullah, Lazim
2014-06-01
Fuzzy polynomials with trapezoidal and triangular fuzzy numbers have attracted interest among some researchers. Many studies have been done by researchers to obtain real roots of fuzzy polynomials. As a result, there are many numerical methods involved in obtaining the real roots of fuzzy polynomials. In this study, we will present the solution to the fuzzy polynomial equation and dual fuzzy polynomial equation using the ranking method of fuzzy numbers and subsequently transforming fuzzy polynomials to crisp polynomials. This transformation is performed using the ranking method based on three parameters, namely Value, Ambiguity and Fuzziness. Finally, we illustrate our approach with two numerical examples for fuzzy polynomial equation and dual fuzzy polynomial equation.
Application of model-free methods for analysis of combustion kinetics of coals with different ranks
Sis, H
2009-07-01
Model-free kinetic approaches were employed to investigate the combustion kinetics of coals with different ranks, namely, lignite, bituminous coal, and anthracite. The experimental data were obtained under non-isothermal conditions at different heating rates in the range of 2-25 °C min⁻¹. The activation energy values were estimated by two model-free methods, the Ozawa-Flynn-Wall and Kissinger-Akahira-Sunose methods. Slightly higher activation energy values were obtained by the Ozawa-Flynn-Wall method over a wide range of conversion extents. Variation of activation energy was comparably more significant for the lower-rank lignite (between 44.82 and 287.56 kJ mol⁻¹) and less significant for the higher-rank bituminous coal (between 101.97 and 155.64 kJ mol⁻¹) and anthracite (between 106.04 and 160.31 kJ mol⁻¹).
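As a reminder of how such model-free (isoconversional) estimates are obtained, here is a minimal Ozawa-Flynn-Wall sketch using Doyle's approximation: at a fixed conversion extent, log10 of the heating rate is regressed on 1/T, and Ea = -slope * R / 0.4567. The heating rates and temperatures below are invented for illustration, not the paper's data.

```python
import math

def ofw_activation_energy(betas, temps_K):
    """Ozawa-Flynn-Wall estimate at one fixed conversion: least-squares slope
    of log10(heating rate) versus 1/T, converted to Ea via Doyle's constant."""
    R = 8.314  # gas constant, J mol^-1 K^-1
    xs = [1.0 / T for T in temps_K]
    ys = [math.log10(b) for b in betas]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys)) /
             sum((x - mx) ** 2 for x in xs))
    return -slope * R / 0.4567  # J mol^-1

# Hypothetical temperatures (K) at 50% conversion for four heating rates
betas = [2, 5, 10, 25]          # C min^-1
temps = [650.0, 668.0, 682.0, 700.0]
ea = ofw_activation_energy(betas, temps)
print(ea / 1000)  # activation energy in kJ mol^-1
```

Repeating the regression at many conversion extents yields the Ea-versus-conversion curves whose spread the abstract compares across coal ranks.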
Percentile Curves for Anthropometric Measures for Canadian Children and Youth
Kuhle, Stefan; Maguire, Bryan; Ata, Nicole; Hamilton, David
2015-01-01
Body mass index (BMI) is commonly used to assess a child's weight status but it does not provide information about the distribution of body fat. Since the disease risks associated with obesity are related to the amount and distribution of body fat, measures that assess visceral or subcutaneous fat, such as waist circumference (WC), waist-to-height ratio (WHtR), or skinfolds thickness may be more suitable. The objective of this study was to develop percentile curves for BMI, WC, WHtR, and sum of 5 skinfolds (SF5) in a representative sample of Canadian children and youth. The analysis used data from 4115 children and adolescents between 6 and 19 years of age that participated in the Canadian Health Measures Survey Cycles 1 (2007/2009) and 2 (2009/2011). BMI, WC, WHtR, and SF5 were measured using standardized procedures. Age- and sex-specific centiles were calculated using the LMS method and the percentiles that intersect the adult cutpoints for BMI, WC, and WHtR at age 18 years were determined. Percentile curves for all measures showed an upward shift compared to curves from the pre-obesity epidemic era. The adult cutoffs for overweight and obesity corresponded to the 72nd and 91st percentile, respectively, for both sexes. The current study has presented for the first time percentile curves for BMI, WC, WHtR, and SF5 in a representative sample of Canadian children and youth. The percentile curves presented are meant to be descriptive rather than prescriptive as associations with cardiovascular disease markers or outcomes were not assessed. PMID:26176769
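The LMS method named in the abstract summarizes each age/sex group by a skewness parameter (L), a median (M) and a coefficient of variation (S); a measurement converts to a z-score, and hence a percentile, via Cole's transformation. The parameter values below are made up for illustration, not CHMS estimates.

```python
import math

def lms_zscore(x, L, M, S):
    """Cole's LMS transformation: z-score of measurement x given the
    group-specific skewness L, median M and coefficient of variation S."""
    if L == 0:
        return math.log(x / M) / S
    return ((x / M) ** L - 1.0) / (L * S)

def lms_centile(z, L, M, S):
    """Inverse transformation: the measurement at a given z (z=0 -> median M)."""
    if L == 0:
        return M * math.exp(S * z)
    return M * (1.0 + L * S * z) ** (1.0 / L)

# Hypothetical BMI parameters for one age/sex group
L_, M_, S_ = -1.6, 17.0, 0.11
z = lms_zscore(21.0, L_, M_, S_)            # BMI 21 is above the median of 17
assert abs(lms_centile(z, L_, M_, S_) - 21.0) < 1e-9  # round trip recovers x
```

Finding the percentile that intersects an adult cutpoint at age 18, as in the study, amounts to evaluating the forward transformation at the cutpoint with that age's fitted L, M and S.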
D'Ambrosio, Antonio; Heiser, Willem J
2016-09-01
Preference rankings usually depend on the characteristics of both the individuals judging a set of objects and the objects being judged. This topic has been handled in the literature with log-linear representations of the generalized Bradley-Terry model and, recently, with distance-based tree models for rankings. A limitation of these approaches is that they only work with full rankings or with a pre-specified pattern governing the presence of ties, and/or they are based on quite strict distributional assumptions. To overcome these limitations, we propose a new prediction tree method for ranking data that is totally distribution-free. It combines Kemeny's axiomatic approach to define a unique distance between rankings with the CART approach to find a stable prediction tree. Furthermore, our method is not limited by any particular design of the pattern of ties. The method is evaluated in an extensive full-factorial Monte Carlo study with a new simulation design. PMID:27370072
Methods for evaluating and ranking transportation energy conservation programs. Final report
Not Available
1981-04-30
Methods are described for comparative evaluation of Office of Transportation programs designed to help achieve significant reductions in petroleum consumption by different forms of transportation while maintaining public, commercial, and industrial mobility. Assessments of the programs in terms of petroleum savings, incremental costs to consumers of the technologies and activities, probability of technical and market success, and external impacts due to environmental, economic, and social factors are inputs to the evaluation methodologies presented. The methods for evaluating the programs on a comparative basis are three ranking functions and a policy matrix listing important attributes of the programs and of the technologies and activities with which they are concerned. The first ranking function is the traditional net present value measure, which computes the present worth of petroleum savings less the present worth of costs. The second divides this by the present value of DOE funding to obtain a net present value per program dollar. The third, the comprehensive ranking function, is broader in that it takes external impacts into account. Procedures are described for computing the ranking functions and the attributes that require computation. Computations are made for the electric vehicle, Stirling engine, gas turbine, and MPG mileage guide programs. (MCW)
On Efficient Feature Ranking Methods for High-Throughput Data Analysis.
Liao, Bo; Jiang, Yan; Liang, Wei; Peng, Lihong; Peng, Li; Hanyurwimfura, Damien; Li, Zejun; Chen, Min
2015-01-01
Efficient mining of high-throughput data has become one of the popular themes of the big data era. Existing biology-related feature ranking methods mainly focus on statistical and annotation information. In this study, two efficient feature ranking methods are presented. Multi-target regression and graph embedding are incorporated into an optimization framework, and feature ranking is achieved by introducing a structured sparsity norm. Unlike existing methods, the presented methods have two advantages: (1) the feature subset simultaneously accounts for global margin information as well as locality manifold information, so both global and locality information are considered; (2) features are selected by batch rather than individually in the algorithm framework, so the interactions between features are considered and the optimal feature subset can be guaranteed. In addition, this study presents a theoretical justification. Empirical experiments demonstrate the effectiveness and efficiency of the two algorithms in comparison with some state-of-the-art feature ranking methods on a set of real-world gene expression data sets. PMID:26684461
[RANK INDICES METHOD AND ITS USE FOR THE COMPARATIVE ANALYSIS OF POPULATION HEALTH].
Bolshakov, A M; Krutko, V N; Smirnova, T M; Chankov, S V
2016-01-01
A calculation method is presented that aims to increase the informative value of the integral indices of social and hygienic monitoring for purposes of comparative analysis. The method of rank indices is based on ranking monitoring objects by the values of the primary indices from which integral indices, such as life expectancy, are calculated. Results are presented from applying this method to a comparative analysis of mortality rates in WHO Member States for the period 1990-2011. Specific features of mortality trends were revealed that cannot be detected using mortality rates or life expectancy alone. In particular, for Russia it was shown that, despite the downward trend in child and adolescent mortality observed in the last decade, the country's world rankings for these indices have failed to regain the level of 1990. This means that the competitiveness of the country, which declined sharply in the 1990s, has not yet been restored. Some features of the use of the method of rank indices for the analysis of indices of the state of the environment, public health, and its socio-economic determinants are described. PMID:27266035
Effects of Different Methods of Weighting Subscores on the Composite-Score Ranking of Examinees.
ERIC Educational Resources Information Center
Modu, Christopher C.
The effects of applying different methods of determining different sets of subscore weights on the composite score ranking of examinees were investigated. Four sets of subscore weights were applied to each of three examination results. The scores were from Advanced Placement (AP) Examinations in History of Art, Spanish Language, and Chemistry. One…
Miwa, Makoto; Ohta, Tomoko; Rak, Rafal; Rowley, Andrew; Kell, Douglas B.; Pyysalo, Sampo; Ananiadou, Sophia
2013-01-01
Motivation: To create, verify and maintain pathway models, curators must discover and assess knowledge distributed over the vast body of biological literature. Methods supporting these tasks must understand both the pathway model representations and the natural language in the literature. These methods should identify and order documents by relevance to any given pathway reaction. No existing system has addressed all aspects of this challenge. Method: We present novel methods for associating pathway model reactions with relevant publications. Our approach extracts the reactions directly from the models and then turns them into queries for three text mining-based MEDLINE literature search systems. These queries are executed, and the resulting documents are combined and ranked according to their relevance to the reactions of interest. We manually annotate document-reaction pairs with the relevance of the document to the reaction and use this annotation to study several ranking methods, using various heuristic and machine-learning approaches. Results: Our evaluation shows that the annotated document-reaction pairs can be used to create a rule-based document ranking system, and that machine learning can be used to rank documents by their relevance to pathway reactions. We find that a Support Vector Machine-based system outperforms several baselines and matches the performance of the rule-based system. Building on the success of the query extraction and ranking methods, we have updated our existing pathway search system, PathText. Availability: An online demonstration of PathText 2 and the annotated corpus are available for research purposes at http://www.nactem.ac.uk/pathtext2/. Contact: makoto.miwa@manchester.ac.uk Supplementary information: Supplementary data are available at Bioinformatics online. PMID:23813008
Urinary iodine percentile ranges in the United States
Soldin, Offie Porat; Soldin, Steven J.; Pezzullo, John C.
2013-01-01
Background The status of iodine nutrition of a population can be determined by measurement of urinary iodine concentrations since it is thought to indicate dietary iodine intake. Normally, these results are compared to population-based criteria, since there are no reference ranges for urinary iodine. Objective To determine the percentile ranges for urinary iodide (UI) concentrations in normal individuals in the United States. Materials and methods The third National Health and Nutrition Examination Survey (NHANES III) (1988–1994) database of the civilian, non-institutionalized, iodine-sufficient US population was used. The 2.5th to 97.5th percentile ranges for urinary iodine and for urinary iodine per gram creatinine ratio (UI/Cr) (μg/g) were calculated for females and males, 6–89 years of age, each stratified by age groups. Results and conclusions We calculated the percentile ranges for urinary iodine. After exclusions of subjects with goiter or thyroid disease, the study sample included 21,530 subjects; 10,439 males and 11,091 females. For women of childbearing age (14–44 years), urinary iodine concentration 2.5th to 97.5th percentiles are 1.8–65 μg/dl or 36–539 μg/g creatinine. For pregnant women, the ranges are 4.2–55 μg/dl or 33–535 μg/g creatinine. PMID:12559616
Methods for contingency screening and ranking for voltage stability analysis of power systems
Ejebe, G.C.; Irisarri, G.D.; Mokhtari, S.; Obadina, O.; Ristanovic, P.; Tong, J.
1996-02-01
A comparison of the performance of four methods for contingency screening and ranking for voltage stability analysis is presented. Three of the methods are existing methods, while a new method is proposed. All the methods are evaluated by comparison with full solutions using a continuation power flow. It is shown that the newly proposed method has the best performance in terms of accuracy and computation time. All the methods are tested on a 234-bus system. Additional results using the best-performing method are included for a 901-bus power system.
Supervised descent method with low rank and sparsity constraints for robust face alignment
NASA Astrophysics Data System (ADS)
Sun, Yubao; Hu, Bin; Deng, Jiankang; Li, Xing
2015-03-01
The Supervised Descent Method (SDM) learns the descent directions of a nonlinear least squares objective in a supervised manner and has been used efficiently for face alignment. However, SDM may still fail in cases of partial occlusion and serious pose variation. To deal with this issue, we present a new method for robust face alignment that utilizes the low-rank prior of the human face and enforces a sparse structure on the descent directions. Our approach consists of low-rank face frontalization and sparse descent steps. First, exploiting the low-rank prior of the face image, we recover a low-rank face from its deformed image and the associated deformation despite significant distortion and corruption. Alignment of the recovered frontal face image is simpler and more effective. Then, we propose a sparsity-regularized supervised descent model that enforces the sparse structure of the descent directions under an l1 constraint, which makes the model more efficient in computation and robust to partial occlusion. Extensive results on several benchmarks demonstrate that the proposed method is robust to facial occlusions and pose variations.
NASA Technical Reports Server (NTRS)
Steinberg, Theodore A.; Rucker, Michelle A.; Beeson, Harold D.
1989-01-01
The 316, 321, 440C, and 17-4 PH stainless steels, as well as Inconel 600, Inconel 718, Waspaloy, Monel 400, and Al 2219, have been evaluated for relative nonflammability in a high-pressure oxygen environment with a view to the comparative advantages of four different flammability-ranking methods. The effects of changes in test pressure, sample diameter, promoter type, and sample configuration on ranking method results are evaluated; ranking methods employing velocity as the primary ranking criterion are limited by diameter effects, while those which use extinguishing pressure are nonselective for metals with similar flammabilities.
An Activity for Learning to Find Percentiles
ERIC Educational Resources Information Center
Cox, Richard G.
2016-01-01
This classroom activity is designed to help students practice calculating percentiles. The approach of the activity involves physical sorting and full classroom participation in each calculation. The design encourages a more engaged approach than simply having students make a calculation with numbers on a paper.
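As a sketch of the calculation such an activity practices, here is one common textbook definition of percentile rank (the data and the convention of counting half of the tied scores are illustrative assumptions, not taken from the record above):

```python
def percentile_rank(scores, value):
    """Percentile rank of `value` within `scores`: the percent of scores
    strictly below it, plus half of the scores equal to it (one common
    textbook convention)."""
    below = sum(s < value for s in scores)
    equal = sum(s == value for s in scores)
    return 100.0 * (below + 0.5 * equal) / len(scores)

scores = [55, 60, 60, 70, 80, 85, 90, 95, 95, 100]
print(percentile_rank(scores, 70))  # 35.0
```

Sorting the class physically, as the activity suggests, amounts to computing `below` and `equal` by inspection.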
Efficient completion for corrupted low-rank images via alternating direction method
NASA Astrophysics Data System (ADS)
Li, Wei; Zhao, Lei; Xu, Duanqing; Lu, Dongming
2014-05-01
We propose an efficient and easy-to-implement method to address the inpainting problem for low-rank images, following recent studies of low-rank matrix completion. In general, our method has three steps: first, corresponding to the three channels of RGB color space, an incomplete image is split into three incomplete matrices; second, each matrix is restored by solving a convex problem derived from the nuclear norm relaxation; finally, the three recovered matrices are merged to produce the final output. During the process, in order to efficiently solve the nuclear norm minimization problem, we employ the alternating direction method. Beyond the basic inpainting problem, we also extend our method to handle corrupted images that have noisy entries as well as missing values. Our experiments show that our method outperforms the existing inpainting techniques both quantitatively and qualitatively. We also demonstrate that our method is capable of processing many other situations, including block-wise low-rank image completion, large-scale image restoration, and object removal.
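The per-channel completion step can be sketched with a simplified solver: alternate singular-value soft-thresholding (the proximal step for the nuclear norm) with re-imposing the observed entries. This is a toy stand-in for the alternating direction method described above, with invented data, not the authors' solver:

```python
import numpy as np

def complete_lowrank(M, mask, tau=0.5, n_iter=200):
    """Toy low-rank completion: shrink singular values (nuclear-norm
    proximal step), then restore the known entries, and repeat."""
    X = np.where(mask, M, 0.0)
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = (U * np.maximum(s - tau, 0.0)) @ Vt  # soft-threshold spectrum
        X[mask] = M[mask]                        # keep observed entries fixed
    return X

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 3)) @ rng.standard_normal((3, 20))  # rank-3 truth
mask = rng.random(A.shape) < 0.6                                 # ~60% observed
rec = complete_lowrank(A, mask)
print(np.abs(rec - A)[~mask].mean())  # error on the unobserved entries
```

An RGB image would be handled by running this once per channel and merging the results.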
Prioritizing sewer rehabilitation projects using AHP-PROMETHEE II ranking method.
Kessili, Abdelhak; Benmamar, Saadia
2016-01-01
The aim of this paper is to develop a methodology for the prioritization of sewer rehabilitation projects for Algiers (Algeria) sewer networks to support the National Sanitation Office in its challenge to make decisions on prioritization of sewer rehabilitation projects. The methodology applies multiple-criteria decision making. The study includes 47 projects (collectors) and 12 criteria to evaluate them. These criteria represent the different issues considered in the prioritization of the projects, which are structural, hydraulic, environmental, financial, social and technical. The analytic hierarchy process (AHP) is used to determine weights of the criteria and the Preference Ranking Organization Method for Enrichment Evaluations (PROMETHEE II) method is used to obtain the final ranking of the projects. The model was verified using the sewer data of Algiers. The results have shown that the method can be used for prioritizing sewer rehabilitation projects. PMID:26819383
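The AHP half of the pipeline, deriving criterion weights from a pairwise-comparison matrix via the principal eigenvector, can be sketched as follows (the 3-criterion comparison matrix is a hypothetical example, not data from the Algiers study):

```python
import numpy as np

def ahp_weights(P):
    """Criterion weights from a pairwise-comparison matrix P (Saaty's AHP):
    the principal eigenvector of P, normalised to sum to one."""
    vals, vecs = np.linalg.eig(P)
    w = np.abs(vecs[:, np.argmax(vals.real)].real)
    return w / w.sum()

# Hypothetical 3-criterion matrix; entry (i, j) = importance of i over j,
# with reciprocal entries below the diagonal.
P = np.array([[1.0, 3.0, 5.0],
              [1/3., 1.0, 2.0],
              [1/5., 1/2., 1.0]])
print(ahp_weights(P).round(3))
```

The resulting weights would then feed the PROMETHEE II outranking step to produce the final project ranking.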
NIH peer review percentile scores are poorly predictive of grant productivity
Fang, Ferric C; Bowen, Anthony; Casadevall, Arturo
2016-01-01
Peer review is widely used to assess grant applications so that the highest ranked applications can be funded. A number of studies have questioned the ability of peer review panels to predict the productivity of applications, but a recent analysis of grants funded by the National Institutes of Health (NIH) in the US found that the percentile scores awarded by peer review panels correlated with productivity as measured by citations of grant-supported publications. Here, based on a re-analysis of these data for the 102,740 funded grants with percentile scores of 20 or better, we report that these percentile scores are a poor discriminator of productivity. This underscores the limitations of peer review as a means of assessing grant applications in an era when typical success rates are often as low as about 10%. DOI: http://dx.doi.org/10.7554/eLife.13323.001 PMID:26880623
Percentile growth charts for biomedical studies using a porcine model.
Corson, A M; Laws, J; Laws, A; Litten, J C; Lean, I J; Clarke, L
2008-12-01
Increasing rates of obesity and heart disease are compromising quality of life for a growing number of people. There is much research linking adult disease with the growth and development both in utero and during the first year of life. The pig is an ideal model for studying the origins of developmental programming. The objective of this paper was to construct percentile growth curves for the pig for use in biomedical studies. The body weight (BW) of pigs was recorded from birth to 150 days of age and their crown-to-rump length was measured over the neonatal period to enable the ponderal index (PI; kg/m3) to be calculated. Data were normalised and percentile curves were constructed using Cole's lambda-mu-sigma (LMS) method for BW and PI. The construction of these percentile charts for use in biomedical research will allow a more detailed and precise tracking of growth and development of individual pigs under experimental conditions. PMID:22444086
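Cole's LMS method recovers any centile from three age-specific parameters: the Box-Cox power L, the median M and the coefficient of variation S. A minimal sketch (the L, M, S values below are hypothetical, not the pig growth data of the study):

```python
from math import exp

def lms_centile(L, M, S, z):
    """Centile at z-score z from LMS parameters:
    M*(1 + L*S*z)**(1/L) for L != 0, and M*exp(S*z) when L == 0."""
    if L != 0:
        return M * (1 + L * S * z) ** (1 / L)
    return M * exp(S * z)

# Hypothetical parameters for body weight at one age; z = 1.645 ~ 95th centile.
print(lms_centile(L=-0.5, M=40.0, S=0.12, z=1.645))
```

Evaluating this at a grid of ages, with smoothed L(t), M(t), S(t), traces out the percentile curves.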
Low-rank Quasi-Newton updates for Robust Jacobian lagging in Newton methods
Brown, J.; Brune, P.
2013-07-01
Newton-Krylov methods are standard tools for solving nonlinear problems. A common approach is to 'lag' the Jacobian when assembly or preconditioner setup is computationally expensive, in exchange for some degradation in the convergence rate and robustness. We show that this degradation may be partially mitigated by using the lagged Jacobian as an initial operator in a quasi-Newton method, which applies unassembled low-rank updates to the Jacobian until the next full reassembly. We demonstrate the effectiveness of this technique on problems in glaciology and elasticity. (authors)
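The lagged-Jacobian-plus-quasi-Newton idea can be illustrated on a small dense system: assemble the Jacobian once, then repair it with Broyden rank-one updates instead of reassembling. The toy problem and tolerances are illustrative assumptions; the paper applies unassembled updates inside large Newton-Krylov solvers.

```python
import numpy as np

def broyden_solve(F, x, J, n_iter=30):
    """Newton-type iteration with a 'lagged' Jacobian J (assembled once)
    that is repaired by Broyden rank-one updates between steps."""
    f = F(x)
    for _ in range(n_iter):
        dx = np.linalg.solve(J, -f)
        x = x + dx
        f_new = F(x)
        if np.linalg.norm(f_new) < 1e-12:
            break
        # rank-one update: J += (df - J dx) dx^T / (dx . dx)
        J = J + np.outer(f_new - f - J @ dx, dx) / (dx @ dx)
        f = f_new
    return x

# toy system: circle of radius 2 intersected with the line x = y
F = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, v[0] - v[1]])
x0 = np.array([1.0, 2.0])
J0 = np.array([[2.0, 4.0], [1.0, -1.0]])  # Jacobian assembled once, at x0
root = broyden_solve(F, x0, J0)
print(root)  # converges near [sqrt(2), sqrt(2)]
```

The update costs one outer product per step, which is the low-rank repair that partially restores the convergence lost to lagging.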
Feature selection for splice site prediction: A new method using EDA-based feature ranking
Saeys, Yvan; Degroeve, Sven; Aeyels, Dirk; Rouzé, Pierre; Van de Peer, Yves
2004-01-01
Background The identification of relevant biological features in large and complex datasets is an important step towards gaining insight in the processes underlying the data. Other advantages of feature selection include the ability of the classification system to attain good or even better solutions using a restricted subset of features, and a faster classification. Thus, robust methods for fast feature selection are of key importance in extracting knowledge from complex biological data. Results In this paper we present a novel method for feature subset selection applied to splice site prediction, based on estimation of distribution algorithms, a more general framework of genetic algorithms. From the estimated distribution of the algorithm, a feature ranking is derived. Afterwards this ranking is used to iteratively discard features. We apply this technique to the problem of splice site prediction, and show how it can be used to gain insight into the underlying biological process of splicing. Conclusion We show that this technique proves to be more robust than the traditional use of estimation of distribution algorithms for feature selection: instead of returning a single best subset of features (as they normally do) this method provides a dynamical view of the feature selection process, like the traditional sequential wrapper methods. However, the method is faster than the traditional techniques, and scales better to datasets described by a large number of features. PMID:15154966
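The core mechanism, estimating per-feature inclusion probabilities with a univariate EDA (UMDA) and ranking features by those probabilities, can be sketched on a toy fitness function. Everything here (the fitness, the set of "relevant" features, population sizes) is an invented illustration, not the splice-site setup of the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
n_feat, pop, gens = 10, 60, 40
relevant = {0, 1, 2}  # hypothetical informative features

def fitness(bits):
    # toy score: reward selecting relevant features, penalise the rest
    return sum(bits[i] for i in relevant) - 0.3 * sum(
        bits[i] for i in range(n_feat) if i not in relevant)

p = np.full(n_feat, 0.5)  # UMDA marginal inclusion probabilities
for _ in range(gens):
    popn = (rng.random((pop, n_feat)) < p).astype(int)
    scores = np.array([fitness(ind) for ind in popn])
    elite = popn[np.argsort(scores)[-pop // 2:]]   # truncation selection
    p = 0.9 * elite.mean(axis=0) + 0.05            # re-estimate, keep in (0, 1)

ranking = np.argsort(-p)  # features ranked by estimated inclusion probability
print(ranking[:3])
```

The ranking derived from `p` is then what the paper uses to iteratively discard low-ranked features.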
The Typicality Ranking Task: A New Method to Derive Typicality Judgments from Children
Ameel, Eef; Storms, Gert
2016-01-01
An alternative method for deriving typicality judgments, applicable to young children who are not yet familiar with numerical values, is introduced, allowing researchers to study gradedness at younger ages in concept development. Contrary to the long tradition of using rating-based procedures to derive typicality judgments, we propose a method based on typicality ranking rather than rating, in which items are gradually sorted according to their typicality and which requires a minimum of linguistic knowledge. The validity of the method is investigated and the method is compared to the traditional typicality rating measurement in a large empirical study with eight different semantic concepts. The results show that the typicality ranking task can be used to assess children's category knowledge and to evaluate how this knowledge evolves over time. Contrary to earlier held assumptions in studies on typicality in young children, our results also show that preference is not so much a confounding variable to be avoided; rather, both variables are often significantly correlated in older children and even in adults. PMID:27322371
Development of a percentile based three-dimensional model of the buttocks in computer system
NASA Astrophysics Data System (ADS)
Wang, Lijing; He, Xueli; Li, Hongpeng
2016-04-01
There are diverse products related to the human buttocks, which need to be designed, manufactured and evaluated with a 3D buttock model. The 3D buttock models used in current research are only rough approximations of the human buttocks. A percentile 3D buttock model is highly desirable for the ergonomic design and evaluation of these products, yet so far there has been no research on a percentile sizing system for 3D buttock models. The purpose of this paper is therefore to develop a new method for building a three-dimensional percentile buttock model in a computer system. After scanning the 3D shape of the buttocks, the cloud of 3D points is imported into reverse engineering software (Geomagic) to reconstruct the buttock surface model. Five characteristic dimensions of the buttock are measured through mark-points after the models are imported into the engineering software CATIA. A series of space points is obtained by intersecting cutting slices with the 3D buttock surface model, and these points are then ordered by the sequence numbers of the horizontal and vertical slices. The 1st, 5th, 50th, 95th and 99th percentile values of the five dimensions and the spatial coordinates of the space points are obtained and used to reconstruct percentile buttock models. This research proposes a method for establishing a percentile sizing system for the 3D buttock model, based on the percentile values of the ischial tuberosities diameter, the distances from margin to ischial tuberosity and the coordinates of the space points, from which the Nth percentile 3D buttock model and models of special buttock types can be built. The proposed method also serves as useful guidance for establishing 3D percentile models of other parts of the human body with characteristic points.
Analysis of extreme top event frequency percentiles based on fast probability integration
Staple, B.; Haskin, F.E.
1993-10-01
In risk assessments, a primary objective is to determine the frequency with which a collection of initiating and basic events, E_e, leads to some undesired top event, T. Uncertainties in the occurrence rates, x_t, assigned to the initiating and basic events cause uncertainty in the top event frequency, z_T. The quantification of the uncertainty in z_T is an essential part of risk assessment called uncertainty analysis. In the past, it has been difficult to evaluate the extreme percentiles of output variables like z_T. Analytic methods such as the method of moments do not provide estimates of output percentiles, and the Monte Carlo (MC) method can be used to estimate extreme output percentiles only by resorting to large sample sizes. A promising alternative to these methods is the fast probability integration (FPI) methods. These methods approximate the integrals of multi-variate functions, representing percentiles of interest, without recourse to multi-dimensional numerical integration. FPI methods give precise results and have been demonstrated to be more efficient than MC methods for estimating extreme output percentiles. FPI allows the analyst to choose extreme percentiles of interest and perform sensitivity analyses in those regions. Such analyses can provide valuable insight into the events driving the top event frequency response in extreme probability regions. In this paper, FPI methods are adapted (a) to precisely estimate extreme top event frequency percentiles and (b) to allow the quantification of sensitivity measures at these extreme percentiles. In addition, the relative precision and efficiency of alternative methods for treating lognormally distributed inputs is investigated. The methodology is applied to the top event frequency expression for the dominant accident sequence from a risk assessment of the Grand Gulf nuclear power plant.
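The sampling difficulty that motivates FPI can be illustrated with a toy model in which the top event frequency is a product of two lognormal basic-event rates (the distributions and parameters here are invented for illustration; they are not from the Grand Gulf assessment):

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative model: z_T as a product of two lognormal rates, so z_T is
# itself lognormal with mu = -8 and sigma = sqrt(1.25).
def sample_zT(n):
    return rng.lognormal(-6.0, 1.0, n) * rng.lognormal(-2.0, 0.5, n)

# Plain Monte Carlo needs large samples to pin down the 99.9th percentile:
for n in (1_000, 1_000_000):
    print(n, np.percentile(sample_zT(n), 99.9))
```

The small-sample estimate of the 99.9th percentile rests on only about one tail observation, which is exactly the regime where FPI's direct approximation of the percentile integral pays off.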
A Novel Hepatocellular Carcinoma Image Classification Method Based on Voting Ranking Random Forests
Xia, Bingbing; Jiang, Huiyan; Liu, Huiling; Yi, Dehui
2016-01-01
This paper proposes a novel voting ranking random forests (VRRF) method for solving the hepatocellular carcinoma (HCC) image classification problem. First, in the preprocessing stage, bilateral filtering is applied to hematoxylin-eosin (HE) pathological images. Next, the filtered image is segmented to obtain three different kinds of images: a single binary cell image, a single minimum exterior rectangle cell image, and a single cell image of size n×n. After that, atypia features are defined, including auxiliary circularity, amendment circularity, and cell symmetry. In addition, shape features, fractal dimension features, and several gray features are extracted, such as the Local Binary Patterns (LBP) feature, the Gray Level Co-occurrence Matrix (GLCM) feature, and Tamura features. Finally, an HCC image classification model based on random forests is proposed and further optimized by the voting ranking method. The experimental results show that the proposed features combined with the VRRF method perform well on the HCC image classification problem. PMID:27293477
Comparison of Krylov subspace methods on the PageRank problem
NASA Astrophysics Data System (ADS)
Del Corso, Gianna M.; Gulli, Antonio; Romani, Francesco
2007-12-01
The PageRank algorithm plays a very important role in search engine technology and consists of computing the eigenvector corresponding to the eigenvalue one of a matrix whose size is now in the billions. The problem incorporates a parameter α that determines its difficulty. In this paper, the effectiveness of stationary and nonstationary methods is compared on portions of real web matrices for different choices of α. We see that stationary methods are very reliable and more competitive when the problem is well conditioned, that is, for small values of α. However, for large values of the parameter α the problem becomes more difficult, and methods such as preconditioned BiCGStab or restarted preconditioned GMRES become competitive with stationary methods in terms of Mflops count as well as in the number of iterations needed to reach convergence.
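The baseline stationary method against which the Krylov solvers are compared is the classical power iteration x ← αPᵀx + (1-α)v. A minimal sketch on a tiny invented link graph (not one of the paper's web matrices):

```python
import numpy as np

def pagerank(links, alpha=0.85, tol=1e-10):
    """Power iteration for PageRank on an adjacency list `links`,
    with uniform teleportation vector v = 1/n."""
    n = len(links)
    x = np.full(n, 1.0 / n)
    while True:
        x_new = np.full(n, (1 - alpha) / n)
        for i, outs in enumerate(links):
            if outs:
                for j in outs:
                    x_new[j] += alpha * x[i] / len(outs)
            else:                        # dangling page: spread rank uniformly
                x_new += alpha * x[i] / n
        if np.abs(x_new - x).sum() < tol:
            return x_new
        x = x_new

links = [[1, 2], [2], [0], [2]]  # tiny 4-page web
print(pagerank(links).round(3))
```

The power method's convergence rate degrades as α approaches 1, which is exactly the regime where the nonstationary Krylov methods above become attractive.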
Hypothesis Testing of Population Percentiles via the Wald Test with Bootstrap Variance Estimates
Johnson, William D.; Romer, Jacob E.
2016-01-01
Testing the equality of percentiles (quantiles) between populations is an effective method for robust, nonparametric comparison, especially when the distributions are asymmetric or irregularly shaped. Unlike global nonparametric tests for homogeneity such as the Kolmogorov-Smirnov test, testing the equality of a set of percentiles (i.e., a percentile profile) yields an estimate of the location and extent of the differences between the populations along the entire domain. The Wald test using bootstrap estimates of variance of the order statistics provides a unified method for hypothesis testing of functions of the population percentiles. Simulation studies are conducted to show performance of the method under various scenarios and to give suggestions on its use. Several examples are given to illustrate some useful applications to real data. PMID:27034909
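A single-percentile case of this construction can be sketched directly: bootstrap the variance of each sample percentile, then form the Wald statistic for the difference (the simulated data and sample sizes are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def boot_var(x, q, B=2000):
    """Bootstrap variance of the sample q-th percentile of x."""
    ests = [np.percentile(rng.choice(x, size=len(x), replace=True), q)
            for _ in range(B)]
    return np.var(ests)

# Wald statistic for H0: the two populations share the same median (q = 50).
x = rng.normal(0.0, 1.0, 300)
y = rng.normal(0.8, 1.0, 300)
diff = np.percentile(x, 50) - np.percentile(y, 50)
W = diff**2 / (boot_var(x, 50) + boot_var(y, 50))  # ~ chi-square(1) under H0
print(W)
```

Testing a full percentile profile stacks several such differences into a vector and compares the quadratic form against a chi-square with matching degrees of freedom.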
mtDNA analysis of 174 Eurasian populations using a new iterative rank correlation method.
Juhász, Zoltán; Fehér, Tibor; Németh, Endre; Pamjav, Horolma
2016-02-01
In this study, we analyse 27-dimensional mtDNA haplogroup distributions of 174 Eurasian, North-African and American populations, including numerous ancient data as well. The main contribution of this work is the description of the haplogroup distributions of recent and ancient populations as compounds of certain hypothetical ancient core populations that directly or indirectly determined the migration processes in Eurasia for a long time. To identify these core populations, we developed a new iterative algorithm that determines clusters of the 27 mtDNA haplogroups studied having strong rank correlation with each other within a definite subset of the populations. Based on this study, the current Eurasian populations can be considered compounds of three early core populations with regard to maternal lineages. We wanted to show that a simultaneous analysis of ancient and recent data, using a new iterative rank correlation algorithm and the weighted SOC learning technique, may reveal the most important and deterministic migration processes of the past. This technique allowed us to determine geographically, historically and linguistically well-interpretable clusters of our dataset, which has a very specific, hardly classifiable structure. The method was validated using a 2-dimensional stepping stone model. PMID:26142878
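The building block of such an analysis is the rank correlation between haplogroup frequency vectors. A minimal Spearman implementation on invented frequency profiles (the populations and values are hypothetical, and this is only the correlation primitive, not the paper's iterative clustering algorithm):

```python
import numpy as np

def spearman(a, b):
    """Spearman rank correlation: Pearson correlation of the ranks
    (no tie handling, so it assumes distinct values)."""
    ra = np.argsort(np.argsort(a)).astype(float)
    rb = np.argsort(np.argsort(b)).astype(float)
    return np.corrcoef(ra, rb)[0, 1]

# Hypothetical haplogroup frequency vectors for three populations.
pop1 = [0.30, 0.22, 0.16, 0.10, 0.08, 0.14]
pop2 = [0.28, 0.25, 0.14, 0.11, 0.07, 0.15]  # ordering similar to pop1
pop3 = [0.05, 0.10, 0.30, 0.25, 0.20, 0.12]  # ordering largely reversed
print(spearman(pop1, pop2), spearman(pop1, pop3))
```

The iterative algorithm of the paper repeatedly forms such correlations within subsets of populations to grow clusters of co-varying haplogroups.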
Kitsiou, Dimitra; Coccossis, Harry; Karydis, Michael
2002-02-01
Coastal ecosystems are increasingly threatened by short-sighted management policies that focus on human activities rather than the systems that sustain them. The early assessment of the impacts of human activities on the quality of the environment in coastal areas is important for decision-making, particularly in cases of environment/development conflicts, such as environmental degradation and saturation in tourist areas. In the present study, a methodology was developed for the multi-dimensional evaluation and ranking of coastal areas using a set of criteria and based on the combination of multiple criteria choice methods and Geographical Information Systems (GIS). The northeastern part of the island of Rhodes in the Aegean Sea, Greece was the case study area. A distinction in sub-areas was performed and they were ranked according to socio-economic and environmental parameters. The robustness of the proposed methodology was assessed using different configurations of the initial criteria and reapplication of the process. The advantages and disadvantages, as well as the usefulness of this methodology for comparing the status of coastal areas and evaluating their potential for further development based on various criteria, is further discussed. PMID:11846155
Probability Elicitation Under Severe Time Pressure: A Rank-Based Method.
Jaspersen, Johannes G; Montibeller, Gilberto
2015-07-01
Probability elicitation protocols are used to assess and incorporate subjective probabilities in risk and decision analysis. While most of these protocols use methods that have focused on the precision of the elicited probabilities, the speed of the elicitation process has often been neglected. However, speed is also important, particularly when experts need to examine a large number of events on a recurrent basis. Furthermore, most existing elicitation methods are numerical in nature, but there are various reasons why an expert would refuse to give such precise ratio-scale estimates, even if highly numerate. This may occur, for instance, when there is lack of sufficient hard evidence, when assessing very uncertain events (such as emergent threats), or when dealing with politicized topics (such as terrorism or disease outbreaks). In this article, we adopt an ordinal ranking approach from multicriteria decision analysis to provide a fast and nonnumerical probability elicitation process. Probabilities are subsequently approximated from the ranking by an algorithm based on the principle of maximum entropy, a rule compatible with the ordinal information provided by the expert. The method can elicit probabilities for a wide range of different event types, including new ways of eliciting probabilities for stochastically independent events and low-probability events. We use a Monte Carlo simulation to test the accuracy of the approximated probabilities and try the method in practice, applying it to a real-world risk analysis recently conducted for DEFRA (the U.K. Department for the Environment, Farming and Rural Affairs): the prioritization of animal health threats. PMID:25850859
Site rank is formulated for ranking the relative hazard of contamination sources and vulnerability of drinking water wells. Site rank can be used with a variety of groundwater flow and transport models.
NASA Astrophysics Data System (ADS)
Mueen, Zeina; Ramli, Razamin; Zaibidi, Nerda Zura
2016-08-01
In this paper, we propose a procedure to find different performance measurements in crisp-value terms for a new single fuzzy queue FM/F(H1,H2)/1 with two classes, where the arrival rate and service rates are all fuzzy numbers represented by triangular and trapezoidal fuzzy numbers. The basic idea is to obtain exact crisp values from the fuzzy values, which is more realistic for practical queueing systems. This is done by adopting the left and right ranking method to remove the fuzziness before computing the performance measurements using conventional queueing theory. The main advantage of this approach is its simplicity of application, giving exact real data around fuzzy values. This approach can also be used in other types of queueing systems by taking two types of symmetrical linear membership functions. A numerical illustration is solved in this article to obtain two groups of crisp values for the queueing system under consideration.
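The defuzzify-then-solve pattern can be sketched with a deliberately simple ranking index: score each fuzzy number by the average of its defining points, then plug the crisp rates into standard queueing formulas. This average-of-vertices score is an illustrative stand-in, not the left-and-right ranking method of the paper, and the rates are invented:

```python
def crisp_score(fn):
    """Average of the defining points of a triangular (a, b, c) or
    trapezoidal (a, b, c, d) fuzzy number - a simple ranking index
    used here purely for illustration."""
    return sum(fn) / len(fn)

lam = crisp_score((2.0, 3.0, 4.0))      # fuzzy arrival rate (triangular)
mu = crisp_score((5.0, 6.0, 7.0, 8.0))  # fuzzy service rate (trapezoidal)
rho = lam / mu                          # utilisation from the crisp rates
Lq = rho**2 / (1 - rho)                 # classical M/M/1 mean queue length
print(rho, Lq)
```

Whatever ranking index is chosen, the point is the same: once the fuzziness is removed, the conventional closed-form queueing results apply directly.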
Najafpanah, Mohammad Javad; Sadeghi, Mostafa; Bakhtiarizadeh, Mohammad Reza
2013-01-01
Identification of reference genes with stable levels of gene expression is an important prerequisite for obtaining reliable results in the analysis of gene expression data using quantitative real-time PCR (RT-qPCR). Since the underlying assumption for reference genes is that they are expressed at the same level in all sample types, in this study we evaluated the expression stability of the nine most commonly used endogenous controls (GAPDH, ACTB, 18S rRNA, RPS18, HSP-90, ALAS, HMBS, ACAC, and B2M) in four different tissues of the domestic goat, Capra hircus, including liver, visceral fat, subcutaneous fat and longissimus muscle, across different experimental treatments (a standard diet prepared using the NRC computer software as control, and the same diet plus one mg chromium/day). We used six different software programs for ranking the reference genes and found that the individual rankings of the genes differed among them. Additionally, there was a significant difference in the ranking patterns of the studied genes among different tissues. A rank aggregation method was applied to combine the ranking lists of the six programs into a consensus ranking. Our results revealed that HSP-90 was nearly always among the two most stable genes in all studied tissues. Therefore, it is recommended for accurate normalization of RT-qPCR data in goats, while GAPDH, ACTB, and RPS18 showed the most variable expression and should be avoided as reference genes. PMID:24358246
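Combining several programs' rankings into one consensus list can be sketched with the simplest aggregation rule, ordering by mean rank (a Borda-style scheme; the paper uses a more formal rank aggregation method, and the example rankings below are invented):

```python
def aggregate_ranks(rankings):
    """Consensus ordering by mean position across ranking lists
    (most stable first). All lists must contain the same genes."""
    genes = rankings[0]
    mean_rank = {g: sum(r.index(g) for r in rankings) / len(rankings)
                 for g in genes}
    return sorted(genes, key=mean_rank.get)

# Hypothetical stability rankings from three programs (most stable first).
r1 = ["HSP-90", "HMBS", "B2M", "GAPDH"]
r2 = ["HMBS", "HSP-90", "B2M", "GAPDH"]
r3 = ["HSP-90", "B2M", "HMBS", "GAPDH"]
print(aggregate_ranks([r1, r2, r3]))
```

A gene that is near the top in most lists, as HSP-90 was in the study, ends up at the top of the consensus even when no two programs agree exactly.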
A finite field method for calculating molecular polarizability tensors for arbitrary multipole rank.
Elking, Dennis M; Perera, Lalith; Duke, Robert; Darden, Thomas; Pedersen, Lee G
2011-11-30
A finite field method for calculating spherical tensor molecular polarizability tensors α(lm;l'm') = ∂Δ(lm)/∂ϕ(l'm')* by numerical derivatives of induced molecular multipole Δ(lm) with respect to gradients of electrostatic potential ϕ(l'm')* is described for arbitrary multipole ranks l and l'. Interconversion formulae for transforming multipole moments and polarizability tensors between spherical and traceless Cartesian tensor conventions are derived. As an example, molecular polarizability tensors up to the hexadecapole-hexadecapole level are calculated for water using the following ab initio methods: Hartree-Fock (HF), Becke three-parameter Lee-Yang-Parr exchange-correlation functional (B3LYP), Møller-Plesset perturbation theory up to second order (MP2), and Coupled Cluster theory with single and double excitations (CCSD). In addition, intermolecular electrostatic and polarization energies calculated by molecular multipoles and polarizability tensors are compared with ab initio reference values calculated by the Reduced Variation Space method for several randomly oriented small molecule dimers separated by a large distance. It is discussed how higher order molecular polarizability tensors can be used as a tool for testing and developing new polarization models for future force fields. PMID:21915883
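The essence of a finite field calculation, estimating a polarizability as a numerical derivative of an induced moment with respect to an applied field, can be shown in one dimension. The model dipole function below is entirely invented (it is not an ab initio result), and only the lowest-rank dipole-dipole case is sketched:

```python
def dipole(E, alpha_true=1.5, mu0=0.2):
    """Toy induced dipole of a molecule in a uniform field E:
    permanent moment + linear response + a small cubic term."""
    return mu0 + alpha_true * E + 0.01 * E**3

def polarizability(f, h=1e-3):
    """Finite field estimate: alpha = d(mu)/dE by central differences,
    which cancels the even-order terms."""
    return (f(h) - f(-h)) / (2 * h)

print(polarizability(dipole))  # ~1.5; the cubic term contributes only O(h^2)
```

The paper's spherical-tensor formulation generalizes this to derivatives of multipoles of arbitrary rank l with respect to potential gradients of rank l'.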
A communication-avoiding, hybrid-parallel, rank-revealing orthogonalization method.
Hoemmen, Mark
2010-11-01
Orthogonalization consumes much of the run time of many iterative methods for solving sparse linear systems and eigenvalue problems. Commonly used algorithms, such as variants of Gram-Schmidt or Householder QR, have performance dominated by communication. Here, 'communication' includes both data movement between the CPU and memory, and messages between processors in parallel. Our Tall Skinny QR (TSQR) family of algorithms requires asymptotically fewer messages between processors and data movement between CPU and memory than typical orthogonalization methods, yet achieves the same accuracy as Householder QR factorization. Furthermore, in block orthogonalizations, TSQR is faster and more accurate than existing approaches for orthogonalizing the vectors within each block ('normalization'). TSQR's rank-revealing capability also makes it useful for detecting deflation in block iterative methods, for which existing approaches sacrifice performance, accuracy, or both. We have implemented a version of TSQR that exploits both distributed-memory and shared-memory parallelism, and supports real and complex arithmetic. Our implementation is optimized for the case of orthogonalizing a small number (5-20) of very long vectors. The shared-memory parallel component uses Intel's Threading Building Blocks, though its modular design supports other shared-memory programming models as well, including computation on the GPU. Our implementation achieves speedups of 2 times or more over competing orthogonalizations. It is available now in the development branch of the Trilinos software package, and will be included in the 10.8 release.
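The core TSQR reduction can be sketched sequentially. This is a minimal single-level illustration with a hypothetical block count; the actual implementation distributes blocks across processors, uses a reduction tree, and supports complex arithmetic.

```python
# A minimal, sequential sketch of the TSQR reduction described above:
# factor a tall-skinny matrix by QR-ing row blocks independently, then
# QR-ing the stacked R factors. In the parallel algorithm each block
# lives on a different processor; here the "tree" has a single level.
import numpy as np

def tsqr_r(A, n_blocks=4):
    """Return the R factor of a tall-skinny A via block-wise QR."""
    blocks = np.array_split(A, n_blocks, axis=0)
    # Local QR on each block; keep only the small R factors.
    local_rs = [np.linalg.qr(block)[1] for block in blocks]
    # One reduction step: QR of the stacked local R factors.
    _, R = np.linalg.qr(np.vstack(local_rs))
    return R

rng = np.random.default_rng(0)
A = rng.standard_normal((1000, 8))      # very tall, very skinny
R = tsqr_r(A)
# R^T R must equal A^T A (Q is orthonormal), same as Householder QR.
print(np.allclose(R.T @ R, A.T @ A))    # True
```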
A Spatial Overlay Ranking Method for a Geospatial Search of Text Objects
Lanfear, Kenneth J.
2006-01-01
Earth-science researchers need the capability to find relevant information by location and topic. Conventional geographic techniques that simply check whether polygons intersect can efficiently achieve high recall on location, but cannot achieve the precision needed to rank results in likely order of importance to the reader. A spatial overlay ranking based upon how well an object's footprint matches the search area provides a more effective way to spatially search a collection of reports, and avoids many of the problems associated with an 'in/out' (true/false) Boolean search. Moreover, spatial overlay ranking appears to work well even when spatial extent is defined only by a simple bounding box.
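A footprint-match score of this kind can be sketched with bounding boxes. The Jaccard-style intersection-over-union score below is one reasonable choice, not necessarily the article's exact formula.

```python
def overlay_score(search, footprint):
    """Rank score in [0, 1] for how well a document footprint matches the
    search area; both boxes are (xmin, ymin, xmax, ymax). The
    intersection-over-union formula is an illustrative assumption."""
    ix = max(0.0, min(search[2], footprint[2]) - max(search[0], footprint[0]))
    iy = max(0.0, min(search[3], footprint[3]) - max(search[1], footprint[1]))
    inter = ix * iy
    area = lambda b: (b[2] - b[0]) * (b[3] - b[1])
    union = area(search) + area(footprint) - inter
    return inter / union if union else 0.0

search = (0, 0, 10, 10)
print(overlay_score(search, (0, 0, 10, 10)))    # 1.0, exact match ranks first
print(overlay_score(search, (0, 0, 5, 5)))      # 0.25, partial overlap
print(overlay_score(search, (20, 20, 30, 30)))  # 0.0, disjoint
```

Unlike an in/out intersection test, the score orders partially overlapping footprints, so results can be ranked rather than merely filtered.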
A Z-number-based decision making procedure with ranking fuzzy numbers method
NASA Astrophysics Data System (ADS)
Mohamad, Daud; Shaharani, Saidatull Akma; Kamis, Nor Hanimah
2014-12-01
The theory of fuzzy sets has been in the limelight of various applications in decision making problems due to its usefulness in portraying human perception and subjectivity. Generally, the evaluation in the decision making process is represented in the form of linguistic terms, and the calculation is performed using fuzzy numbers. In 2011, Zadeh extended this concept by presenting the idea of the Z-number, a 2-tuple of fuzzy numbers that describes both the restriction and the reliability of the evaluation. The element of reliability in the evaluation is essential, as it affects the final result. Since this concept is still relatively new, methods that incorporate reliability for solving decision making problems remain scarce. In this paper, a decision making procedure based on Z-numbers is proposed. Due to the limitations of its basic properties, a Z-number is first transformed into a fuzzy number for simpler calculation. A method of ranking fuzzy numbers is then used to prioritize the alternatives. A risk analysis problem is presented to illustrate the effectiveness of the proposed procedure.
Global adaptive rank truncated product method for gene-set analysis in association studies.
Vilor-Tejedor, Natalia; Calle, M Luz
2014-08-01
Gene set analysis (GSA) aims to assess the overall association of a set of genetic variants with a phenotype and has the potential to detect subtle effects of variants in a gene or a pathway that might be missed when assessed individually. We present a new implementation of the Adaptive Rank Truncated Product method (ARTP) for analyzing the association of a set of Single Nucleotide Polymorphisms (SNPs) in a gene or pathway. The new implementation, referred to as globalARTP, improves the original one by allowing the different SNPs in the set to have different modes of inheritance. We perform a simulation study for exploring the power of the proposed methodology in a set of scenarios with different numbers of causal SNPs with different effect sizes. Moreover, we show the advantage of using the gene set approach in the context of an Alzheimer's disease case-control study where we explore the endocytosis pathway. The new method is implemented in the R function globalARTP of the globalGSA package available at http://cran.r-project.org. PMID:25082012
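The two-layer structure of the adaptive rank truncated product test can be sketched as follows. This is a simplified illustration: null p-values are simulated as independent uniforms rather than obtained by permuting phenotypes (so SNP correlation is ignored), the globalARTP extension to multiple modes of inheritance is not modeled, and the truncation points and inputs are hypothetical.

```python
import math
import random

def rtp_stats(pvals, ks):
    """Rank truncated product statistics: sum of logs of the k smallest
    p-values for each truncation point k (smaller = more significant)."""
    logs = sorted(math.log(p) for p in pvals)
    return [sum(logs[:k]) for k in ks]

def artp_pvalue(observed, ks=(1, 5, 10), n_null=500, seed=0):
    """Simplified ARTP sketch: a first layer gives a p-value per truncation
    point; a second layer corrects for adaptively taking the minimum."""
    rng = random.Random(seed)
    m = len(observed)
    ks = [k for k in ks if k <= m]
    # Null replicates (real ARTP builds these by permutation):
    null = [rtp_stats([rng.random() for _ in range(m)], ks)
            for _ in range(n_null)]

    def per_k_pvalues(stats):
        return [(1 + sum(n[i] <= stats[i] for n in null)) / (1 + n_null)
                for i in range(len(ks))]

    obs_min = min(per_k_pvalues(rtp_stats(observed, ks)))
    null_mins = [min(per_k_pvalues(n)) for n in null]
    return (1 + sum(nm <= obs_min for nm in null_mins)) / (1 + n_null)

signal = [1e-4] * 5 + [0.5] * 15   # five strongly associated SNPs
noise = [0.5] * 20                 # no association
print(artp_pvalue(signal) < 0.05, artp_pvalue(noise) > 0.5)
```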
RAMPART (TM): Risk Assessment Method-Property Analysis and Ranking Tool v.4.0
2007-09-30
RAMPART™, Risk Assessment Method-Property Analysis and Ranking Tool, is a new type of computer software package for the assessment of risk to buildings. RAMPART™ has been developed by Sandia National Laboratories (SNL) for the U.S. General Services Administration (GSA). RAMPART™ has been designed and developed to be a risk-based decision support tool that requires no risk analysis expertise on the part of the user. The RAMPART™ user interface elicits information from the user about the building. The RAMPART™ expert system is a set of rules that embodies GSA corporate knowledge and SNL's risk assessment experience. The RAMPART™ database contains both data entered by the user during a building analysis session and large sets of natural hazard and crime data. RAMPART™ algorithms use these data to assess the risk associated with a given building in the face of certain hazards. Risks arising from five natural hazards (earthquake, hurricane, winter storm, tornado, and flood); crime (inside and outside the building); fire; and terrorism are calculated. These hazards may cause losses of various kinds. RAMPART™ considers death, injury, loss of mission, loss of property, loss of contents, loss of building use, and first-responder loss. The results of each analysis are presented graphically on the screen and in a written report.
Reduced-rank approximations to the far-field transform in the gridded fast multipole method
NASA Astrophysics Data System (ADS)
Hesford, Andrew J.; Waag, Robert C.
2011-05-01
The fast multipole method (FMM) has been shown to have a reduced computational dependence on the size of finest-level groups of elements when the elements are positioned on a regular grid and FFT convolution is used to represent neighboring interactions. However, transformations between plane-wave expansions used for FMM interactions and pressure distributions used for neighboring interactions remain significant contributors to the cost of FMM computations when finest-level groups are large. The transformation operators, which are forward and inverse Fourier transforms with the wave space confined to the unit sphere, are smooth and well approximated using reduced-rank decompositions that further reduce the computational dependence of the FMM on finest-level group size. The adaptive cross approximation (ACA) is selected to represent the forward and adjoint far-field transformation operators required by the FMM. However, the actual error of the ACA is found to be greater than that predicted using traditional estimates, and the ACA generally performs worse than the approximation resulting from a truncated singular-value decomposition (SVD). To overcome these issues while avoiding the cost of a full-scale SVD, the ACA is employed with more stringent accuracy demands and recompressed using a reduced, truncated SVD. The results show a greatly reduced approximation error that performs comparably to the full-scale truncated SVD without degrading the asymptotic computational efficiency associated with ACA matrix assembly.
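The recompression step described above, truncating an oversized low-rank factorization with an SVD of a small core matrix, can be sketched as follows. The factor sizes and tolerance are illustrative; the real application would apply this to factors produced by the ACA.

```python
import numpy as np

def recompress(U, V, tol=1e-10):
    """Recompress a low-rank product U @ V (as produced by, e.g., ACA)
    using a truncated SVD of the small core, without ever forming or
    decomposing the full matrix. Returns factors of reduced rank."""
    Qu, Ru = np.linalg.qr(U)             # U = Qu @ Ru
    Qv, Rv = np.linalg.qr(V.T)           # V = Rv.T @ Qv.T
    W, s, Zt = np.linalg.svd(Ru @ Rv.T)  # small k-by-k SVD
    r = int((s > tol * s[0]).sum())      # drop negligible singular values
    return Qu @ (W[:, :r] * s[:r]), Zt[:r] @ Qv.T

rng = np.random.default_rng(1)
# Rank-6 factors that actually represent a rank-3 matrix:
U = rng.standard_normal((50, 3)) @ rng.standard_normal((3, 6))
V = rng.standard_normal((6, 40))
U2, V2 = recompress(U, V)
print(U2.shape[1], np.allclose(U2 @ V2, U @ V))  # 3 True
```

The cost is dominated by two thin QR factorizations and one SVD of a k-by-k core, so the asymptotic efficiency of ACA assembly is preserved, which is the point of the recompression strategy in the abstract.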
Zeng, Xiang-tian; Li, Deng-feng; Yu, Gao-feng
2014-01-01
The aim of this paper is to develop a method for ranking trapezoidal intuitionistic fuzzy numbers (TrIFNs) in the process of decision making in the intuitionistic fuzzy environment. Firstly, the concept of TrIFNs is introduced. Arithmetic operations and cut sets over TrIFNs are investigated. Then, the values and ambiguities of the membership degree and the nonmembership degree for TrIFNs are defined as well as the value-index and ambiguity-index. Finally, a value and ambiguity-based ranking method is developed and applied to solve multiattribute decision making problems in which the ratings of alternatives on attributes are expressed using TrIFNs. A numerical example is examined to demonstrate the implementation process and applicability of the method proposed in this paper. Furthermore, comparison analysis of the proposed method is conducted to show its advantages over other similar methods. PMID:25147854
NASA Astrophysics Data System (ADS)
Carrier, Pierre; Tang, Jok M.; Saad, Yousef; Freericks, James K.
Inhomogeneous dynamical mean-field theory has been employed to solve many interesting strongly interacting problems, from transport in multilayered devices to the properties of ultracold atoms in a trap. The main computational step, especially for large systems, is calculating the inverse of a large sparse matrix to solve Dyson's equation and determine the local Green's function at each lattice site from the corresponding local self-energy. We present a new efficient algorithm, the Lanczos-based low-rank algorithm, for the calculation of the inverse of a large sparse matrix which yields this local (imaginary time) Green's function. The Lanczos-based low-rank algorithm is based on a domain decomposition viewpoint, but avoids explicit calculation of Schur complements and relies instead on low-rank matrix approximations derived from the Lanczos algorithm for solving the Dyson equation. We report at least a 25-fold improvement in performance compared to explicit decomposition (such as sparse LU) of the matrix inverse. We also report that the scaling with matrix size of the low-rank correction method on the one hand, and of domain decomposition methods on the other, is comparable.
NASA Technical Reports Server (NTRS)
Straeter, T. A.
1972-01-01
The Davidon-Broyden class of rank one, quasi-Newton minimization methods is extended from Euclidean spaces to infinite-dimensional, real Hilbert spaces. For several techniques of choosing the step size, conditions are found which assure convergence of the associated iterates to the location of the minimum of a positive definite quadratic functional. For those techniques, convergence is achieved without the problem of the computation of a one-dimensional minimum at each iteration. The application of this class of minimization methods for the direct computation of the solution of an optimal control problem is outlined. The performance of various members of the class are compared by solving a sample optimal control problem. Finally, the sample problem is solved by other known gradient methods, and the results are compared with those obtained with the rank one quasi-Newton methods.
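A finite-dimensional sketch of a rank-one quasi-Newton iteration on a positive definite quadratic is below. The symmetric rank-one (SR1) update with unit steps used here is one representative of the Davidon-Broyden class; the paper's setting is an infinite-dimensional Hilbert space and considers several step-size rules, none of which requires a one-dimensional minimization.

```python
import numpy as np

def sr1_minimize(A, b, x0):
    """Rank-one (SR1) quasi-Newton minimization of the positive definite
    quadratic f(x) = 0.5 x^T A x - b^T x, using unit steps rather than a
    line search. On an n-dimensional quadratic the iterates reach the
    minimizer in at most n + 1 steps (barring skipped updates)."""
    x = np.asarray(x0, dtype=float)
    H = np.eye(len(x))                  # approximation to A^{-1}
    for _ in range(len(x) + 1):
        g = A @ x - b                   # gradient
        s = -H @ g                      # quasi-Newton step, no line search
        x = x + s
        y = (A @ x - b) - g             # change in gradient
        v = s - H @ y
        denom = v @ y
        if abs(denom) > 1e-12:          # skip update when ill-defined
            H += np.outer(v, v) / denom # symmetric rank-one correction
    return x

A = np.array([[2.0, 0.0], [0.0, 4.0]])
b = np.array([2.0, 4.0])
x = sr1_minimize(A, b, np.zeros(2))
print(np.allclose(x, np.linalg.solve(A, b)))  # True
```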
NASA Astrophysics Data System (ADS)
Gershenson, Carlos
Studies of rank distributions have been popular for decades, especially since the work of Zipf. For example, if we rank words of a given language by use frequency (the most used word in English is 'the', rank 1; the second most common word is 'of', rank 2), the distribution can be approximated roughly with a power law. The same applies to cities (the most populated city in a country ranks first), earthquakes, metabolism, the Internet, and dozens of other phenomena. We recently proposed "rank diversity" to measure how ranks change in time, using the Google Books Ngram dataset. Studying six languages between 1800 and 2009, we found that the rank diversity curves of languages are universal, fitted by a sigmoid on a log-normal scale. We are studying several other datasets (sports, economies, social systems, urban systems, earthquakes, artificial life). Rank diversity seems to be universal, independently of the shape of the rank distribution. I will present our work in progress towards a general description of the features of rank change in time, along with simple models that reproduce it.
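The rank-frequency power law mentioned above is easy to illustrate: if f(r) = C / r^a, then log f is linear in log r with slope -a, so the Zipf exponent can be read off a least-squares fit. The synthetic frequencies below are illustrative.

```python
import math

def powerlaw_exponent(frequencies):
    """Estimate the Zipf exponent a from frequencies sorted in rank order
    (rank 1 = most frequent) via the least-squares slope of log-frequency
    against log-rank; the fitted slope is -a."""
    xs = [math.log(r) for r in range(1, len(frequencies) + 1)]
    ys = [math.log(f) for f in frequencies]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return -slope

# Synthetic Zipfian frequencies with exponent 1 (f = 1000 / rank):
freqs = [1000 / r for r in range(1, 101)]
print(round(powerlaw_exponent(freqs), 6))  # 1.0
```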
ERIC Educational Resources Information Center
Church, Lewis
2010-01-01
This dissertation answers three research questions: (1) What are the characteristics of a combinatoric measure, based on the Average Search Length (ASL), that performs the same as a probabilistic version of the ASL?; (2) Does the combinatoric ASL measure produce the same performance result as the one that is obtained by ranking a collection of…
Maestri, Matthew; Odel, Jeffrey; Hegdé, Jay
2014-01-01
For scientific, clinical, and machine learning purposes alike, it is desirable to quantify the verbal reports of high-level visual percepts. Methods to do this simply do not exist at present. Here we propose a novel methodological principle to help fill this gap, and provide empirical evidence designed to serve as the initial “proof” of this principle. In the proposed method, subjects view images of real-world scenes and describe, in their own words, what they saw. The verbal description is independently evaluated by several evaluators. Each evaluator assigns a rank score to the subject’s description of each visual object in each image using a novel ranking principle, which takes advantage of the well-known fact that semantic descriptions of real life objects and scenes can usually be rank-ordered. Thus, for instance, “animal,” “dog,” and “retriever” can be regarded as increasingly finer-level, and therefore higher ranking, descriptions of a given object. These numeric scores can preserve the richness of the original verbal description, and can be subsequently evaluated using conventional statistical procedures. We describe an exemplar implementation of this method and empirical data that show its feasibility. With appropriate future standardization and validation, this novel method can serve as an important tool to help quantify the subjective experience of the visual world. In addition to being a novel, potentially powerful testing tool, our method also represents, to our knowledge, the only available method for numerically representing verbal accounts of real-world experience. Given that it requires only a verbal description and the ground truth that elicited that description, our method has a wide variety of potential real-world applications. PMID:24624102
Estimation of a monotone percentile residual life function under random censorship.
Franco-Pereira, Alba M; de Uña-Álvarez, Jacobo
2013-01-01
In this paper, we introduce a new estimator of a percentile residual life function with censored data under a monotonicity constraint. Specifically, it is assumed that the percentile residual life is a decreasing function. This assumption is useful when estimating the percentile residual life of units which degenerate with age. We establish a law of the iterated logarithm for the proposed estimator, and its √n-equivalence to the unrestricted estimator. The asymptotic normal distribution of the estimator and its strong approximation to a Gaussian process are also established. We investigate the finite sample performance of the monotone estimator in an extensive simulation study. Finally, data from a clinical trial in primary biliary cirrhosis of the liver are analyzed with the proposed methods. One of the conclusions of our work is that the restricted estimator may be much more efficient than the unrestricted one. PMID:23225621
A Comparison of Growth Percentile and Value-Added Models of Teacher Performance. Working Paper #39
ERIC Educational Resources Information Center
Guarino, Cassandra M.; Reckase, Mark D.; Stacy, Brian W.; Wooldridge, Jeffrey M.
2014-01-01
School districts and state departments of education frequently must choose among a variety of methods for estimating teacher quality. This paper examines under what circumstances the choice between estimators of teacher quality matters. We examine estimates derived from student growth percentile measures and estimates derived from commonly…
Questioning the method and utility of ranking drug harms in drug policy.
Rolles, Stephen; Measham, Fiona
2011-07-01
In a 2010 Lancet paper, Nutt et al. propose a model for evaluating and ranking drug harms, building on earlier work by incorporating multi-criteria decision analysis. It is argued that problems arise in modelling drug harms using rankable single-figure indices when determinants of harm reflect pharmacology translated through a complex prism of social and behavioural variables, in turn influenced by a range of policy environments. The Delphi methodology used is highly vulnerable to subjective judgements, and even the more robust measures, such as drug-related death and dependence, can be understood as socially constructed. The failure of the model to disaggregate drug use harms from those related to the policy environment is also highlighted. Beyond these methodological challenges, the utility of single-figure index harm rankings is questioned, specifically their role in increasingly redundant legal frameworks utilising a harm-based hierarchy of punitive sanctions. If analysis is to capture the complexity relating to drug using behaviours and environments; the specific personal and social risks for particular using populations; and the broader socio-cultural context of contemporary intoxication, there will need to be acceptance that analysis of the various harm vectors must remain separate: the complexity of such analysis is not something that can or should be overgeneralised to suit political discourse or outdated legal frameworks. PMID:21652195
Birthweight percentiles for twin birth neonates by gestational age in China.
Zhang, Bin; Cao, Zhongqiang; Zhang, Yiming; Yao, Cong; Xiong, Chao; Zhang, Yaqi; Wang, Youjie; Zhou, Aifen
2016-01-01
Localized birthweight references for gestational age serve as an essential tool in the accurate evaluation of atypical birth outcomes. Such references for twin births are currently not available in China. The aim of this study was to construct up-to-date sex-specific birthweight references by gestational age for twin births in China. We conducted a population-based analysis on the data of 22,507 eligible living twin infants with births dated between 8/01/2006 and 8/31/2015 from all 95 hospitals within the Wuhan area. Gestational ages in complete weeks were determined using a combination of last-menstrual-period-based (LMP) estimation and ultrasound examination. Smoothed percentile curves were created by the Lambda Mu Sigma (LMS) method. References for the 3rd, 10th, 25th, 50th, 75th, 90th, and 97th birthweight percentiles by sex and gestational age were made using 11,861 male and 10,646 female twin newborns with gestational ages of 26-42 weeks. Separate birthweight percentile curves for male and female twins were constructed. In summary, our study is the first to present percentile curves of birthweight by gestational age for Chinese twin neonates. Further research is required for the validation and implementation of twin birthweight curves into clinical practice. PMID:27506479
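Once smoothed L, M, and S parameters are available at a gestational age, any percentile of the reference curve follows from a closed-form expression. The LMS values below are made up for illustration; they are not the study's estimates.

```python
from statistics import NormalDist

def lms_value(L, M, S, percentile):
    """Measurement at a given percentile from LMS parameters (L != 0 case):
    X_p = M * (1 + L * S * z_p)**(1/L), where z_p is the standard normal
    quantile. For L == 0 the formula degenerates to M * exp(S * z_p)."""
    z = NormalDist().inv_cdf(percentile / 100)
    return M * (1 + L * S * z) ** (1 / L)

# Hypothetical smoothed LMS parameters for one sex at one gestational week:
L, M, S = -0.3, 2500.0, 0.12
curve = {p: round(lms_value(L, M, S, p), 1)
         for p in (3, 10, 25, 50, 75, 90, 97)}
print(curve[50])  # 2500.0, the median equals M
```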
NASA Astrophysics Data System (ADS)
Quilty, John; Adamowski, Jan; Khalil, Bahaa; Rathinasamy, Maheswaran
2016-03-01
The input variable selection problem has recently garnered much interest in the time series modeling community, especially within water resources applications, demonstrating that information theoretic (nonlinear)-based input variable selection algorithms such as partial mutual information (PMI) selection (PMIS) provide an improved representation of the modeled process when compared to linear alternatives such as partial correlation input selection (PCIS). PMIS is a popular algorithm for water resources modeling problems considering nonlinear input variable selection; however, this method requires the specification of two nonlinear regression models, each with parametric settings that greatly influence the selected input variables. Other attempts to develop input variable selection methods using conditional mutual information (CMI) (an analog to PMI) have been formulated under different parametric pretenses such as k nearest-neighbor (KNN) statistics or kernel density estimates (KDE). In this paper, we introduce a new input variable selection method based on CMI that uses a nonparametric multivariate continuous probability estimator based on Edgeworth approximations (EA). We improve the EA method by considering the uncertainty in the input variable selection procedure by introducing a bootstrap resampling procedure that uses rank statistics to order the selected input sets; we name our proposed method bootstrap rank-ordered CMI (broCMI). We demonstrate the superior performance of broCMI when compared to CMI-based alternatives (EA, KDE, and KNN), PMIS, and PCIS input variable selection algorithms on a set of seven synthetic test problems and a real-world urban water demand (UWD) forecasting experiment in Ottawa, Canada.
Bradshaw, Corey J. A.; Brook, Barry W.
2016-01-01
There are now many methods available to assess the relative citation performance of peer-reviewed journals. Regardless of their individual faults and advantages, citation-based metrics are used by researchers to maximize the citation potential of their articles, and by employers to rank academic track records. The absolute value of any particular index is arguably meaningless unless compared to other journals, and different metrics result in divergent rankings. To provide a simple yet more objective way to rank journals within and among disciplines, we developed a κ-resampled composite journal rank incorporating five popular citation indices: Impact Factor, Immediacy Index, Source-Normalized Impact Per Paper, SCImago Journal Rank and Google 5-year h-index; this approach provides an index of relative rank uncertainty. We applied the approach to six sample sets of scientific journals from Ecology (n = 100 journals), Medicine (n = 100), Multidisciplinary (n = 50); Ecology + Multidisciplinary (n = 25), Obstetrics & Gynaecology (n = 25) and Marine Biology & Fisheries (n = 25). We then cross-compared the κ-resampled ranking for the Ecology + Multidisciplinary journal set to the results of a survey of 188 publishing ecologists who were asked to rank the same journals, and found a 0.68–0.84 Spearman’s ρ correlation between the two rankings datasets. Our composite index approach therefore approximates relative journal reputation, at least for that discipline. Agglomerative and divisive clustering and multi-dimensional scaling techniques applied to the Ecology + Multidisciplinary journal set identified specific clusters of similarly ranked journals, with only Nature & Science separating out from the others. When comparing a selection of journals within or among disciplines, we recommend collecting multiple citation-based metrics for a sample of relevant and realistic journals to calculate the composite rankings and their relative uncertainty windows. PMID:26930052
Kalivas, John H; Héberger, Károly; Andries, Erik
2015-04-15
Most multivariate calibration methods require selection of tuning parameters, such as partial least squares (PLS) or the Tikhonov regularization variant ridge regression (RR). Tuning parameter values determine the direction and magnitude of respective model vectors thereby setting the resultant prediction abilities of the model vectors. Simultaneously, tuning parameter values establish the corresponding bias/variance and the underlying selectivity/sensitivity tradeoffs. Selection of the final tuning parameter is often accomplished through some form of cross-validation and the resultant root mean square error of cross-validation (RMSECV) values are evaluated. However, selection of a "good" tuning parameter with this one model evaluation merit is almost impossible. Including additional model merits assists tuning parameter selection to provide better balanced models as well as allowing for a reasonable comparison between calibration methods. Using multiple merits requires decisions to be made on how to combine and weight the merits into an information criterion. An abundance of options are possible. Presented in this paper is the sum of ranking differences (SRD) to ensemble a collection of model evaluation merits varying across tuning parameters. It is shown that the SRD consensus ranking of model tuning parameters allows automatic selection of the final model, or a collection of models if so desired. Essentially, the user's preference for the degree of balance between bias and variance ultimately decides the merits used in SRD and hence, the tuning parameter values ranked lowest by SRD for automatic selection. The SRD process is also shown to allow simultaneous comparison of different calibration methods for a particular data set in conjunction with tuning parameter selection. Because SRD evaluates consistency across multiple merits, decisions on how to combine and weight merits are avoided. To demonstrate the utility of SRD, a near infrared spectral data set and a
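The SRD computation itself is simple to sketch: each candidate orders the same set of items, and its SRD is the city-block distance between its rank vector and the reference's. The merit values below are hypothetical, ties are ignored for brevity, and the row-wise average is used as the reference, one common choice when no gold standard exists.

```python
def ranks(values):
    """Rank positions (1 = smallest value); ties are not handled here."""
    order = sorted(range(len(values)), key=values.__getitem__)
    r = [0] * len(values)
    for pos, i in enumerate(order, start=1):
        r[i] = pos
    return r

def srd(candidate, reference):
    """Sum of ranking differences between two orderings of the same items;
    0 means the candidate agrees perfectly with the reference."""
    return sum(abs(a - b) for a, b in zip(ranks(candidate), ranks(reference)))

# Hypothetical merit values (e.g., RMSECV-like) over five items for two
# candidate columns, with the row-wise average as the SRD reference:
cand_a = [0.10, 0.30, 0.20, 0.40, 0.15]
cand_b = [0.12, 0.25, 0.28, 0.35, 0.10]
reference = [(a + b) / 2 for a, b in zip(cand_a, cand_b)]
print(srd(cand_a, reference), srd(cand_b, reference))  # 0 4
```

The candidate with the lowest SRD is the most consistent with the consensus, which is what drives the automatic tuning parameter selection described in the abstract.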
Examining the Reliability of Student Growth Percentiles Using Multidimensional IRT
ERIC Educational Resources Information Center
Monroe, Scott; Cai, Li
2015-01-01
Student growth percentiles (SGPs, Betebenner, 2009) are used to locate a student's current score in a conditional distribution based on the student's past scores. Currently, following Betebenner (2009), quantile regression (QR) is most often used operationally to estimate the SGPs. Alternatively, multidimensional item response theory (MIRT) may…
NASA Astrophysics Data System (ADS)
Farhadzadeh, E. M.; Muradaliyev, A. Z.; Farzaliyev, Y. Z.
2015-10-01
A method and an algorithm of ranking of boiler installations based on their technical and economic indicators are proposed. One of the basic conditions for ranking is the independence of technical and economic indicators. The assessment of their interrelation was carried out with respect to the correlation rate. The analysis of calculation data has shown that the interrelation stability with respect to the value and sign persists only for those indicators that have an evident relationship between each other. One of the calculation steps is the normalization of quantitative estimates of technical and economic indicators, which makes it possible to eliminate differences in dimensions and indicator units. The analysis of the known methods of normalization has allowed one to recommend the relative deviation from the average value as a normalized value and to use the arithmetic mean of the normalized values of independent indicators of each boiler installation as an integrated index of performance reliability and profitability. The fundamental differences from the existing approach to assess the "weak components" of a boiler installation and the quality of monitoring of its operating regimes are that the given approach takes into account the reliability and profitability of the operation of all other analogous boiler installations of an electric power station; it also implements competing elements with respect to the quality of control among the operating personnel of separate boiler installations and is aimed at encouraging an increased quality of maintenance and repairs.
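The normalization and integration steps described above can be sketched in a few lines. The abstract does not spell out the exact formula, so the version below (relative deviation from the fleet average, averaged with equal weight and no distinction between cost-type and benefit-type indicators) is an assumption:

```python
def integrated_index(indicators):
    """Sketch of the normalization described above (exact formula assumed):
    each technical and economic indicator is replaced by its relative
    deviation from the average over all boiler installations, and the
    integrated index of an installation is the arithmetic mean of its
    normalized indicator values.

    indicators : dict mapping indicator name -> list of values,
                 one value per boiler installation.
    Returns one integrated index per installation.
    """
    names = list(indicators)
    n_units = len(indicators[names[0]])
    index = []
    for j in range(n_units):
        devs = []
        for name in names:
            vals = indicators[name]
            avg = sum(vals) / len(vals)
            devs.append((vals[j] - avg) / avg)   # relative deviation
        index.append(sum(devs) / len(devs))      # arithmetic mean
    return index
```

Because every indicator is reduced to a dimensionless deviation, differences in units and scales between indicators drop out, which is the point the abstract makes.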
A novel homogenization method for phase field approaches based on partial rank-one relaxation
NASA Astrophysics Data System (ADS)
Mosler, J.; Shchyglo, O.; Montazer Hojjat, H.
2014-08-01
This paper deals with the analysis of homogenization assumptions within phase field theories in a finite strain setting. Such homogenization assumptions define the average bulk's energy within the diffusive interface region where more than one phase co-exist. From a physical point of view, a correct computation of these energies is essential, since they define the driving force of material interfaces between different phases. The three homogenization assumptions considered in this paper are: (a) Voigt/Taylor model, (b) Reuss/Sachs model, and (c) Khachaturyan model. It is shown that these assumptions indeed share some similarities and sometimes lead to the same results. However, they are not equivalent. Only two of them allow the computation of the individual energies of the co-existing phases even within the aforementioned diffusive interface region: the Voigt/Taylor and the Reuss/Sachs model. Such a localization of the averaged energy is important in order to determine and to subsequently interpret the driving force at the interface. Since the Voigt/Taylor and the Reuss/Sachs model are known to be relatively restrictive in terms of kinematics (Voigt/Taylor) and linear momentum (Reuss/Sachs), a novel homogenization approach is advocated. Within a variational setting based on (incremental) energy minimization, the results predicted by the novel approach are bounded by those corresponding to the Voigt/Taylor and the Reuss/Sachs model. The new approach fulfills equilibrium at material interfaces (continuity of the stress vector) and it is kinematically compatible. In sharp contrast to existing approaches, it naturally defines the mismatch energy at incoherent material interfaces. From a mathematical point of view, it can be interpreted as a partial rank-one convexification.
De Lange, Hendrika J; Lahr, Joost; Van der Pol, Joost J C; Faber, Jack H
2010-12-01
Nature development in The Netherlands is often planned on contaminated soils or sediments. This contamination may present a risk for wildlife species desired at those nature development sites and must be assessed by specific risk assessment methods. In a previous study, we developed a method to predict ecological vulnerability in wildlife species by using autecological data and expert judgment; in the current study, this method is further extended to assess ecological vulnerability of food chains and terrestrial and aquatic habitats typical for The Netherlands. The method is applied to six chemicals: Cd, Cu, Zn, dichlorodiphenyltrichloroethane, chlorpyrifos, and ivermectin. The results indicate that species in different food chains differ in vulnerability, with earthworm-based food chains the most vulnerable. Within and between food chains, vulnerability varied with habitat, particularly at low trophic levels. The concept of habitat vulnerability was applied to a case study of four different habitat types in floodplains contaminated with cadmium and zinc along the river Dommel, The Netherlands. The alder floodplain forest habitat contained the most vulnerable species. The differences among habitats were significant for Cd. We further conclude that the method has good potential for application in mapping of habitat vulnerability. PMID:20973107
Parrett, Charles; Hull, J.A.
1986-01-01
Once-monthly streamflow measurements were used to estimate selected percentile discharges on flow-duration curves of monthly mean discharge for 40 ungaged stream sites in the upper Yellowstone River basin in Montana. The estimation technique was a modification of the concurrent-discharge method previously described and used by H.C. Riggs to estimate annual mean discharge. The modified technique is based on the relationship of various mean seasonal discharges to the required discharges on the flow-duration curves. The mean seasonal discharges are estimated from the monthly streamflow measurements, and the percentile discharges are calculated from regression equations. The regression equations, developed from streamflow records at nine gaging stations, indicated a significant log-linear relationship between mean seasonal discharge and various percentile discharges. The technique was tested at two discontinued streamflow-gaging stations; the differences between estimated monthly discharges and those determined from the discharge record ranged from -31 to +27 percent at one site and from -14 to +85 percent at the other. The estimates at one site were unbiased, and the estimates at the other site were consistently larger than the recorded values. Based on the test results, the probable average error of the technique was + or - 30 percent for the 21 sites measured during the first year of the program and + or - 50 percent for the 19 sites measured during the second year. (USGS)
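The regression step lends itself to a short sketch. Assuming a one-predictor log-linear form log10(Qp) = a + b·log10(Qs), fit by ordinary least squares (the published equations use specific seasonal discharges and coefficients not reproduced here):

```python
import math

def fit_log_linear(seasonal_q, percentile_q):
    """Fit log10(Qp) = a + b*log10(Qs) by ordinary least squares, where Qs
    is a mean seasonal discharge and Qp a selected percentile discharge.
    An illustrative stand-in for the report's regression equations."""
    xs = [math.log10(q) for q in seasonal_q]
    ys = [math.log10(q) for q in percentile_q]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    a = my - b * mx
    return a, b

def predict_percentile_q(a, b, seasonal_q):
    """Apply the fitted log-linear relation to an ungaged site's Qs."""
    return 10 ** (a + b * math.log10(seasonal_q))
```

Fitting on gaged stations and applying `predict_percentile_q` to a measurement-based Qs estimate at an ungaged site mirrors the transfer logic of the technique.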
Percentile Distributions of Birth Weight according to Gestational Ages in Korea (2010-2012)
Lee, Jin Kyoung; Jang, Hye Lim; Kang, Byung Ho; Lee, Kyung-Suk; Choi, Yong-Sung; Shim, Kye Shik; Lim, Jae Woo; Bae, Chong-Woo; Chung, Sung-Hoon
2016-06-01
The Pediatric Growth Chart (2007) is used as a standard reference to evaluate weight and height percentiles of Korean children and adolescents. Although several previous studies provided a useful reference range of newborn birth weight (BW) by gestational age (GA), the BW reference analyzed by sex and plurality is not currently available. Therefore, we aimed to establish a national reference range of neonatal BW percentiles considering GA, sex, and plurality of newborns in Korea. The raw data of all newborns (470,171 in 2010, 471,265 in 2011, and 484,550 in 2012) were analyzed. Using the Korean Statistical Information Service data (2010–2012), smoothed percentile curves (3rd–97th) by GA were created using the lambda-mu-sigma method after exclusion and the data were distinguished by all live births, singleton births, and multiple births. In the entire cohort, male newborns were heavier than female newborns and singletons were heavier than twins. As GA increased, the difference in BW between singleton and multiples increased. Compared to the previous data published 10 years ago in Korea, the BW of newborns 22–23 gestational weeks old was increased, whereas that of others was smaller. Other countries' data were also compared and showed differences in BW of both singleton and multiple newborns. We expect this updated data to be utilized as a reference to improve clinical assessments of newborn growth. PMID:27247504
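The lambda-mu-sigma (LMS) smoothing named in the abstract yields, for each gestational age, three parameters from which any centile can be computed. A minimal sketch of that back-transformation (the parameter values in the test are hypothetical, not from this study):

```python
import math
from statistics import NormalDist

def lms_percentile(L, M, S, p):
    """Centile value from LMS (lambda-mu-sigma) parameters:
        X = M * (1 + L*S*z)**(1/L)   for L != 0
        X = M * exp(S*z)             for L == 0
    where z is the standard-normal quantile of the desired centile p (%).
    """
    z = NormalDist().inv_cdf(p / 100.0)
    if abs(L) < 1e-12:
        return M * math.exp(S * z)
    return M * (1.0 + L * S * z) ** (1.0 / L)
```

At p = 50 the formula returns M itself, since the median of the transformed distribution maps back to the mu parameter.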
Bayes and empirical Bayes methods for reduced rank regression models in matched case-control studies
Satagopan, Jaya M; Sen, Ananda; Zhou, Qin; Lan, Qing; Rothman, Nathaniel; Langseth, Hilde; Engel, Lawrence S
2016-06-01
Matched case-control studies are popular designs used in epidemiology for assessing the effects of exposures on binary traits. Modern studies increasingly enjoy the ability to examine a large number of exposures in a comprehensive manner. However, several risk factors often tend to be related in a nontrivial way, undermining efforts to identify the risk factors using standard analytic methods due to inflated type-I errors and possible masking of effects. Epidemiologists often use data reduction techniques by grouping the prognostic factors using a thematic approach, with themes deriving from biological considerations. We propose shrinkage-type estimators based on Bayesian penalization methods to estimate the effects of the risk factors using these themes. The properties of the estimators are examined using extensive simulations. The methodology is illustrated using data from a matched case-control study of polychlorinated biphenyls in relation to the etiology of non-Hodgkin's lymphoma. PMID:26575519
Shin, Saemi; Moon, Hyung-Il; Lee, Kwon Seob; Hong, Mun Ki; Byeon, Sang-Hoon
2014-01-01
This study aimed to devise a method for prioritizing hazardous chemicals for further regulatory action. To accomplish this objective, we chose appropriate indicators and algorithms. Nine indicators from the Globally Harmonized System of Classification and Labeling of Chemicals were used to identify categories to which the authors assigned numerical scores. Exposure indicators included handling volume, distribution, and exposure level. To test the method devised by this study, sixty-two harmful substances controlled by the Occupational Safety and Health Act in Korea, including acrylamide, acrylonitrile, and styrene, were ranked using this proposed method. The correlation coefficients between total score and each indicator ranged from 0.160 to 0.641, and those between total score and hazard indicators ranged from 0.603 to 0.641. The latter were higher than the correlation coefficients between total score and exposure indicators, which ranged from 0.160 to 0.421. Correlations between individual indicators were low (−0.240 to 0.376), except for those between handling volume and distribution (0.613), suggesting that each indicator was not strongly correlated. The low correlations between indicators mean that the indicators are independent and were well chosen for prioritizing harmful chemicals. The method proposed by this study can improve the cost efficiency of chemical management as utilized in occupational regulatory systems. PMID:25419874
Criado, Regino; García, Esther; Pedroche, Francisco; Romance, Miguel
2013-12-01
In this paper, we show a new technique to analyze families of rankings. In particular, we focus on sports rankings and, more precisely, on soccer leagues. We consider that two teams compete when they change their relative positions in consecutive rankings. This allows us to define a graph by linking teams that compete. We show how to use some structural properties of this competitivity graph to measure to what extent the teams in a league compete. These structural properties are the mean degree, the mean strength, and the clustering coefficient. We give a generalization of Kendall's correlation coefficient to more than two rankings. We also show how to make a dynamic analysis of a league and how to compare different leagues. We apply this technique to analyze the four major European soccer leagues: Bundesliga, Italian Lega, Spanish Liga, and Premier League. We compare our results with the classical analysis of sport ranking based on measures of competitive balance. PMID:24387553
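A minimal reading of the competitivity construction can be coded directly: link two teams whenever their relative order flips between consecutive rankings. Edge weighting (mean strength) and the generalized Kendall coefficient from the paper are omitted in this sketch:

```python
from itertools import combinations

def competitivity_edges(rankings):
    """rankings: list of consecutive ranking lists (team names, best first).
    Two teams are linked if they exchange relative positions in at least
    one pair of consecutive rankings."""
    edges = set()
    for prev, curr in zip(rankings, rankings[1:]):
        pos_prev = {t: i for i, t in enumerate(prev)}
        pos_curr = {t: i for i, t in enumerate(curr)}
        for a, b in combinations(prev, 2):
            if (pos_prev[a] < pos_prev[b]) != (pos_curr[a] < pos_curr[b]):
                edges.add(frozenset((a, b)))
    return edges

def mean_degree(edges, teams):
    """Mean degree of the competitivity graph: 2|E| / |V|."""
    return 2 * len(edges) / len(teams)
```

Applied to a season's weekly standings, a larger mean degree indicates more position exchanges, i.e. a more competitive league in the paper's sense.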
The oxygen sensitivity/compatibility ranking of several materials by different test methods
NASA Technical Reports Server (NTRS)
Lockhart, Billy J.; Bryan, Coleman J.; Hampton, Michael D.
1989-01-01
Eleven materials were evaluated for oxygen compatibility using the following test methods: heat of combustion (ASTM D 2015), liquid oxygen impact (ASTM D 2512), pneumatic impact (ASTM G 74), gaseous mechanical impact (ASTM G 86), autogenous ignition temperature by pressurized differential scanning calorimeter, and the determination of the 50 percent reaction level in liquid oxygen using silicon carbide as a reaction enhancer. The eleven materials evaluated were: Teflon TFE, Vespel SP-21, Krytox 240AC, Viton PLV5010B, Fluorel E2160, Kel F 81, Fluorogold, Fluorogreen E-600, Rulon A, Garlock 8573, nylon 6/6.
MRPrimer: a MapReduce-based method for the thorough design of valid and ranked primers for PCR
Kim, Hyerin; Kang, NaNa; Chon, Kang-Wook; Kim, Seonho; Lee, NaHye; Koo, JaeHyung; Kim, Min-Soo
2015-01-01
Primer design is a fundamental technique that is widely used for polymerase chain reaction (PCR). Although many methods have been proposed for primer design, they require a great deal of manual effort to generate feasible and valid primers, including homology tests on off-target sequences using BLAST-like tools. That approach becomes inconvenient when primers are needed for many target sequences, as in quantitative PCR (qPCR), because the same stringent and allele-invariant constraints must be checked for each. To address this issue, we propose an entirely new method called MRPrimer that can design all feasible and valid primer pairs existing in a DNA database at once, while simultaneously checking a multitude of filtering constraints and validating primer specificity. Furthermore, MRPrimer suggests the best primer pair for each target sequence, based on a ranking method. Through qPCR analysis using 343 primer pairs and the corresponding sequencing and comparative analyses, we showed that the primer pairs designed by MRPrimer are very stable and effective for qPCR. In addition, MRPrimer is computationally efficient and scalable and therefore useful for quickly constructing an entire collection of feasible and valid primers for frequently updated databases like RefSeq. Furthermore, we suggest that MRPrimer can be utilized conveniently for experiments requiring primer design, especially real-time qPCR. PMID:26109350
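To make "filtering constraints" concrete, here is an illustrative single-primer filter in the spirit of such pipelines. The thresholds are hypothetical defaults, the melting temperature uses the simple Wallace rule rather than MRPrimer's formula, and MRPrimer itself applies many more checks (pair constraints, homology, specificity):

```python
def primer_ok(seq, min_len=19, max_len=23, min_gc=40.0, max_gc=60.0,
              min_tm=58.0, max_tm=62.0):
    """Check length, GC content (%), and Wallace-rule melting temperature
    (Tm = 2*(A+T) + 4*(G+C), in Celsius) against hypothetical thresholds."""
    seq = seq.upper()
    if not (min_len <= len(seq) <= max_len):
        return False
    gc = 100.0 * (seq.count("G") + seq.count("C")) / len(seq)
    tm = (2 * (seq.count("A") + seq.count("T"))
          + 4 * (seq.count("G") + seq.count("C")))
    return min_gc <= gc <= max_gc and min_tm <= tm <= max_tm
```

In a MapReduce setting, a filter like this is the per-candidate "map" step; candidates that pass are then subjected to the cross-database specificity checks.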
Rasch analysis for the evaluation of rank of student response time in multiple choice examinations.
Thompson, James J; Yang, Tong; Chauvin, Sheila W
2013-01-01
The availability of computerized testing has broadened the scope of person assessment beyond the usual accuracy-ability domain to include response time analyses. Because there are contexts in which speed is important, e.g. medical practice, it is important to develop tools by which individuals can be evaluated for speed. In this paper, the ability of Rasch measurement to convert ordinal nonparametric rankings of speed to measures is examined and compared to similar measures derived from parametric analysis of response times (pace) and semi-parametric logarithmic time-scaling procedures. Assuming that similar spans of the measures were used, non-parametric methods of raw ranking or percentile-ranking of persons by questions gave statistically acceptable person estimates of speed virtually identical to the parametric or semi-parametric methods. Because no assumptions were made about the underlying time distributions with ranking, generality of conclusions was enhanced. The main drawbacks of the non-parametric ranking procedures were the lack of information on question duration and the overall assignment by the model of variance to the person by question interaction. PMID:24064578
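The percentile-ranking of response times mentioned above is a standard transformation; a minimal sketch (with ties sharing the mean rank, one common convention) is:

```python
def percentile_ranks(times):
    """Percentile rank of each examinee's response time on one question.
    Ties share the mean rank of the tied group; output is on a 0-100 scale,
    with larger times receiving larger percentile ranks."""
    n = len(times)
    ranks = []
    for t in times:
        below = sum(1 for u in times if u < t)
        ties = sum(1 for u in times if u == t)
        ranks.append(100.0 * (below + 0.5 * ties) / n)
    return ranks
```

Such within-question percentile ranks discard the time scale entirely, which is exactly why the paper notes that no assumptions about the underlying time distributions are needed, at the cost of losing information on question duration.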
Network tuned multiple rank aggregation and applications to gene ranking
2015-01-01
With the development of various high throughput technologies and analysis methods, researchers can study different aspects of a biological phenomenon simultaneously or one aspect repeatedly with different experimental techniques and analysis methods. The output from each study is a rank list of components of interest. Aggregation of the rank lists of components, such as proteins, genes and single nucleotide variants (SNV), produced by these experiments has been proven to be helpful in both filtering the noise and bringing forth a more complete understanding of the biological problems. Currently available rank aggregation methods do not consider the network information that has been observed to provide vital contributions in many data integration studies. We developed network tuned rank aggregation methods incorporating network information and demonstrated their superior performance over aggregation methods without network information. The methods are tested on predicting the Gene Ontology function of yeast proteins. We validate the methods using combinations of three gene expression data sets and three protein interaction networks as well as an integrated network by combining the three networks. Results show that the aggregated rank lists are more meaningful if a protein interaction network is incorporated. Among the methods compared, CGI_RRA and CGI_Endeavour, which integrate rank lists with networks using CGI [1] followed by rank aggregation using either robust rank aggregation (RRA) [2] or Endeavour [3] perform the best. Finally, we use the methods to locate target genes of transcription factors. PMID:25708095
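For readers unfamiliar with rank aggregation itself, a plain Borda-count baseline (no network information, and far simpler than the CGI_RRA / CGI_Endeavour pipelines evaluated above) can be sketched as:

```python
def borda_aggregate(rank_lists):
    """Aggregate several rank lists (each ordered best-first) by summing
    positions; items missing from a list receive that list's worst score.
    Returns a single consensus list, best-first."""
    items = set().union(*rank_lists)
    scores = {g: 0.0 for g in items}
    for lst in rank_lists:
        pos = {g: i for i, g in enumerate(lst)}
        worst = len(lst)
        for g in items:
            scores[g] += pos.get(g, worst)
    return sorted(items, key=lambda g: scores[g])
```

Network-tuned methods differ by adjusting these per-item scores with evidence propagated from neighbors in the interaction network before the final sort.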
BINORM: A FORTRAN subroutine to calculate the percentiles of a standardized binormal distribution
McCammon, R.B.
1977-01-01
BINORM is a FORTRAN subroutine for calculating the percentiles of a standardized binormal distribution. By using a linear transformation, the percentiles of a binormal distribution can be obtained. The percentiles of a binormal distribution are useful for plotting purposes, for establishing confidence intervals, and for sampling from a mixed population that consists of two normal distributions. © 1977.
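The original FORTRAN parameterization is not reproduced here, but the underlying computation, percentiles of a two-component normal mixture, can be sketched by bisection on the mixture CDF (treat the signature as an assumption):

```python
import math

def binormal_ppf(p, w, m1, s1, m2, s2, tol=1e-10):
    """Inverse CDF (percentile) of the mixture
    w*N(m1, s1^2) + (1-w)*N(m2, s2^2), found by bisection."""
    def cdf(x):
        phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
        return w * phi((x - m1) / s1) + (1.0 - w) * phi((x - m2) / s2)

    # The mixture CDF is monotone, so bisection on a wide bracket converges.
    lo = min(m1 - 10 * s1, m2 - 10 * s2)
    hi = max(m1 + 10 * s1, m2 + 10 * s2)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

The linear-transformation trick mentioned in the abstract then maps percentiles of the standardized mixture to any location and scale.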
SibRank: Signed bipartite network analysis for neighbor-based collaborative ranking
NASA Astrophysics Data System (ADS)
Shams, Bita; Haratizadeh, Saman
2016-09-01
Collaborative ranking is an emerging field of recommender systems that utilizes users' preference data rather than rating values. Unfortunately, neighbor-based collaborative ranking has gained little attention despite its greater flexibility and justifiability. This paper proposes a novel framework, called SibRank, that seeks to improve on state-of-the-art neighbor-based collaborative ranking methods. SibRank represents users' preferences as a signed bipartite network, and finds similar users through a novel personalized ranking algorithm in signed networks.
When Does Rank(ABC) = Rank(AB) + Rank(BC) - Rank(B) Hold?
ERIC Educational Resources Information Center
Tian, Yongge; Styan, George P. H.
2002-01-01
The well-known Frobenius rank inequality established by Frobenius in 1911 states that the rank of the product ABC of three matrices satisfies the inequality rank(ABC) ≥ rank(AB) + rank(BC) - rank(B). A new necessary and sufficient condition for equality to hold is presented and then some interesting consequences and…
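The inequality is easy to probe numerically. A quick check with random matrices (a sanity experiment, not the paper's proof) confirms that the gap rank(ABC) - (rank(AB) + rank(BC) - rank(B)) is never negative:

```python
import numpy as np

def frobenius_gap(A, B, C):
    """Gap rank(ABC) - (rank(AB) + rank(BC) - rank(B)).  By Frobenius's
    inequality this is always >= 0; it is 0 exactly in the equality case
    the article characterizes."""
    r = np.linalg.matrix_rank
    return r(A @ B @ C) - (r(A @ B) + r(B @ C) - r(B))

rng = np.random.default_rng(0)
A, B, C = (rng.standard_normal((4, 4)) for _ in range(3))
```

When B is invertible, rank(AB) = rank(A) and rank(BC) = rank(C), and the gap reduces to rank(AC) - (rank(A) + rank(C) - n), the Sylvester rank inequality gap.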
Ensemble hydrological prediction of streamflow percentile at ungauged basins in Pakistan
NASA Astrophysics Data System (ADS)
Waseem, Muhammad; Ajmal, Muhammad; Kim, Tae-Woong
2015-06-01
Streamflow records with sufficient spatial and temporal coverage at the site of interest are usually scarce in Pakistan. As an alternative, various regional methods have been frequently adopted to derive hydrological information, which in essence attempt to transfer hydrological information from gauged to ungauged catchments. In this study, a new concept of ensemble hydrological prediction (EHP) was introduced, which is an improved regional method for hydrological prediction at ungauged sites. It was mainly based on the performance weights (triple-connection weights (TCW)) derived from Nash-Sutcliffe efficiency (NSE) and hydrological variables (here percentiles) calculated from three traditional regional transfer methods (RTMs) with suitable modification (i.e., three-step drainage area ratio (DAR) method, inverse distance weighting (IDW) method, and three-step regional regression analysis (RRA)). The overall results indicated that the proposed EHP method was robust for estimating hydrological percentiles at ungauged sites as compared to traditional individual RTMs. The comparative study, based on NSE, percent bias (PBIAS) and relative error (RE) as performance criteria, showed that the EHP is a constructive alternative for hydrological prediction at ungauged basins.
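The ensemble step can be sketched simply. The exact triple-connection-weight (TCW) formula is not given in the abstract, so the NSE-proportional weighting below is an assumption made only to illustrate the idea of performance-weighted combination:

```python
def ensemble_prediction(predictions, nse_scores):
    """Combine the percentile estimates of several regional transfer
    methods using weights proportional to their Nash-Sutcliffe
    efficiencies (a hypothetical stand-in for the TCW weights).
    Methods with NSE <= 0 (worse than the mean) get zero weight."""
    weights = [max(nse, 0.0) for nse in nse_scores]
    total = sum(weights)
    if total == 0:
        raise ValueError("no skilled method (all NSE <= 0)")
    return sum(w * p for w, p in zip(weights, predictions)) / total
```

With three RTMs (DAR, IDW, RRA) the call takes three percentile estimates and three NSE values evaluated on gauged validation sites.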
NASA Astrophysics Data System (ADS)
Zamri, Nurnadiah; Abdullah, Lazim
2014-07-01
A linguistic variable is a variable whose values are natural-language phrases, used when a situation is too complex to be described properly in conventional quantitative expressions. However, past research on linguistic variables has used only positive fuzzy numbers to express the meaning of a symbolic word; positive and negative numbers have never been used concurrently in defining linguistic variables. Accordingly, we construct a new positive and negative linguistic variable in interval type-2 fuzzy entropy weight for interval type-2 fuzzy TOPSIS (IT2 FTOPSIS). This paper uses the new linguistic variable in interval type-2 fuzzy entropy weight to address the problem of reducing the number of road accidents, since previously proposed methods did not discuss ranking of the factors associated with road accidents. Specifically, the objective of this paper is to establish rankings of the selected factors associated with road accidents using a new positive and negative linguistic variable and interval type-2 fuzzy entropy weight in interval type-2 fuzzy TOPSIS. It is hoped that this new method can produce an optimal preference ranking of alternatives in accordance with a set of criterion-wise rankings in the selection of causes that lead to road accidents. The proposed method produces actionable results that aid the decision-making process. Moreover, it does not require a complicated computation procedure and will be beneficial to decision analysis.
Physical Fitness Percentiles of German Children Aged 9–12 Years: Findings from a Longitudinal Study
Golle, Kathleen; Muehlbauer, Thomas; Wick, Ditmar; Granacher, Urs
2015-01-01
Background Generating percentile values is helpful for the identification of children with specific fitness characteristics (i.e., low or high fitness level) to set appropriate fitness goals (i.e., fitness/health promotion and/or long-term youth athlete development). Thus, the aim of this longitudinal study was to assess physical fitness development in healthy children aged 9–12 years and to compute sex- and age-specific percentile values. Methods Two-hundred and forty children (88 girls, 152 boys) participated in this study and were tested for their physical fitness. Physical fitness was assessed using the 50-m sprint test (i.e., speed), the 1-kg ball push test, the triple hop test (i.e., upper- and lower- extremity muscular power), the stand-and-reach test (i.e., flexibility), the star run test (i.e., agility), and the 9-min run test (i.e., endurance). Age- and sex-specific percentile values (i.e., P10 to P90) were generated using the Lambda, Mu, and Sigma method. Adjusted (for change in body weight, height, and baseline performance) age- and sex-differences as well as the interactions thereof were expressed by calculating effect sizes (Cohen’s d). Results Significant main effects of Age were detected for all physical fitness tests (d = 0.40–1.34), whereas significant main effects of Sex were found for upper-extremity muscular power (d = 0.55), flexibility (d = 0.81), agility (d = 0.44), and endurance (d = 0.32) only. Further, significant Sex by Age interactions were observed for upper-extremity muscular power (d = 0.36), flexibility (d = 0.61), and agility (d = 0.27) in favor of girls. Both, linear and curvilinear shaped curves were found for percentile values across the fitness tests. Accelerated (curvilinear) improvements were observed for upper-extremity muscular power (boys: 10–11 yrs; girls: 9–11 yrs), agility (boys: 9–10 yrs; girls: 9–11 yrs), and endurance (boys: 9–10 yrs; girls: 9–10 yrs). Tabulated percentiles for the 9-min run test
Pay for Percentile. NBER Working Paper No. 17194
ERIC Educational Resources Information Center
Barlevy, Gadi; Neal, Derek
2011-01-01
We analyze an incentive pay scheme for educators that links educator compensation to the ranks of their students within appropriately defined comparison sets, and we show that under certain conditions this scheme induces teachers to allocate socially optimal levels of effort to all students. Moreover, because this scheme employs only ordinal…
Beyond Low Rank + Sparse: Multiscale Low Rank Matrix Decomposition
NASA Astrophysics Data System (ADS)
Ong, Frank; Lustig, Michael
2016-06-01
Low rank methods allow us to capture globally correlated components within matrices. The recent low rank + sparse decomposition further enables us to extract sparse entries along with the globally correlated components. In this paper, we present a natural generalization and consider the decomposition of matrices into components of multiple scales. Such decomposition is well motivated in practice as data matrices often exhibit local correlations in multiple scales. Concretely, we propose a multi-scale low rank modeling that represents a data matrix as a sum of block-wise low rank matrices with increasing scales of block sizes. We then consider the inverse problem of decomposing the data matrix into its multi-scale low rank components and approach the problem via a convex formulation. Theoretically, we show that under an incoherence condition, the convex program recovers the multi-scale low rank components exactly. Practically, we provide guidance on selecting the regularization parameters and incorporate cycle spinning to reduce blocking artifacts. Experimentally, we show that the multi-scale low rank decomposition provides a more intuitive decomposition than conventional low rank methods and demonstrate its effectiveness in four applications, including illumination normalization for face images, motion separation for surveillance videos, multi-scale modeling of the dynamic contrast enhanced magnetic resonance imaging and collaborative filtering exploiting age information.
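A single scale of the multiscale model above is just a block-wise truncated SVD. The sketch below shows that building block only; the full method sums such terms over several block sizes and recovers the components with the paper's convex program, which is not reproduced here:

```python
import numpy as np

def blockwise_lowrank(X, block, rank):
    """Approximate X by rank-`rank` truncated SVDs of its block x block
    tiles: one scale of a multiscale low rank representation."""
    X = np.asarray(X, dtype=float)
    out = np.zeros_like(X)
    for i in range(0, X.shape[0], block):
        for j in range(0, X.shape[1], block):
            tile = X[i:i + block, j:j + block]
            U, s, Vt = np.linalg.svd(tile, full_matrices=False)
            s[rank:] = 0.0                       # truncate singular values
            out[i:i + block, j:j + block] = (U * s) @ Vt
    return out
```

With block equal to the full matrix size this reduces to the usual global low rank approximation, and with 1x1 blocks it reduces to keeping entries as-is, which is why intermediate block sizes capture local correlations.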
Zhang, Zutao; Li, Yanjun; Wang, Fubing; Meng, Guanjun; Salman, Waleed; Saleem, Layth; Zhang, Xiaoliang; Wang, Chunbai; Hu, Guangdi; Liu, Yugang
2016-01-01
Environmental perception and information processing are two key steps of active safety for vehicle reversing. Single-sensor environmental perception cannot meet the need for vehicle reversing safety due to its low reliability. In this paper, we present a novel multi-sensor environmental perception method using low-rank representation and a particle filter for vehicle reversing safety. The proposed system consists of four main steps, namely multi-sensor environmental perception, information fusion, target recognition and tracking using low-rank representation and a particle filter, and vehicle reversing speed control modules. First of all, the multi-sensor environmental perception module, based on a binocular-camera system and ultrasonic range finders, obtains the distance data for obstacles behind the vehicle when the vehicle is reversing. Secondly, the information fusion algorithm using an adaptive Kalman filter is used to process the data obtained with the multi-sensor environmental perception module, which greatly improves the robustness of the sensors. Then the framework of a particle filter and low-rank representation is used to track the main obstacles. The low-rank representation is used to optimize an objective particle template that has the smallest L-1 norm. Finally, the electronic throttle opening and automatic braking is under control of the proposed vehicle reversing control strategy prior to any potential collisions, making the reversing control safer and more reliable. The final system simulation and practical testing results demonstrate the validity of the proposed multi-sensor environmental perception method using low-rank representation and a particle filter for vehicle reversing safety. PMID:27294931
Louis, Thomas A.; Ruczinski, Ingo
2009-01-01
Simulation-based assessment is a popular and frequently necessary approach to the evaluation of statistical procedures. Sometimes overlooked is the ability to take advantage of underlying mathematical relations, and we focus on this aspect. We show how to take advantage of large-sample theory when conducting a simulation, using the analysis of genomic data as a motivating example. The approach uses convergence results to provide an approximation to smaller-sample results that are available only by simulation. We consider evaluating and comparing a variety of ranking-based methods for identifying the most highly associated SNPs in a genome-wide association study, derive integral equation representations of the pre-posterior distribution of percentiles produced by three ranking methods, and provide examples comparing performance. These results are of interest in their own right and set the framework for a more extensive set of comparisons. PMID:20131327
Paddock, Susan M.; Louis, Thomas A.
2010-01-01
Hierarchical models are widely used to characterize the performance of individual healthcare providers. However, little attention has been devoted to system-wide performance evaluations, the goals of which include identifying extreme (e.g., top 10%) provider performance and developing statistical benchmarks to define high-quality care. Obtaining optimal estimates of these quantities requires estimating the empirical distribution function (EDF) of the provider-specific parameters that generate the dataset under consideration. However, the difficulty of obtaining uncertainty bounds for a squared-error-loss-minimizing EDF estimate has hindered its use in system-wide performance evaluations. We therefore develop and study a percentile-based EDF estimate for univariate provider-specific parameters. We compute order statistics of samples drawn from the posterior distribution of provider-specific parameters to obtain relevant uncertainty assessments of an EDF estimate and its features, such as thresholds and percentiles. We apply our method to data from the Medicare End Stage Renal Disease (ESRD) Program, a health insurance program for people with irreversible kidney failure. We highlight the risk of misclassifying providers as exceptionally good or poor performers when uncertainty in statistical benchmark estimates is ignored. Given the high stakes of performance evaluations, statistical benchmarks should be accompanied by precision estimates. PMID:21918583
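The order-statistic construction described above can be sketched in a few lines. The inputs below are invented posterior draws, not the ESRD data: each draw is a vector of provider-specific parameters, and the q-th order statistic of each draw yields a posterior sample of the EDF's q-th percentile:

```python
import statistics

def edf_percentile_interval(draws, q, alpha=0.05):
    """Point estimate and equal-tailed credible interval for the q-th
    percentile of the provider EDF.

    draws: list of posterior draws, each a list of provider-specific
    parameters (one value per provider)."""
    per_draw = []
    for d in draws:
        s = sorted(d)                               # order statistics
        k = min(len(s) - 1, int(q / 100 * len(s)))  # index of q-th percentile
        per_draw.append(s[k])
    per_draw.sort()
    n = len(per_draw)
    lo = per_draw[int(alpha / 2 * n)]
    hi = per_draw[min(n - 1, int((1 - alpha / 2) * n))]
    return statistics.median(per_draw), lo, hi
```

The spread of `per_draw` is exactly the uncertainty that, as the abstract warns, gets ignored when providers are classified against a point-estimate benchmark.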
Gómez-Campos, Rossana; Lee Andruske, Cinthya; Hespanhol, Jefferson; Sulla Torres, Jose; Arruda, Miguel; Luarte-Rocha, Cristian; Cossio-Bolaños, Marco Antonio
2015-01-01
The measurement of waist circumference (WC) is considered to be an important means to control overweight and obesity in children and adolescents. The objectives of the study were to (a) compare the WC measurements of Chilean students with the international CDC-2012 standard and other international standards, and (b) propose a specific measurement value for the WC of Chilean students based on age and sex. A total of 3892 students (6 to 18 years old) were assessed. Weight, height, body mass index (BMI), and WC were measured. WC was compared with the CDC-2012 international standard. Percentiles were constructed based on the LMS method. Chilean males had a greater WC during infancy. Subsequently, in late adolescence, males showed values lower than those of the international standards. Chilean females demonstrated values similar to the standards until the age of 12. Subsequently, females showed lower values. The 85th and 95th percentiles were adopted as cutoff points for evaluating overweight and obesity based on age and sex. The WC of Chilean students differs from the CDC-2012 curves. The regional norms proposed are a means to identify children and adolescents with a high risk of suffering from overweight and obesity disorders. PMID:26184250
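The LMS method summarizes each age/sex group by a skewness parameter (L), median (M), and coefficient of variation (S); any centile is then recovered with Cole's formula. A small sketch with made-up parameter values (the study's fitted L, M, S tables are not reproduced here); z is the standard normal quantile, about 1.036 for the 85th centile and 1.645 for the 95th:

```python
import math

def lms_centile(L, M, S, z):
    """Centile value from LMS parameters via Cole's formula:
    C = M * (1 + L*S*z)**(1/L), or M * exp(S*z) when L == 0."""
    if abs(L) < 1e-9:
        return M * math.exp(S * z)
    return M * (1 + L * S * z) ** (1.0 / L)
```

With L = 1 the formula reduces to the plain normal case M * (1 + S*z), which gives a quick sanity check.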
Restrepo, Guillermo; Weckert, Monika; Brüggemann, Rainer; Gerstmann, Silke; Frank, Hartmut
2008-04-15
Environmental ranking of refrigerants is needed in many instances. The aim is to assess the relative environmental hazard posed by 40 refrigerants, including those used in the past, those presently used, and some proposed substitutes. Ranking is based upon ozone depletion potential, global warming potential, and atmospheric lifetime, and is achieved by applying the Hasse diagram technique, a mathematical method that allows us to assess order relationships among chemicals. The refrigerants are divided into 13 classes, of which the chlorofluorocarbons, hydrofluorocarbons, hydrochlorofluorocarbons, hydrofluoroethers, and hydrocarbons contain the largest number of single substances. The dominance degree, a method for measuring order relationships among classes, is discussed and applied to the 13 refrigerant classes. The results show that some hydrofluoroethers are as problematic as the hydrofluorocarbons. Hydrocarbons and ammonia are the least problematic refrigerants with respect to the three environmental properties. PMID:18497145
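The Hasse diagram technique orders chemicals by componentwise dominance over all criteria, without aggregating them into one score. A sketch of the dominance test and the resulting cover relations (diagram edges); the three-criterion values below are rough illustrative numbers, not the paper's dataset:

```python
def dominates(a, b):
    """a dominates b if a >= b in every criterion and > in at least one
    (here larger = more hazardous)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def hasse_cover_relations(items):
    """items: dict name -> tuple of criteria (e.g. ODP, GWP, lifetime).
    Return the edges of the Hasse diagram: a covers b when a dominates b
    and no third item lies strictly between them."""
    edges = []
    for a, va in items.items():
        for b, vb in items.items():
            if dominates(va, vb):
                between = any(dominates(va, vc) and dominates(vc, vb)
                              for c, vc in items.items() if c not in (a, b))
                if not between:
                    edges.append((a, b))
    return edges
```

Items whose criteria disagree (one higher ODP, the other higher GWP) simply remain incomparable, which is the point of the partial-order approach.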
NASA Astrophysics Data System (ADS)
Adamowski, J. F.; Quilty, J.; Khalil, B.; Rathinasamy, M.
2014-12-01
This paper explores forecasting short-term urban water demand (UWD) (using only historical records) through a variety of machine learning techniques coupled with a novel input variable selection (IVS) procedure. The proposed IVS technique, termed bootstrap rank-ordered conditional mutual information for real-valued signals (brCMIr), is multivariate, nonlinear, nonparametric, and probabilistic. The brCMIr method was tested in a case study using water demand time series for two urban water supply system pressure zones in Ottawa, Canada, to select the most important historical records for use with each machine learning technique in order to generate forecasts of average and peak UWD for the respective pressure zones at lead times of 1, 3, and 7 days ahead. All lead time forecasts are computed using Artificial Neural Networks (ANN) as the base model and are compared with Least Squares Support Vector Regression (LSSVR), as well as a novel machine learning method for UWD forecasting: the Extreme Learning Machine (ELM). Results from one-way analysis of variance (ANOVA) and Tukey Honest Significant Difference (HSD) tests indicate that the LSSVR and ELM models are the best machine learning techniques to pair with brCMIr. However, ELM has significant computational advantages over LSSVR (and ANN) and provides a new and promising technique to explore in UWD forecasting.
Impact of Doximity Residency Rankings on Emergency Medicine Applicant Rank Lists
Peterson, William J.; Hopson, Laura R.; Khandelwal, Sorabh; White, Melissa; Gallahue, Fiona E.; Burkhardt, John; Rolston, Aimee M.; Santen, Sally A.
2016-01-01
Introduction: This study investigates the impact of the Doximity rankings on the rank list choices made by residency applicants in emergency medicine (EM). Methods: We sent an 11-item survey by email to all students who applied to EM residency programs at four different institutions representing diverse geographical regions. Students were asked questions about their perception of Doximity rankings and how it may have impacted their rank list decisions. Results: The response rate was 58% of 1,372 opened electronic surveys. This study found that a majority of medical students applying to residency in EM were aware of the Doximity rankings prior to submitting rank lists (67%), and one-quarter of these applicants changed the number of programs and the ranks of those programs on their rank list based on the Doximity rankings (26%). Though the absolute number of programs changed on the rank lists was small, the results demonstrate that the EM Doximity rankings impact applicant decision-making in ranking residency programs. Conclusion: While applicants do not find the Doximity rankings to be important compared to other factors in the application process, the rankings result in a small change in residency applicant ranking behavior. This unvalidated ranking, based principally on reputational data rather than objective outcome criteria, thus has the potential to be detrimental to students, programs, and the public. We feel it is important for specialties to develop consensus around measurable training outcomes and provide freely accessible metrics for candidate education. PMID:27330670
NASA Astrophysics Data System (ADS)
Sajjadi, S. Maryam; Abdollahi, Hamid; Rahmanian, Reza; Bagheri, Leila
2016-03-01
A rapid, simple, and inexpensive method using fluorescence spectroscopy coupled with multi-way methods for the determination of aflatoxins B1 and B2 in peanuts has been developed. In this method, the aflatoxins are extracted with a mixture of water and methanol (90:10) and then monitored by fluorescence spectroscopy, producing excitation-emission matrices (EEMs). Although the combination of EEMs and multi-way methods is commonly used to determine analytes in complex chemical systems with unknown interferences, rank overlap in the excitation and emission profiles may restrict the application of this strategy. If there is rank overlap in one mode, several three-way algorithms, such as PARAFAC under some constraints, can resolve this kind of data successfully. However, the analysis of EEM data is impossible when some species have rank overlap in both modes, because the information in the data matrix is then equivalent to zero-order data for those species, which is the case in our study. Aflatoxins B1 and B2 have the same spectral profiles in both the excitation and emission modes, and we propose creating a third-order data set for each sample using the solvent as an additional selectivity mode. These third-order data are, in turn, converted to second-order data by augmentation, which resurrects the second-order advantage of the original EEMs. The three-way data set is constructed by stacking the augmented data in the third way and is then analyzed by two powerful second-order calibration methods (BLLS-RBL and PARAFAC) to quantify the analytes in four kinds of peanut samples. The results of both methods are in good agreement, and reasonable recoveries are obtained.
ERIC Educational Resources Information Center
Colorado Department of Education, 2013
2013-01-01
This report examines the relationship between socioeconomic status, as defined by a free-and-reduced lunch proxy variable, and student growth percentiles by elementary, middle, and high school grade levels for math, reading, and writing. Comparisons were made between median growth percentiles for each educational level by free and reduced lunch…
Sanders, Anthony P; Brannon, Rebecca M
2014-02-01
This research developed a novel test method for evaluating the wear resistance of ceramic materials under severe contact stresses simulating edge loading in prosthetic hip bearings. Simply shaped test specimens (a cylinder and a spheroid) were designed as surrogates for an edge-loaded head/liner implant pair. Equivalency of the simpler specimens was assured in the sense that their theoretical contact dimensions and pressures were identical, according to Hertzian contact theory, to those of the head/liner pair. The surrogates were fabricated in three ceramic materials: Al2O3, zirconia-toughened alumina (ZTA), and ZrO2. They were mated in three different material pairs and reciprocated under a 200 N normal contact force for 1000-2000 cycles, which created small (<1 mm²) wear scars. The three material pairs were ranked by their wear resistance, quantified by the volume of abraded material measured using an interferometer. Similar tests were performed on edge-loaded hip implants in the same material pairs. The surrogates replicated the wear rankings of their full-scale implant counterparts and mimicked their friction force trends. The results show that a proxy test using simple test specimens can validly rank the wear performance of ceramic materials under severe, edge-loading contact stresses, while replicating the beginning stage of edge-loading wear. This simple wear test is therefore potentially useful for screening and ranking new, prospective materials early in their development, to produce optimized candidates for more complicated full-scale hip simulator wear tests. PMID:23996812
Bigus, Paulina; Tsakovski, Stefan; Simeonov, Vasil; Namieśnik, Jacek; Tobiszewski, Marek
2016-05-01
This study presents an application of the Hasse diagram technique (HDT) as an assessment tool to select the most appropriate analytical procedures according to their greenness or the best analytical performance. The dataset consists of analytical procedures for benzo[a]pyrene determination in sediment samples, described by 11 variables concerning their greenness and analytical performance. Two analyses with the HDT were performed: the first with metrological variables and the second with "green" variables as input data. The two HDT analyses ranked different analytical procedures as the most valuable, suggesting that green analytical chemistry is not in accordance with metrology when benzo[a]pyrene in sediment samples is determined. The HDT can be used as a good decision-support tool to choose the proper analytical procedure with respect to green analytical chemistry principles and analytical performance merits. PMID:27038058
TripleRank: Ranking Semantic Web Data by Tensor Decomposition
NASA Astrophysics Data System (ADS)
Franz, Thomas; Schultz, Antje; Sizov, Sergej; Staab, Steffen
The Semantic Web fosters novel applications targeting a more efficient and satisfying exploitation of the data available on the web, e.g. faceted browsing of linked open data. Large amounts and high diversity of knowledge in the Semantic Web pose the challenging question of appropriate relevance ranking for producing fine-grained and rich descriptions of the available data, e.g. to guide the user along the most promising knowledge aspects. Existing methods for graph-based authority ranking lack support for fine-grained latent coherence between resources and predicates (i.e. support for link semantics in the linked data model). In this paper, we present TripleRank, a novel approach for faceted authority ranking in the context of RDF knowledge bases. TripleRank captures the additional latent semantics of Semantic Web data by means of statistical methods in order to produce richer descriptions of the available data. We model the Semantic Web by a 3-dimensional tensor that enables the seamless representation of arbitrary semantic links. For the analysis of that model, we apply the PARAFAC decomposition, which can be seen as a multi-modal counterpart to Web authority ranking with HITS. The results are groupings of resources and predicates that characterize their authority and navigational (hub) properties with respect to identified topics. We have applied TripleRank to multiple data sets from the linked open data community and gathered encouraging feedback in a user evaluation where TripleRank results were exploited in a faceted browsing scenario.
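Since the abstract positions PARAFAC as a multi-modal counterpart of HITS, the plain two-mode HITS iteration is useful orientation. This is my sketch of standard HITS on a directed edge list, not TripleRank itself, which adds the third (predicate) mode:

```python
def hits(edges, iters=50):
    """Classic HITS: authorities are pointed to by good hubs, hubs point
    to good authorities; iterate the two updates with L2 normalization."""
    nodes = {n for e in edges for n in e}
    hub = {n: 1.0 for n in nodes}
    auth = {n: 1.0 for n in nodes}
    for _ in range(iters):
        auth = {n: sum(hub[u] for u, v in edges if v == n) for n in nodes}
        norm = sum(a * a for a in auth.values()) ** 0.5 or 1.0
        auth = {n: a / norm for n, a in auth.items()}
        hub = {n: sum(auth[v] for u, v in edges if u == n) for n in nodes}
        norm = sum(h * h for h in hub.values()) ** 0.5 or 1.0
        hub = {n: h / norm for n, h in hub.items()}
    return hub, auth
```

In TripleRank's tensor view, each predicate contributes its own adjacency slice, and PARAFAC factors yield per-topic hub and authority groupings instead of a single pair of score vectors.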
Cummins, Niamh Maria; Hannigan, Ailish; Shannon, Bill; Dunne, Colum; Cullen, Walter
2013-01-01
Background: The Internet is a widely used source of information for patients searching for medical/health care information. While many studies have assessed existing medical/health care information on the Internet, relatively few have examined methods for the design and delivery of such websites, particularly those aimed at the general public. Objective: This study describes a method of evaluating material for new medical/health care websites, or for assessing those already in existence, which is correlated with higher rankings on Google's Search Engine Results Pages (SERPs). Methods: A website quality assessment (WQA) tool was developed using criteria related to the quality of the information to be contained in the website, in addition to an assessment of the readability of the text. This was retrospectively applied to assess existing websites that provide information about generic medicines. The reproducibility of the WQA tool and its predictive validity were assessed in this study. Results: The WQA tool demonstrated very high reproducibility (intraclass correlation coefficient=0.95) between 2 independent users. A moderate to strong correlation was found between WQA scores and rankings on Google SERPs. Analogous correlations were seen between rankings and readability of websites as determined by Flesch Reading Ease and Flesch-Kincaid Grade Level scores. Conclusions: The use of the WQA tool developed in this study is recommended as part of the design phase of a medical or health care information provision website, along with assessment of the readability of the material to be used. This may ensure that the website performs better on Google searches. The tool can also be used retrospectively to make improvements to existing websites, thus potentially enabling better Google search result positions without incurring the costs associated with Search Engine Optimization (SEO) professionals or paid promotion. PMID:23981848
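The readability component of such an assessment needs no external tooling. A rough sketch of the Flesch Reading Ease score with a crude vowel-group syllable heuristic (adequate for comparing pages, not for exact published scores):

```python
def count_syllables(word):
    """Crude vowel-group count; adequate for aggregate readability scores."""
    word = word.lower().strip('.,;:!?')
    groups, prev_vowel = 0, False
    for ch in word:
        is_vowel = ch in "aeiouy"
        if is_vowel and not prev_vowel:
            groups += 1
        prev_vowel = is_vowel
    if word.endswith("e") and groups > 1:
        groups -= 1                    # drop a (usually) silent final e
    return max(1, groups)

def flesch_reading_ease(text):
    """Flesch Reading Ease:
    206.835 - 1.015*(words/sentence) - 84.6*(syllables/word).
    Higher scores mean easier text."""
    sentences = max(1, sum(text.count(c) for c in ".!?"))
    words = text.split()
    n = max(1, len(words))
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (n / sentences) - 84.6 * (syllables / n)
```

Short sentences of short words score high; dense polysyllabic prose scores low or negative, which is the ordering a WQA-style screen needs.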
Lee, Ching-Pei; Lin, Chih-Jen
2014-04-01
Linear rankSVM is one of the widely used methods for learning to rank. Although its performance may be inferior to nonlinear methods such as kernel rankSVM and gradient boosting decision trees, linear rankSVM is useful for quickly producing a baseline model. Furthermore, following its recent development for classification, linear rankSVM may give competitive performance for large and sparse data. A great deal of work has studied linear rankSVM, with a focus on computational efficiency when the number of preference pairs is large. In this letter, we systematically study existing works, discuss their advantages and disadvantages, and propose an efficient algorithm. We discuss different implementation issues and extensions with detailed experiments. Finally, we develop a robust linear rankSVM tool for public use. PMID:24479776
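The pairwise objective behind linear rankSVM can be sketched with plain stochastic subgradient descent on the hinge losses of preference pairs. This is a toy illustration of the model, not the letter's optimized algorithm:

```python
import random

def train_rank_svm(pairs, dim, lam=0.01, lr=0.1, epochs=200, seed=0):
    """Linear rankSVM sketch: minimize
    lam/2 * ||w||^2 + sum over preference pairs (xi preferred to xj)
    of max(0, 1 - w.(xi - xj)), by stochastic subgradient descent."""
    rng = random.Random(seed)
    pairs = list(pairs)                 # avoid shuffling the caller's list
    w = [0.0] * dim
    for _ in range(epochs):
        rng.shuffle(pairs)
        for xi, xj in pairs:
            d = [a - b for a, b in zip(xi, xj)]
            margin = sum(wk * dk for wk, dk in zip(w, d))
            for k in range(dim):
                grad = lam * w[k] - (d[k] if margin < 1 else 0.0)
                w[k] -= lr * grad
    return w

def score(w, x):
    """Ranking score; sort items by descending score."""
    return sum(wk * xk for wk, xk in zip(w, x))
```

The efficiency question the letter studies arises exactly here: the number of preference pairs grows quadratically with list length, so practical solvers avoid materializing every pair.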
Alharthi, Hana; Sultana, Nahid; Al-Amoudi, Amjaad; Basudan, Afrah
2015-01-01
Pharmacy barcode scanning is used to reduce errors during the medication dispensing process. However, this technology has rarely been used in hospital pharmacies in Saudi Arabia. This article describes the barriers to successful implementation of a barcode scanning system in Saudi Arabia. A literature review was conducted to identify the relevant critical success factors (CSFs) for a successful dispensing barcode system implementation. Twenty-eight pharmacists from a local hospital in Saudi Arabia were interviewed to obtain their perception of these CSFs. In this study, planning (process flow issues and training requirements), resistance (fear of change, communication issues, and negative perceptions about technology), and technology (software, hardware, and vendor support) were identified as the main barriers. The analytic hierarchy process (AHP), one of the most widely used tools for decision making in the presence of multiple criteria, was used to compare and rank these identified CSFs. The results of this study suggest that resistance barriers have a greater impact than planning and technology barriers. In particular, fear of change is the most critical factor, and training is the least critical factor. PMID:26807079
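The AHP step used to rank the CSFs reduces to finding the principal eigenvector of a pairwise-comparison matrix and checking its consistency. A sketch with power iteration (the matrix below uses toy numbers, not the pharmacists' actual judgments):

```python
RANDOM_INDEX = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}  # Saaty's RI table

def ahp_priorities(M, iters=100):
    """Priority weights = principal eigenvector of a pairwise-comparison
    matrix, via power iteration; also returns the dominant eigenvalue."""
    n = len(M)
    v = [1.0 / n] * n
    lam = float(n)
    for _ in range(iters):
        w = [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = sum(w)                 # v sums to 1, so sum(Mv) -> lambda_max
        v = [x / lam for x in w]
    return v, lam

def consistency_ratio(M):
    """Saaty's CR = ((lambda_max - n)/(n - 1)) / RI; below 0.1 is
    conventionally acceptable."""
    n = len(M)
    _, lam = ahp_priorities(M)
    ci = (lam - n) / (n - 1)
    ri = RANDOM_INDEX[n]
    return ci / ri if ri else 0.0
```

A perfectly consistent matrix (every entry M[i][j] equal to w_i/w_j) gives lambda_max = n and CR = 0; real elicited judgments drift above that.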
Breitling, Rainer; Armengaud, Patrick; Amtmann, Anna; Herzyk, Pawel
2004-08-27
One of the main objectives in the analysis of microarray experiments is the identification of genes that are differentially expressed under two experimental conditions. This task is complicated by the noisiness of the data and the large number of genes that are examined simultaneously. Here, we present a novel technique for identifying differentially expressed genes that does not originate from a sophisticated statistical model but rather from an analysis of biological reasoning. The new technique, which is based on calculating rank products (RP) from replicate experiments, is fast and simple. At the same time, it provides a straightforward and statistically stringent way to determine the significance level for each gene and allows for the flexible control of the false-detection rate and familywise error rate in the multiple testing situation of a microarray experiment. We use the RP technique on three biological data sets and show that in each case it performs more reliably and consistently than the non-parametric t-test variant implemented in Tusher et al.'s significance analysis of microarrays (SAM). We also show that the RP results are reliable in highly noisy data. An analysis of the physiological function of the identified genes indicates that the RP approach is powerful for identifying biologically relevant expression changes. In addition, using RP can lead to a sharp reduction in the number of replicate experiments needed to obtain reproducible results. PMID:15327980
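The core of the rank products statistic is a geometric mean of within-replicate ranks. A minimal sketch of that computation (the permutation-based significance machinery of the full method is omitted):

```python
def rank_products(replicates):
    """replicates: list of replicate experiments, each a dict
    gene -> expression change (larger = more up-regulated).
    Returns dict gene -> rank product: the geometric mean of the gene's
    rank in each replicate, where rank 1 is the strongest change."""
    genes = list(replicates[0])
    k = len(replicates)
    rp = {g: 1.0 for g in genes}
    for rep in replicates:
        order = sorted(genes, key=lambda g: rep[g], reverse=True)
        ranks = {g: i + 1 for i, g in enumerate(order)}
        for g in genes:
            rp[g] *= ranks[g]
    return {g: rp[g] ** (1.0 / k) for g in genes}
```

A gene that sits near the top of every replicate gets a rank product near 1; under the null, consistently small products are improbable, which is what the permutation test in the paper quantifies.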
ERIC Educational Resources Information Center
Machung, Anne
1998-01-01
The "U.S. News and World Report" rankings of colleges do not affect institutions equally; the schools impacted most are those that have the most to lose because they benefit from, even rely on, the rankings for prestige and visibility. The magazine relies on the rankings for substantial sales revenues, and has garnered considerable power within…
ERIC Educational Resources Information Center
Carpineto, Claudio; Romano, Giovanni
2000-01-01
Presents an approach to document ranking that explicitly addresses the word mismatch problem between a query and a document by exploiting interdocument similarity information, based on the theory of concept lattices. Compares information retrieval using concept lattice-based ranking (CLR) to BMR (best-match ranking) and HCR (hierarchical…
Vickers, Linda
2016-05-01
The U.S. Department of Energy Standard 3009-2014 requires one of two methods to determine the simple Gaussian relative concentration (X/Q) of a pollutant at plume centerline downwind to a receptor for a 2-h exposure duration from a ground-level release (i.e., less than 10 m height): (1) the 99.5th percentile X/Q for the directionally dependent method, and (2) the 95th percentile X/Q for the directionally independent method. This paper describes how to determine the simple Gaussian 99.5th percentile X/Q for the directionally dependent method using an electronic spreadsheet; a previous paper describes how to determine the simple Gaussian 95th percentile X/Q for the directionally independent method in the same way (Vickers 2015). The method described herein is simple, quick, accurate, and transparent because all of the data, calculations, and results are visible for validation and verification. PMID:27023153
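The spreadsheet calculation pairs a per-hour centerline X/Q with a percentile pick over the meteorological record. A sketch of those two pieces (the dispersion coefficients sigma_y and sigma_z would come from stability-class correlations; the values in the test are placeholders, and the sector bookkeeping of the directionally dependent method is not shown):

```python
import math

def gaussian_xoq(sigma_y, sigma_z, u):
    """Centerline ground-level relative concentration (s/m^3) for a
    ground release: X/Q = 1 / (pi * sigma_y * sigma_z * u),
    with sigma_y, sigma_z in m and wind speed u in m/s."""
    return 1.0 / (math.pi * sigma_y * sigma_z * u)

def percentile_995(values):
    """99.5th percentile: the X/Q value exceeded only 0.5% of the time
    over the hourly meteorological record."""
    s = sorted(values)
    k = min(len(s) - 1, math.ceil(0.995 * len(s)) - 1)
    return s[k]
```

Stable, low-wind hours give small sigmas and small u, hence the largest X/Q values; the 99.5th percentile is deliberately dominated by those hours.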
Shaping in the 21st century: Moving percentile schedules into applied settings
Galbicka, Gregory
1994-01-01
The present paper provides a primer on percentile reinforcement schedules, which have been used for two decades to study response differentiation and shaping in the laboratory. Arranged in applied settings, percentile procedures could be used to specify response criteria, standardizing treatment across subjects, trainers, and times to provide a more consistent training environment while maintaining the sensitivity to the individual's repertoire that is the hallmark of shaping. Percentile schedules are also valuable tools in analyzing the variables of which responding is a function, both inside and outside the laboratory. Finally, by formalizing the rules of shaping, percentile schedules provide a useful heuristic of the processes involved in shaping behavior, even for those situations that may not easily permit their implementation. As such, they may help further sensitize trainers and researchers alike to variables of critical importance in behavior change. PMID:16795849
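Formalized, a percentile schedule reinforces a response only when it beats a set rank among the last m responses, so the criterion tracks the individual's current distribution. A small sketch using the k = (1 - w)(m + 1) relation from this literature, where w is the programmed reinforcement probability (parameter values here are illustrative):

```python
from collections import deque

def make_percentile_schedule(m=10, w=0.5):
    """Return a criterion function: reinforce a response if it exceeds
    the k-th ranked of the last m responses, with k chosen so the
    programmed probability of reinforcement stays near w."""
    k = round((1 - w) * (m + 1))
    history = deque(maxlen=m)

    def criterion(response):
        if len(history) < m:
            history.append(response)
            return False            # build up the comparison sample first
        reinforced = response > sorted(history, reverse=True)[k - 1]
        history.append(response)
        return reinforced

    return criterion
```

Because the comparison window slides, the criterion ratchets upward as behavior improves: a steadily increasing response measure is reinforced on every trial once the window fills.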
NASA Astrophysics Data System (ADS)
Brodie, Ross S.; Hostetler, Stephen; Slatter, Emily
2008-01-01
A frequency analysis approach was used to investigate the hydraulic connectivity between streams and aquifers by comparing daily percentiles of streamflow and rainfall. Three Australian streams were examined: a dominantly gaining stream (Wilsons River, NSW), a dominantly gaining stream modified by significant water extraction (Ovens River, Victoria), and a dominantly losing stream (Mooki River, NSW). For the gaining stream examples, a lag is observed between the seasonal peak in the low-flow percentile curves and the seasonal peak in the daily rainfall percentile curve. Cross-correlation was used to calculate the time shift that provides the best fit between the streamflow and rainfall percentile curves. There is a good correlation (r² > 0.8) between the reference rainfall percentile curve and the shifted streamflow percentile curves for gaining streams. The lags evident between the rainfall and streamflow percentile curves represent the processes of first replenishing catchment storages (such as soil moisture and groundwater) and subsequently releasing water to the stream. This is largely a function of catchment hydrogeology as well as climate, notably the magnitude and regularity of rainfall events; catchment size is not a controlling factor. Analysis of these lags provides insights into the dynamics of groundwater recharge, storage, and release. Changes in the lag times over the flow percentiles can reflect changes in the dominant catchment storage contributing to streamflow. For the Wilsons River, the contribution from a groundwater system with longer flow paths increases at lower flow percentiles. This can be critical when protecting minimum streamflows, as near-stream groundwater flow may not be the only determining factor. The impact of water extraction can also be recognised in this analysis. For the Ovens River, streamflow deficits relative to the rainfall percentile curve correspond to the summer period of high irrigation demand. Such a deficit was also observed
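The lag-selection step, shifting one percentile curve against the other and keeping the shift that maximizes correlation, can be sketched directly. The daily curves in the test are synthetic stand-ins, not the Australian stream data:

```python
def pearson(x, y):
    """Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

def best_lag(rainfall, streamflow, max_lag=120):
    """Shift the streamflow percentile curve back by 0..max_lag days and
    return the lag that maximizes correlation with the rainfall curve."""
    return max(range(max_lag + 1),
               key=lambda L: pearson(rainfall[:len(rainfall) - L],
                                     streamflow[L:]))
```

The recovered lag is the study's proxy for how long catchment storages take to fill and release; repeating the fit at different flow percentiles reveals whether a slower storage takes over at low flows.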
NASA Astrophysics Data System (ADS)
Huang, Wei; Wen, Qiao-Yan; Liu, Bin; Su, Qi; Qin, Su-Juan; Gao, Fei
2014-03-01
Anonymous ranking is a kind of privacy-preserving ranking whereby each of the involved participants can correctly and anonymously get the rankings of his data. It can be utilized to solve many practical problems, such as anonymously ranking the students' exam scores. We investigate the issue of how quantum mechanics can be of use in maintaining the anonymity of the participants in multiparty ranking and present a series of quantum anonymous multiparty, multidata ranking protocols. In each of these protocols, a participant can get the correct rankings of his data and nobody else can match the identity to his data. Furthermore, the security of these protocols with respect to different kinds of attacks is proved.
Ranking species in mutualistic networks.
Domínguez-García, Virginia; Muñoz, Miguel A
2015-01-01
Understanding the architectural subtleties of ecological networks, believed to confer them enhanced stability and robustness, is a subject of utmost relevance. Mutualistic interactions have been profusely studied, and their corresponding bipartite networks, such as plant-pollinator networks, have been reported to exhibit a characteristic "nested" structure. Assessing the importance of any given species in mutualistic networks is a key task when evaluating extinction risks and possible cascade effects. Inspired by a recently introduced algorithm (similar in spirit to Google's PageRank but with a built-in non-linearity), here we propose a method which, by exploiting their nested architecture, allows us to derive a sound ranking of species importance in mutualistic networks. This method clearly outperforms other existing ranking schemes and can become very useful for ecosystem management and biodiversity preservation, where decisions on what aspects of ecosystems to explicitly protect need to be made. PMID:25640575
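The abstract does not give the algorithm's update rule, so the following is only a generic sketch of a PageRank-style iteration with a built-in non-linearity on a bipartite incidence matrix; `bipartite_rank`, the exponent `gamma` and the toy matrix are assumptions for illustration, not the authors' method.

```python
import numpy as np

def bipartite_rank(B, gamma=1.2, n_iter=200, tol=1e-10):
    """Iteratively rank rows (e.g. plants) and columns (e.g. pollinators)
    of a bipartite incidence matrix B.  gamma > 1 boosts the influence of
    already-important partners (the built-in non-linearity); gamma = 1
    reduces to a linear, degree-like eigenvector ranking."""
    r = np.ones(B.shape[0])
    c = np.ones(B.shape[1])
    for _ in range(n_iter):
        r_new = B @ (c ** gamma)
        r_new /= r_new.sum()
        c_new = B.T @ (r_new ** gamma)
        c_new /= c_new.sum()
        if np.abs(r_new - r).max() < tol and np.abs(c_new - c).max() < tol:
            r, c = r_new, c_new
            break
        r, c = r_new, c_new
    return r, c

# toy nested incidence matrix: row 0 interacts with everyone (a generalist)
B = np.array([[1, 1, 1],
              [1, 1, 0],
              [1, 0, 0]], dtype=float)
rows, cols = bipartite_rank(B)
```

On a nested matrix like this one, the generalist species accumulates the highest score, which is the qualitative behaviour such rankings aim for.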
Vuong, Quan Van; Nguyen, Tin Trung; Li, Mai Suan
2015-12-28
In this paper we present a new method for finding the optimal path for pulling a ligand from the binding pocket using steered molecular dynamics (SMD). The scoring function is defined as the steric hindrance caused by the receptor to ligand movement; the optimal path then corresponds to the minimum of this scoring function. We call the new method MSH (Minimal Steric Hindrance). Contrary to existing navigation methods, our approach takes into account the geometry of the ligand, while other methods, including CAVER, consider the ligand only as a sphere with a given radius. Using three different target + receptor sets, we have shown that the rupture force Fmax and nonequilibrium work Wpull obtained with the MSH method show a much higher correlation with experimental data on binding free energies compared to CAVER. Furthermore, Wpull was found to be a better indicator of binding affinity than Fmax. Thus, the new MSH method is a reliable tool for obtaining the best direction for ligand exit from the binding site. Its combination with the standard SMD technique can provide reasonable results for ranking binding affinities using Wpull as a scoring function. PMID:26595261
Ranking chemicals based on chronic toxicity data.
De Rosa, C T; Stara, J F; Durkin, P R
1985-12-01
During the past 3 years, EPA's ECAO/Cincinnati has developed a method to rank chemicals based on chronic toxicity data. This ranking system reflects two primary attributes of every chemical: the minimum effective dose and the type of effect elicited at that dose. The purpose for developing this chronic toxicity ranking system was to provide the EPA with the technical background required to adjust the RQs of hazardous substances designated in Section 101(14) of CERCLA or "Superfund." This approach may have applications to other areas of interest to the EPA and other regulatory agencies where ranking of chemicals based on chronic toxicity is desired. PMID:3843499
Tashobya, Christine K; Dubourg, Dominique; Ssengooba, Freddie; Speybroeck, Niko; Macq, Jean; Criel, Bart
2016-03-01
In 2003, the Uganda Ministry of Health introduced the district league table for district health system performance assessment. The league table presents district performance against a number of input, process and output indicators and a composite index to rank districts. This study explores the use of hierarchical cluster analysis for analysing and presenting district health systems performance data and compares this approach with the use of the league table in Uganda. Ministry of Health and district plans and reports, and published documents were used to provide information on the development and utilization of the Uganda district league table. Quantitative data were accessed from the Ministry of Health databases. Statistical analysis using SPSS version 20 and hierarchical cluster analysis, utilizing Ward's method, was used. The hierarchical cluster analysis was conducted on the basis of seven clusters determined for each year from 2003 to 2010, ranging from a cluster of good through moderate-to-poor performers. The characteristics and membership of clusters varied from year to year and were determined by the identity and magnitude of performance of the individual variables. Criticisms of the league table include perceived unfairness, as it did not take into consideration district peculiarities, and being oversummarized and not adequately informative. Clustering organizes the many data points into clusters of similar entities according to an agreed set of indicators and can provide the starting point for identifying factors behind the observed performance of districts. Although league table ranking emphasizes summation and external control, clustering has the potential to encourage a formative, learning approach. More research is required to shed light on the factors behind the observed performance of the different clusters. Other countries, especially low-income countries that share many similarities with Uganda, can learn from these experiences. PMID:26024882
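Hierarchical clustering with Ward's minimum-variance criterion, as used in the study, is readily available in SciPy. A minimal sketch on hypothetical district-by-indicator data (the study cut the tree into seven clusters per year; the toy example below uses three well-separated performance levels):

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
# hypothetical district-by-indicator matrix: 12 districts, 3 indicators,
# drawn from three performance levels (good / moderate / poor)
good = rng.normal(0.9, 0.02, size=(4, 3))
moderate = rng.normal(0.6, 0.02, size=(4, 3))
poor = rng.normal(0.3, 0.02, size=(4, 3))
X = np.vstack([good, moderate, poor])

Z = linkage(X, method="ward")                    # Ward's minimum-variance criterion
labels = fcluster(Z, t=3, criterion="maxclust")  # cut the tree into 3 clusters
```

Unlike a single composite rank, the cluster labels group similar districts together, which is the formative, learning-oriented reading the authors argue for.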
Low-Rank Preserving Projections.
Lu, Yuwu; Lai, Zhihui; Xu, Yong; Li, Xuelong; Zhang, David; Yuan, Chun
2016-08-01
As one of the most popular dimensionality reduction techniques, locality preserving projections (LPP) has been widely used in computer vision and pattern recognition. However, in practical applications data is always corrupted by noise. For corrupted data, samples from the same class may not be distributed in the nearest area, so LPP may lose its effectiveness. In this paper, it is assumed that data is grossly corrupted and the noise matrix is sparse. Based on these assumptions, we propose a novel dimensionality reduction method, named low-rank preserving projections (LRPP), for image classification. LRPP learns a low-rank weight matrix by projecting the data on a low-dimensional subspace. We use the L21 norm as a sparse constraint on the noise matrix and the nuclear norm as a low-rank constraint on the weight matrix. LRPP keeps the global structure of the data during the dimensionality reduction procedure, and the learned low-rank weight matrix can reduce the disturbance of noise in the data. LRPP can learn a robust subspace from corrupted data. To verify the performance of LRPP in image dimensionality reduction and classification, we compare LRPP with state-of-the-art dimensionality reduction methods. The experimental results show the effectiveness and feasibility of the proposed method. PMID:26277014
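The nuclear-norm constraint on the weight matrix is the standard convex surrogate for low rank, and its proximal step is singular value thresholding. The sketch below shows only that one ingredient (not the authors' full LRPP solver), applied to a synthetic rank-one matrix with small noise:

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: the proximal operator of the nuclear
    norm tau * ||X||_*.  Shrinks each singular value by tau and zeroes
    those that fall below zero, yielding a low-rank estimate of M."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)
    return U @ np.diag(s_shrunk) @ Vt

# a rank-1 matrix plus small dense noise: SVT recovers a low-rank estimate
rng = np.random.default_rng(1)
L = np.outer(rng.normal(size=8), rng.normal(size=6))
M = L + 0.01 * rng.normal(size=(8, 6))
L_hat = svt(M, tau=0.5)
```

With the threshold well above the noise-level singular values, only the dominant component survives, which is exactly the denoising role the nuclear-norm term plays in LRPP.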
ERIC Educational Resources Information Center
Moffat, Alistair; And Others
1994-01-01
Describes an approximate document ranking process that uses a compact array of in-memory, low-precision approximations for document length. Combined with another rule for reducing the memory required by partial similarity accumulators, the approximation heuristic allows the ranking of large document collections using less than one byte of memory…
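The abstract is truncated, so the exact encoding is not shown here; the sketch below illustrates the general idea of storing an approximate document length in a single byte using a geometric (log-spaced) scale. The range of 1 to 10^6 terms and the helper names are chosen arbitrarily for the example.

```python
import numpy as np

# Quantise document lengths onto a geometric (log-spaced) one-byte scale:
# 256 levels spanning lengths from 1 to 1e6 terms.
LO, HI, LEVELS = 1.0, 1e6, 256
base = (HI / LO) ** (1.0 / (LEVELS - 1))

def encode(length):
    """Map a document length to a single byte code (0..255)."""
    code = int(round(np.log(length / LO) / np.log(base)))
    return max(0, min(LEVELS - 1, code))

def decode(code):
    """Recover an approximate length from the byte code."""
    return LO * base ** code

approx = decode(encode(12345))
```

A geometric scale bounds the *relative* error (here under about 3%), which is what matters when lengths only normalise similarity scores during ranking.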
ERIC Educational Resources Information Center
Dobbs, David E.
2012-01-01
This note explains how Emil Artin's proof that row rank equals column rank for a matrix with entries in a field leads naturally to the formula for the nullity of a matrix and also to an algorithm for solving any system of linear equations in any number of variables. This material could be used in any course on matrix theory or linear algebra.
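The rank-nullity relationship the note builds toward is easy to verify numerically; here the rank is computed with NumPy (row rank and column rank necessarily agree) and the nullity follows as the number of columns minus the rank. The matrix is an arbitrary example.

```python
import numpy as np

A = np.array([[1., 2., 3.],
              [2., 4., 6.],    # row 2 = 2 * row 1, so the rows are dependent
              [1., 0., 1.]])

rank = np.linalg.matrix_rank(A)   # row rank == column rank
nullity = A.shape[1] - rank       # rank-nullity: rank + nullity = #columns

# a vector in the null space: A @ x = 0 for x = (-1, -1, 1)
x = np.array([-1., -1., 1.])
```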
ERIC Educational Resources Information Center
Chapman, David W.
2008-01-01
Recently, Samford University was ranked 27th in the nation in a report released by "Forbes" magazine. In this article, the author relates how the people working at Samford University were surprised at its ranking. Although Samford is the largest private institution in Alabama, its distinguished academic achievements aren't even well-recognized…
Re-Ranking Model Based on Document Clusters.
ERIC Educational Resources Information Center
Lee, Kyung-Soon; Park, Young-Chan; Choi, Key-Sun
2001-01-01
Describes a model of an information retrieval system that is based on a document re-ranking method, using document clusters. Retrieves documents based on the inverted file method, then analyzes the retrieved documents using document clusters and re-ranks them. Shows significant improvements over the method based on similarity search ranking alone.…
A Ranking Approach to Genomic Selection
Blondel, Mathieu; Onogi, Akio; Iwata, Hiroyoshi; Ueda, Naonori
2015-01-01
Background Genomic selection (GS) is a recent selective breeding method which uses predictive models based on whole-genome molecular markers. Until now, existing studies formulated GS as the problem of modeling an individual’s breeding value for a particular trait of interest, i.e., as a regression problem. To assess predictive accuracy of the model, the Pearson correlation between observed and predicted trait values was used. Contributions In this paper, we propose to formulate GS as the problem of ranking individuals according to their breeding value. Our proposed framework allows us to employ machine learning methods for ranking which had previously not been considered in the GS literature. To assess ranking accuracy of a model, we introduce a new measure originating from the information retrieval literature called normalized discounted cumulative gain (NDCG). NDCG rewards more strongly models which assign a high rank to individuals with high breeding value. Therefore, NDCG reflects a prerequisite objective in selective breeding: accurate selection of individuals with high breeding value. Results We conducted a comparison of 10 existing regression methods and 3 new ranking methods on 6 datasets, consisting of 4 plant species and 25 traits. Our experimental results suggest that tree-based ensemble methods including McRank, Random Forests and Gradient Boosting Regression Trees achieve excellent ranking accuracy. RKHS regression and RankSVM also achieve good accuracy when used with an RBF kernel. Traditional regression methods such as Bayesian lasso, wBSR and BayesC were found less suitable for ranking. Pearson correlation was found to correlate poorly with NDCG. Our study suggests two important messages. First, ranking methods are a promising research direction in GS. Second, NDCG can be a useful evaluation measure for GS. PMID:26068103
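NDCG, the evaluation measure proposed above, can be computed directly from the true breeding values listed in the model's predicted order. A minimal sketch using the common exponential-gain form (the paper may normalise or weight differently):

```python
import numpy as np

def ndcg(relevance_in_predicted_order, k=None):
    """Normalised discounted cumulative gain.  The argument lists the true
    relevances (here: breeding values) in the order the model ranked the
    individuals; placing high values early scores closer to 1."""
    rel = np.asarray(relevance_in_predicted_order, dtype=float)[:k]
    discounts = 1.0 / np.log2(np.arange(2, rel.size + 2))
    dcg = np.sum((2 ** rel - 1) * discounts)
    ideal = np.sort(np.asarray(relevance_in_predicted_order))[::-1][:k]
    idcg = np.sum((2 ** ideal - 1) * discounts)
    return dcg / idcg if idcg > 0 else 0.0

perfect = ndcg([3, 2, 1, 0])   # best individuals ranked first
worst = ndcg([0, 1, 2, 3])     # best individuals ranked last
```

Because the discount decays with rank position, NDCG rewards models that place high-breeding-value individuals near the top, the stated selective-breeding objective, while the Pearson correlation treats all positions equally.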
Afkhami, Abbas; Khajavi, Farzad; Khanmohammadi, Hamid
2009-08-11
The oxidation of the recently synthesized Schiff base 3,6-bis((2-aminoethyl-5-Br-salicyliden)thio)pyridazine (PABST) with hydrogen peroxide was investigated using spectrophotometric studies. The reaction rate order and observed rate constant of the oxidation reaction were obtained in a mixture of N,N-dimethylformamide (DMF):water (30:70, v/v) at pH 10 using the multivariate curve resolution-alternating least squares (MCR-ALS) method and rank annihilation factor analysis (RAFA). The parameters affecting the oxidation rate constant, such as the percentage of DMF, the presence of transition metals like Cu(2+), Zn(2+), Mn(2+) and Hg(2+), and the presence of surfactants, were investigated. The keto-enol equilibria in DMF:water (30:70, v/v) solution at pH 7.6 were also investigated in the presence of surfactants. At concentrations above the critical micelle concentration (cmc) of the cationic surfactant cetyltrimethylammonium bromide (CTAB), the keto form was the predominant species, while at concentrations above the cmc of the anionic surfactant sodium dodecyl sulfate (SDS), the enol form was the predominant species. The kinetic reaction order and the rate constant of tautomerization in micellar medium were obtained using MCR-ALS and RAFA. The results obtained by the two methods were in good agreement with each other. The effect of different volume percentages of DMF on the rate constant of tautomerization was also investigated. The neutral surfactant (Triton X-100) had no effect on the tautomerization equilibrium. PMID:19591704
Rasch analysis of rank-ordered data.
Linacre, John M
2006-01-01
Theoretical and practical aspects of several methods for the construction of linear measures from rank-ordered data are presented. The final partial-rankings of 356 professional golfers participating in 47 stroke-play tournaments are used for illustration. The methods include decomposing the rankings into independent paired comparisons without ties, into dependent paired comparisons without ties and into independent paired comparisons with ties. A further method, which is easier to implement, entails modeling each tournament as a partial-credit item in which the rank of each golfer is treated as the observation of a category on a partial-credit rating scale. For the golf data, the partial-credit method yields measures with greater face validity than the paired comparison methods. The methods are implemented with the computer programs FACETS and WINSTEPS. PMID:16385155
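The first decomposition mentioned, treating a rank-ordering as independent paired comparisons without ties, amounts to expanding each final tournament ranking into all of its implied pairwise wins:

```python
from itertools import combinations

def to_paired_comparisons(ranking):
    """Decompose one rank-ordering (best first) into the set of implied
    pairwise wins: every golfer beats every golfer ranked below them."""
    return [(winner, loser) for winner, loser in combinations(ranking, 2)]

pairs = to_paired_comparisons(["A", "B", "C"])
# a ranking of n players implies n * (n - 1) / 2 comparisons
```

The other methods in the paper adjust this expansion (dependent comparisons, ties, or a partial-credit item per tournament), but the pairwise expansion above is the common starting point.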
AGE AND GENDER SPECIFIC BMI PERCENTILES ARE LIMITED FOR TRACKING THE CHILDHOOD OBESITY EPIDEMIC
Technology Transfer Automated Retrieval System (TEKTRAN)
Purpose: To evaluate pediatric nutrition and physical activity interventions a reliable and feasible way of tracking change in body status is needed. Historically, body mass index (BMI) has been used in adults. BMI percentiles or Z scores, which are theoretically age and gender adjusted, have been...
Problems with Percentiles: Student Growth Scores in New York's Teacher Evaluation System
ERIC Educational Resources Information Center
Patrick, Drew
2016-01-01
New York State has used the Growth Model for Educator Evaluation ratings since the 2011-2012 school year. Since that time, student growth percentiles have been used as the basis for teacher and principal ratings. While a great deal has been written about the use of student test scores to measure educator effectiveness, less attention has been…
Percentile Norms for the AAHPER Cooperative Physical Education Tests. Research Report.
ERIC Educational Resources Information Center
Moodie, Allan G.
Percentile scores for Vancouver students in grades 9, 10, 11 and 12 on the AAHPER Cooperative Physical Education Tests are presented. Two of the six forms of the tests were used in these administrations. Every form consists of 60 multiple-choice questions to be completed in 40 minutes. A single score, based on the number of questions answered…
Fukuzumi, Noriko; Osawa, Kayo; Sato, Itsuko; Iwatani, Sota; Ishino, Ruri; Hayashi, Nobuhide; Iijima, Kazumoto; Saegusa, Jun; Morioka, Ichiro
2016-01-01
Procalcitonin (PCT) levels are elevated early after birth in newborn infants; however, the physiological features and reference values of serum PCT concentrations have not been fully studied in preterm infants. The aims of the current study were to establish an age-specific percentile-based reference curve of serum PCT concentrations in preterm infants and to determine its features. The PCT concentration peaked in infants at 1 day old and decreased thereafter. At 1 day old, serum PCT concentrations in preterm infants <34 weeks' gestational age were higher than those in late preterm infants between 34 and 36 weeks' gestational age or term infants ≥37 weeks' gestational age. Although the 50th-percentile value in late preterm and term infants reached the adult normal level (0.1 ng/mL) at 5 days old, it did not in preterm infants, for whom it took 9 weeks to reach that level. Serum PCT concentrations at onset in late-onset infected preterm infants were over the 95th-percentile value. We showed that the physiological features in preterm infants were significantly different from those in late preterm infants, even among those <37 weeks' gestational age. To detect late-onset bacterial infection and sepsis, an age-specific percentile-based reference curve may be useful in preterm infants. PMID:27033746
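An age-specific percentile-based reference curve of this kind is, computationally, a set of percentiles evaluated within each age group. A sketch on synthetic data (the declining-with-age series below is invented, not the study's measurements):

```python
import numpy as np

rng = np.random.default_rng(2)
# hypothetical PCT-like measurements (ng/mL) for infants at ages 1..7 days,
# 50 infants per age, with levels that decline as age increases
ages = np.repeat(np.arange(1, 8), 50)
values = rng.lognormal(mean=2.0, sigma=0.5, size=ages.size) / np.sqrt(ages)

def percentile_curve(ages, values, q):
    """q-th percentile of the measurements at each distinct age."""
    return {a: float(np.percentile(values[ages == a], q))
            for a in np.unique(ages)}

p50 = percentile_curve(ages, values, 50)   # median reference curve
p95 = percentile_curve(ages, values, 95)   # upper reference limit
```

A measurement above the age-matched 95th-percentile value, rather than above a single fixed cutoff, is the kind of flag the authors propose for late-onset infection.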
Using Percentile Schedules to Increase Eye Contact in Children with Fragile X Syndrome
ERIC Educational Resources Information Center
Hall, Scott S.; Maynes, Natalee P.; Reiss, Allan L.
2009-01-01
Aversion to eye contact is a common behavior of individuals diagnosed with Fragile X syndrome (FXS); however, no studies to date have attempted to increase eye-contact duration in these individuals. In this study, we employed a percentile reinforcement schedule with and without overcorrection to shape eye-contact duration of 6 boys with FXS.…
Student Growth Percentiles Based on MIRT: Implications of Calibrated Projection. CRESST Report 842
ERIC Educational Resources Information Center
Monroe, Scott; Cai, Li; Choi, Kilchan
2014-01-01
This research concerns a new proposal for calculating student growth percentiles (SGP, Betebenner, 2009). In Betebenner (2009), quantile regression (QR) is used to estimate the SGPs. However, measurement error in the score estimates, which always exists in practice, leads to bias in the QR-based estimates (Shang, 2012). One way to address this…
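For intuition, a crude empirical version of a student growth percentile, the percentile rank of a student's current score among students with similar prior scores, can be computed directly; operational SGPs instead fit quantile regressions across the whole prior-score distribution, and the report above is about correcting those fits for measurement error. All names and data below are illustrative.

```python
import numpy as np

def growth_percentile(prior, current, prior_all, current_all, window=5.0):
    """Empirical student growth percentile: the percentile rank of
    `current` among students whose prior score was within `window`
    points of `prior`.  (Operational SGPs use quantile regression.)"""
    peers = np.abs(prior_all - prior) <= window
    return 100.0 * np.mean(current_all[peers] < current)

rng = np.random.default_rng(3)
prior_all = rng.uniform(0, 100, 1000)                # last year's scores
current_all = prior_all + rng.normal(0, 5, 1000)     # this year's scores

# a student who scored well above peers with the same prior score
sgp = growth_percentile(50.0, 62.0, prior_all, current_all)
```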
Risk for nonalcoholic fatty liver disease in Hispanic Youth with BMI > or = 95th percentile
Technology Transfer Automated Retrieval System (TEKTRAN)
To characterize children at risk for nonalcoholic fatty liver disease (NAFLD) and to explore possible mechanisms underlying the development of NAFLD in Hispanic youth with a body mass index > or =95th percentile. Hispanic nonoverweight (n = 475) and overweight (n = 517) children, ages 4 to 19 y, wer...
User Guide for the 2014-15 Teacher Median Student Growth Percentile Report
ERIC Educational Resources Information Center
New Jersey Department of Education, 2016
2016-01-01
On March 22, 2016, the New Jersey Department of Education ("the Department") published a broadcast memo sharing secure district access to 2014-15 median Student Growth Percentile (mSGP) data for all qualifying teachers. These data describe student growth from the last school year, and comprise 10% of qualifying teachers' 2014-15…
Gregor, Ivan; Dröge, Johannes; Schirmer, Melanie; Quince, Christopher; McHardy, Alice C
2016-01-01
Background. Metagenomics is an approach for characterizing environmental microbial communities in situ; it allows their functional and taxonomic characterization and the recovery of sequences from uncultured taxa. This is often achieved by a combination of sequence assembly and binning, where sequences are grouped into 'bins' representing taxa of the underlying microbial community. Assignment to low-ranking taxonomic bins is an important challenge for binning methods, as is scalability to Gb-sized datasets generated with deep sequencing techniques. One of the best available methods for species bin recovery from deep-branching phyla is the expert-trained PhyloPythiaS package, where a human expert decides on the taxa to incorporate in the model and identifies 'training' sequences based on marker genes directly from the sample. Due to the manual effort involved, this approach does not scale to multiple metagenome samples and requires substantial expertise, which researchers who are new to the area do not have. Results. We have developed PhyloPythiaS+, a successor to our PhyloPythia(S) software. The new (+) component performs the work previously done by the human expert. PhyloPythiaS+ also includes a new k-mer counting algorithm, which accelerated the simultaneous counting of the 4-6-mers used for taxonomic binning 100-fold and reduced the overall execution time of the software by a factor of three. Our software makes it possible to analyze Gb-sized metagenomes with inexpensive hardware and to recover species- or genus-level bins with low error rates in a fully automated fashion. PhyloPythiaS+ was compared to MEGAN, taxator-tk, Kraken and the generic PhyloPythiaS model. The results showed that PhyloPythiaS+ performs especially well for samples originating from novel environments in comparison to the other methods. Availability. PhyloPythiaS+ in a virtual machine is available for installation under Windows, Unix systems or OS X at: https://github.com/algbioi/ppsp/wiki. PMID:26870609
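The k-mer features mentioned above (simultaneous 4-6-mer counts over a sequence) can be illustrated with a naive counter; the package uses an optimised algorithm, so this is only a functional sketch of what is being computed.

```python
from collections import Counter

def count_kmers(seq, kmin=4, kmax=6):
    """Count all k-mers of length kmin..kmax in `seq` (the kind of
    composition features used for taxonomic binning)."""
    counts = Counter()
    for k in range(kmin, kmax + 1):
        for i in range(len(seq) - k + 1):
            counts[seq[i:i + k]] += 1
    return counts

counts = count_kmers("ACGTACGT")
```

A sequence of length n contributes n - k + 1 windows per k, so here the 8-base toy sequence yields 5 + 4 + 3 = 12 k-mer occurrences in total.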
Ranking scientific publications: the effect of nonlinearity
NASA Astrophysics Data System (ADS)
Yao, Liyang; Wei, Tian; Zeng, An; Fan, Ying; di, Zengru
2014-10-01
Ranking the significance of scientific publications is a long-standing challenge. The network-based analysis is a natural and common approach for evaluating the scientific credit of papers. Although the number of citations has been widely used as a metric to rank papers, recently some iterative processes such as the well-known PageRank algorithm have been applied to the citation networks to address this problem. In this paper, we introduce nonlinearity to the PageRank algorithm when aggregating resources from different nodes to further enhance the effect of important papers. The validation of our method is performed on the data of American Physical Society (APS) journals. The results indicate that the nonlinearity improves the performance of the PageRank algorithm in terms of ranking effectiveness, as well as robustness against malicious manipulations. Although the nonlinearity analysis is based on the PageRank algorithm, it can be easily extended to other iterative ranking algorithms and similar improvements are expected.
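The abstract does not specify the exact update rule, so the following is a hedged sketch of one natural way to introduce a nonlinearity into the PageRank iteration: node scores are raised to an exponent `theta` before being redistributed along citation links, so that already-important papers are amplified. `theta = 1` recovers an ordinary PageRank-style update; the function and exponent are illustrative, not the authors' code.

```python
import numpy as np

def nonlinear_pagerank(A, theta=1.5, d=0.85, n_iter=500):
    """PageRank variant with a nonlinearity in the aggregation step.
    A[i, j] = 1 means node i links to (cites) node j."""
    n = A.shape[0]
    out_deg = np.maximum(A.sum(axis=1), 1)           # avoid division by zero
    s = np.ones(n) / n
    for _ in range(n_iter):
        contrib = A.T @ (s ** theta / out_deg)       # nonlinear aggregation
        s = (1 - d) / n + d * contrib
        s /= s.sum()                                 # keep scores normalised
    return s

# toy citation network: paper 0 is cited by both others, paper 2 by none
A = np.array([[0, 0, 0],
              [1, 0, 0],
              [1, 1, 0]], dtype=float)
scores = nonlinear_pagerank(A)
```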
Halu, Arda; Mondragón, Raúl J; Panzarasa, Pietro; Bianconi, Ginestra
2013-01-01
Many complex systems can be described as multiplex networks in which the same nodes can interact with one another in different layers, thus forming a set of interacting and co-evolving networks. Examples of such multiplex systems are social networks where people are involved in different types of relationships and interact through various forms of communication media. The ranking of nodes in multiplex networks is one of the most pressing and challenging tasks that research on complex networks is currently facing. When pairs of nodes can be connected through multiple links and in multiple layers, the ranking of nodes should necessarily reflect the importance of nodes in one layer as well as their importance in other interdependent layers. In this paper, we draw on the idea of biased random walks to define the Multiplex PageRank centrality measure in which the effects of the interplay between networks on the centrality of nodes are directly taken into account. In particular, depending on the intensity of the interaction between layers, we define the Additive, Multiplicative, Combined, and Neutral versions of Multiplex PageRank, and show how each version reflects the extent to which the importance of a node in one layer affects the importance the node can gain in another layer. We discuss these measures and apply them to an online multiplex social network. Findings indicate that taking the multiplex nature of the network into account helps uncover the emergence of rankings of nodes that differ from the rankings obtained from one single layer. Results provide support in favor of the salience of multiplex centrality measures, like Multiplex PageRank, for assessing the prominence of nodes embedded in multiple interacting networks, and for shedding a new light on structural properties that would otherwise remain undetected if each of the interacting networks were analyzed in isolation. PMID:24205186
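One simple way to realise the idea that importance earned in one layer shapes the walk on another, in the spirit of the Additive variant described above (though not necessarily the authors' exact definition), is to use the layer-A PageRank vector as the personalised teleport distribution for PageRank on layer B. The function and toy layers below are illustrative.

```python
import numpy as np

def pagerank(A, d=0.85, personalization=None, n_iter=500):
    """Standard PageRank with an optional personalised teleport vector.
    A[i, j] = 1 means node i links to node j."""
    n = A.shape[0]
    out_deg = np.maximum(A.sum(axis=1), 1)
    v = (np.ones(n) / n if personalization is None
         else personalization / personalization.sum())
    s = v.copy()
    for _ in range(n_iter):
        s = (1 - d) * v + d * (A.T @ (s / out_deg))
        s /= s.sum()
    return s

# sketch of an additive-style multiplex ranking: centrality earned in
# layer A biases the teleport step of the walk on layer B
layer_a = np.array([[0, 1, 1], [0, 0, 1], [0, 0, 0]], dtype=float)
layer_b = np.array([[0, 1, 0], [1, 0, 0], [1, 1, 0]], dtype=float)
x = pagerank(layer_a)                         # importance in layer A
multiplex = pagerank(layer_b, personalization=x)
```

Comparing `multiplex` with `pagerank(layer_b)` alone shows how the inter-layer coupling can reorder nodes relative to a single-layer analysis, the paper's central point.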
Ranking Refinement via Relevance Feedback in Geographic Information Retrieval
NASA Astrophysics Data System (ADS)
Villatoro-Tello, Esaú; Villaseñor-Pineda, Luis; Montes-Y-Gómez, Manuel
Recent evaluation results from Geographic Information Retrieval (GIR) indicate that current information retrieval methods are effective at retrieving relevant documents for geographic queries, but have severe difficulties generating a pertinent ranking of them. Motivated by these results, in this paper we present a novel re-ranking method which employs information obtained through a relevance feedback process to perform ranking refinement. Experiments show that the proposed method improves the ranking generated by a traditional IR engine, as well as the results of traditional re-ranking strategies such as query expansion via relevance feedback.
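A classic concrete instance of ranking refinement via relevance feedback is Rocchio-style query modification followed by cosine re-scoring; the method in the paper differs in detail, so the sketch below (with invented term vectors) is only illustrative of the general mechanism.

```python
import numpy as np

def rerank(doc_vectors, query, relevant_idx, alpha=1.0, beta=0.75):
    """Rocchio-style ranking refinement: move the query towards the
    centroid of the documents marked relevant, then re-score all
    retrieved documents by cosine similarity to the refined query."""
    centroid = doc_vectors[relevant_idx].mean(axis=0)
    q_new = alpha * query + beta * centroid
    sims = doc_vectors @ q_new / (
        np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(q_new))
    return np.argsort(-sims)                 # best first

# toy term vectors: docs 0 and 2 share the feedback topic, doc 1 does not
docs = np.array([[1.0, 0.9, 0.0],
                 [0.9, 0.0, 1.0],
                 [1.0, 1.0, 0.1]])
order = rerank(docs, query=np.array([1.0, 0.0, 0.0]), relevant_idx=[2])
```

After the feedback step the off-topic document drops to the bottom of the ranking, the kind of refinement the re-ranking method aims for.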
Relationships between walking and percentiles of adiposity inolder and younger men
Williams, Paul T.
2005-06-01
To assess the relationship of weekly walking distance to percentiles of adiposity in elders (age ≥ 75 years), seniors (55 ≤ age < 75 years), middle-aged men (35 ≤ age < 55 years), and younger men (18 ≤ age < 35 years old). Cross-sectional analyses of baseline questionnaires from 7,082 male participants of the National Walkers' Health Study. The walkers' BMIs were inversely and significantly associated with walking distance (kg/m² per km/wk) in elders (slope ± SE: -0.032 ± 0.008), seniors (-0.045 ± 0.005), and middle-aged men (-0.037 ± 0.007), as were their waist circumferences (-0.091 ± 0.025, -0.045 ± 0.005, and -0.091 ± 0.015 cm per km/wk, respectively), and these slopes remained significant when adjusted statistically for reported weekly servings of meat, fish, fruit, and alcohol. The declines in BMI associated with walking distance were greater at the higher than at the lower percentiles of the BMI distribution. Specifically, compared to the decline at the 10th BMI percentile, the decline in BMI at the 90th percentile was 5.1-fold greater in elders, 5.9-fold greater in seniors, and 6.7-fold greater in middle-aged men. The declines in waist circumference associated with walking distance were also greater among men with broader waistlines. Exercise-induced weight loss (or self-selection) causes an inverse relationship between adiposity and walking distance in men 35 and older that is substantially greater among fatter men.
Gregor, Ivan; Dröge, Johannes; Schirmer, Melanie; Quince, Christopher
2016-01-01
Background. Metagenomics is an approach for characterizing environmental microbial communities in situ; it allows their functional and taxonomic characterization and the recovery of sequences from uncultured taxa. This is often achieved by a combination of sequence assembly and binning, where sequences are grouped into 'bins' representing taxa of the underlying microbial community. Assignment to low-ranking taxonomic bins is an important challenge for binning methods, as is scalability to Gb-sized datasets generated with deep sequencing techniques. One of the best available methods for recovering species bins from deep-branching phyla is the expert-trained PhyloPythiaS package, in which a human expert decides on the taxa to incorporate in the model and identifies 'training' sequences based on marker genes directly from the sample. Due to the manual effort involved, this approach does not scale to multiple metagenome samples and requires substantial expertise, which researchers who are new to the area may lack. Results. We have developed PhyloPythiaS+, a successor to our PhyloPythia(S) software. The new (+) component performs the work previously done by the human expert. PhyloPythiaS+ also includes a new k-mer counting algorithm, which accelerated the simultaneous counting of 4-6-mers used for taxonomic binning 100-fold and reduced the overall execution time of the software by a factor of three. Our software makes it possible to analyze Gb-sized metagenomes with inexpensive hardware and to recover species- or genus-level bins with low error rates in a fully automated fashion. PhyloPythiaS+ was compared to MEGAN, taxator-tk, Kraken and the generic PhyloPythiaS model. The results showed that PhyloPythiaS+ performs especially well for samples originating from novel environments in comparison to the other methods. Availability. PhyloPythiaS+ in a virtual machine is available for installation under Windows, Unix systems or OS X at: https://github.com/algbioi/ppsp/wiki. PMID
Trend estimates of AERONET-observed and model-simulated AOT percentiles between 1993 and 2013
NASA Astrophysics Data System (ADS)
Yoon, Jongmin; Pozzer, Andrea; Chang, Dong Yeong; Lelieveld, Jos
2016-04-01
Recent Aerosol Optical Thickness (AOT) trend studies used monthly or annual arithmetic means that discard details of the generally right-skewed AOT distributions. Potentially, such results can be biased by extreme values (including outliers). This study additionally uses percentiles (i.e., the 5%, 25%, 50%, 75% and 95% levels of the monthly cumulative distributions fitted to Aerosol Robotic Network (AERONET)-observed and ECHAM/MESSy Atmospheric Chemistry (EMAC) model-simulated AOTs) that are less affected by outliers caused by measurement error, cloud contamination and occasional extreme aerosol events. Since the limited statistical representativeness of monthly percentiles and means can lead to bias, this study adopts the number of observations as a weighting factor, which improves the statistical robustness of trend estimates. By analyzing the aerosol composition of AERONET-observed and EMAC-simulated AOTs in selected regions of interest, we distinguish the dominant aerosol types and investigate the causes of regional AOT trends. The simulated and observed trends are generally consistent, with a high correlation coefficient (R = 0.89) and small bias (slope ± 2σ = 0.75 ± 0.19). A significant decrease in EMAC-decomposed AOTs by water-soluble compounds and black carbon is found over the USA and the EU due to environmental regulation. In particular, a clear reversal in the AERONET AOT percentile trends is found over the USA, probably related to the AOT diurnal cycle and the frequency of wildfires.
ERIC Educational Resources Information Center
Blyth, Kathryn
2014-01-01
This article considers the Australian entry score system, the Australian Tertiary Admissions Rank (ATAR), and its usage as a selection mechanism for undergraduate places in Australian higher education institutions and asks whether its role as the main selection criterion will continue with the introduction of demand driven funding in 2012.…
Quantum navigation and ranking in complex networks.
Sánchez-Burillo, Eduardo; Duch, Jordi; Gómez-Gardeñes, Jesús; Zueco, David
2012-01-01
Complex networks are formal frameworks capturing the interdependencies between the elements of large systems and databases. This formalism allows the use of network navigation methods to rank the importance that each constituent has in the global organization of the system. A key example is PageRank navigation, which is at the core of the most-used search engine on the World Wide Web. Inspired by this classical algorithm, we define a quantum navigation method providing a unique ranking of the elements of a network. We analyze the convergence of quantum navigation to the stationary rank of networks and show that quantumness decreases the number of navigation steps before convergence. In addition, we show that quantum navigation can resolve degeneracies found in classical ranks. By implementing the quantum algorithm in real networks, we confirm these improvements and show that quantum coherence unveils new hierarchical features about the global organization of complex systems. PMID:22930671
Outflanking the Rankings Industry
ERIC Educational Resources Information Center
McGuire, Patricia
2007-01-01
In this article, the author argues that American higher education is allowing itself to be held hostage by the rankings industry, which can lead institutions to consider actions harmful to the public interest and encourage the public's infatuation with celebrity at the expense of substance. Instead of sitting quietly by during the upcoming ratings…
The basis of the ranking is 10 monitoring studies chosen to represent "typical" concentrations of the pollutants found indoors. The studies were conducted in the United States during the last 15 years, and mainly focused on concentrations of pollutants in homes, schools, and off...
ERIC Educational Resources Information Center
Change, 1992
1992-01-01
Ten higher education professionals and one college senior comment on the "U.S. News and World Report" rankings of doctoral programs in six liberal arts disciplines. The authors' response to one set of comments and the comments of an executive editor from the magazine are also included. (MSE)
Diversifying customer review rankings.
Krestel, Ralf; Dokoohaki, Nima
2015-06-01
E-commerce Web sites owe much of their popularity to consumer reviews accompanying product descriptions. On-line customers spend hours going through heaps of textual reviews to decide which products to buy. At the same time, each popular product has thousands of user-generated reviews, making it impossible for a buyer to read everything. Current approaches to displaying reviews to users or recommending an individual review for a product are based on the recency or helpfulness of each review. In this paper, we present a framework to rank product reviews by optimizing the coverage of the ranking with respect to sentiment or aspects, or by summarizing all reviews with the top-K reviews in the ranking. To accomplish this, we make use of the assigned star rating for a product as an indicator of a review's sentiment polarity, and compare bag-of-words (language model) with topic models (latent Dirichlet allocation) as a means to represent aspects. Our evaluation on manually annotated review data from a commercial review Web site demonstrates the effectiveness of our approach, outperforming plain recency ranking by 30% and obtaining best results by combining language and topic model representations. PMID:25795511
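The coverage-optimizing ranking idea above can be illustrated with a greedy top-K selection over review aspects. In the paper, aspects come from a topic model and carry weights, so treating them as plain sets here is a simplifying assumption.

```python
def greedy_coverage_ranking(review_aspects, k):
    """Greedily pick k reviews that together cover as many distinct aspects
    as possible: at each step, take the review adding the most aspects not
    yet covered by the reviews already chosen."""
    covered, ranking = set(), []
    candidates = dict(review_aspects)       # review id -> set of aspects
    for _ in range(min(k, len(candidates))):
        best = max(candidates, key=lambda r: len(candidates[r] - covered))
        ranking.append(best)
        covered |= candidates.pop(best)
    return ranking, covered
```

Greedy selection is the standard approximation for this kind of set-cover objective; a top-K summary chosen this way avoids showing the buyer several reviews that all discuss the same aspect.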
College Rankings. ERIC Digest.
ERIC Educational Resources Information Center
Holub, Tamara
The popularity of college ranking surveys published by "U.S. News and World Report" and other magazines is indisputable, but the methodologies used to measure the quality of higher education institutions have come under fire by scholars and college officials. Criticisms have focused on methodological flaws, such as failure to consider differences…
Ranking Adverse Drug Reactions With Crowdsourcing
Gottlieb, Assaf; Hoehndorf, Robert; Dumontier, Michel
2015-01-01
Background There is no publicly available resource that provides the relative severity of adverse drug reactions (ADRs). Such a resource would be useful for several applications, including assessment of the risks and benefits of drugs and improvement of patient-centered care. It could also be used to triage predictions of drug adverse events. Objective The intent of the study was to rank ADRs according to severity. Methods We used Internet-based crowdsourcing to rank ADRs according to severity. We assigned 126,512 pairwise comparisons of ADRs to 2589 Amazon Mechanical Turk workers and used these comparisons to rank order 2929 ADRs. Results There is good correlation (rho=.53) between the mortality rates associated with ADRs and their rank. Our ranking highlights severe drug-ADR predictions, such as cardiovascular ADRs for raloxifene and celecoxib. It also triages genes associated with severe ADRs such as epidermal growth-factor receptor (EGFR), associated with glioblastoma multiforme, and SCN1A, associated with epilepsy. Conclusions ADR ranking lays a first stepping stone in personalized drug risk assessment. Ranking of ADRs using crowdsourcing may have useful clinical and financial implications, and should be further investigated in the context of health care decision making. PMID:25800813
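A minimal illustration of turning crowd-sourced pairwise severity judgments into a rank order, scoring each ADR by its fraction of pairwise "wins". The study's actual aggregation model is more sophisticated, so this is only a sketch.

```python
from collections import defaultdict

def rank_from_pairwise(comparisons):
    """Order items by their fraction of pairwise 'wins'. `comparisons` is a
    list of (winner, loser) pairs, e.g. an ADR judged more severe than
    another by a crowd worker. Returns items from most to least severe."""
    wins, total = defaultdict(int), defaultdict(int)
    for winner, loser in comparisons:
        wins[winner] += 1
        total[winner] += 1
        total[loser] += 1
    return sorted(total, key=lambda item: wins[item] / total[item], reverse=True)
```

With enough overlapping comparisons per item, even this simple win-rate statistic recovers a sensible severity ordering; model-based alternatives (e.g. Bradley-Terry-style fits) additionally handle sparse or inconsistent judgments.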
The role of entropy in word ranking
NASA Astrophysics Data System (ADS)
Mehri, Ali; Darooneh, Amir H.
2011-09-01
Entropy, as a measure of complexity in systems, has been applied to ranking the words in human-written texts. We introduce a novel approach to evaluating accuracy for retrieved indices, and provide an illustrative comparison between the proposed entropic metrics and some other keyword-extraction methods. It appears that some of the discussed metrics exploit similar features for word ranking in text. This work recommends entropy as a systematic measure in text mining.
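One common entropic keyword measure, consistent with the idea above, scores a word by the Shannon entropy of its occurrences across equal-sized segments of the text: topical words cluster in a few segments (low entropy), while function words spread evenly (entropy near log2 of the number of segments). The segmentation scheme here is an illustrative assumption, not necessarily the exact metric used by the authors.

```python
import math

def word_entropy(text, word, n_parts=10):
    """Shannon entropy (in bits) of a word's distribution over n_parts
    equal-length segments of the text. Lower entropy = more clustered,
    hence more likely to be a content-bearing keyword."""
    tokens = text.lower().split()
    size = max(1, len(tokens) // n_parts)
    counts = [tokens[i * size:(i + 1) * size].count(word) for i in range(n_parts)]
    total = sum(counts)
    if total == 0:
        return 0.0
    probs = [c / total for c in counts if c]
    return -sum(p * math.log2(p) for p in probs)
```

Ranking candidate words by ascending entropy (often after normalizing against a shuffled text) then surfaces keywords, since shuffling destroys exactly the clustering that this score measures.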
A note on rank reduction in sparse multivariate regression
Chen, Kun; Chan, Kung-Sik
2016-01-01
A reduced-rank regression with sparse singular value decomposition (RSSVD) approach was proposed by Chen et al. for conducting variable selection in a reduced-rank model. To jointly model the multivariate response, the method efficiently constructs a prespecified number of latent variables as sparse linear combinations of the predictors. Here, we generalize the method to also perform rank reduction, and enable its usage in reduced-rank vector autoregressive (VAR) modeling to perform automatic rank determination and order selection. We show that in the context of stationary time-series data, the generalized approach correctly identifies both the model rank and the sparse dependence structure between the multivariate response and the predictors, with probability one asymptotically. We demonstrate the efficacy of the proposed method by simulations and by analyzing a macroeconomic multivariate time series using a reduced-rank VAR model. PMID:26997938
Let your users do the ranking.
Spomer, Judith E.
2010-12-01
Ranking search results is a thorny issue for enterprise search. Search engines rank results using a variety of sophisticated algorithms, but users still complain that search can't ever seem to find anything useful or relevant! The challenge is to provide results that are ranked according to the users' definition of relevancy. Sandia National Laboratories has enhanced its commercial search engine to discover user preferences, re-ranking results accordingly. Immediate positive impact was achieved by modeling historical data consisting of user queries and subsequent result clicks. New data is incorporated into the model daily. An important benefit is that results improve naturally and automatically over time as a function of user actions. This session presents the method employed, how it was integrated with the search engine, metrics illustrating the subsequent improvement to the users' search experience, and plans for implementation with Sandia's FAST for SharePoint 2010 search engine.
ERIC Educational Resources Information Center
CURRY, JOHN
In order to establish the feasibility of a cut-off score for entrance into teacher education programs at North Texas State University, scores of 1,346 students who either placed above the 80th percentile (N=672) or below the 20th percentile (N=674) on either the School and College Ability Test or the Watson-Glaser Test of Critical Thinking were…
University Rankings and Social Science
ERIC Educational Resources Information Center
Marginson, Simon
2014-01-01
University rankings widely affect the behaviours of prospective students and their families, university executive leaders, academic faculty, governments and investors in higher education. Yet the social science foundations of global rankings receive little scrutiny. Rankings that simply recycle reputation without any necessary connection to real…
Hierarchical Rank Aggregation with Applications to Nanotoxicology
Telesca, Donatello; Rallo, Robert; George, Saji; Xia, Tian; Nel, André E.
2014-01-01
The development of high throughput screening (HTS) assays in the field of nanotoxicology provides new opportunities for the hazard assessment and ranking of engineered nanomaterials (ENMs). It is often necessary to rank lists of materials based on multiple risk assessment parameters, often aggregated across several measures of toxicity and possibly spanning an array of experimental platforms. Bayesian models coupled with the optimization of loss functions have been shown to provide an effective framework for conducting inference on ranks. In this article we present various loss-function-based ranking approaches for comparing ENMs within experiments and toxicity parameters. Additionally, we propose a framework for the aggregation of ranks across different sources of evidence, while allowing for differential weighting of this evidence based on its reliability and importance in risk ranking. We apply these methods to high throughput toxicity data on two human cell lines, exposed to eight different nanomaterials, and measured in relation to four cytotoxicity outcomes. This article has supplementary material online. PMID:24839387
Energy Science and Technology Software Center (ESTSC)
2009-08-13
This database application (commonly called the Supermodel) provides a repository for managing critical facility/project information. It allows the user to subjectively and objectively assess key criteria, quantify project risks, develop ROM cost estimates, and determine facility/project end states, ultimately performing risk-based modeling to rank facilities/projects by risk and to sequence project schedules. It provides an optimized recommended sequencing/scheduling of these projects that maximizes the S&M cost savings of performing closure projects, which benefits all stakeholders.
Ordinal Distance Metric Learning for Image Ranking.
Li, Changsheng; Liu, Qingshan; Liu, Jing; Lu, Hanqing
2015-07-01
Recently, distance metric learning (DML) has attracted much attention in image retrieval, but most previous methods only work for image classification and clustering tasks. In this brief, we focus on designing ordinal DML algorithms for image ranking tasks, by which the rank levels among the images can be well measured. We first present a linear ordinal Mahalanobis DML model that tries to preserve both the local geometry information and the ordinal relationship of the data. Then, we develop a nonlinear DML method by kernelizing the above model, to handle real-world image data with nonlinear structures. To further improve the ranking performance, we finally derive a multiple-kernel DML approach, inspired by the idea of multiple-kernel learning, that performs different kernel operators on different kinds of image features. Extensive experiments on four benchmarks demonstrate the power of the proposed algorithms against some related state-of-the-art methods. PMID:25163071
Texture classification by local rank correlation
NASA Technical Reports Server (NTRS)
Harwood, D.; Subbarao, M.; Davis, L. S.
1985-01-01
A new approach to texture classification based on local rank correlation is proposed here. Its performance is compared with Laws' method which uses local convolution with feature masks. In the experiments, texture samples are classified based on their distribution of local statistics, either rank correlations or convolutions. The new method achieves generally optimal classification rates. It appears to be more robust because local order statistics are unaffected by local sample differences due to monotonic shifts of texture gray values and are less sensitive to noise.
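The local rank correlation underlying this texture method can be illustrated with a plain Spearman correlation between two pixel windows: because only ranks are compared, the score is unchanged by any monotonic remapping of gray values, which is the robustness property the abstract mentions. Ties are broken by position here for simplicity.

```python
def ranks(values):
    """0-based rank of each element (ties broken by position)."""
    order = sorted(range(len(values)), key=values.__getitem__)
    r = [0] * len(values)
    for rank, idx in enumerate(order):
        r[idx] = rank
    return r

def spearman(a, b):
    """Spearman rank correlation between two equal-length pixel windows:
    the Pearson correlation of their rank vectors, invariant to monotonic
    shifts of gray values."""
    ra, rb = ranks(a), ranks(b)
    n = len(a)
    mean = (n - 1) / 2                     # rank vectors share mean/variance
    cov = sum((x - mean) * (y - mean) for x, y in zip(ra, rb))
    var = sum((x - mean) ** 2 for x in ra)
    return cov / var
```

Applying a monotonic brightness change to one window (e.g. multiplying every gray value by 10) leaves the correlation at exactly 1.0, whereas a convolution-based feature would change.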
Taber, Daniel R; Stevens, June; Poole, Charles; Maciejewski, Matthew L; Evenson, Kelly R; Ward, Dianne S
2012-02-01
Evidence is conflicting as to whether youth obesity prevalence has reached a plateau in the United States overall. Trends vary by state, and experts recommend exploring whether trends in weight-related behaviors are associated with changes in weight status trends. Thus, our objective was to estimate between-state variation in time trends of adolescent body mass index (BMI) percentile and weight-related behaviors from 2001 to 2007. A time series design combined cross-sectional Youth Risk Behavior Survey data from 272,044 adolescents in 29 states from 2001 to 2007. Self-reported height, weight, sports participation, physical education, television viewing, and daily consumption of 100% fruit juice, milk, and fruits and vegetables were collected. Linear mixed models estimated state variance in time trends of behaviors and BMI percentile. Across states, BMI percentile trends were consistent despite differences in behavioral trends. Boys experienced a modest linear increase in BMI percentile (β = 0.18, 95% CI: 0.07, 0.30); girls experienced a non-linear increase, as the rate of increase declined over time from 1.02 units in 2001-2002 (95% CI: 0.68, 1.36) to 0.23 units in 2006-2007 (95% CI: -0.09, 0.56). States in which BMI percentile decreased experienced a greater decrease in TV viewing than states where BMI percentile increased. Otherwise, states with disparate BMI percentile trends did not differ with respect to behaviors. Future research should explore the role of other behaviors (e.g., soda consumption), measurement units (e.g., portion size), and societal trends (e.g., urban sprawl) on state and national adiposity trends. PMID:21773818
MacLean, Alair
2010-01-01
This article examines the effects of peacetime cold war military service on the life course according to four potentially overlapping theories that state that military service (1) was a disruption, (2) was a positive turning point, (3) allowed veterans to accumulate advantage, and (4) was an agent of social reproduction. The article argues that the extent to which the effect of military service on veterans' lives corresponds with one or another of the preceding theories depends on historical shifts in three dimensions: conscription, conflict, and benefits. Military service during the peacetime draft era of the late 1950s had a neutral effect on the socioeconomic attainment of enlisted veterans. However, it had a positive effect on veterans who served as officers, which partly stemmed from status reproduction and selection. Yet net of pre-service and educational differences by rank, officers in this peacetime draft era were still able to accumulate advantage. PMID:20842210
Osei, E.K.; Amoh, G.E.A.; Schandorf, C.
1997-02-01
The study of people's perception and acceptability of risk is important in understanding the public reaction to technology and its environmental and health impact. The perception of risk depends on several factors, including early experiences, education, controllability of the risk, the type of consequence, and the type of person(s) who makes the judgment. This paper reviews some of the main factors influencing people's perception and acceptability of risk. Knowledge about which factors influence the perception of risk may enhance the understanding of different points of view brought into risk controversies, improve risk communication, and facilitate policy making. Results from a risk ranking by perception survey conducted in Ghana are also presented. 18 refs., 8 figs., 1 tab.
Development of a Three-Dimensional Finite Element Chest Model for the 5th Percentile Female.
Kimpara, Hideyuki; Lee, Jong B; Yang, King H; King, Albert I; Iwamoto, Masami; Watanabe, Isao; Miki, Kazuo
2005-11-01
Several three-dimensional (3D) finite element (FE) models of the human body have been developed to elucidate injury mechanisms due to automotive crashes. However, these models are mainly focused on the 50th percentile male. As a first step towards a better understanding of injury biomechanics in the small female, a 3D FE model of a 5th percentile female human chest (FEM-5F) has been developed and validated against experimental data obtained from two sets of frontal impact, one set of lateral impact, two sets of oblique impact and a series of ballistic impacts. Two previous FE models, a small female Total HUman Model for Safety (THUMS-AF05) occupant version 1.0Beta (Kimpara et al. 2002) and the Wayne State University Human Thoracic Model (WSUHTM, Wang 1995 and Shah et al. 2001), were integrated and modified for this model development. The model incorporated not only geometrical gender differences, such as the location of the internal organs and the structure of the bony skeleton, but also the biomechanical differences of the ribs due to gender. It includes a detailed description of the sternum, ribs, costal cartilage, thoracic spine, skin, superficial muscles, intercostal muscles, heart, lung, diaphragm, major blood vessels and simplified abdominal internal organs, and has been validated against a series of six cadaveric experiments on the small female reported by Nahum et al. (1970), Kroell et al. (1974), Viano (1989), Talantikite et al. (1998) and Wilhelm (2003). Results predicted by the model were well matched to these experimental data for a range of impact speeds and impactor masses. More research is needed in order to increase the accuracy of predicting rib fractures so that the mechanisms responsible for small female injury can be more clearly defined. PMID:17096277
Weber, G. F.; Laudal, D. L.
1989-01-01
This work is a compilation of reports on ongoing research at the University of North Dakota. Topics include: Control Technology and Coal Preparation Research (SOx/NOx control, waste management), Advanced Research and Technology Development (turbine combustion phenomena, combustion inorganic transformation, coal/char reactivity, liquefaction reactivity of low-rank coals, gasification ash and slag characterization, fine particulate emissions), Combustion Research (fluidized bed combustion, beneficiation of low-rank coals, combustion characterization of low-rank coal fuels, diesel utilization of low-rank coals), Liquefaction Research (low-rank coal direct liquefaction), and Gasification Research (hydrogen production from low-rank coals, advanced wastewater treatment, mild gasification, color and residual COD removal from Synfuel wastewaters, Great Plains Gasification Plant, gasifier optimization).
Wikipedia ranking of world universities
NASA Astrophysics Data System (ADS)
Lages, José; Patt, Antoine; Shepelyansky, Dima L.
2016-03-01
We use the directed networks between articles of 24 Wikipedia language editions to produce the Wikipedia Ranking of World Universities (WRWU) using the PageRank, 2DRank and CheiRank algorithms. This approach makes it possible to incorporate various cultural views on world universities using mathematical statistical analysis independent of cultural preferences. The Wikipedia ranking of the top 100 universities shows about 60% overlap with the Shanghai university ranking, demonstrating the reliability of this approach. At the same time, WRWU incorporates knowledge accumulated across all 24 Wikipedia editions, giving stronger weight to historically important universities and leading to a different estimation of the efficiency of world countries in university education. The historical development of university ranking is analyzed over ten centuries of their history.
MRI Contrasts in High Rank Rotating Frames
Liimatainen, Timo; Hakkarainen, Hanne; Mangia, Silvia; Huttunen, Janne M.J.; Storino, Christine; Idiyatullin, Djaudat; Sorce, Dennis; Garwood, Michael; Michaeli, Shalom
2014-01-01
Purpose MRI relaxation measurements are performed in the presence of a fictitious magnetic field in the recently described technique known as RAFF (Relaxation Along a Fictitious Field). This method operates in the 2nd rotating frame (rank n = 2) by utilizing a non-adiabatic sweep of the radiofrequency effective field to generate the fictitious magnetic field. In the present study, the RAFF method is extended for generating MRI contrasts in rotating frames of ranks 1 ≤ n ≤ 5. The developed method is entitled RAFF in rotating frame of rank n (RAFFn). Methods RAFFn pulses were designed to generate fictitious fields that allow locking of magnetization in rotating frames of rank n. Contrast generated with RAFFn was studied using Bloch-McConnell formalism together with experiments on human and rat brains. Results Tolerance to B0 and B1 inhomogeneities and reduced specific absorption rate with increasing n in RAFFn were demonstrated. Simulations of exchange-induced relaxations revealed enhanced sensitivity of RAFFn to slow exchange. Consistent with such feature, an increased grey/white matter contrast was observed in human and rat brain as n increased. Conclusion RAFFn is a robust and safe rotating frame relaxation method to access slow molecular motions in vivo. PMID:24523028
A low rank approach to automatic differentiation.
Abdel-Khalik, H. S.; Hovland, P. D.; Lyons, A.; Stover, T. E.; Utke, J.; Mathematics and Computer Science; North Carolina State Univ.; Univ. of Chicago
2008-01-01
This manuscript introduces a new approach for increasing the efficiency of automatic differentiation (AD) computations for estimating the first order derivatives comprising the Jacobian matrix of a complex large-scale computational model. The objective is to approximate the entire Jacobian matrix with minimized computational and storage resources. This is achieved by finding low rank approximations to a Jacobian matrix via the Efficient Subspace Method (ESM). Low rank Jacobian matrices arise in many of today's important scientific and engineering problems, e.g. nuclear reactor calculations, weather climate modeling, geophysical applications, etc. A low rank approximation replaces the original Jacobian matrix J (whose size is dictated by the size of the input and output data streams) with matrices of much smaller dimensions (determined by the numerical rank of the Jacobian matrix). This process reveals the rank of the Jacobian matrix and can be carried out by ESM via a series of r randomized matrix-vector products of the form Jq and J^T ω, which can be evaluated by the AD forward and reverse modes, respectively.
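A hedged sketch of how r randomized products Jq (forward mode) and J^T ω (reverse mode) can recover a low-rank Jacobian, in the spirit of the subspace method described above. This is a generic randomized range-finder, not necessarily ESM itself, and `jvp`/`vjp` are hypothetical callables standing in for an AD tool's forward and reverse modes.

```python
import numpy as np

def low_rank_jacobian(jvp, vjp, n_in, n_out, r, seed=0):
    """Randomized rank-r factorization J ~= Q @ B of a Jacobian reachable
    only through forward-mode products jvp(q) = J q and reverse-mode
    products vjp(w) = J^T w, using r products of each kind."""
    rng = np.random.default_rng(seed)
    Omega = rng.standard_normal((n_in, r))
    # r forward-mode products: sample the range of J
    Y = np.column_stack([jvp(Omega[:, i]) for i in range(r)])
    Q, _ = np.linalg.qr(Y)                 # orthonormal basis for range(J)
    # r reverse-mode products: B = Q^T J, row by row
    B = np.column_stack([vjp(Q[:, i]) for i in range(Q.shape[1])]).T
    return Q, B                            # J ~= Q @ B
```

If the true numerical rank of J is at most r, the factorization is exact up to round-off; the whole m-by-n Jacobian never has to be formed, which is the storage saving the abstract describes.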
Ranking nodes in growing networks: When PageRank fails
NASA Astrophysics Data System (ADS)
Mariani, Manuel Sebastian; Medo, Matúš; Zhang, Yi-Cheng
2015-11-01
PageRank is arguably the most popular ranking algorithm, being applied in real systems ranging from information to biological and infrastructure networks. Despite its outstanding popularity and broad use in different areas of science, the relation between the algorithm's efficacy and the properties of the network on which it acts has not yet been fully understood. We study here PageRank's performance on a network model supported by real data, and show that realistic temporal effects make PageRank fail to individuate the most valuable nodes for a broad range of model parameters. Results on real data are in qualitative agreement with our model-based findings. This failure of PageRank reveals that the static approach to information filtering is inappropriate for a broad class of growing systems, and suggests that time-dependent algorithms based on the temporal linking patterns of these systems are needed to better rank the nodes.
NASA Astrophysics Data System (ADS)
Audigane, P.; Rohmer, J.; Manceau, J. C.
2014-12-01
The long-term fate of mobile CO2 remaining after the injection period is a crucial issue for regulators and operators. There is a need to properly evaluate the amount of gas free to migrate and to estimate fluid movements over long time scales. One difficulty is managing the computational time needed to address the large temporal and spatial scales of the problem; a second limitation is the large level of uncertainty associated with the computed predictions. A variance-based global sensitivity analysis is proposed to assess the importance ranking of uncertainty sources with regard to the behavior of the mobile CO2 during the post-injection period. We consider three output parameters that characterize the location and quantity of mobile CO2, accounting for residual and dissolution trapping. To circumvent both (i) the large number of computationally intensive reservoir-scale flow simulations and (ii) the different nature of the uncertainties, whether linked to parameters (continuous variables) or to modeling assumptions (scenario-like variables), we propose to use advanced ACOSSO-type meta-modeling techniques. The feasibility of the approach is demonstrated using a potential site for CO2 storage in the Paris basin (France), for which the amount, nature and quality of the data at disposal, and the associated uncertainties, can be seen as representative of a storage project at the post-screening stage. Special attention has been paid to confronting the results of the sensitivity analysis with the physical interpretation of the processes.
Stenner, R.D.; Cramer, K.H.; Higley, K.A.; Jette, S.J.; Lamar, D.A.; McLaughlin, T.J.; Sherwood, D.R.; Van Houten, N.C.
1988-10-01
The purpose of this report is to formally document the individual site Hazard Ranking System (HRS) evaluations conducted as part of the preliminary assessment/site inspection (PA/SI) activities at the US Department of Energy (DOE) Hanford Site. These activities were carried out pursuant to the DOE orders that describe the Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA) Program addressing the cleanup of inactive waste sites. These orders incorporate the US Environmental Protection Agency methodology, which is based on the Superfund Amendments and Reauthorization Act of 1986 (SARA). The methodology includes six parts: PA/SI, remedial investigation/feasibility study, record of decision, design and implementation of remedial action, operation and monitoring, and verification monitoring. Volume 1 of this report discusses the CERCLA inactive waste-site evaluation process, assumptions, and results of the HRS methodology employed. Volume 2 presents the data on the individual CERCLA engineered-facility sites at Hanford, as contained in the Hanford Inactive Site Surveillance (HISS) Data Base. Volume 3 presents the data on the individual CERCLA unplanned-release sites at Hanford, as contained in the HISS Data Base. 34 refs., 43 figs., 47 tabs.
NASA Astrophysics Data System (ADS)
Hosking, Michael Robert
This dissertation improves an analyst's use of simulation by offering improvements in the utilization of kriging metamodels. There are three main contributions. The first is an analysis of what constitutes a good experimental design for practical (non-toy) problems when using a kriging metamodel. The second is an explanation and demonstration of how reduced rank decompositions can improve the performance of kriging, referred to here as reduced rank kriging. The third is the development of an extension of reduced rank kriging, called omni-rank kriging, which solves an open question regarding the use of reduced rank kriging in practice. Finally, these results are demonstrated on two case studies. The first contribution focuses on experimental design. Sequential designs are generally known to be more efficient than "one shot" designs; however, they require some sort of pilot design on which the sequential stage can be based. We seek good initial designs for these pilot studies, as well as designs that will be effective if no sequential stage follows. We test a wide variety of designs over a small set of test-bed problems. Our findings indicate that analysts should take advantage of any prior information they have about their problem's shape and/or their goals in metamodeling. In the event of a total lack of information, we find that Latin hypercube designs are robust default choices. Our work is most distinguished by its attention to higher levels of dimensionality. The second contribution introduces and explains an alternative method for kriging when there is noise in the data, which we call reduced rank kriging. Reduced rank kriging is based on a reduced rank decomposition that artificially smooths the kriging weights, similar to a nugget effect. Our primary focus is showing empirically how the reduced rank decomposition propagates through kriging. In addition, we show further evidence for our
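The baseline that reduced rank kriging modifies can be sketched as ordinary zero-mean Gaussian-process regression with a nugget term, the classical device for smoothing the kriging weights. This is a minimal illustration with hypothetical kernel and parameters, not the dissertation's reduced-rank or omni-rank variants.

```python
import numpy as np

def kriging_predict(X, y, Xnew, length=0.2, nugget=1e-2):
    # Simple kriging (zero-mean GP regression) with a squared-exponential
    # kernel; the nugget term plays the smoothing role discussed in the
    # text. All parameter values here are illustrative.
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / length ** 2)
    K = k(X, X) + nugget * np.eye(len(X))
    return k(Xnew, X) @ np.linalg.solve(K, y)

# Fit a noise-free sine on 8 design points and predict at x = 0.25,
# where the true value is sin(pi/2) = 1.
X = np.linspace(0, 1, 8)[:, None]
y = np.sin(2 * np.pi * X[:, 0])
pred = kriging_predict(X, y, np.array([[0.25]]))
print(abs(pred[0] - 1.0) < 0.2)
```

A reduced rank approach would replace the full solve of `K` with a solve through a truncated decomposition of `K`, which smooths the weights in a way analogous to enlarging the nugget.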
Obsession with Rankings Goes Global
ERIC Educational Resources Information Center
Labi, Aisha
2008-01-01
A Chinese list of the world's top universities would seem an unlikely concern for French politicians. But this year, France's legislature took aim at the annual rankings produced by Shanghai Jiao Tong University, which claims to list the 500 best universities in the world. The highest-ranked French entry, Universite Pierre et Marie Curie, comes in…
University Rankings in Critical Perspective
ERIC Educational Resources Information Center
Pusser, Brian; Marginson, Simon
2013-01-01
This article addresses global postsecondary ranking systems by using critical-theoretical perspectives on power. This research suggests rankings are at once a useful lens for studying power in higher education and an important instrument for the exercise of power in service of dominant norms in global higher education. (Contains 1 table and 1…
Technical Pitfalls in University Rankings
ERIC Educational Resources Information Center
Bougnol, Marie-Laure; Dulá, Jose H.
2015-01-01
Academicians, experts, and other stakeholders have contributed extensively to the literature on university rankings also known as "league tables". Often the tone is critical usually focused on the subjective aspects of the process; e.g., the list of the universities' attributes used in the rankings, their respective weights, and the size…
University Ranking as Social Exclusion
ERIC Educational Resources Information Center
Amsler, Sarah S.; Bolsmann, Chris
2012-01-01
In this article we explore the dual role of global university rankings in the creation of a new, knowledge-identified, transnational capitalist class and in facilitating new forms of social exclusion. We examine how and why the practice of ranking universities has become widely defined by national and international organisations as an important…
The Ranking Phenomenon and the Experience of Academics in Taiwan
ERIC Educational Resources Information Center
Lo, William Yat Wai
2014-01-01
The primary aim of the paper is to examine how global university rankings have influenced the higher education sector in Taiwan from the perspective of academics. A qualitative case study method was used to examine how university ranking influenced the Taiwanese higher education at institutional and individual levels, respectively, thereby…
Chemical comminution and deashing of low-rank coals
Quigley, David R.
1992-01-01
A method of chemically comminuting a low-rank coal while at the same time increasing the heating value of the coal. A strong alkali solution is added to a low-rank coal to solubilize the carbonaceous portion of the coal, leaving behind the noncarbonaceous mineral matter portion. The solubilized coal is precipitated from solution by a multivalent cation, preferably calcium.
Likelihoods for fixed rank nomination networks.
Hoff, Peter; Fosdick, Bailey; Volfovsky, Alex; Stovel, Katherine
2013-12-01
Many studies that gather social network data use survey methods that lead to censored, missing, or otherwise incomplete information. For example, the popular fixed rank nomination (FRN) scheme, often used in studies of schools and businesses, asks study participants to nominate and rank at most a small number of contacts or friends, leaving the existence of other relations uncertain. However, most statistical models are formulated in terms of completely observed binary networks. Statistical analyses of FRN data with such models ignore the censored and ranked nature of the data and could potentially result in misleading statistical inference. To investigate this possibility, we compare Bayesian parameter estimates obtained from a likelihood for complete binary networks with those obtained from likelihoods that are derived from the FRN scheme, and therefore accommodate the ranked and censored nature of the data. We show analytically and via simulation that the binary likelihood can provide misleading inference, particularly for certain model parameters that relate network ties to characteristics of individuals and pairs of individuals. We also compare these different likelihoods in a data analysis of several adolescent social networks. For some of these networks, the parameter estimates from the binary and FRN likelihoods lead to different conclusions, indicating the importance of analyzing FRN data with a method that accounts for the FRN survey design. PMID:25110586
StructRank: a new approach for ligand-based virtual screening.
Rathke, Fabian; Hansen, Katja; Brefeld, Ulf; Müller, Klaus-Robert
2011-01-24
Screening large libraries of chemical compounds against a biological target, typically a receptor or an enzyme, is a crucial step in the process of drug discovery. Virtual screening (VS) can be seen as a ranking problem that aims to place as many actives as possible at the top of the ranking. As a standard, current Quantitative Structure-Activity Relationship (QSAR) models apply regression methods to predict the level of activity for each molecule and then sort the molecules to establish the ranking. In this paper, we propose a top-k ranking algorithm (StructRank) based on Support Vector Machines that solves the early recognition problem directly. Empirically, we show that our ranking approach outperforms not only regression methods but also RankSVM, a ranking approach recently proposed for QSAR ranking, in terms of actives found. PMID:21166393
Universal scaling in sports ranking
NASA Astrophysics Data System (ADS)
Deng, Weibing; Li, Wei; Cai, Xu; Bulou, Alain; Wang, Qiuping A.
2012-09-01
Ranking is a ubiquitous phenomenon in human society. On the web pages of Forbes, one may find all kinds of rankings, such as the world's most powerful people, the world's richest people, the highest-earning tennis players, and so on. Here we study a specific kind: sports ranking systems in which players' scores and/or prize money are accrued based on their performances in different matches. By investigating 40 data samples spanning 12 different sports, we find that the distributions of scores and/or prize money follow universal power laws, with exponents nearly identical for most sports. To understand the origin of this universal scaling we focus on tennis ranking systems. Checking the data, we find that, for any pair of players, the probability that the higher-ranked player tops the lower-ranked opponent increases with the rank difference between the pair, a dependence well fitted by a sigmoidal function. Using this feature, we propose a simple toy model that simulates the competition of players in different matches. The simulations yield results consistent with the empirical findings. Extensive simulation studies indicate that the model is quite robust with respect to modifications of some parameters.
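A toy model of that kind, where the favorite's win probability is a sigmoidal function of the rank difference, can be sketched in a few lines. The `scale` parameter is a hypothetical choice, not a value fitted to the paper's tennis data.

```python
import math
import random

random.seed(1)

def favorite_win_prob(rank_diff, scale=20.0):
    # Sigmoidal dependence of the favorite's win probability on the
    # rank difference; the scale parameter is a hypothetical choice.
    return 1.0 / (1.0 + math.exp(-rank_diff / scale))

def play_match(rank_high, rank_low):
    # rank_high is the better (numerically smaller) rank.
    diff = rank_low - rank_high
    winner_is_favorite = random.random() < favorite_win_prob(diff)
    return rank_high if winner_is_favorite else rank_low

# Simulate 10,000 matches between the players ranked 1 and 50.
n = 10_000
favorite_wins = sum(play_match(1, 50) == 1 for _ in range(n))
print(favorite_wins / n)
```

With a rank difference of 49 and scale 20, the sigmoid gives the favorite roughly a 92% win probability, and the simulated fraction fluctuates around that value.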
A theory of measuring, electing, and ranking
Balinski, Michel; Laraki, Rida
2007-01-01
The impossibility theorems that abound in the theory of social choice show that there can be no satisfactory method for electing and ranking in the context of the traditional, 700-year-old model. A more realistic model, whose antecedents may be traced to Laplace and Galton, leads to a new theory that avoids all impossibilities with a simple and eminently practical method, “the majority judgement.” It has already been tested. PMID:17496140
ERIC Educational Resources Information Center
Lash, Andrea; Makkonen, Reino; Tran, Loan; Huang, Min
2016-01-01
This study, undertaken at the request of the Nevada Department of Education, examined the stability over years of teacher-level growth scores from the Student Growth Percentile (SGP) model, which many states and districts have selected as a measure of effectiveness in their teacher evaluation systems. The authors conducted a generalizability study…
The Association of Weight Percentile and Motor Vehicle Crash Injury Among 3 to 8 Year Old Children
Zonfrillo, Mark R.; Nelson, Kyle A.; Durbin, Dennis R.; Kallan, Michael J.
2010-01-01
The use of age-appropriate child restraint systems significantly reduces injury and death associated with motor vehicle crashes (MVCs). Pediatric obesity has become a global epidemic. Although recent evidence suggests a possible association between pediatric obesity and MVC-related injury, there are potential misclassifications of body mass index from under-estimated height in younger children. Given this limitation, age- and sex-specific weight percentiles can be used as a proxy of weight status. The specific aim of this study was to determine the association between weight percentile and the risk of significant injury for children 3–8 years in MVCs. This was a cross-sectional study of children aged 3–8 years in MVCs in 16 US states, with data collected via insurance claims records and a telephone survey from 12/1/98–11/30/07. Parent-reported injuries with an abbreviated Injury Scale (AIS) score of 2+ indicated a clinically significant injury. Age- and sex-specific weight percentiles were calculated using pediatric norms. The study sample included 9,327 children aged 3–8 years (weighted to represent 157,878 children), of which 0.96% sustained clinically significant injuries. There was no association between weight percentiles and overall injury when adjusting for restraint type (p=0.71). However, increasing weight percentiles were associated with lower extremity injuries at a level that approached significance (p=0.053). Further research is necessary to describe mechanisms for weight-related differences in injury risk. Parents should continue to properly restrain their children in accordance with published guidelines. PMID:21050602
NASA Technical Reports Server (NTRS)
Lawrence, C.; Somers, J. T.; Baldwin, M. A.; Wells, J. A.; Newby, N.; Currie, N. J.
2014-01-01
NASA spacecraft design requirements for occupant protection combine the Brinkley criteria with injury metrics extracted from anthropomorphic test devices (ATDs). For the ATD injury metrics, the requirements specify the use of the 5th percentile female Hybrid III and the 95th percentile male Hybrid III, each fitted with an articulating pelvis and a straight spine. The articulating pelvis is necessary for the ATD to fit into spacecraft seats, while the straight spine is required because injury metrics for vertical accelerations are better defined for this configuration. Compliance must be demonstrated by physical testing with both ATDs. Before compliance testing can be conducted, extensive modeling and simulation are required to determine appropriate test conditions, simulate conditions not feasible for testing, and assess design features to better ensure compliance testing is successful. While finite element (FE) models are currently available for many of the physical ATDs, there are currently no complete models for either the 5th percentile female or the 95th percentile male Hybrid III with a straight spine and articulating pelvis. The purpose of this work is to assess the accuracy of Livermore Software Technology Corporation's existing FE models of the 5th and 95th percentile ATDs. To perform this assessment, a series of tests will be performed at the Wright-Patterson Air Force Research Lab using their horizontal impact accelerator sled test facility. The ATDs will be placed in the Orion seat with a modified advanced crew escape system (MACES) pressure suit and helmet, and driven with loadings similar to what is expected for the actual Orion vehicle during landing, launch abort, and chute deployment. Test data will be compared to analytical predictions, and modeling uncertainty factors will be determined for each injury metric. Additionally, the test data will be used to
Influence Analysis of Ranking Data.
ERIC Educational Resources Information Center
Poon, Wai-Yin; Chan, Wai
2002-01-01
Developed diagnostic measures to identify observations in Thurstonian models for ranking data that unduly influence parameter estimates obtained by the partition maximum likelihood approach of W. Chan and P. Bender (1998). (SLD)
RANK and RANK ligand expression in primary human osteosarcoma.
Branstetter, Daniel; Rohrbach, Kathy; Huang, Li-Ya; Soriano, Rosalia; Tometsko, Mark; Blake, Michelle; Jacob, Allison P; Dougall, William C
2015-09-01
Receptor activator of nuclear factor kappa-B ligand (RANKL) is an essential mediator of osteoclast formation, function and survival. In patients with solid tumor metastasis to the bone, targeting the bone microenvironment by inhibiting RANKL with denosumab, a fully human monoclonal antibody (mAb) specific to RANKL, has been demonstrated to prevent tumor-induced osteolysis and subsequent skeletal complications. Recently, a prominent functional role for the RANKL pathway has emerged in the primary bone tumor giant cell tumor of bone (GCTB). Expression of both RANKL and RANK is extremely high in GCTB tumors, and denosumab treatment was associated with tumor regression and reduced tumor-associated bone lysis in GCTB patients. To address the potential role of the RANKL pathway in another primary bone tumor, this study assessed human RANKL and RANK expression in human primary osteosarcoma (OS) using specific mAbs, validated and optimized for immunohistochemistry (IHC) or flow cytometry. Our results demonstrate that RANKL expression was observed in the tumor element in 68% of human OS using IHC. However, the staining intensity was relatively low, and only 37% (29/79) of samples exhibited ≥10% RANKL-positive tumor cells. RANK expression was not observed in OS tumor cells. In contrast, RANK expression was clearly observed in other cells within OS samples, including the myeloid osteoclast precursor compartment, osteoclasts and giant osteoclast cells. The intensity and frequency of RANKL and RANK staining in OS samples were substantially less than those observed in GCTB samples. The observation that RANKL is expressed in OS cells themselves suggests that these tumors may mediate an osteoclastic response, and anti-RANKL therapy may potentially be protective against bone pathologies in OS. However, the absence of RANK expression in primary human OS cells suggests that any autocrine RANKL/RANK signaling in human OS tumor cells is not operative, and anti-RANKL therapy
Reduced-Rank Adaptive Filtering Using Krylov Subspace
NASA Astrophysics Data System (ADS)
Burykh, Sergueï; Abed-Meraim, Karim
2003-12-01
A unified view of several recently introduced reduced-rank adaptive filters is presented. As all considered methods use Krylov subspace for rank reduction, the approach taken in this work is inspired from Krylov subspace methods for iterative solutions of linear systems. The alternative interpretation so obtained is used to study the properties of each considered technique and to relate one reduced-rank method to another as well as to algorithms used in computational linear algebra. Practical issues are discussed and low-complexity versions are also included in our study. It is believed that the insight developed in this paper can be further used to improve existing reduced-rank methods according to known results in the domain of Krylov subspace methods.
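The Krylov-subspace idea for rank reduction can be illustrated with a minimal reduced-rank Wiener filter on synthetic data: instead of solving in the full space, the filter is restricted to the span of {p, Rp, R²p, ...}. This is a generic sketch of the common structure the survey describes, not a reproduction of any specific algorithm it covers; the covariance and cross-correlation here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic Wiener filtering problem: R is the observation covariance,
# p the cross-correlation vector (both hypothetical).
N = 32
A = rng.standard_normal((N, N))
R = A @ A.T / N + np.eye(N)   # symmetric positive definite
p = rng.standard_normal(N)

def krylov_basis(R, p, rank):
    # Orthonormal basis of span{p, Rp, ..., R^(rank-1) p}, built by
    # Gram-Schmidt (an Arnoldi/Lanczos-style construction).
    K = np.empty((len(p), rank))
    v = p / np.linalg.norm(p)
    for j in range(rank):
        K[:, j] = v
        v = R @ v
        v -= K[:, :j + 1] @ (K[:, :j + 1].T @ v)
        v /= np.linalg.norm(v)
    return K

# Reduced-rank Wiener filter: solve a 5x5 system instead of 32x32.
K = krylov_basis(R, p, rank=5)
w_reduced = K @ np.linalg.solve(K.T @ R @ K, K.T @ p)
w_full = np.linalg.solve(R, p)  # full-rank Wiener solution
print(np.linalg.norm(w_reduced - w_full) < np.linalg.norm(p))
```

As the rank grows toward N the reduced solution converges to the full Wiener solution, which is exactly the connection to Krylov methods for linear systems that the paper exploits.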
Timpe, R.C.
1995-04-01
Current analytical methods are inadequate for accurately measuring sulfur forms in coal. This task was concerned with developing methods to quantitate and identify the major sulfur forms in coal based on direct measurement, as opposed to present techniques based on indirect measurement and difference values. The focus was on the forms that are least understood and for which the analytical methods have been poorest, i.e., organic and elemental sulfur; improved measurement techniques for sulfatic and pyritic sulfur also need to be developed. A secondary goal was to understand the interconversion of sulfur forms in coal during thermal processing. The task focused on developing selective extraction methods that allow the direct measurement of the sulfur content in each form. Selective extraction methods were therefore needed for the major sulfur forms in coal: elemental, pyritic, sulfatic, and organic. This study continued previous analytical method development for sulfur forms in coal, which resulted in the successful isolation and quantitation of elemental and sulfatic sulfur. Super- and subcritical extractions with methanol or water, with and without additives, were investigated in an attempt to develop analysis methods for the pyritic and organic sulfur forms. Based on these studies, a sequential extraction scheme capable of selectively determining elemental, sulfatic, pyritic, and two forms of organic sulfur is presented here.
Social Bookmarking Induced Active Page Ranking
NASA Astrophysics Data System (ADS)
Takahashi, Tsubasa; Kitagawa, Hiroyuki; Watanabe, Keita
Social bookmarking services have recently made it possible for us to register and share our own bookmarks on the web, and they are attracting attention. These services yield structured data (URL, Username, Timestamp, Tag Set) that represent user interest in web pages. The number of bookmarks is a barometer of a web page's value. Some web pages have many bookmarks, but most of those bookmarks may have been posted far in the past; even if a web page has many bookmarks, its value is therefore not guaranteed. If most of the bookmarks are very old, the page may be obsolete. In this paper, by focusing on the timestamp sequence of social bookmarkings on web pages, we model their activation levels, which represent their current value. Further, we improve our previously proposed ranking method for web search by introducing the activation-level concept. Finally, through experiments, we show the effectiveness of the proposed ranking method.
Bayesian Inference of Natural Rankings in Incomplete Competition Networks
Park, Juyong; Yook, Soon-Hyung
2014-01-01
Competition between a complex system's constituents and a corresponding reward mechanism based on it have profound influence on the functioning, stability, and evolution of the system. But determining the dominance hierarchy or ranking among the constituent parts from the strongest to the weakest – essential in determining reward and penalty – is frequently an ambiguous task due to the incomplete (partially filled) nature of competition networks. Here we introduce the “Natural Ranking,” an unambiguous ranking method applicable to a round robin tournament, and formulate an analytical model based on the Bayesian formula for inferring the expected mean and error of the natural ranking of nodes from an incomplete network. We investigate its potential and uses in resolving important issues of ranking by applying it to real-world competition networks. PMID:25163528
Ranking nodes in growing networks: When PageRank fails
Mariani, Manuel Sebastian; Medo, Matúš; Zhang, Yi-Cheng
2015-01-01
PageRank is arguably the most popular ranking algorithm, applied in real systems ranging from information networks to biological and infrastructure networks. Despite its outstanding popularity and broad use across different areas of science, the relation between the algorithm's efficacy and the properties of the network on which it acts is not yet fully understood. We study PageRank's performance on a network model supported by real data, and show that realistic temporal effects make PageRank fail to identify the most valuable nodes for a broad range of model parameters. Results on real data are in qualitative agreement with our model-based findings. This failure of PageRank reveals that the static approach to information filtering is inappropriate for a broad class of growing systems, and suggests that time-dependent algorithms based on the temporal linking patterns of these systems are needed to rank the nodes better. PMID:26553630
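For reference, the static score the paper critiques is the stationary vector of a random walk with teleportation, computed by power iteration. A minimal sketch of the standard algorithm on a hypothetical toy graph:

```python
import numpy as np

def pagerank(adj, d=0.85, tol=1e-10):
    # Standard PageRank by power iteration: adj[i][j] = 1 if i links to j.
    # Dangling nodes (no out-links) distribute their weight uniformly.
    A = np.asarray(adj, dtype=float)
    n = A.shape[0]
    outdeg = A.sum(axis=1)
    P = np.divide(A, outdeg[:, None],
                  out=np.full_like(A, 1.0 / n),
                  where=outdeg[:, None] > 0)
    r = np.full(n, 1.0 / n)
    while True:
        r_new = (1 - d) / n + d * (P.T @ r)
        if np.abs(r_new - r).sum() < tol:
            return r_new
        r = r_new

# Toy graph: node 2 is linked to by both other nodes and ranks highest.
adj = [[0, 0, 1],
       [1, 0, 1],
       [0, 0, 0]]
r = pagerank(adj)
print(int(np.argmax(r)))
```

The paper's point is that this computation sees only the static link structure: in a growing network, old nodes have had more time to accumulate in-links, which is exactly the temporal bias that makes the static score misleading.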
Ranking structures and rank-rank correlations of countries: The FIFA and UEFA cases
NASA Astrophysics Data System (ADS)
Ausloos, Marcel; Cloots, Rudi; Gadomski, Adam; Vitanov, Nikolay K.
2014-04-01
Ranking agents that compete with each other in complex systems may lead to paradoxes, depending on the measures chosen. A discussion is presented of such rank-rank correlations, similar or not, based on the case of European countries ranked by UEFA and FIFA from different soccer competitions. The first question to be answered is whether a simple empirical law governs such (self-)organization of complex sociological systems under such different measuring schemes. It is found that, contrary to many modern expectations, the power-law form is not the best description; the stretched exponential is much more adequate. Moreover, the measuring rules are found to lead to some inner structure in both cases.
A cautionary note on the rank product statistic.
Koziol, James A
2016-06-01
The rank product method introduced by Breitling R et al. [2004, FEBS Letters 573, 83-92] has rapidly generated popularity in practical settings, in particular, detecting differential expression of genes in microarray experiments. The purpose of this note is to point out a particular property of the rank product method, namely, its differential sensitivity to over- and underexpression. It turns out that overexpression is less likely to be detected than underexpression with the rank product statistic. We have conducted both empirical and exact power studies that demonstrate this phenomenon, and summarize these findings in this note. PMID:27160968
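The statistic itself is simple to compute: rank each gene within each replicate, then combine as the geometric mean of its ranks. A minimal sketch on synthetic data, with gene 0 planted as down-regulated; ranking in ascending order targets underexpression, and sorting descending instead would target overexpression, the direction the note finds harder to detect. The data here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic log fold-changes: 100 genes, 4 replicates; gene 0 is
# planted as strongly down-regulated (hypothetical data).
n_genes, n_reps = 100, 4
fc = rng.standard_normal((n_genes, n_reps))
fc[0] -= 4.0

# Rank within each replicate (rank 1 = most down-regulated), then
# combine as the geometric mean of ranks: the rank product.
ranks = fc.argsort(axis=0).argsort(axis=0) + 1
rank_product = np.exp(np.log(ranks).mean(axis=1))
print(int(rank_product.argmin()))
```

The planted gene wins because it sits near rank 1 in every replicate, while a null gene's ranks average out to mid-list; significance in the original method is then assessed by permutation.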
Compressive Sensing via Nonlocal Smoothed Rank Function.
Fan, Ya-Ru; Huang, Ting-Zhu; Liu, Jun; Zhao, Xi-Le
2016-01-01
Compressive sensing (CS) theory asserts that we can reconstruct signals and images with only a small number of samples or measurements. Recent works exploiting the nonlocal similarity have led to better results in various CS studies. To better exploit the nonlocal similarity, in this paper, we propose a non-convex smoothed rank function based model for CS image reconstruction. We also propose an efficient alternating minimization method to solve the proposed model, which reduces a difficult and coupled problem to two tractable subproblems. Experimental results have shown that the proposed method performs better than several existing state-of-the-art CS methods for image reconstruction. PMID:27583683
ERIC Educational Resources Information Center
Castro-Pinero, Jose; Gonzalez-Montesinos, Jose Luis; Keating, Xiaofen D.; Mora, Jesus; Sjostrom, Michael; Ruiz, Jonatan R.
2010-01-01
The aim of this study was to provide percentile values for six different sprint tests in 2,708 Spanish children (1,234 girls) ages 6-17.9 years. We also examined the influence of weight status on sprint performance across age groups, with a focus on underweight and obese groups. We used the 20-m, 30-m, and 50-m running sprint standing start and…
Relevance Preserving Projection and Ranking for Web Image Search Reranking.
Ji, Zhong; Pang, Yanwei; Li, Xuelong
2015-11-01
An image search reranking (ISR) technique aims at refining text-based search results by mining images' visual content. Feature extraction and ranking function design are two key steps in ISR. Inspired by the idea of the hypersphere in one-class classification, this paper proposes a feature extraction algorithm named hypersphere-based relevance preserving projection (HRPP) and a ranking function called hypersphere-based rank (H-Rank). Specifically, HRPP is a spectral embedding algorithm that transforms an original high-dimensional feature space into an intrinsically low-dimensional hypersphere space by preserving the manifold structure and the relevance relationships among the images. H-Rank is a simple but effective ranking algorithm that sorts images by their distance to the hypersphere center. Moreover, to capture the user's intent with minimum human interaction, a reversed k-nearest neighbor (KNN) algorithm is proposed, which harvests enough pseudo-relevant images while requiring the user to give only one click on the initially searched images. The HRPP method with reversed KNN is named one-click-based HRPP (OC-HRPP). Finally, the OC-HRPP algorithm and the H-Rank algorithm form a new ISR method, H-reranking. Extensive experimental results on three large real-world data sets show that the proposed algorithms are effective. Moreover, the fact that only one relevant image needs to be labeled gives the method strong practical significance. PMID:26011885
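The H-Rank step, sorting candidates by distance to the hypersphere center, can be sketched with hypothetical two-dimensional features. Taking the center as the mean of the pseudo-relevant set is a simplification for illustration, not the paper's exact formulation.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical image features: 5 pseudo-relevant images cluster near
# (1, 1); four candidate images are to be reranked.
relevant = rng.normal(loc=1.0, scale=0.1, size=(5, 2))
candidates = np.array([[1.0, 1.0],   # near the relevant cluster
                       [0.0, 0.0],   # far away
                       [2.0, 2.0],   # far away
                       [1.1, 0.9]])  # near the relevant cluster

center = relevant.mean(axis=0)       # hypersphere center (simplified)
dists = np.linalg.norm(candidates - center, axis=1)
ranking = dists.argsort()            # closest first = highest ranked
print(ranking[0] in (0, 3))
```

The two candidates near the cluster end up at the top and the two outliers at the bottom, which is the behavior the ranking function is designed to produce after the HRPP embedding.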
Rank in Class and College Admission
ERIC Educational Resources Information Center
Walker, Karen
2010-01-01
Traditionally, class rankings have been used by high schools to determine valedictorians and salutatorians. These rankings have also been used by colleges to make admission decisions and to award scholarships. While there is no direct link between class rank and college admission, there is evidence that not using class rank can reduce stress…
The Globalization of College and University Rankings
ERIC Educational Resources Information Center
Altbach, Philip G.
2012-01-01
In the era of globalization, accountability, and benchmarking, university rankings have achieved a kind of iconic status. The major ones--the Academic Ranking of World Universities (ARWU, or the "Shanghai rankings"), the QS (Quacquarelli Symonds Limited) World University Rankings, and the "Times Higher Education" World University Rankings…
Vickers, Linda D
2010-05-01
This paper describes a method using Microsoft Excel to compute the 5% overall site X/Q value and the 95th percentile of the distribution of doses to the nearest maximally exposed offsite individual (MEOI), in accordance with guidance from DOE-STD-3009-1994 and U.S. NRC Regulatory Guide 1.145-1982. The accurate determination of the 5% overall site X/Q value is the most important factor in the computation of the 95th percentile of the distribution of doses to the nearest MEOI. This method should be used to validate software codes that compute the X/Q. The 95th percentile of the distribution of doses to the nearest MEOI must be compared to the U.S. DOE Evaluation Guide of 25 rem to determine the relative severity of hazard to the public from a postulated, unmitigated design basis accident that involves an offsite release of radioactive material. PMID:20386192
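The final comparison step, taking the 95th percentile of a dose distribution and checking it against the 25 rem Evaluation Guide, is straightforward to sketch. The lognormal dose sample below is entirely hypothetical and stands in for the distribution the paper derives from the site X/Q values.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical lognormal sample of doses (rem) to the nearest MEOI.
doses = rng.lognormal(mean=0.0, sigma=1.0, size=100_000)

# 95th percentile of the dose distribution, compared against the
# 25 rem DOE Evaluation Guide threshold.
p95 = np.percentile(doses, 95)
print(p95 < 25.0)
```

For a lognormal with these parameters the 95th percentile sits near exp(1.645) ≈ 5.2 rem, well under the guide; the paper's contribution is in how the underlying X/Q value, and hence this distribution, is computed correctly in Excel.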
Time evolution of Wikipedia network ranking
NASA Astrophysics Data System (ADS)
Eom, Young-Ho; Frahm, Klaus M.; Benczúr, András; Shepelyansky, Dima L.
2013-12-01
We study the time evolution of the ranking and spectral properties of the Google matrix of the English Wikipedia hyperlink network during the years 2003-2011. The statistical properties of the ranking of Wikipedia articles via PageRank and CheiRank probabilities, as well as the matrix spectrum, are shown to have stabilized over 2007-2011. Special emphasis is placed on the ranking of Wikipedia personalities and universities. We show that PageRank selection is dominated by politicians, while 2DRank, which combines PageRank and CheiRank, gives more weight to personalities from the arts. The Wikipedia PageRank of universities recovers 80% of the top universities of the Shanghai ranking over the considered time period.