Chen, Jiaqing; Zhang, Pei; Lv, Mengying; Guo, Huimin; Huang, Yin; Zhang, Zunjian; Xu, Fengguo
2017-05-16
Data reduction techniques in gas chromatography-mass spectrometry-based untargeted metabolomics have made the subsequent data-analysis workflow more lucid. However, the normalization step still perplexes researchers, and its effects are often overlooked. To reveal the influence of the normalization method, five representative normalization methods (mass spectrometry total useful signal, median, probabilistic quotient normalization, remove unwanted variation-random, and systematic ratio normalization) were compared on three real data sets of different types. First, data reduction techniques were used to refine the original data. Then, quality control samples and relative log abundance plots were used to evaluate the unwanted variations and the efficiency of each normalization process. Furthermore, the potential biomarkers screened out by the Mann-Whitney U test, receiver operating characteristic curve analysis, random forest, and the feature selection algorithm Boruta were compared across the differently normalized data sets. The results indicated that choosing a normalization method is difficult: the commonly accepted rules are easy to fulfill, yet different normalization methods have unforeseen influences on both the kind and the number of potential biomarkers. Lastly, an integrated strategy for selecting a normalization method is recommended.
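Of the five methods compared, probabilistic quotient normalization is easy to sketch. The following is a minimal stdlib-Python illustration (not the authors' implementation; it assumes the reference spectrum is the feature-wise median across samples):

```python
import statistics

def pqn_normalize(samples):
    """Probabilistic quotient normalization.

    samples: list of per-sample feature-intensity lists (rows = samples).
    Returns the normalized samples.
    """
    n_features = len(samples[0])
    # Reference spectrum: feature-wise median across all samples.
    reference = [statistics.median(s[j] for s in samples)
                 for j in range(n_features)]
    normalized = []
    for s in samples:
        # Quotient of each feature against the reference; the most
        # probable (median) quotient is taken as the dilution factor.
        quotients = [x / r for x, r in zip(s, reference) if r != 0]
        factor = statistics.median(quotients)
        normalized.append([x / factor for x in s])
    return normalized
```

For example, a sample that is a uniform two-fold dilution of another is rescaled onto the same profile.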
An efficient algorithm for generating random number pairs drawn from a bivariate normal distribution
NASA Technical Reports Server (NTRS)
Campbell, C. W.
1983-01-01
An efficient algorithm for generating random number pairs from a bivariate normal distribution was developed. Any desired values of the two means, two standard deviations, and the correlation coefficient can be selected. Theoretically the technique is exact, and in practice its accuracy is limited only by the quality of the uniform random number generator, inaccuracies in computer function evaluation, and arithmetic. A FORTRAN routine was written to check the algorithm, and good accuracy was obtained. Some small errors in the correlation coefficient were observed to vary in a surprisingly regular manner. A simple model was developed which explained the qualitative aspects of the errors.
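The abstract does not reproduce the algorithm, but the standard construction for correlated normal pairs is short: generate two independent standard normal deviates (here via Box-Muller) and mix them through the correlation coefficient. A sketch of this textbook approach, which is not necessarily identical to Campbell's routine:

```python
import math
import random

def bivariate_normal_pair(mu1, mu2, sigma1, sigma2, rho, rng=random):
    """Return one (x, y) pair from a bivariate normal distribution
    with the given means, standard deviations, and correlation rho."""
    # Box-Muller: two independent standard normal deviates.
    u1 = 1.0 - rng.random()          # in (0, 1], avoids log(0)
    u2 = rng.random()
    z1 = math.sqrt(-2.0 * math.log(u1)) * math.cos(2.0 * math.pi * u2)
    z2 = math.sqrt(-2.0 * math.log(u1)) * math.sin(2.0 * math.pi * u2)
    # Correlate via the Cholesky factor of the 2x2 correlation matrix.
    x = mu1 + sigma1 * z1
    y = mu2 + sigma2 * (rho * z1 + math.sqrt(1.0 - rho * rho) * z2)
    return x, y
```

Drawing many pairs and computing the sample means and correlation recovers the requested parameters to within sampling error.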
Vafaee, F; Rakhshan, V; Vafaei, M; Khoshhal, M
2012-03-01
The purpose of this study was to investigate whether 3D Master or VitaLumin shade guides could improve colour selection in individuals with normal and defective colour vision. First, colour perception of 260 dental students was evaluated. Afterwards, 9 colour blind and 9 matched normal subjects tried to detect colours of 10 randomly selected tabs from each kit and the correct/false answers were counted. Of the colour-defective subjects, 47.8% and 33.3% correctly detected the shade using 3D Master and VitaLumin, respectively. These statistics were 62.2% and 42.2% in normal subjects. In normal participants, but not in colour blind ones, 3D Master significantly improved shade matching accuracy compared to VitaLumin.
A new mosaic method for three-dimensional surface
NASA Astrophysics Data System (ADS)
Yuan, Yun; Zhu, Zhaokun; Ding, Yongjun
2011-08-01
Three-dimensional (3-D) data mosaicking is an indispensable link in surface measurement and digital terrain map generation. To address the problem of mosaicking locally unorganized point clouds with only coarse registration and many mismatched points, a new RANSAC-based mosaic method for 3-D surfaces is proposed. Each iteration of the method proceeds through random sampling with an additional shape constraint, normalization of the point-cloud data, absolute orientation, denormalization of the point-cloud data, inlier counting, and so on. After N random sampling trials the largest consensus set is selected, and finally the model is re-estimated using all the points in that subset. The minimal subset consists of three non-collinear points forming a triangle; the triangle's shape is considered during random sampling to keep the selection reasonable. A new coordinate-system transformation algorithm is presented to avoid singularities: the full rotation between the two coordinate systems is solved by two successive rotations expressed as Euler angle vectors, each with an explicit physical meaning. Both simulated and real data are used to demonstrate the correctness and validity of the method. It has good noise immunity owing to its robust estimation property, and high accuracy because the shape constraint is added to the random sampling and data normalization is added to the absolute orientation. The method is applicable to high-precision measurement of 3-D surfaces and to 3-D terrain mosaicking.
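The sample-score-refit cycle described above is the generic RANSAC pattern. For illustration only, here is a minimal RANSAC loop with a hypothetical 2-D line model in place of the paper's 3-D absolute-orientation transform (the minimal sample is two points instead of three non-collinear ones, and the shape constraint is omitted):

```python
import random

def ransac_line(points, n_trials=200, inlier_tol=0.5, rng=None):
    """Generic RANSAC loop: random minimal sample -> candidate model ->
    inlier count; keep the largest consensus set, then refit on it."""
    rng = rng or random.Random()
    best_inliers = []
    for _ in range(n_trials):
        (x1, y1), (x2, y2) = rng.sample(points, 2)   # minimal sample
        if x1 == x2:
            continue                                 # degenerate sample
        a = (y2 - y1) / (x2 - x1)                    # candidate y = a*x + b
        b = y1 - a * x1
        inliers = [(x, y) for x, y in points
                   if abs(y - (a * x + b)) < inlier_tol]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    # Final least-squares refit on the largest consensus set.
    n = len(best_inliers)
    sx = sum(x for x, _ in best_inliers)
    sy = sum(y for _, y in best_inliers)
    sxx = sum(x * x for x, _ in best_inliers)
    sxy = sum(x * y for x, y in best_inliers)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b, best_inliers
```

With gross outliers mixed into otherwise collinear data, the largest consensus set excludes the outliers and the refit recovers the line.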
Shunting for normal pressure hydrocephalus (NPH).
Esmonde, T; Cooke, S
2002-01-01
Since the condition was first described in 1965, the syndrome of normal pressure hydrocephalus (NPH) has conventionally been managed by placement of a cerebrospinal fluid (CSF) shunt. To determine the effectiveness of shunting procedures in promoting stability or improvement in the neurological symptoms and signs of NPH. The trials were identified from a search of the Specialized Register of the Cochrane Dementia and Cognitive Improvement Group on 26 June 2001 using the terms 'shunt*' and 'normal pressure hydrocephalus'. Studies included for analysis were those involving the placement of a CSF shunt for the treatment of NPH as part of a randomized controlled trial. No data matching the selection criteria were found. No randomized controlled trials of shunt placement versus no shunt were found. There is no evidence to indicate whether placement of a shunt is effective in the management of NPH.
A survey of the accuracy of interpretation of intraoperative cholangiograms.
Sanjay, Pandanaboyana; Tagolao, Sherry; Dirkzwager, Ilse; Bartlett, Adam
2012-10-01
There are few data in the literature regarding the ability of surgical trainees and surgeons to correctly interpret intraoperative cholangiograms (IOCs) during laparoscopic cholecystectomy (LC). The aim of this study was to determine the accuracy of surgeons' interpretations of IOCs. Fifteen IOCs, depicting normal, variants of normal and abnormal anatomy, were sent electronically in random sequence to 20 surgical trainees and 20 consultant general surgeons. Information was also sought on the routine or selective use of IOC by respondents. The accuracy of IOC interpretation was poor. Only nine surgeons and nine trainees correctly interpreted the cholangiograms showing normal anatomy. Six consultant surgeons and five trainees correctly identified variants of normal anatomy on cholangiograms. Abnormal anatomy on cholangiograms was identified correctly by 18 consultant surgeons and 19 trainees. Routine IOC was practised by seven consultants and six trainees. There was no significant difference between those who performed routine and selective IOC with respect to correct identification of normal, variant and abnormal anatomy. The present study shows that the accuracy of detection of both normal and variants of normal anatomy was poor in all grades of surgeon irrespective of a policy of routine or selective IOC. Improving operators' understanding of biliary anatomy may help to increase the diagnostic accuracy of IOC interpretation. © 2012 International Hepato-Pancreato-Biliary Association.
Warris, Sven; Boymans, Sander; Muiser, Iwe; Noback, Michiel; Krijnen, Wim; Nap, Jan-Peter
2014-01-13
Small RNAs are important regulators of genome function, yet their prediction in genomes is still a major computational challenge. Statistical analyses of pre-miRNA sequences indicated that their 2D structure tends to have a minimal free energy (MFE) significantly lower than the MFE values of equivalently randomized sequences with the same nucleotide composition, in contrast to other classes of non-coding RNA. The computation of many MFEs is, however, too intensive to allow for genome-wide screenings. Using a local grid infrastructure, MFE distributions of random sequences were pre-calculated on a large scale. These distributions follow a normal distribution and can be used to determine the MFE distribution for any given sequence composition by interpolation, allowing on-the-fly calculation of the normal distribution for any candidate sequence composition. The speedup achieved makes genome-wide screening with this characteristic of a pre-miRNA sequence practical. Although this property alone is not sufficiently discriminative to distinguish miRNAs from other sequences, the MFE-based P-value should be added to the set of parameters considered when selecting potential miRNA candidates for experimental verification.
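Once the mean and standard deviation of the MFE distribution for a candidate's nucleotide composition have been interpolated, the MFE-based P-value is a one-sided normal tail probability. A sketch with hypothetical parameter values (the real mu and sigma come from the pre-calculated distributions):

```python
import math

def mfe_p_value(mfe, mu, sigma):
    """One-sided P-value: probability that a random sequence of the same
    nucleotide composition has an MFE at or below the candidate's MFE,
    under the fitted normal distribution N(mu, sigma**2)."""
    z = (mfe - mu) / sigma
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
```

A candidate whose MFE lies far below the mean of the randomized-sequence distribution receives a small P-value, as expected for a pre-miRNA-like structure.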
A software system for the simulation of chest lesions
NASA Astrophysics Data System (ADS)
Ryan, John T.; McEntee, Mark; Barrett, Saoirse; Evanoff, Michael; Manning, David; Brennan, Patrick
2007-03-01
We report on the development of a novel software tool for the simulation of chest lesions. The tool was developed for use in our study to determine optimal ambient lighting conditions for chest radiology, which involved 61 consultant radiologists from the American Board of Radiology; because of its success, we intend to use the same tool in future studies. The software has two main functions: the simulation of lesions and the retrieval of information for ROC (receiver operating characteristic) and JAFROC (jack-knife free-response ROC) analysis. The simulation layer operates by randomly selecting an image from a bank of reportedly normal chest x-rays. A random location is then generated for each lesion and checked against a reference lung map; if the location lies within the lung fields, as derived from the lung map, a lesion is superimposed. Lesions are likewise randomly selected from a bank of manually created chest-lesion images. A blending algorithm determines the intensity levels at which the lesion sits most naturally within the chest x-ray. The same software was used to run a study for all 61 radiologists: a sequence of images was displayed in random order, half containing simulated lesions ranging from subtle to obvious and half normal. The operator then selected locations where he or she thought lesions existed and graded each lesion accordingly. We found this software very effective in the study and intend to apply the same principles in future work.
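The random-placement step can be sketched as follows (a simplified illustration with a hypothetical binary lung map; the actual tool additionally blends lesion intensities into the radiograph):

```python
import random

def place_lesions(lung_map, n_lesions, rng=None):
    """Draw random (row, col) locations, keeping only those that fall
    inside the lung fields of a binary lung map (1 = lung, 0 = outside)."""
    rng = rng or random.Random()
    rows, cols = len(lung_map), len(lung_map[0])
    placed = []
    while len(placed) < n_lesions:
        r, c = rng.randrange(rows), rng.randrange(cols)
        if lung_map[r][c] == 1:      # accept only locations in the lung fields
            placed.append((r, c))
    return placed
```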
Valero, Manuel; Averkin, Robert G; Fernandez-Lamo, Ivan; Aguilar, Juan; Lopez-Pigozzi, Diego; Brotons-Mas, Jorge R; Cid, Elena; Tamas, Gabor; Menendez de la Prida, Liset
2017-06-21
Memory traces are reactivated selectively during sharp-wave ripples. The mechanisms of selective reactivation, and how degraded reactivation affects memory, are poorly understood. We evaluated hippocampal single-cell activity during physiological and pathological sharp-wave ripples using juxtacellular and intracellular recordings in normal and epileptic rats with different memory abilities. CA1 pyramidal cells participate selectively during physiological events but fired together during epileptic fast ripples. We found that firing selectivity was dominated by an event- and cell-specific synaptic drive, modulated in single cells by changes in the excitatory/inhibitory ratio measured intracellularly. This mechanism collapses during pathological fast ripples to exacerbate and randomize neuronal firing. Acute administration of a use- and cell-type-dependent sodium channel blocker reduced neuronal collapse and randomness and improved recall in epileptic rats. We propose that cell-specific synaptic inputs govern firing selectivity of CA1 pyramidal cells during sharp-wave ripples. Copyright © 2017 Elsevier Inc. All rights reserved.
Inada, Mitsuo; Iwasaki, Keiko; Imai, Chihiro; Hashimoto, Satoshi
2010-01-01
A bedridden 85-year-old woman had hyperpotassemia (7.7 mEq/L) and bradycardia (30/min). Endocrinologic findings revealed a decrease in the renin-aldosterone system and normal adrenoglucocorticoid function. The results were consistent with the abnormalities seen in selective hypoaldosteronism with low renin activity. In addition, 9 of 11 patients, selected randomly from 72 bedridden elderly patients with normal serum sodium and potassium levels in our hospital, had diminished plasma renin activity (PRA) and plasma aldosterone concentration (PAC). The present patient had been prescribed a nonsteroidal anti-inflammatory drug (NSAID). NSAIDs reduce renal potassium excretion through inhibition of renal prostaglandin synthesis. Therefore, the use of NSAIDs in bedridden elderly patients might intensify underlying asymptomatic hypoaldosteronism and cause life-threatening hyperpotassemia.
Hong-Seng, Gan; Sayuti, Khairil Amir; Karim, Ahmad Helmy Abdul
2017-01-01
Existing knee cartilage segmentation methods have several reported technical drawbacks. In essence, graph cuts remain highly susceptible to image noise despite extended research interest; active shape models are often constrained by the selection of training data; and shortest-path approaches demonstrate a shortcut problem in the presence of weak boundaries, a common issue in medical images. The aim of this study was to investigate the capability of random walks as a knee cartilage segmentation method. Experts scribbled on the knee cartilage image to initialize random walks segmentation. The reproducibility of the method was then assessed against manual segmentation using the Dice similarity index. The evaluation covered normal-cartilage and diseased-cartilage sections, divided into whole- and single-cartilage categories. A total of 15 normal images and 10 osteoarthritic images were included. The results showed that the random walks method demonstrated high reproducibility in both normal cartilage (observer 1: 0.83±0.028; observer 2: 0.82±0.026) and osteoarthritic cartilage (observer 1: 0.80±0.069; observer 2: 0.83±0.029). Moreover, the results from the two experts were consistent with each other, suggesting that inter-observer variation was insignificant (normal: P=0.21; diseased: P=0.15). The proposed segmentation model overcomes technical problems reported for existing semi-automated techniques and demonstrated highly reproducible results consistent with manual segmentation.
Neither fixed nor random: weighted least squares meta-regression.
Stanley, T D; Doucouliagos, Hristos
2017-03-01
Our study revisits and challenges two core conventional meta-regression estimators: the prevalent use of 'mixed-effects' or random-effects meta-regression analysis, and the correction of standard errors that defines fixed-effects meta-regression analysis (FE-MRA). We show how and explain why an unrestricted weighted least squares MRA (WLS-MRA) estimator is superior to conventional random-effects (or mixed-effects) meta-regression when there is publication (or small-sample) bias, is as good as FE-MRA in all cases, and is better than fixed effects in most practical applications. Simulations and statistical theory show that WLS-MRA provides satisfactory estimates of meta-regression coefficients that are practically equivalent to mixed effects or random effects when there is no publication bias. When there is publication selection bias, WLS-MRA always has smaller bias than mixed effects or random effects. In practical applications, an unrestricted WLS meta-regression is likely to give practically equivalent or superior estimates to fixed-effects, random-effects, and mixed-effects meta-regression approaches. However, random-effects meta-regression remains viable, and perhaps somewhat preferable, if selection for statistical significance (publication bias) can be ruled out and when random, additive normal heterogeneity is known to directly affect the 'true' regression coefficient. Copyright © 2016 John Wiley & Sons, Ltd.
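The unrestricted WLS-MRA estimator is ordinary weighted least squares with inverse-variance weights and no restriction on the multiplicative error variance. A minimal sketch for a meta-regression of effect sizes on one moderator, using synthetic inputs rather than the authors' data:

```python
def wls_mra(effects, ses, moderator):
    """Unrestricted WLS meta-regression: regress effect sizes on a
    moderator with inverse-variance weights w_i = 1 / se_i**2.
    Returns the intercept b0 and slope b1 of y = b0 + b1 * x."""
    w = [1.0 / s ** 2 for s in ses]
    # Weighted normal equations for the two-parameter model.
    sw = sum(w)
    swx = sum(wi * xi for wi, xi in zip(w, moderator))
    swxx = sum(wi * xi * xi for wi, xi in zip(w, moderator))
    swy = sum(wi * yi for wi, yi in zip(w, effects))
    swxy = sum(wi * xi * yi for wi, xi, yi in zip(w, moderator, effects))
    det = sw * swxx - swx * swx
    b1 = (sw * swxy - swx * swy) / det
    b0 = (swxx * swy - swx * swxy) / det
    return b0, b1
```

On exactly linear synthetic data the weights are immaterial and the coefficients are recovered exactly; with real, noisy effect estimates the inverse-variance weighting downweights imprecise studies.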
[Natural selection associated with color vision defects in some population groups of Eurasia].
Evsiukov, A N
2014-01-01
Fitness coefficients and other quantitative parameters of selection associated with the generalized color blindness gene CB+ were obtained for three ethnogeographic population groups: Belarusians from Belarus, ethnic populations of the Volga-Ural region, and ethnic populations of Siberia and the Far East of Russia. All abnormalities encoded by the OPN1LW and OPN1MW loci were treated as deviations from normal color perception. Coefficients were estimated by approximating the observed CB+ frequency distributions to the theoretical stationary distribution of the Wright island model. This model takes into account the pressure of migration, selection, and random genetic drift, with the selection parameters represented as distribution parameters. In the populations of Siberia and the Far East, directional selection in favor of normal color vision and the corresponding allele CB- was observed. In the Belarusian population and the ethnic populations of the Volga-Ural region, stabilizing selection was observed. The selection intensity was 0.03 in the Belarusians, 0.22 in the ethnic populations of the Volga-Ural region, and 0.24 in the ethnic populations of Siberia and the Far East.
Tian, Ting; McLachlan, Geoffrey J.; Dieters, Mark J.; Basford, Kaye E.
2015-01-01
It is a common occurrence in plant breeding programs to observe missing values in three-way three-mode multi-environment trial (MET) data. We proposed modifications of models for estimating missing observations in these data arrays and developed a novel approach based on hierarchical clustering. Multiple imputation (MI) was used in four ways: multiple agglomerative hierarchical clustering, a normal distribution model, a normal regression model, and predictive mean matching. The latter three models used both Bayesian and non-Bayesian analysis, while the first approach used a clustering procedure with randomly selected attributes, assigning real values from the nearest neighbour to the record with missing observations. Different proportions of data entries in six complete datasets were randomly selected to be missing, and the MI methods were compared on the efficiency and accuracy of estimating those values. The results indicated that the models using Bayesian analysis had slightly higher estimation accuracy than those using non-Bayesian analysis, but were more time-consuming. Overall, the novel approach of multiple agglomerative hierarchical clustering demonstrated the best performance. PMID:26689369
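The nearest-neighbour step of the clustering approach, as described, can be sketched roughly as follows (an interpretation of the abstract, not the authors' code; one pass per imputation, with a fresh random attribute subset each time):

```python
import math
import random

def nn_impute(records, missing=None, n_attrs=2, rng=None):
    """One imputation pass: fill each missing entry with the value from
    the nearest complete record, where distance is computed on a random
    subset of the attributes observed in the incomplete record.
    Repeating this M times with different subsets yields M imputations."""
    rng = rng or random.Random()
    complete = [r for r in records if missing not in r]
    out = []
    for r in records:
        if missing not in r:
            out.append(list(r))
            continue
        observed = [j for j, v in enumerate(r) if v is not missing]
        attrs = rng.sample(observed, min(n_attrs, len(observed)))
        # Nearest complete record on the randomly chosen attributes.
        donor = min(complete, key=lambda c: math.dist(
            [r[j] for j in attrs], [c[j] for j in attrs]))
        out.append([donor[j] if v is missing else v
                    for j, v in enumerate(r)])
    return out
```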
Universal statistics of selected values
NASA Astrophysics Data System (ADS)
Smerlak, Matteo; Youssef, Ahmed
2017-03-01
Selection, the tendency of some traits to become more frequent than others under the influence of some (natural or artificial) agency, is a key component of Darwinian evolution and countless other natural and social phenomena. Yet a general theory of selection, analogous to the Fisher-Tippett-Gnedenko theory of extreme events, is lacking. Here we introduce a probabilistic definition of selection and show that selected values are attracted to a universal family of limiting distributions which generalize the log-normal distribution. The universality classes and scaling exponents are determined by the tail thickness of the random variable under selection. Our results provide a possible explanation for skewed distributions observed in diverse contexts where selection plays a key role, from molecular biology to agriculture and sport.
NASA Technical Reports Server (NTRS)
Joseph, Robert D.; Hora, Joseph; Stockton, Alan; Hu, Esther; Sanders, David
1997-01-01
This report concerns one of the major observational studies in the ISO Central Programme, the ISO Normal Galaxy Survey. This is a survey of an unbiased sample of spiral and lenticular galaxies selected from the Revised Shapley-Ames Catalog. It is therefore optically-selected, with a brightness limit of blue magnitude = 12, and otherwise randomly chosen. The original sample included 150 galaxies, but this was reduced to 74 when the allocated observing time was expended because the ISO overheads encountered in flight were much larger than predicted.
USDA-ARS?s Scientific Manuscript database
During normal bacterial DNA replication, gene duplication and amplification (GDA) events occur randomly at a low frequency in the genome throughout a population. In the absence of selection, GDA events that increase the number of copies of a bacterial gene (or a set of genes) are lost. Antibiotic ...
Robust Approach to Verifying the Weak Form of the Efficient Market Hypothesis
NASA Astrophysics Data System (ADS)
Střelec, Luboš
2011-09-01
The weak form of the efficient markets hypothesis states that prices incorporate only past information about the asset. An implication of this form of the hypothesis is that one cannot detect mispriced assets and consistently outperform the market through technical analysis of past prices. One possible formulation of the efficient market hypothesis used for weak-form tests is that share prices follow a random walk, meaning that returns are realizations of an IID sequence of random variables. Consequently, the weak form of the efficient market hypothesis can be verified with distribution tests, among others, i.e. tests of normality and/or graphical methods. Many procedures for testing the normality of univariate samples have been proposed in the literature [7]. Today the most popular omnibus test of normality for general use is the Shapiro-Wilk test, while the Jarque-Bera test is the most widely adopted omnibus test of normality in econometrics and related fields. In particular, the Jarque-Bera test (based on the classical measures of skewness and kurtosis) is frequently used when one is more concerned about heavy-tailed alternatives. As these measures are based on moments of the data, the test has a zero breakdown value [2]; in other words, a single outlier can make the test worthless. The reason so many classical procedures are nonrobust to outliers is that the parameters of the model are expressed in terms of moments, and their classical estimators in terms of sample moments, which are very sensitive to outliers. Another approach to robustness is to concentrate on the parameters of interest suggested by the problem under study.
Consequently, novel robust procedures for testing normality are presented in this paper to overcome the shortcomings of classical normality tests on financial data, which typically feature remote data points (outliers) and further types of deviations from normality. The study also discusses results of simulation studies of the power of these tests against selected alternatives. Based on the outcome of the power simulations, selected normality tests were then used to verify the weak form of efficiency in Central European stock markets.
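The classical Jarque-Bera statistic that these robust procedures aim to improve upon is computed from sample skewness and excess kurtosis; under normality it is asymptotically chi-squared with two degrees of freedom, whose survival function has the closed form exp(-JB/2). A minimal sketch:

```python
import math

def jarque_bera(data):
    """Classical Jarque-Bera normality test.
    Returns (JB statistic, asymptotic p-value)."""
    n = len(data)
    mean = sum(data) / n
    m2 = sum((x - mean) ** 2 for x in data) / n
    m3 = sum((x - mean) ** 3 for x in data) / n
    m4 = sum((x - mean) ** 4 for x in data) / n
    skew = m3 / m2 ** 1.5                 # sample skewness
    excess_kurt = m4 / m2 ** 2 - 3.0      # sample excess kurtosis
    jb = n / 6.0 * (skew ** 2 + excess_kurt ** 2 / 4.0)
    # Chi-squared(2) survival function has a closed form.
    return jb, math.exp(-jb / 2.0)
```

Because both moments enter the statistic without any downweighting, a single extreme observation inflates JB arbitrarily, which is precisely the zero-breakdown weakness the paper's robust variants address.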
ERIC Educational Resources Information Center
MOAKLEY, FRANCIS X.
Effects of periodic variations in an instructional film's normal loudness level for relevant and irrelevant film sequences were measured by a multiple choice test. Rigorous pilot studies, random grouping of seventh graders for treatments, and ratings of relevant and irrelevant portions of the film by an unspecified number of judges preceded the…
ERIC Educational Resources Information Center
Buium, Nissan; Turnure, James E.
In a replication of a similar study with American children, 56 normal native Israeli children (5-years-old) were studied to determine the universality of self-generated verbal mediators as a means of enhancing memory processes. Eight Ss, randomly selected, were assigned in each of the following conditions: labeling, sentence generation, listening…
ERIC Educational Resources Information Center
Malone, Molly
2012-01-01
Most middle school students comprehend that organisms have adaptations that enable their survival and that successful adaptations prevail in a population over time. Yet they often miss that those traits, whether bird beaks or moth-wing colors, are the result of random, normal genetic variations that just happen to confer a negative, neutral, or…
Wang, Li; Zhang, Yaoyun; Jiang, Min; Wang, Jingqi; Dong, Jiancheng; Liu, Yun; Tao, Cui; Jiang, Guoqian; Zhou, Yi; Xu, Hua
2018-07-01
In recent years, electronic health record systems have been widely implemented in China, making clinical data available electronically. However, little effort has been devoted to making drug information exchangeable among these systems. This study aimed to build a Normalized Chinese Clinical Drug (NCCD) knowledge base, by applying and extending the information model of RxNorm to Chinese clinical drugs. Chinese drugs were collected from 4 major resources-China Food and Drug Administration, China Health Insurance Systems, Hospital Pharmacy Systems, and China Pharmacopoeia-for integration and normalization in NCCD. Chemical drugs were normalized using the information model in RxNorm without much change. Chinese patent drugs (i.e., Chinese herbal extracts), however, were represented using an expanded RxNorm model to incorporate the unique characteristics of these drugs. A hybrid approach combining automated natural language processing technologies and manual review by domain experts was then applied to drug attribute extraction, normalization, and further generation of drug names at different specification levels. Lastly, we reported the statistics of NCCD, as well as the evaluation results using several sets of randomly selected Chinese drugs. The current version of NCCD contains 16 976 chemical drugs and 2663 Chinese patent medicines, resulting in 19 639 clinical drugs, 250 267 unique concepts, and 2 602 760 relations. By manual review of 1700 chemical drugs and 250 Chinese patent drugs randomly selected from NCCD (about 10%), we showed that the hybrid approach could achieve an accuracy of 98.60% for drug name extraction and normalization. Using a collection of 500 chemical drugs and 500 Chinese patent drugs from other resources, we showed that NCCD achieved coverages of 97.0% and 90.0% for chemical drugs and Chinese patent drugs, respectively. Evaluation results demonstrated the potential to improve interoperability across various electronic drug systems in China.
Rapid adaptation to mammalian sociality via sexually selected traits
2013-01-01
Background Laboratory studies show that the components of sexual selection (e.g., mate choice and intrasexual competition) can profoundly affect the development and fitness of offspring. Less is known, however, about the total effects of sexual selection on offspring in normal social conditions. Complex social networks, such as dominance hierarchies, regulate the opportunity for mating success, and are often missing from laboratory studies. Social selection is an extended view of sexual selection that incorporates competition during sexual and nonsexual interactions, and predicts complex evolutionary dynamics. Whether social selection improves or constrains offspring fitness is controversial. Results To identify fitness consequences of social selection, wild-derived mice that had bred under laboratory conditions for eight generations were re-introduced to naturalistic competition in enclosures for three consecutive generations (promiscuous line). In parallel, a control lineage bred in cages under random mate assignment (monogamous line). A direct competition experiment using second-generation animals revealed that promiscuous line males had greater reproductive success than monogamous line males (particularly during extra-territorial matings), in spite of higher mortality and equivalent success in social dominance and sperm competition. There were no major female fitness effects (though promiscuous line females had fewer litters than monogamous line females). This result suggested that selection primarily acted upon a sexually attractive male phenotype in the promiscuous line, a hypothesis we confirmed in female odor and mating preference trials. Conclusions We present novel evidence for the strength of sexual selection under normal social conditions, and show rapid male adaptation driven largely by sexual trait expression, with tradeoffs in survivorship and female fecundity. 
Re-introducing wild-derived mice to competition quickly uncovers sexually selected phenotypes otherwise lost in normal colony breeding. PMID:23577674
Gorodeski, Eiran Z.; Ishwaran, Hemant; Kogalur, Udaya B.; Blackstone, Eugene H.; Hsich, Eileen; Zhang, Zhu-ming; Vitolins, Mara Z.; Manson, JoAnn E.; Curb, J. David; Martin, Lisa W.; Prineas, Ronald J.; Lauer, Michael S.
2013-01-01
Background Simultaneous contribution of hundreds of electrocardiographic biomarkers to prediction of long-term mortality in post-menopausal women with clinically normal resting electrocardiograms (ECGs) is unknown. Methods and Results We analyzed ECGs and all-cause mortality in 33,144 women enrolled in Women’s Health Initiative trials who were without baseline cardiovascular disease or cancer and had normal ECGs by Minnesota and Novacode criteria. Four hundred seventy-seven ECG biomarkers, encompassing global and individual ECG findings, were measured using computer algorithms. During a median follow-up of 8.1 years (range for survivors 0.5–11.2 years), 1,229 women died. For analyses, the cohort was randomly split into derivation (n=22,096, deaths=819) and validation (n=11,048, deaths=410) subsets. ECG biomarkers and demographic and clinical characteristics were simultaneously analyzed using both traditional Cox regression and Random Survival Forest (RSF), a novel algorithmic machine-learning approach. Regression modeling failed to converge. RSF variable selection yielded 20 variables that were independently predictive of long-term mortality, 14 of which were ECG biomarkers related to autonomic tone, atrial conduction, and ventricular depolarization and repolarization. Conclusions We identified 14 ECG biomarkers from among hundreds that were associated with long-term prognosis using a novel random forest variable selection methodology. These were related to autonomic tone, atrial conduction, ventricular depolarization, and ventricular repolarization. Quantitative ECG biomarkers have prognostic importance and may be markers of subclinical disease in apparently healthy post-menopausal women. PMID:21862719
ERIC Educational Resources Information Center
Pierce, Thomas B., Jr.; And Others
1990-01-01
A survey assessed time spent in the community and/or on unstructured activities by randomly selected individuals in Intermediate Care Facilities for the Mentally Retarded (ICF/MR) (N=20) or minigroup home settings (N=20). Individuals in ICF/MR homes spent more time in the community with staff and made fewer choices of unstructured activities.…
ERIC Educational Resources Information Center
Rodenborn, Leo V., Jr.
The project's purpose was to determine whether attention to the task during testing was a confounding variable in measures of visual perception ability. Samples of 30 perceptually handicapped (PH) and 30 normal subjects (N) were randomly selected from children so classified on the Frostig DTVP, providing they had IQ scores between 85 and 115 on…
Variable size computer-aided detection prompts and mammography film reader decisions
Gilbert, Fiona J; Astley, Susan M; Boggis, Caroline RM; McGee, Magnus A; Griffiths, Pamela M; Duffy, Stephen W; Agbaje, Olorunsola F; Gillan, Maureen GC; Wilson, Mary; Jain, Anil K; Barr, Nicola; Beetles, Ursula M; Griffiths, Miriam A; Johnson, Jill; Roberts, Rita M; Deans, Heather E; Duncan, Karen A; Iyengar, Geeta
2008-01-01
Introduction The purpose of the present study was to investigate the effect of computer-aided detection (CAD) prompts on reader behaviour in a large sample of breast screening mammograms by analysing the relationship of the presence and size of prompts to the recall decision. Methods Local research ethics committee approval was obtained; informed consent was not required. Mammograms were obtained from women attending routine mammography at two breast screening centres in 1996. Films, previously double read, were re-read by a different reader using CAD. The study material included 315 cancer cases comprising all screen-detected cancer cases, all subsequent interval cancers and 861 normal cases randomly selected from 10,267 cases. Ground truth data were used to assess the efficacy of CAD prompting. Associations between prompt attributes and tumour features or reader recall decisions were assessed by chi-squared tests. Results There was a highly significant relationship between prompting and a decision to recall for cancer cases and for a random sample of normal cases (P < 0.001). Sixty-four per cent of all cases contained at least one CAD prompt. In cancer cases, larger prompts were more likely to be recalled (P = 0.02) for masses but there was no such association for calcifications (P = 0.9). In a random sample of 861 normal cases, larger prompts were more likely to be recalled (P = 0.02) for both mass and calcification prompts. Significant associations were observed with prompting and breast density (p = 0.009) for cancer cases but not for normal cases (P = 0.05). Conclusions For both normal cases and cancer cases, prompted mammograms were more likely to be recalled and the prompt size was also associated with a recall decision. PMID:18724867
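The prompt-recall associations above were assessed with chi-squared tests on contingency tables. As a minimal stdlib-only sketch of such a test for a 2x2 table (one degree of freedom), with hypothetical counts that are not the study's data:

```python
import math

def chi2_2x2(a, b, c, d):
    """Pearson chi-squared test (1 df, no continuity correction) for a
    2x2 table laid out as rows = prompted / not prompted and
    columns = recalled / not recalled."""
    n = a + b + c + d
    stat = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    # Upper tail of a chi-square(1) variable: P(X > x) = erfc(sqrt(x / 2)).
    p = math.erfc(math.sqrt(stat / 2.0))
    return stat, p

# Hypothetical counts, for illustration only:
stat, p = chi2_2x2(120, 80, 40, 160)
```

With these made-up counts the statistic is about 66.7 with P < 0.001, the kind of highly significant prompting-recall association the study reports.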
Bashapoor, Sajjad; Hosseini-Kiasari, Seyyedeh Tayebeh; Daneshvar, Somayeh; Kazemi-Taskooh, Zeinab
2015-01-01
Background Sensory information processing and alexithymia are two important factors in determining behavioral reactions. Some studies explain the effect of sensory-processing sensitivity and alexithymia on the tendency toward substance abuse. Given that, the aim of the current study was to compare the styles of sensory information processing and alexithymia between substance-dependent people and normal ones. Methods The research method was cross-sectional, and the statistical population comprised all substance-dependent men present in substance-quitting camps of Masal, Iran, in October 2013 (n = 78). Thirty-six men were selected from this population by simple random sampling as the study group, and 36 men were selected from the normal population in the same way as the comparison group. Both groups were evaluated using the Toronto alexithymia scale (TAS) and the adult sensory profile, and the multivariate analysis of variance (MANOVA) test was applied to analyze the data. Findings The results showed significant differences between the two groups in low registration (P < 0.020, F = 5.66), sensation seeking (P < 0.050, F = 1.92), and sensory avoidance (P < 0.008, F = 7.52) as components of sensory processing, and in difficulty describing emotions (P < 0.001, F = 15.01) and difficulty identifying emotions (P < 0.002, F = 10.54) as components of alexithymia. However, no significant differences were found between the two groups in the components of sensory sensitivity (P < 0.170, F = 1.92) and externally oriented thinking style (P < 0.060, F = 3.60). Conclusion These results showed that substance-dependent people process sensory information differently from normal people and show more alexithymia features. PMID:26885354
A New Approach to Extreme Value Estimation Applicable to a Wide Variety of Random Variables
NASA Technical Reports Server (NTRS)
Holland, Frederic A., Jr.
1997-01-01
Designing reliable structures requires an estimate of the maximum and minimum values (i.e., strength and load) that may be encountered in service. Yet designs based on very extreme values (to ensure safety) can result in extra material usage and, hence, uneconomic systems. In aerospace applications, severe over-design cannot be tolerated, making it almost mandatory to design closer to the assumed limits of the design random variables. The issue then is predicting extreme values that are practical, i.e., neither overly conservative nor unconservative. Obtaining design values by employing safety factors is well known to often result in overly conservative designs. Safety factor values have historically been selected rather arbitrarily, often lacking a sound rational basis. The question of how safe a design needs to be has led design theorists to probabilistic and statistical methods. The so-called three-sigma approach is one such method and has been described as the first step in utilizing information about the data dispersion. However, this method is based on the assumption that the random variable is dispersed symmetrically about the mean and is essentially limited to normally distributed random variables. Use of this method can therefore result in unsafe or overly conservative design allowables if the common assumption of normality is incorrect.
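The failure of the three-sigma rule under skew can be illustrated numerically. The sketch below (a hedged illustration with synthetic data, not the paper's method) shows that mean ± 3σ bounds capture about 99.7% of a normal variable but badly misjudge the upper tail of a skewed one:

```python
import numpy as np

rng = np.random.default_rng(0)

def three_sigma_bounds(x):
    """Classical three-sigma design bounds: mean +/- 3 standard deviations."""
    return x.mean() - 3.0 * x.std(), x.mean() + 3.0 * x.std()

# Symmetric (normal) data: ~99.73% of values fall inside the bounds.
normal_loads = rng.normal(100.0, 10.0, 100_000)
lo, hi = three_sigma_bounds(normal_loads)
cover_normal = np.mean((normal_loads > lo) & (normal_loads < hi))

# Skewed (lognormal) data: the same rule misplaces the upper bound, so the
# exceedance rate is far larger than the 0.135% a normality assumption predicts.
skewed_loads = rng.lognormal(4.0, 0.6, 100_000)
_, hi_skewed = three_sigma_bounds(skewed_loads)
upper_tail = np.mean(skewed_loads > hi_skewed)
```

For the lognormal case here the exceedance rate above the "safe" bound comes out roughly ten times the normal-theory value, which is exactly the unconservative design allowable the abstract warns about.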
Qiu, Xing; Hu, Rui; Wu, Zhixin
2014-01-01
Normalization procedures are widely used in high-throughput genomic data analyses to remove various technological noise and variations. They are known to have a profound impact on the subsequent gene differential expression analysis. Although there has been some research in evaluating different normalization procedures, few attempts have been made to systematically evaluate the gene detection performances of normalization procedures from the bias-variance trade-off point of view, especially with strong gene differentiation effects and large sample sizes. In this paper, we conduct a thorough study to evaluate the effects of normalization procedures combined with several commonly used statistical tests and multiple testing procedures (MTPs) under different configurations of effect size and sample size. We conduct a theoretical evaluation based on a random effect model, as well as simulation and biological data analyses, to verify the results. Based on our findings, we provide some practical guidance for selecting a suitable normalization procedure under different scenarios. PMID:24941114
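As a small illustration of the kind of pipeline being evaluated (a generic sketch, not the paper's specific procedures), the following applies a simple median-scaling normalization across arrays and a Benjamini-Hochberg multiple testing procedure to a vector of p-values:

```python
import numpy as np

def median_normalize(X):
    """Rescale each array (column of X) so all arrays share a common
    median intensity, a simple global normalization."""
    med = np.median(X, axis=0)
    return X * (med.mean() / med)

def bh_adjust(p):
    """Benjamini-Hochberg adjusted p-values, a widely used MTP."""
    p = np.asarray(p, dtype=float)
    m = len(p)
    order = np.argsort(p)
    adj = np.empty(m)
    running = 1.0
    # Walk from the largest p-value down, enforcing monotonicity.
    for rank, i in zip(range(m, 0, -1), order[::-1]):
        running = min(running, p[i] * m / rank)
        adj[i] = running
    return adj

rng = np.random.default_rng(1)
# Four synthetic arrays with deliberately different overall scales:
X = rng.lognormal(0.0, 1.0, (200, 4)) * np.array([1.0, 2.0, 0.5, 1.5])
Xn = median_normalize(X)
adj = bh_adjust([0.01, 0.02, 0.03, 0.5])
```

After normalization all four arrays share the same median, and the adjusted p-values control the false discovery rate across the tested genes.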
A Postmortem Study of Frontal and Temporal Gyri Thickness and Cell Number in Human Obesity.
Gómez-Apo, Erick; García-Sierra, Adrián; Silva-Pereyra, Juan; Soto-Abraham, Virgilia; Mondragón-Maya, Alejandra; Velasco-Vales, Verónica; Pescatello, Linda S
2018-01-01
This study aimed to compare cortex thickness and neuronal cell density in postmortem brain tissue from people with overweight or obesity and normal weight. The cortex thickness and neuron density of eight donors with overweight or obesity (mean = 31.6 kg/m²; SD = 4.35; n = 8; 6 male) and eight donors with normal weight (mean = 21.8 kg/m²; SD = 1.5; n = 8; 5 male) were compared. All participants were Mexican and lived in Mexico City. Randomly selected thickness measures of different cortex areas from the frontal and temporal lobes were analyzed based on high-resolution real-size photographs. A histological analysis of systematic-random fields was used to quantify the number of neurons in postmortem left and right samples of the first, second, and third gyri of the frontal and temporal lobes. No statistical difference was found in cortical thickness between donors with overweight or obesity and individuals with normal weight. A smaller number of neurons was found among the donors with overweight or obesity than the donors with normal weight at different frontal and temporal areas. A lower density of neurons is associated with overweight or obesity. The morphological basis for structural brain changes in obesity requires further investigation. © 2017 The Obesity Society.
Randomized controlled trial of a treatment for anorexia and bulimia nervosa
Bergh, Cecilia; Brodin, Ulf; Lindberg, Greger; Södersten, Per
2002-01-01
Evidence for the effectiveness of existing treatments of patients with eating disorders is weak. Here we describe and evaluate a method of treatment in a randomized controlled trial. Sixteen patients, randomly selected out of a group composed of 19 patients with anorexia nervosa and 13 with bulimia nervosa, were trained to eat and recognize satiety by using computer support. They rested in a warm room after eating, and their physical activity was restricted. The patients in the control group (n = 16) received no treatment. Remission was defined by normal body weight (anorexia), cessation of binge eating and purging (bulimia), a normal psychiatric profile, normal laboratory test values, normal eating behavior, and resumption of social activities. Fourteen patients went into remission after a median of 14.4 months (range 4.9–26.5) of treatment, but only one patient went into remission while waiting for treatment (P = 0.0057). Relapse is considered a major problem in patients who have been treated to remission. We therefore report results on a total of 168 patients who have entered our treatment program. The estimated rate of remission was 75%, and estimated time to remission was 14.7 months (quartile range 9.6 to ≥32). Six patients (7%) of 83 who were treated to remission relapsed, but the others (93%) have remained in remission for 12 months (quartile range 6–36). Because the risk of relapse is maximal in the first year after remission, we suggest that most patients treated with this method recover. PMID:12082182
Heo, Moonseong; Meissner, Paul; Litwin, Alain H; Arnsten, Julia H; McKee, M Diane; Karasz, Alison; McKinley, Paula; Rehm, Colin D; Chambers, Earle C; Yeh, Ming-Chin; Wylie-Rosett, Judith
2017-01-01
Comparative effectiveness research trials in real-world settings may require participants to choose between preferred intervention options. A randomized clinical trial with parallel experimental and control arms is straightforward and regarded as a gold-standard design, but by design it forces participants to comply with a randomly assigned intervention regardless of their preference. Therefore, the randomized clinical trial may impose impractical limitations when planning comparative effectiveness research trials. To accommodate participants' preferences when they are expressed, and to maintain randomization, we propose an alternative design that allows participants' preference after randomization, which we call a "preference option randomized design (PORD)". In contrast to other preference designs, which ask whether or not participants consent to the assigned intervention after randomization, the crucial feature of the preference option randomized design is its unique informed consent process before randomization. Specifically, the preference option randomized design consent process informs participants that they can opt out and switch to the other intervention only if, after randomization, they actively express the desire to do so. Participants who do not independently express an explicit alternate preference, or who assent to the randomly assigned intervention, are considered not to have an alternate preference. In sum, the preference option randomized design intends to maximize retention, minimize the possibility of forced assignment for any participant, and maintain randomization by allowing participants with no or equal preference to represent random assignments. This design scheme enables the definition of five effects (comparative, preference, selection, intent-to-treat, and overall/as-treated) that are interconnected through common design parameters and collectively guide decision making between interventions.
Statistical power functions for testing all these effects are derived, and simulations verified the validity of the power functions under normal and binomial distributions.
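Analytic power functions like these are commonly checked by Monte-Carlo simulation. A minimal sketch for the normal case (illustrative only; the sample size, effect size, and z-test approximation are assumptions, not the paper's actual derivation):

```python
import numpy as np

rng = np.random.default_rng(2)

def simulated_power(delta, sigma=1.0, n=50, reps=2000):
    """Monte-Carlo power of a two-sided two-sample z-test (5% level)
    for a true mean difference `delta` between two arms of size n."""
    z_crit = 1.959963984540054
    hits = 0
    for _ in range(reps):
        a = rng.normal(0.0, sigma, n)
        b = rng.normal(delta, sigma, n)
        se = np.sqrt(a.var(ddof=1) / n + b.var(ddof=1) / n)
        if abs((b.mean() - a.mean()) / se) > z_crit:
            hits += 1
    return hits / reps

power_effect = simulated_power(0.6)  # should sit near the analytic ~0.85
power_null = simulated_power(0.0)    # should sit near the 0.05 level
```

Agreement between such simulated rejection rates and the derived power functions is the kind of validity check the abstract describes.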
Gratings and Random Reflectors for Near-Infrared PIN Diodes
NASA Technical Reports Server (NTRS)
Gunapala, Sarath; Bandara, Sumith; Liu, John; Ting, David
2007-01-01
Crossed diffraction gratings and random reflectors have been proposed as means to increase the quantum efficiencies of InGaAs/InP positive/intrinsic/ negative (PIN) diodes designed to operate as near-infrared photodetectors. The proposal is meant especially to apply to focal-plane imaging arrays of such photodetectors to be used for near-infrared imaging. A further increase in quantum efficiency near the short-wavelength limit of the near-infrared spectrum of such a photodetector array could be effected by removing the InP substrate of the array. The use of crossed diffraction gratings and random reflectors as optical devices for increasing the quantum efficiencies of quantum-well infrared photodetectors (QWIPs) was discussed in several prior NASA Tech Briefs articles. While the optical effects of crossed gratings and random reflectors as applied to PIN photodiodes would be similar to those of crossed gratings and random reflectors as applied to QWIPs, the physical mechanisms by which these optical effects would enhance efficiency differ between the PIN-photodiode and QWIP cases: In a QWIP, the multiple-quantum-well layers are typically oriented parallel to the focal plane and therefore perpendicular or nearly perpendicular to the direction of incidence of infrared light. By virtue of the applicable quantum selection rules, light polarized parallel to the focal plane (as normally incident light is) cannot excite charge carriers and, hence, cannot be detected. A pair of crossed gratings or a random reflector scatters normally or nearly normally incident light so that a significant portion of it attains a component of polarization normal to the focal plane and, hence, can excite charge carriers. A pair of crossed gratings or a random reflector on a PIN photodiode would also scatter light into directions away from the perpendicular to the focal plane. 
However, in this case, the reason for redirecting light away from the perpendicular is to increase the length of the optical path through the detector, increasing the probability of absorption of photons and thereby the resulting excitation of charge carriers. A pair of crossed gratings or a random reflector according to the proposal would be fabricated as an integral part of the photodetector structure on the face opposite the focal plane (see figure). In the presence of crossed gratings, light would make four passes through the device before departing. In the presence of a random reflector, a significant portion of the light would make more than four passes: after each bounce, light would be scattered at a different random angle, and would have a chance to escape only when it was reflected, relative to the normal, at an angle less than the critical angle for total internal reflection. Given the indices of refraction of the photodiode materials, this angle would be about 17°. This amounts to a very narrow cone for escape of trapped light.
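The quoted escape cone follows from Snell's law: total internal reflection sets in at the angle whose sine equals the ratio of the refractive indices. A quick check, using an index of about 3.4 as an assumed value for the InP-based material (illustrative, not taken from the article):

```python
import math

def critical_angle_deg(n_inside, n_outside=1.0):
    """Critical angle for total internal reflection at an interface,
    measured from the surface normal: asin(n_outside / n_inside)."""
    return math.degrees(math.asin(n_outside / n_inside))

theta = critical_angle_deg(3.4)  # assumed refractive index for illustration
```

With that assumed index the critical angle comes out to roughly 17 degrees, matching the narrow escape cone described above.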
MicroRNA Array Normalization: An Evaluation Using a Randomized Dataset as the Benchmark
Qin, Li-Xuan; Zhou, Qin
2014-01-01
MicroRNA arrays possess a number of unique data features that challenge the assumption key to many normalization methods. We assessed the performance of existing normalization methods using two microRNA array datasets derived from the same set of tumor samples: one dataset was generated using a blocked randomization design when assigning arrays to samples and hence was free of confounding array effects; the second dataset was generated without blocking or randomization and exhibited array effects. The randomized dataset was assessed for differential expression between two tumor groups and treated as the benchmark. The non-randomized dataset was assessed for differential expression after normalization and compared against the benchmark. Normalization improved the true positive rate significantly in the non-randomized data but still possessed a false discovery rate as high as 50%. Adding a batch adjustment step before normalization further reduced the number of false positive markers while maintaining a similar number of true positive markers, which resulted in a false discovery rate of 32% to 48%, depending on the specific normalization method. We concluded the paper with some insights on possible causes of false discoveries to shed light on how to improve normalization for microRNA arrays. PMID:24905456
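The true-positive and false-discovery rates quoted above come from comparing the markers called in the non-randomized dataset against the benchmark list from the randomized dataset. A minimal sketch of that comparison (the marker names are placeholders, not the study's findings):

```python
def tpr_fdr(called, benchmark):
    """True-positive rate and false-discovery rate of a called marker
    list relative to a benchmark list of true markers."""
    called, benchmark = set(called), set(benchmark)
    tp = len(called & benchmark)
    tpr = tp / len(benchmark) if benchmark else 0.0
    fdr = (len(called) - tp) / len(called) if called else 0.0
    return tpr, fdr

# Placeholder marker names, for illustration only:
tpr, fdr = tpr_fdr(called=["miR-a", "miR-b", "miR-c", "miR-d"],
                   benchmark=["miR-a", "miR-b", "miR-e"])
```

In this toy example half of the called markers are false discoveries, the regime the paper reports before batch adjustment is added.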
Chen, Bor-Sen; Tsai, Kun-Wei; Li, Cheng-Wei
2015-01-01
Molecular biologists have long recognized carcinogenesis as an evolutionary process that involves natural selection. Cancer is driven by the somatic evolution of cell lineages. In this study, the evolution of somatic cancer cell lineages during carcinogenesis was modeled as an equilibrium-point (i.e., phenotype of attractor) shifting process of a nonlinear stochastic evolutionary biological network. This process is subject to intrinsic random fluctuations because of somatic genetic and epigenetic variations, as well as extrinsic disturbances because of carcinogens and stressors. In order to maintain the normal function (i.e., phenotype) of an evolutionary biological network subjected to random intrinsic fluctuations and extrinsic disturbances, a network robustness scheme that incorporates natural selection needs to be developed. This can be accomplished by selecting certain genetic and epigenetic variations to modify the network structure to attenuate intrinsic fluctuations efficiently and to resist extrinsic disturbances, in order to maintain the phenotype of the evolutionary biological network at an equilibrium point (attractor). However, during carcinogenesis, the remaining (or neutral) genetic and epigenetic variations accumulate, and the extrinsic disturbances become too large to maintain the normal phenotype at the desired equilibrium point for the nonlinear evolutionary biological network. Thus, the network is shifted to a cancer phenotype at a new equilibrium point that begins a new evolutionary process. In this study, the natural selection scheme of an evolutionary biological network of carcinogenesis was derived from a robust negative feedback scheme based on the nonlinear stochastic Nash game strategy. The evolvability and phenotypic robustness criteria of the evolutionary cancer network were also estimated by solving a Hamilton-Jacobi-inequality-constrained optimization problem.
The simulation revealed that the phenotypic shift of the lung cancer-associated cell network takes 54.5 years from a normal state to stage I cancer, 1.5 years from stage I to stage II cancer, and 2.5 years from stage II to stage III cancer, with a reasonable match for the statistical result of the average age of lung cancer. These results suggest that a robust negative feedback scheme, based on a stochastic evolutionary game strategy, plays a critical role in an evolutionary biological network of carcinogenesis under a natural selection scheme. PMID:26244004
Evolving optimised decision rules for intrusion detection using particle swarm paradigm
NASA Astrophysics Data System (ADS)
Sivatha Sindhu, Siva S.; Geetha, S.; Kannan, A.
2012-12-01
The aim of this article is to construct a practical intrusion detection system (IDS) that properly analyses the statistics of network traffic patterns and classifies them as normal or anomalous. The objective of this article is to prove that the choice of effective network traffic features and a proficient machine-learning paradigm enhances the detection accuracy of an IDS. In this article, a rule-based approach with a family of six decision tree classifiers, namely Decision Stump, C4.5, Naive Bayes Tree, Random Forest, Random Tree, and Representative Tree models, is introduced to perform the detection of anomalous network patterns. In particular, the proposed swarm-optimisation-based approach selects the instances that compose the training set, and an optimised decision tree operating over this training set produces classification rules with improved coverage, classification capability, and generalisation ability. Experiments with the Knowledge Discovery and Data Mining (KDD) data set, which contains information on traffic patterns during normal and intrusive behaviour, show that the proposed algorithm produces optimised decision rules and outperforms other machine-learning algorithms.
Non-intrusive head movement analysis of videotaped seizures of epileptic origin.
Mandal, Bappaditya; Eng, How-Lung; Lu, Haiping; Chan, Derrick W S; Ng, Yen-Ling
2012-01-01
In this work we propose a non-intrusive video analytic system for analysis of patients' body-part movements in an Epilepsy Monitoring Unit. The system utilizes skin color modeling, head/face pose template matching, and face detection to analyze and quantify head movements. Epileptic patients' heads are analyzed holistically to distinguish seizure movements from normal random movements. The patient is not required to wear any special clothing, markers, or sensors; hence the system is totally non-intrusive. The user initializes the person-specific skin color and selects a few face/head poses in the initial frames. The system then tracks the head/face and extracts spatio-temporal features. Support vector machines are then used on these features to classify seizure-like movements from normal random movements. Experiments are performed on numerous long-hour video sequences captured in an Epilepsy Monitoring Unit at a local hospital. The results demonstrate the feasibility of the proposed system in pediatric epilepsy monitoring and seizure detection.
Sakamoto, Kotaro; Ishibashi, Yoshihiro; Adachi, Ryutaro; Matsumoto, Shin-Ichi; Oki, Hideyuki; Kamada, Yusuke; Sogabe, Satoshi; Zama, Yumi; Sakamoto, Jun-Ichi; Tani, Akiyoshi
2017-08-01
Cytidine triphosphate synthase 1 (CTPS1) is an enzyme expressed in activated lymphocytes that catalyzes the conversion of uridine triphosphate (UTP) to cytidine triphosphate (CTP) by ATP-dependent amination, using either L-glutamine or ammonia as the nitrogen source. Since CTP plays an important role in DNA/RNA synthesis, phospholipid synthesis, and protein sialylation, CTPS1 inhibition is expected to control lymphocyte proliferation and size expansion in inflammatory diseases. In contrast, CTPS2, an isozyme of CTPS1 possessing 74% amino acid sequence homology, is expressed in normal lymphocytes. Thus, CTPS1-selective inhibition is important to avoid undesirable side effects. Here, we report the discovery of CTpep-3 (Ac-FRLGLLKAFRRLF-OH) from random peptide libraries displayed on T7 phage, which exhibited CTPS1-selective binding with a KD value of 210 nM in SPR analysis and CTPS1-selective inhibition with an IC50 value of 110 nM in the enzyme assay. Furthermore, two fundamentally different approaches, enzyme inhibition assay and HDX-MS, provided the same conclusion: that CTpep-3 acts by binding to the amidoligase (ALase) domain of CTPS1. To our knowledge, CTpep-3 is the first CTPS1-selective inhibitor. Copyright © 2017 Elsevier Inc. All rights reserved.
An Alternative Method for Computing Mean and Covariance Matrix of Some Multivariate Distributions
ERIC Educational Resources Information Center
Radhakrishnan, R.; Choudhury, Askar
2009-01-01
Computing the mean and covariance matrix of some multivariate distributions, in particular, multivariate normal distribution and Wishart distribution are considered in this article. It involves a matrix transformation of the normal random vector into a random vector whose components are independent normal random variables, and then integrating…
Confidence regions of planar cardiac vectors
NASA Technical Reports Server (NTRS)
Dubin, S.; Herr, A.; Hunt, P.
1980-01-01
A method for plotting the confidence regions of vectorial data obtained in electrocardiology is presented. The 90%, 95% and 99% confidence regions of cardiac vectors represented in a plane are obtained in the form of an ellipse centered at coordinates corresponding to the means of a sample selected at random from a bivariate normal distribution. An example of such a plot for the frontal plane QRS mean electrical axis for 80 horses is also presented.
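The ellipse construction described above can be sketched numerically: the confidence region at level 1−α is the set of points whose squared Mahalanobis distance from the sample mean is at most the corresponding chi-square(2) quantile, and the ellipse axes come from the eigendecomposition of the sample covariance. This is a minimal numpy/scipy sketch, not the authors' plotting code.

```python
import numpy as np
from scipy.stats import chi2

def confidence_ellipse(sample, level=0.95):
    """Centre, axis half-lengths, and orientation of the bivariate-normal
    confidence ellipse fitted to a 2-D sample."""
    x = np.asarray(sample, float)
    mean = x.mean(axis=0)
    cov = np.cov(x.T)
    evals, evecs = np.linalg.eigh(cov)            # ascending eigenvalues
    k = chi2.ppf(level, df=2)                     # chi-square(2) quantile
    half_axes = np.sqrt(k * evals)                # semi-minor, semi-major
    angle = np.arctan2(evecs[1, -1], evecs[0, -1])  # major-axis direction
    return mean, half_axes, angle

def inside(sample, mean, cov_inv, level=0.95):
    """Boolean mask: which points fall inside the level-confidence ellipse."""
    d2 = np.einsum('ij,jk,ik->i', sample - mean, cov_inv, sample - mean)
    return d2 <= chi2.ppf(level, df=2)
```

For a large bivariate-normal sample, roughly 95% of the points should fall inside the 95% ellipse, which is a convenient sanity check on the construction.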
Hazard Function Estimation with Cause-of-Death Data Missing at Random.
Wang, Qihua; Dinse, Gregg E; Liu, Chunling
2012-04-01
Hazard function estimation is an important part of survival analysis. Interest often centers on estimating the hazard function associated with a particular cause of death. We propose three nonparametric kernel estimators for the hazard function, all of which are appropriate when death times are subject to random censorship and censoring indicators can be missing at random. Specifically, we present a regression surrogate estimator, an imputation estimator, and an inverse probability weighted estimator. All three estimators are uniformly strongly consistent and asymptotically normal. We derive asymptotic representations of the mean squared error and the mean integrated squared error for these estimators and we discuss a data-driven bandwidth selection method. A simulation study, conducted to assess finite sample behavior, demonstrates that the proposed hazard estimators perform relatively well. We illustrate our methods with an analysis of some vascular disease data.
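A minimal numpy sketch in the spirit of the inverse probability weighted estimator, assuming the probability of observing the indicator has already been estimated: it smooths inverse-probability-weighted Nelson-Aalen increments with a Gaussian kernel. The paper's exact estimators and its data-driven bandwidth selection are not reproduced here.

```python
import numpy as np

def ipw_kernel_hazard(t_grid, times, delta, observed, p_obs, bw=0.25):
    """Kernel-smoothed Nelson-Aalen hazard with inverse-probability weights
    for cause-of-death indicators that may be missing at random.

    times    : follow-up times (assumed distinct)
    delta    : cause-of-death indicator, np.nan where missing
    observed : 1 if delta was observed for that subject, else 0
    p_obs    : estimated probability of observing the indicator
    """
    times = np.asarray(times, float)
    order = np.argsort(times)
    times = times[order]
    d = np.nan_to_num(np.asarray(delta, float)[order])
    obs = np.asarray(observed, float)[order]
    n = len(times)
    at_risk = n - np.arange(n)                 # risk-set size at each ordered time
    w = obs * d / p_obs                        # inverse-probability event weight
    jumps = w / at_risk                        # weighted Nelson-Aalen increments
    kern = lambda u: np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)
    return np.array([np.sum(kern((t - times) / bw) / bw * jumps) for t in t_grid])
```

With fully observed indicators (`observed = 1`, `p_obs = 1`) this reduces to an ordinary kernel-smoothed Nelson-Aalen hazard estimator.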
A random effects meta-analysis model with Box-Cox transformation.
Yamaguchi, Yusuke; Maruo, Kazushi; Partlett, Christopher; Riley, Richard D
2017-07-19
In a random effects meta-analysis model, true treatment effects for each study are routinely assumed to follow a normal distribution. However, normality is a restrictive assumption, and misspecification of the random effects distribution may result in a misleading estimate of the overall mean treatment effect, an inappropriate quantification of heterogeneity across studies, and a wrongly symmetric prediction interval. We focus on problems caused by an inappropriate normality assumption on the random effects distribution, and propose a novel random effects meta-analysis model in which a Box-Cox transformation is applied to the observed treatment effect estimates. The proposed model aims to normalise the overall distribution of observed treatment effect estimates, which is the sum of the within-study sampling distributions and the random effects distribution. When the sampling distributions are approximately normal, non-normality in the overall distribution will be due mainly to the random effects distribution, especially when the between-study variation is large relative to the within-study variation. The Box-Cox transformation addresses this flexibly according to the observed departure from normality. We use a Bayesian approach for estimating parameters in the proposed model, and suggest summarising the meta-analysis results by an overall median, an interquartile range, and a prediction interval. The model can be applied to any kind of variable once the treatment effect estimate is defined from the variable. A simulation study suggested that when the overall distribution of treatment effect estimates is skewed, the overall mean and conventional I² from the normal random effects model could be inappropriate summaries, and the proposed model helped reduce this issue. We illustrated the proposed model using two examples, which revealed some important differences in summary results, heterogeneity measures, and prediction intervals from the normal random effects model.
The random effects meta-analysis with the Box-Cox transformation may be an important tool for examining robustness of traditional meta-analysis results against skewness on the observed treatment effect estimates. Further critical evaluation of the method is needed.
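The core transformation step can be illustrated with `scipy.stats.boxcox`, which selects the transformation parameter by maximum likelihood. This is a toy sketch on hypothetical positive effect estimates (Box-Cox requires positive values, so real effect estimates may need shifting); the paper embeds the transformation inside a Bayesian meta-analysis model, which is not reproduced here.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
# Skewed "observed treatment effect estimates" (hypothetical data).
effects = rng.lognormal(mean=0.0, sigma=0.8, size=200)

# Box-Cox with the lambda parameter estimated by maximum likelihood.
transformed, lam = stats.boxcox(effects)

skew_before = stats.skew(effects)       # strongly right-skewed
skew_after = stats.skew(transformed)    # approximately symmetric
```

For lognormal-like data the fitted lambda is close to zero, i.e. the transformation is close to a log, and the skewness of the transformed estimates shrinks toward zero.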
NASA Technical Reports Server (NTRS)
Merchant, D. H.; Gates, R. M.; Straayer, J. W.
1975-01-01
The effect of localized structural damping on the excitability of higher-order large space telescope spacecraft modes is investigated. A preprocessor computer program is developed to incorporate Voigt structural joint damping models in a finite-element dynamic model. A postprocessor computer program is developed to select critical modes for low-frequency attitude control problems and for higher-frequency fine-stabilization problems. The selection is accomplished by ranking the flexible modes based on coefficients for rate gyro, position gyro, and optical sensor, and on image-plane motions due to sinusoidal or random PSD force and torque inputs.
Subtraction of cap-trapped full-length cDNA libraries to select rare transcripts.
Hirozane-Kishikawa, Tomoko; Shiraki, Toshiyuki; Waki, Kazunori; Nakamura, Mari; Arakawa, Takahiro; Kawai, Jun; Fagiolini, Michela; Hensch, Takao K; Hayashizaki, Yoshihide; Carninci, Piero
2003-09-01
The normalization and subtraction of highly expressed cDNAs from relatively large tissues before cloning dramatically enhanced gene discovery by sequencing for the mouse full-length cDNA encyclopedia, but these methods have not been suitable for limited RNA materials. To normalize and subtract full-length cDNA libraries derived from limited quantities of total RNA, we report here a method to subtract plasmid libraries excised from size-unbiased amplified lambda phage cDNA libraries; this method avoids heavily biasing steps such as PCR and plasmid library amplification. The proportion of full-length cDNAs and the gene discovery rate are high, and library diversity can be validated by in silico randomization.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harner, E.J.; Gilfillan, E.S.
Two large shoreline assessment studies conducted in 1990 in Prince William Sound, Alaska, after the Exxon Valdez oil spill used different design strategies to determine the impact of oiling on shoreline biota. One of the studies, the Coastal Habitat Injury Assessment (CHIA) conducted for the Exxon Valdez Oil Spill Council, used matched pairs of sites, normal population distributions for biota, and meta-analysis. The power of the CHIA study to detect oiling impacts depends on being able to identify and select appropriate pairs of sites for comparison. The CHIA study also increased the oiling signal by focusing on moderate to heavily oiled sites. The Shoreline Ecology Program (SEP), conducted for Exxon, used a stratified-random-sampling study design, normal and non-normal population distributions, and covariates. The SEP study was able to detect oiling impacts by using a sufficient number of sites and widely spaced transects.
Listeners modulate temporally selective attention during natural speech processing
Astheimer, Lori B.; Sanders, Lisa D.
2009-01-01
Spatially selective attention allows for the preferential processing of relevant stimuli when more information than can be processed in detail is presented simultaneously at distinct locations. Temporally selective attention may serve a similar function during speech perception by allowing listeners to allocate attentional resources to time windows that contain highly relevant acoustic information. To test this hypothesis, event-related potentials were compared in response to attention probes presented in six conditions during a narrative: concurrently with word onsets, beginning 50 and 100 ms before and after word onsets, and at random control intervals. Times for probe presentation were selected such that the acoustic environments of the narrative were matched for all conditions. Linguistic attention probes presented at and immediately following word onsets elicited larger amplitude N1s than control probes over medial and anterior regions. These results indicate that native speakers selectively process sounds presented at specific times during normal speech perception. PMID:18395316
Abend, M; Pfeiffer, R M; Ruf, C; Hatch, M; Bogdanova, T I; Tronko, M D; Hartmann, J; Meineke, V; Mabuchi, K; Brenner, A V
2013-10-15
A strong, consistent association between childhood irradiation and subsequent thyroid cancer provides an excellent model for studying radiation carcinogenesis. We evaluated gene expression in 63 paired RNA specimens from frozen normal and tumour thyroid tissues with individual iodine-131 (I-131) doses (0.008-8.6 Gy, no unirradiated controls) received from Chernobyl fallout during childhood (Ukrainian-American cohort). Approximately half of these randomly selected samples (32 tumour/normal tissue RNA specimens) were hybridised on 64 whole-genome microarrays (Agilent, 4 × 44 K). Associations between I-131 dose and gene expression were assessed separately in normal and tumour tissues using Kruskal-Wallis and linear trend tests. Of 155 genes significantly associated with I-131 after Bonferroni correction and with ≥2-fold increase per dose category, we selected 95 genes. On the remaining 31 RNA samples these genes were used for validation purposes using qRT-PCR. Expression of eight genes (ABCC3, C1orf9, C6orf62, FGFR1OP2, HEY2, NDOR1, STAT3, and UCP3) in normal tissue and six genes (ANKRD46, CD47, HNRNPH1, NDOR1, SCEL, and SERPINA1) in tumour tissue was significantly associated with I-131. PANTHER/DAVID pathway analyses demonstrated significant over-representation of genes coding for nucleic acid binding in normal and tumour tissues, and for p53, EGF, and FGF signalling pathways in tumour tissue. The multistep process of radiation carcinogenesis begins in histologically normal thyroid tissue and may involve dose-dependent gene expression changes.
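The per-gene dose-association screen with Bonferroni correction can be sketched as follows. The data are synthetic, and the group sizes and effect sizes are illustrative assumptions, not the study's values; the companion linear trend test is omitted.

```python
import numpy as np
from scipy.stats import kruskal

rng = np.random.default_rng(1)
n_genes, n_per_group, dose_groups = 100, 12, 4

# Synthetic log-expression: gene 0 carries a dose-dependent shift, the rest are null.
pvals = []
for g in range(n_genes):
    groups = []
    for d in range(dose_groups):
        shift = 1.5 * d if g == 0 else 0.0
        groups.append(rng.normal(loc=shift, size=n_per_group))
    pvals.append(kruskal(*groups).pvalue)       # Kruskal-Wallis across dose categories
pvals = np.array(pvals)

# Bonferroni correction over all genes tested.
bonferroni_hits = np.where(pvals < 0.05 / n_genes)[0]
```

With a genuine dose trend in gene 0 and null expression elsewhere, the screen should flag gene 0 and (almost always) nothing else.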
Sandilands, Euan A; Cameron, Sharon; Paterson, Frances; Donaldson, Sam; Briody, Lesley; Crowe, Jane; Donnelly, Julie; Thompson, Adrian; Johnston, Neil R; Mackenzie, Ivor; Uren, Neal; Goddard, Jane; Webb, David J; Megson, Ian L; Bateman, Nicholas; Eddleston, Michael
2012-02-03
Contrast-induced nephropathy is a common complication of contrast administration in patients with chronic kidney disease and diabetes. Its pathophysiology is not well understood; similarly the role of intravenous or oral acetylcysteine is unclear. Randomized controlled trials to date have been conducted without detailed knowledge of the effect of acetylcysteine on renal function. We are conducting a detailed mechanistic study of acetylcysteine on normal and impaired kidneys, both with and without contrast. This information would guide the choice of dose, route, and appropriate outcome measure for future clinical trials in patients with chronic kidney disease. We designed a 4-part study. We have set up randomised controlled cross-over studies to assess the effect of intravenous (50 mg/kg/hr for 2 hrs before contrast exposure, then 20 mg/kg/hr for 5 hrs) or oral acetylcysteine (1200 mg twice daily for 2 days, starting the day before contrast exposure) on renal function in normal and diseased kidneys, and normal kidneys exposed to contrast. We have also set up a parallel-group randomized controlled trial to assess the effect of intravenous or oral acetylcysteine on patients with chronic kidney disease stage III undergoing elective coronary angiography. The primary outcome is change in renal blood flow; secondary outcomes include change in glomerular filtration rate, tubular function, urinary proteins, and oxidative balance. Contrast-induced nephropathy represents a significant source of hospital morbidity and mortality. Over the last ten years, acetylcysteine has been administered prior to contrast to reduce the risk of contrast-induced nephropathy. Randomized controlled trials, however, have not reliably demonstrated renoprotection; a recent large randomized controlled trial assessing a dose of oral acetylcysteine selected without mechanistic insight did not reduce the incidence of contrast-induced nephropathy. 
Our study should reveal the mechanism of effect of acetylcysteine on renal function and identify an appropriate route for future dose-response studies and, in time, randomized controlled trials. ClinicalTrials.gov: NCT00558142; EudraCT: 2006-003509-18.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, A; Mohan, R; Liao, Z
Purpose: The aim of this work is to compare the “irradiated volume” (IRV) of normal tissues receiving 5, 20, 50, 80, and 90% or more of the prescription dose with passively scattered proton therapy (PSPT) vs. IMRT in lung cancer patients. The overall goal of this research is to understand the factors affecting outcomes of a randomized PSPT vs. IMRT lung trial. Methods: Thirteen lung cancer patients, selected randomly, were analyzed. For each patient, PSPT and IMRT 74 Gy (RBE) plans meeting the same normal tissue constraints were generated. IRVs were created for the pair of IMRT and PSPT plans of each patient. The volume of the iGTV (respiratory motion-incorporated GTV) was subtracted from each IRV to create the normal tissue irradiated volume, IRVNT. The average of the IRVNT DVHs over all patients was also calculated for both modalities and inter-compared, as were selected dose-volume indices. Probability (p value) curves were calculated based on the Wilcoxon matched-pair signed-rank test to determine the dose regions where statistically significant differences existed. Results: As expected, the average 5, 20, and 50% IRVNTs for PSPT were significantly smaller than for IMRT (p < 0.001, 0.01, and 0.001, respectively). However, the average 90% IRVNT for PSPT was greater than for IMRT (p = 0.003), presumably due to the larger penumbra of protons and the long range of protons in lower-density media. The 80% IRVNT for PSPT was also larger but not statistically distinguishable (p = 0.224). Conclusion: The PSPT modality has a smaller irradiated volume at lower doses but a larger volume at high doses. A larger cohort of lung patients will be analyzed in the future, and the IRVNT of patients treated with PSPT and IMRT will be compared to determine whether the irradiated volumes (the magnitude of the “dose bath”) correlate with outcomes.
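The paired-comparison step can be illustrated with `scipy.stats.wilcoxon` on hypothetical paired dose-volume indices; the values below are simulated, not the trial's data.

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(7)
n_patients = 13

# Hypothetical paired low-dose irradiated-volume indices (cc) per patient:
# PSPT is simulated as systematically smaller than IMRT at the 5% dose level.
imrt_v5 = rng.normal(loc=3000.0, scale=400.0, size=n_patients)
pspt_v5 = imrt_v5 - rng.normal(loc=600.0, scale=150.0, size=n_patients)

# Matched-pair (signed-rank) test on the per-patient differences.
stat, p = wilcoxon(pspt_v5, imrt_v5)
```

With a consistent per-patient reduction, the two-sided signed-rank p-value is well below conventional significance thresholds even for 13 pairs.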
Registration algorithm of point clouds based on multiscale normal features
NASA Astrophysics Data System (ADS)
Lu, Jun; Peng, Zhongtao; Su, Hang; Xia, GuiHua
2015-01-01
The point cloud registration technology for obtaining a three-dimensional digital model is widely applied in many areas. To improve the accuracy and speed of point cloud registration, a registration method based on multiscale normal vectors is proposed. The proposed registration method mainly includes three parts: the selection of key points, the calculation of feature descriptors, and the determination and optimization of correspondences. First, key points are selected from the point cloud based on changes in the magnitude of multiscale curvatures obtained by using principal components analysis. Then a feature descriptor for each key point is proposed, which consists of 21 elements based on multiscale normal vectors and curvatures. The correspondences between two point clouds are determined according to the descriptor similarity of key points in the source and target point clouds. Correspondences are optimized by using a random sample consensus (RANSAC) algorithm and clustering technology. Finally, singular value decomposition is applied to the optimized correspondences so that the rigid transformation matrix between the two point clouds is obtained. Experimental results show that the proposed point cloud registration algorithm has a faster calculation speed, higher registration accuracy, and better anti-noise performance.
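The final SVD step, recovering the rigid transformation from already-optimized correspondences, can be sketched with the standard Kabsch procedure. This is a minimal sketch of that one stage; the key-point selection, descriptor matching, and RANSAC filtering stages are assumed done.

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rotation R and translation t with dst ≈ R @ src + t,
    computed from matched point pairs via singular value decomposition."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)                 # cross-covariance of centred pairs
    U, S, Vt = np.linalg.svd(H)
    # Correct for a possible reflection so that det(R) = +1.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cd - R @ cs
    return R, t
```

For noiseless correspondences the procedure recovers the true rotation and translation exactly (up to floating-point error), which makes it easy to unit-test.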
Dysmegakaryocytopoiesis and maintaining platelet count in patients with plasma cell neoplasm.
Mair, Yasmin; Zheng, Yan; Cai, Donghong
2013-05-01
Dysmegakaryocytopoiesis in patients with plasma cell neoplasm (PCN) is rarely discussed in the literature. The puzzling phenomenon in which PCN patients maintain a normal platelet count even when the marrow is mostly replaced by plasma cells is hardly explored. This study aimed to determine the frequency of dysmegakaryocytopoiesis in PCN and the relationships between bone marrow (BM) plasma cell percentage, plasma cell immunomarkers, the severity of dysmegakaryocytopoiesis, and peripheral blood platelet count in PCN. We randomly selected 16 cases of PCN, of which 4 were monoclonal gammopathy of undetermined significance and 12 were plasma cell myeloma. Our study showed that: (1) dysmegakaryocytopoiesis was present in all the selected cases of PCN, and its severity was not correlated with the percentage of plasma cells in BM; (2) almost all patients maintained a normal platelet count even when the BM was mostly replaced by plasma cells; (3) immunomarkers of the neoplastic plasma cells were not associated with dysmegakaryocytopoiesis or maintenance of the platelet count. The possible mechanisms behind dysmegakaryocytopoiesis and maintenance of the platelet count are also discussed. Despite the universal presence of dysmegakaryocytopoiesis in PCN, the platelet count is maintained within the normal range.
Kerschner, Joseph E; Erdos, Geza; Hu, Fen Ze; Burrows, Amy; Cioffi, Joseph; Khampang, Pawjai; Dahlgren, Margaret; Hayes, Jay; Keefe, Randy; Janto, Benjamin; Post, J Christopher; Ehrlich, Garth D
2010-04-01
We sought to construct and partially characterize complementary DNA (cDNA) libraries prepared from the middle ear mucosa (MEM) of chinchillas to better understand pathogenic aspects of infection and inflammation, particularly with respect to leukotriene biogenesis and response. Chinchilla MEM was harvested from controls and after middle ear inoculation with nontypeable Haemophilus influenzae. RNA was extracted to generate cDNA libraries. Randomly selected clones were subjected to sequence analysis to characterize the libraries and to provide DNA sequence for phylogenetic analyses. Reverse transcription-polymerase chain reaction of the RNA pools was used to generate cDNA sequences corresponding to genes associated with leukotriene biosynthesis and metabolism. Sequence analysis of 921 randomly selected clones from the uninfected MEM cDNA library produced approximately 250,000 nucleotides of almost entirely novel sequence data. Searches of the GenBank database with the Basic Local Alignment Search Tool provided for identification of 515 unique genes expressed in the MEM and not previously described in chinchillas. In almost all cases, the chinchilla cDNA sequences displayed much greater homology to human or other primate genes than with rodent species. Genes associated with leukotriene metabolism were present in both normal and infected MEM. Based on both phylogenetic comparisons and gene expression similarities with humans, chinchilla MEM appears to be an excellent model for the study of middle ear inflammation and infection. The higher degree of sequence similarity between chinchillas and humans compared to chinchillas and rodents was unexpected. The cDNA libraries from normal and infected chinchilla MEM will serve as useful molecular tools in the study of otitis media and should yield important information with respect to middle ear pathogenesis.
Sexual self-esteem in mothers of normal and mentally-retarded children.
Tavakolizadeh, Jahanshir; Amiri, Mostafa; Nejad, Fahimeh Rastgoo
2017-06-01
Sexual self-esteem is negatively influenced by stressful life experiences. This study compared sexual self-esteem and its components in mothers of normal and mentally-retarded children in Qaen city in 2014. A total of 120 mothers were selected and assigned to two groups of 60 based on convenience sampling and randomized multistage sampling. Both groups completed a sexual self-esteem questionnaire. The data were analyzed with t-tests using SPSS software, version 15. The results showed that sexual self-esteem in mothers of mentally-retarded children was significantly lower than that of mothers of normal children (p < 0.05). Moreover, the mean scores of all components of sexual self-esteem, including skill and experience, attractiveness, control, moral judgment, and adaptiveness, in mothers of mentally-retarded children were significantly lower than those of mothers of normal children (p < 0.05). Therefore, it is recommended that self-esteem, especially sexual self-esteem, be taught to mothers of mentally-retarded children by specialists.
Qin, Li-Xuan; Levine, Douglas A
2016-06-10
Accurate discovery of molecular biomarkers that are prognostic of a clinical outcome is an important yet challenging task, partly because the typically weak genomic signal for a clinical outcome is combined with frequently strong noise from microarray handling effects. Effective strategies to resolve this challenge are urgently needed. We set out to assess the use of careful study design and data normalization for the discovery of prognostic molecular biomarkers. Taking progression-free survival in advanced serous ovarian cancer as an example, we conducted an empirical analysis of two sets of microRNA arrays for the same set of tumor samples: arrays in one set were collected using careful study design (that is, uniform handling and randomized array-to-sample assignment) and arrays in the other set were not. We found that (1) handling effects can confound the clinical outcome under study by chance even with randomization, (2) the level of confounding handling effects can be reduced by data normalization, and (3) good study design cannot be replaced by post-hoc normalization. In addition, we provided a practical approach to define positive and negative control markers for detecting handling effects and assessing the performance of a normalization method. Our work showcased the difficulty of finding prognostic biomarkers for a clinical outcome with weak genomic signal, illustrated the benefits of careful study design and data normalization, and provided a practical approach to identify handling effects and select a beneficial normalization method. Our work calls for careful study design and data analysis in the discovery of robust and translatable molecular biomarkers.
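A toy sketch of the idea of reducing an additive handling (batch) effect by per-array median normalization, with null markers serving as negative controls. All simulation parameters are hypothetical; this is not the paper's normalization pipeline.

```python
import numpy as np

rng = np.random.default_rng(3)
n_arrays, n_markers = 40, 500
batch = np.repeat([0, 1], n_arrays // 2)          # two handling batches
expr = rng.normal(size=(n_arrays, n_markers))     # null markers: no true signal
expr += batch[:, None] * 0.8                      # additive handling effect

def median_normalize(x):
    """Subtract each array's median so all arrays share a common centre."""
    return x - np.median(x, axis=1, keepdims=True)

norm = median_normalize(expr)

# With only null markers, the batch-to-batch gap acts as a negative-control
# read-out: it should shrink substantially after normalization.
gap_raw = abs(expr[batch == 0].mean() - expr[batch == 1].mean())
gap_norm = abs(norm[batch == 0].mean() - norm[batch == 1].mean())
```

Comparing such a gap before and after normalization, for markers known to carry no outcome signal, is one simple way to quantify how much of a handling effect a normalization method removes.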
Hu, Ping; Wu, Tingting; Zhang, Fan; Zhang, Yan; Lu, Lu; Zeng, Huan; Sharma, Manoj; Xun, Lei; Zhao, Yong
2017-01-01
(1) Objective: We aimed to explore the current situation of eating out and the association with socio-demographic factors of university students in Chongqing, China. (2) Methods: We used self-administered questionnaires to collect information. There are 14 universities in Chongqing; four (Chongqing Medical University, Chongqing University, Chongqing Normal University, and Chongqing University of Science & Technology) were randomly selected. In each selected university, two disciplines were randomly selected. (3) Results: 4595 university students participated in the study. The frequency of eating out was relatively high. The frequency of eating out among females was higher than that among males during weekdays. The two main reasons for eating out were having an opportunity to meet friends (56.0%) and improving diet (39.6%). Bistros (61.7%) and hot-pot restaurants (41.1%) were the favorite places for eating out. Only 36.0% of the participants said they considered nutrition and food safety when selecting restaurants. The majority of the participants demonstrated a high demand for nutrition and food safety knowledge when eating out (77.7%). (4) Conclusions: The higher the monthly living expenses were, the higher the frequency of eating out was. An intervention strategy to reduce the frequency or change the behavior of eating out should be formulated by considering the students’ perspectives. PMID:29084159
Complex patterns of abnormal heartbeats
NASA Technical Reports Server (NTRS)
Schulte-Frohlinde, Verena; Ashkenazy, Yosef; Goldberger, Ary L.; Ivanov, Plamen Ch; Costa, Madalena; Morley-Davies, Adrian; Stanley, H. Eugene; Glass, Leon
2002-01-01
Individuals having frequent abnormal heartbeats interspersed with normal heartbeats may be at an increased risk of sudden cardiac death. However, mechanistic understanding of such cardiac arrhythmias is limited. We present a visual and qualitative method to display statistical properties of abnormal heartbeats. We introduce dynamical "heartprints" which reveal characteristic patterns in long clinical records encompassing approximately 10^5 heartbeats and may provide information about underlying mechanisms. We test if these dynamics can be reproduced by model simulations in which abnormal heartbeats are generated (i) randomly, (ii) at a fixed time interval following a preceding normal heartbeat, or (iii) by an independent oscillator that may or may not interact with the normal heartbeat. We compare the results of these three models and test their limitations to comprehensively simulate the statistical features of selected clinical records. This work introduces methods that can be used to test mathematical models of arrhythmogenesis and to develop a new understanding of underlying electrophysiologic mechanisms of cardiac arrhythmia.
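The three candidate generation mechanisms can be sketched as simple simulations; all rates, coupling intervals, and oscillator periods below are arbitrary illustrative values, not parameters from the paper.

```python
import numpy as np

rng = np.random.default_rng(11)
n_beats, p_abn, coupling, vv_period = 2000, 0.10, 0.45, 1.7

def model_random(n, p):
    """(i) each beat is abnormal independently with probability p."""
    return rng.random(n) < p

def model_fixed_coupling(normal_times, coupling, frac=0.1):
    """(ii) an abnormal beat follows a randomly selected normal beat
    at a fixed coupling interval."""
    trigger = rng.random(len(normal_times)) < frac
    return normal_times[trigger] + coupling

def model_parasystole(t_end, period):
    """(iii) an independent oscillator fires abnormal beats at its own period."""
    return np.arange(0.0, t_end, period)

labels = model_random(n_beats, p_abn)                       # model (i)
normal_times = np.cumsum(rng.normal(0.8, 0.05, size=n_beats))  # sinus beat times (s)
coupled = model_fixed_coupling(normal_times, coupling)      # model (ii)
ectopic = model_parasystole(normal_times[-1], vv_period)    # model (iii)
```

Sequences generated this way could be rendered as "heartprint"-style statistics (e.g. coupling-interval and NIB histograms) and compared against clinical records.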
Bucky Paper as a Support Membrane in Retinal Cell Transplantation
NASA Technical Reports Server (NTRS)
Loftus, David J. (Inventor); Leng, Theodore (Inventor); Huie, Philip (Inventor); Fishman, Harvey (Inventor)
2006-01-01
A method for repairing a retinal system of an eye, using bucky paper on which a plurality of retina pigment epithelial cells and/or iris pigment epithelial cells and/or stem cells is deposited, either randomly or in a selected cell pattern. The cell-covered bucky paper is positioned in a sub-retinal space to transfer cells to this space and thereby restore the retina to its normal functioning, where retinal damage or degeneration, such as macular degeneration, has occurred.
Aksu, T A; Esen, F; Dolunay, M S; Alicigüzel, Y; Yücel, G; Cali, S; Baykal, Y
1990-06-01
Glucose-6-phosphate dehydrogenase (EC 1.1.1.49) activity was assessed in 1986-1988 in blood samples from 1,521 individuals from 375 families living in Antalya city and adjacent villages, using Beutler's fluorescence spot test. The families were randomly selected by the State Statistical Institute. Complete deficiency occurred in 7.4% of males and 1.8% of females. Mean enzyme activity was 6.77 +/- 1.07 IU/g Hb in normal subjects and ranged between 0 and 0.48 IU/g Hb in those considered deficient. Kinetic measurements made with partially purified enzyme showed that GdB+ and GdB- variants were present in normal and deficient subjects, respectively.
Hypertension 6 weeks post partum in apparently normal women. A reappraisal and challenge.
Piver, M S; Corson, S L; Bolognese, R J
1967-08-01
Hypertensive blood pressure readings were recorded in 282 women from a group of 1025 consecutive patients returning for their sixth-week postpartum visit. Of the 282 women, 120 were classified as toxemic; however, 162 (58%) had had no elevation of blood pressure antepartum, intrapartum, or during the immediate postpartum period. In addition, 100 women with normal sixth-week postpartum blood pressures were randomly selected as controls. A retrospective analysis of their records showed sixth-week postpartum hypertension to occur much less frequently (13%) in nulliparous women, as compared to the preeclamptic nulliparous patients (31%). With the exception of proteinuria, all of the other data studied failed to reveal any significant abnormalities in the late postpartum hypertensive group of patients.
Yang, Lanlin; Cai, Sufen; Zhang, Shuoping; Kong, Xiangyi; Gu, Yifan; Lu, Changfu; Dai, Jing; Gong, Fei; Lu, Guangxiu; Lin, Ge
2018-05-01
Does single cleavage-stage (Day 3) embryo transfer using a time-lapse (TL) hierarchical classification model achieve comparable ongoing pregnancy rates (OPR) to single blastocyst (Day 5) transfer by conventional morphological (CM) selection? Day 3 single embryo transfer (SET) with a hierarchical classification model had a significantly lower OPR compared with Day 5 SET with CM selection. Cleavage-stage SET is an alternative to blastocyst SET. Time-lapse imaging assists better embryo selection, based on studies of pregnancy outcomes when adding time-lapse imaging to CM selection at the cleavage or blastocyst stage. This single-centre, randomized, open-label, active-controlled, non-inferiority study included 600 women between October 2015 and April 2017. Eligible patients were Chinese females, aged ≤36 years, who were undergoing their first or second fresh IVF cycle using their own oocytes, and who had FSH levels ≤12 IU/mL on Day 3 of the cycle and 10 or more oocytes retrieved. Patients who had underlying uterine conditions, oocyte donation, recurrent pregnancy loss, abnormal oocytes or <6 normally fertilized embryos (2PN) were excluded from study participation. Patients were randomized 1:1 to either cleavage-stage SET with a time-lapse hierarchical classification model for selection (D3 + TL) or blastocyst SET with CM selection (D5 + CM). All normally fertilized zygotes were cultured in Primo Vision. The study was conducted at a tertiary IVF centre (CITIC-Xiangya) and OPR was the primary outcome. A total of 600 patients were randomized to the two groups, among which 585 (D3 + TL = 290, D5 + CM = 295) were included in the modified intention-to-treat (mITT) population and 517 (D3 + TL = 261, D5 + CM = 256) were included in the per-protocol (PP) population. In the PP population, OPR was significantly lower in the D3 group (59.4%, 155/261) than in the D5 group (68.4%, 175/256) (difference: -9.0%, 95% CI: -17.1%, -0.7%, P = 0.03). 
Analysis in the mITT population showed a marginally significant difference in the OPR between the D3 + TL and D5 + CM groups (56.6 versus 64.1%, difference: -7.5%, 95% CI: -15.4%, 0.4%, P = 0.06). The D3 + TL group resulted in a markedly lower implantation rate than the D5 + CM group (64.4 versus 77.0%; P = 0.002) in the PP analysis; however, the early miscarriage rate did not significantly differ between the two groups. The study lacked a direct comparison between time-lapse and CM selection at cleavage-stage SET and was statistically underpowered to detect non-inferiority. The subject eligibility criteria, which favoured women with a good prognosis for IVF, weakened the generalizability of the results. The OPR from Day 3 cleavage-stage SET using hierarchical classification time-lapse selection was significantly lower compared with that from Day 5 blastocyst SET using conventional morphology, yet it appeared to be clinically acceptable in women undergoing IVF. This study is supported by grants from Ferring Pharmaceuticals and the Program for New Century Excellent Talents in University, China. ChiCTR-ICR-15006600. 16 June 2015. 1 October 2015.
Lima, Robson B DE; Bufalino, Lina; Alves, Francisco T; Silva, José A A DA; Ferreira, Rinaldo L C
2017-01-01
Currently, there is a lack of studies on the correct utilization of continuous distributions for dry tropical forests. Therefore, this work aims to investigate the diameter structure of a Brazilian tropical dry forest and to select suitable continuous distributions, by means of statistical tools, for the stand and the main species. Two subsets were randomly selected from 40 plots. Diameter at base height was obtained. The following functions were tested: log-normal, gamma, Weibull 2P and Burr. The best fits were selected by Akaike's information criterion (AIC). Overall, the diameter distribution of the dry tropical forest was better described by negative exponential curves and positive skewness. The forest studied showed diameter distributions with decreasing probability for larger trees. This behavior was observed for both the main species and the stand. The generalization of the function fitted for the main species shows that the development of individual models is needed. The Burr function showed good flexibility to describe the diameter structure of the stand and the behavior of the Mimosa ophthalmocentra and Bauhinia cheilantha species. For Poincianella bracteosa, Aspidosperma pyrifolium and Myracrodruon urundeuva, better fitting was obtained with the log-normal function.
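The selection step described above, fitting log-normal, gamma, Weibull and Burr functions and ranking them by Akaike's information criterion, can be sketched as follows. The diameters here are synthetic, and the SciPy parameterizations (e.g., `burr12` standing in for the Burr function) are assumptions rather than the authors' code:

```python
import numpy as np
from scipy import stats

def fit_and_rank(diameters):
    """Fit candidate diameter distributions by maximum likelihood and
    rank them by Akaike's information criterion (lower AIC = better)."""
    candidates = {
        "log-normal": stats.lognorm,
        "gamma": stats.gamma,
        "weibull": stats.weibull_min,
        "burr": stats.burr12,
    }
    aic = {}
    for name, dist in candidates.items():
        params = dist.fit(diameters, floc=0)   # location fixed at zero
        k = len(params) - 1                    # the fixed loc is not estimated
        loglik = np.sum(dist.logpdf(diameters, *params))
        aic[name] = 2 * k - 2 * loglik
    return sorted(aic.items(), key=lambda kv: kv[1])

# Synthetic stand: right-skewed, diameter-like values in cm (hypothetical data).
rng = np.random.default_rng(42)
diameters = rng.lognormal(mean=2.3, sigma=0.5, size=1000)
ranking = fit_and_rank(diameters)
```

The first entry of `ranking` is the selected function for the data at hand; on real inventory data the winner would of course depend on the stand and species.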
Zhu, Shufeng; Wong, Lena L N; Wang, Bin; Chen, Fei
2017-07-12
The aim of the present study was to evaluate the influence of lexical tone contour and age on sentence perception in quiet and in noise conditions in Mandarin-speaking children ages 7 to 11 years with normal hearing. Test materials were synthesized Mandarin sentences, each word with a manipulated lexical contour, that is, normal contour, flat contour, or a tone contour randomly selected from the four Mandarin lexical tone contours. A convenience sample of 75 Mandarin-speaking participants with normal hearing, ages 7, 9, and 11 years (25 participants in each age group), was selected. Participants were asked to repeat the synthesized speech in quiet and in speech spectrum-shaped noise at 0 dB signal-to-noise ratio. In quiet, sentence recognition by the 11-year-old children was similar to that of adults, and misrepresented lexical tone contours did not have a detrimental effect. However, the performance of children ages 9 and 7 years was significantly poorer. The performance of all three age groups, especially the younger children, declined significantly in noise. The present research suggests that lexical tone contour plays an important role in Mandarin sentence recognition, and misrepresented tone contours result in greater difficulty in sentence recognition in younger children. These results imply that maturation and/or language use experience play a role in the processing of tone contours for Mandarin speech understanding, particularly in noise.
Gutnik, Lily; Lee, Clara; Msosa, Vanessa; Moses, Agnes; Stanley, Christopher; Mzumara, Suzgo; Liomba, N George; Gopal, Satish
2016-07-01
Breast cancer awareness and early detection are limited in sub-Saharan Africa. Resource limitations make screening mammography or clinical breast examination (CBE) by physicians or nurses impractical in many settings. We aimed to assess feasibility and performance of CBE by laywomen in urban health clinics in Malawi. Four laywomen were trained to deliver breast cancer educational talks and conduct CBE. After training, screening was implemented in diverse urban health clinics. Eligible women were ≥30 y, with no prior breast cancer or breast surgery, and clinic attendance for reasons other than a breast concern. Women with abnormal CBE were referred to a study surgeon. All palpable masses confirmed by surgeon examination were pathologically sampled. Patients with abnormal screening CBE but normal surgeon examination underwent breast ultrasound confirmation. In addition, 50 randomly selected women with normal screening CBE underwent breast ultrasound, and 45 different women with normal CBE were randomly assigned to surgeon examination. Among 1220 eligible women, 1000 (82%) agreed to CBE. Lack of time (69%) was the commonest reason for refusal. Educational talk attendance was associated with higher CBE participation (83% versus 77%, P = 0.012). Among 1000 women screened, 7% had abnormal CBE. Of 45 women with normal CBE randomized to physician examination, 43 had normal examinations and two had axillary lymphadenopathy not detected by CBE. Sixty of 67 women (90%) with abnormal CBE attended the referral visit. Of these, 29 (48%) had concordant abnormal physician examination. Thirty-one women (52%) had discordant normal physician examination, all of whom also had normal breast ultrasounds. Compared with physician examination, sensitivity for CBE by laywomen was 94% (confidence interval [CI] 79%-99%), specificity 58% (CI, 46%-70%), positive predictive value 48% (CI, 35%-62%), and negative predictive value 96% (CI, 85%-100%). 
Of 13 women who underwent recommended pathologic sampling of a breast lesion, two had cytologic dysplasia and all others benign results. CBE uptake in Lilongwe clinics was high. CBE by laywomen compared favorably with physician examination and follow-up was good. Our intervention can serve as a model for wider implementation. Performance in rural areas, effects on cancer stage and mortality, and cost effectiveness require evaluation. Copyright © 2016 Elsevier Inc. All rights reserved.
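The reported operating characteristics can be reconstructed from the counts given above (29 concordant abnormal, 31 discordant normal, 2 abnormal physician examinations missed by CBE, 43 concordant normal), taking physician examination as the reference standard:

```python
def screening_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from a 2x2 screening table."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),
        "npv": tn / (tn + fn),
    }

# Counts reconstructed from the study text above.
m = screening_metrics(tp=29, fp=31, fn=2, tn=43)
```

Rounded to whole percentages these reproduce the reported 94% sensitivity, 58% specificity, 48% PPV and 96% NPV.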
Proteome Analysis of Thyroid Cancer Cells After Long-Term Exposure to a Random Positioning Machine
NASA Astrophysics Data System (ADS)
Pietsch, Jessica; Bauer, Johann; Weber, Gerhard; Nissum, Mikkel; Westphal, Kriss; Egli, Marcel; Grosse, Jirka; Schönberger, Johann; Eilles, Christoph; Infanger, Manfred; Grimm, Daniela
2011-11-01
Annulling gravity during cell culturing triggers various types of cells to change their protein expression in a time-dependent manner. We therefore decided to determine gravity-sensitive proteins and their period of sensitivity to the effects of gravity. In this study, thyroid cancer cells of the ML-1 cell line were cultured under normal gravity (1 g) or in a random positioning machine (RPM), which simulated near weightlessness, for 7 and 11 days. Cells were then sonicated, and proteins released into the supernatant were separated from those that remained attached to the cell fragments. Subsequently, both types of proteins were fractionated by free-flow isoelectric focussing (FF-IEF). The fractions obtained were further separated by sodium dodecyl sulphate polyacrylamide gel electrophoresis (SDS-PAGE), to which comparable FF-IEF fractions derived from cells cultured either under 1 g or on the RPM had been applied side by side. The separation resulted in pairs of lanes, on which a number of identical bands were observed. Selected gel pieces were excised and their proteins determined by mass spectrometry. The same proteins were detected in comparable gel pieces from cells cultured under normal gravity and on the RPM. However, many of these proteins had received different Mascot scores. Quantifying heat shock cognate 71 kDa protein, glutathione S-transferase P, nucleoside diphosphate kinase A and annexin-2 by Western blotting of whole cell lysates indicated the usefulness of Mascot scores for selecting the most efficient antibodies.
NASA Astrophysics Data System (ADS)
Moslemipour, Ghorbanali
2018-07-01
This paper proposes a quadratic assignment-based mathematical model for the stochastic dynamic facility layout problem. In this problem, product demands are assumed to be dependent, normally distributed random variables with known probability density function and covariance that change from period to period at random. To solve the proposed model, a novel hybrid intelligent algorithm is proposed by combining the simulated annealing and clonal selection algorithms. The proposed model and the hybrid algorithm are verified and validated using design of experiments and benchmark methods. The results show that the hybrid algorithm performs outstandingly in terms of both solution quality and computational time. In addition, the proposed model can be used in both stochastic and deterministic situations.
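Only the simulated annealing half of the proposed hybrid can be sketched compactly. The toy quadratic assignment instance below (deterministic flows, hypothetical matrices) shows the core accept/reject loop; the clonal selection component and the stochastic, correlated demands are omitted:

```python
import math
import random

def qap_cost(perm, flow, dist):
    """Total material-handling cost: sum of flow(i,j) * distance(loc(i), loc(j))."""
    n = len(perm)
    return sum(flow[i][j] * dist[perm[i]][perm[j]]
               for i in range(n) for j in range(n))

def anneal(flow, dist, t0=10.0, cooling=0.995, steps=5000, seed=1):
    """Simulated annealing over facility-to-location permutations."""
    rng = random.Random(seed)
    n = len(flow)
    perm = list(range(n))
    cost = qap_cost(perm, flow, dist)
    best, best_cost = perm[:], cost
    t = t0
    for _ in range(steps):
        i, j = rng.sample(range(n), 2)
        perm[i], perm[j] = perm[j], perm[i]        # propose swapping two facilities
        new_cost = qap_cost(perm, flow, dist)
        if new_cost < cost or rng.random() < math.exp((cost - new_cost) / t):
            cost = new_cost
            if cost < best_cost:
                best, best_cost = perm[:], cost
        else:
            perm[i], perm[j] = perm[j], perm[i]    # reject: undo the swap
        t *= cooling
    return best, best_cost

# A tiny hypothetical instance: 5 facilities, 5 locations on a line.
flow = [[0, 3, 1, 2, 0], [3, 0, 4, 1, 2], [1, 4, 0, 2, 1],
        [2, 1, 2, 0, 5], [0, 2, 1, 5, 0]]
dist = [[abs(a - b) for b in range(5)] for a in range(5)]
best, best_cost = anneal(flow, dist)
```

The temperature schedule and step count are illustrative; a production solver would tune these and, per the paper, hybridize with a population-based clonal selection step.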
2012-01-01
Background Contrast-induced nephropathy is a common complication of contrast administration in patients with chronic kidney disease and diabetes. Its pathophysiology is not well understood; similarly the role of intravenous or oral acetylcysteine is unclear. Randomized controlled trials to date have been conducted without detailed knowledge of the effect of acetylcysteine on renal function. We are conducting a detailed mechanistic study of acetylcysteine on normal and impaired kidneys, both with and without contrast. This information would guide the choice of dose, route, and appropriate outcome measure for future clinical trials in patients with chronic kidney disease. Methods/Design We designed a 4-part study. We have set up randomised controlled cross-over studies to assess the effect of intravenous (50 mg/kg/hr for 2 hrs before contrast exposure, then 20 mg/kg/hr for 5 hrs) or oral acetylcysteine (1200 mg twice daily for 2 days, starting the day before contrast exposure) on renal function in normal and diseased kidneys, and normal kidneys exposed to contrast. We have also set up a parallel-group randomized controlled trial to assess the effect of intravenous or oral acetylcysteine on patients with chronic kidney disease stage III undergoing elective coronary angiography. The primary outcome is change in renal blood flow; secondary outcomes include change in glomerular filtration rate, tubular function, urinary proteins, and oxidative balance. Discussion Contrast-induced nephropathy represents a significant source of hospital morbidity and mortality. Over the last ten years, acetylcysteine has been administered prior to contrast to reduce the risk of contrast-induced nephropathy. Randomized controlled trials, however, have not reliably demonstrated renoprotection; a recent large randomized controlled trial assessing a dose of oral acetylcysteine selected without mechanistic insight did not reduce the incidence of contrast-induced nephropathy. 
Our study should reveal the mechanism of effect of acetylcysteine on renal function and identify an appropriate route for future dose-response studies and, in time, randomized controlled trials. Trial registration: ClinicalTrials.gov NCT00558142; EudraCT: 2006-003509-18. PMID:22305183
Gill, C O; McGinnis, J C; Bryant, J
1998-07-21
The microbiological effects on the product of the series of operations for skinning the hindquarters of beef carcasses at three packing plants were assessed. Samples were obtained at each plant from randomly selected carcasses, by swabbing specified sites related to opening cuts, rump skinning or flank skinning operations, randomly selected sites along the lines of the opening cuts, or randomly selected sites on the skinned hindquarters of carcasses. A set of 25 samples of each type was collected at each plant, with a single sample collected from each selected carcass. Aerobic counts, coliforms and Escherichia coli were enumerated in each sample, and a log mean value was estimated for each set of 25 counts on the assumption of a log-normal distribution of the counts. The data indicated that the hindquarters skinning operations at plant A were hygienically inferior to those at the other two plants, with mean numbers of coliforms and E. coli being about two orders of magnitude greater, and aerobic counts being an order of magnitude greater, on the skinned hindquarters of carcasses from plant A than on those from plants B or C. The data further indicated that the operation for cutting open the skin at plant C was hygienically superior to the equivalent operation at plant B, but that the operations for skinning the rump and flank at plant B were hygienically superior to the equivalent operations at plant C. The findings suggest that objective assessment of the microbiological effects of beef carcass dressing processes will be required to ensure that Hazard Analysis Critical Control Point (HACCP) and Quality Management Systems are operated to control the microbiological condition of carcasses.
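The log mean estimation used above, averaging log10 counts under a log-normal assumption, can be sketched as follows. The counts are hypothetical and the detection-limit handling is a deliberate simplification (a censored-data estimator would normally be used for below-limit samples):

```python
import math

def log_mean(counts, detection_limit=1.0):
    """Estimate the log10 mean and SD of microbial counts, assuming
    the counts follow a log-normal distribution.

    Counts below the detection limit are simply floored at the limit here,
    which is a simplification of proper censored-data estimation.
    """
    logs = [math.log10(max(c, detection_limit)) for c in counts]
    mu = sum(logs) / len(logs)
    sd = math.sqrt(sum((x - mu) ** 2 for x in logs) / (len(logs) - 1))
    return mu, sd

counts = [120, 3400, 560, 80, 15000, 230, 940, 60, 410, 2600]  # cfu, hypothetical
mu, sd = log_mean(counts)
```

Note that the log mean is always below the log of the arithmetic mean count, which is why the log-normal assumption matters when comparing plants.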
Effect of Jiangzhi tablet on gastrointestinal propulsive function in mice
NASA Astrophysics Data System (ADS)
Wang, Xiangrong; Geng, Xiuli; Zhao, Jingsheng; Fan, Lili; Zhang, Zhengchen
2018-04-01
This paper studies the effect of the lipid-lowering (Jiangzhi) tablet on gastric emptying and small-intestinal propulsion in mice. Mice were randomly divided into a control group, a Digestant Pill group, and Jiangzhi tablet groups (middle dose and small dose). Gastric emptying was evaluated from the gastric residual rate of a phenol red (phenolsulfonphthalein) indicator, and small-intestinal propulsion was measured using carbon ink as the experimental index. Each administration group promoted gastric emptying and small-intestinal motor function in normal mice, with the effect most pronounced in the small-dose group. The lipid-lowering tablet thus markedly enhanced gastrointestinal motility in mice.
Log-normal distribution from a process that is not multiplicative but is additive.
Mouri, Hideaki
2013-10-01
The central limit theorem ensures that a sum of random variables tends to a Gaussian distribution as their total number tends to infinity. However, for a class of positive random variables, we find that the sum tends faster to a log-normal distribution. Although the sum tends eventually to a Gaussian distribution, the distribution of the sum is always close to a log-normal distribution rather than to any Gaussian distribution if the summands are numerous enough. This is in contrast to the current consensus that any log-normal distribution is due to a product of random variables, i.e., a multiplicative process, or equivalently to nonlinearity of the system. In fact, the log-normal distribution is also observable for a sum, i.e., an additive process that is typical of linear systems. We show conditions for such a sum, an analytical example, and an application to random scalar fields such as those of turbulence.
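A minimal simulation of the claim, using log-normal summands as one example of positive random variables (an assumption for illustration; the paper characterizes the class more generally): the sum of ten such variables is fit noticeably better by a log-normal than by a moment-matched Gaussian.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_terms, n_samples = 10, 50_000
# Sums of positive (log-normal) summands: an additive, linear process.
sums = rng.lognormal(mean=0.0, sigma=1.0, size=(n_samples, n_terms)).sum(axis=1)

# Compare goodness of fit via the KS statistic (parameters are moment-matched
# from the data, so these are descriptive statistics, not formal test p-values).
ks_norm = stats.kstest(sums, "norm", args=(sums.mean(), sums.std())).statistic
log_s = np.log(sums)
ks_lognorm = stats.kstest(log_s, "norm", args=(log_s.mean(), log_s.std())).statistic
```

With ten summands the sum is still visibly right-skewed, so `ks_lognorm` comes out well below `ks_norm`; only for much larger numbers of terms does the Gaussian catch up, as the abstract states.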
47 CFR 1.1602 - Designation for random selection.
Code of Federal Regulations, 2010 CFR
2010-10-01
... Title 47 (Telecommunication), Vol. 1, revised as of 2010-10-01: FEDERAL COMMUNICATIONS COMMISSION, GENERAL PRACTICE AND PROCEDURE, Random Selection Procedures for Mass Media Services, General Procedures; § 1.1602 Designation for random selection...
47 CFR 1.1602 - Designation for random selection.
Code of Federal Regulations, 2011 CFR
2011-10-01
... Title 47 (Telecommunication), Vol. 1, revised as of 2011-10-01: FEDERAL COMMUNICATIONS COMMISSION, GENERAL PRACTICE AND PROCEDURE, Random Selection Procedures for Mass Media Services, General Procedures; § 1.1602 Designation for random selection...
Comparing the social skills of students addicted to computer games with normal students.
Zamani, Eshrat; Kheradmand, Ali; Cheshmi, Maliheh; Abedi, Ahmad; Hedayati, Nasim
2010-01-01
This study aimed to investigate and compare the social skills of students addicted to computer games with those of normal students. The dependent variable in the present study is social skills. The study population included all the students in the second grade of public secondary school in the city of Isfahan in the 2009-2010 educational year. The sample of 564 students was selected using the cluster random sampling method. Data collection was conducted using the Questionnaire of Addiction to Computer Games and a social skills questionnaire (The Teenage Inventory of Social Skills, or TISS). The results of the study showed that, in general, there was a significant difference between the social skills of students addicted to computer games and normal students. In addition, the results indicated that normal students had a higher level of social skills in comparison with students addicted to computer games. As the study results showed, addiction to computer games may affect the quality and quantity of social skills. In other words, the higher the addiction to computer games, the weaker the social skills. Individuals addicted to computer games have fewer social skills.
DAnTE: a statistical tool for quantitative analysis of –omics data
DOE Office of Scientific and Technical Information (OSTI.GOV)
Polpitiya, Ashoka D.; Qian, Weijun; Jaitly, Navdeep
2008-05-03
DAnTE (Data Analysis Tool Extension) is a statistical tool designed to address challenges unique to quantitative bottom-up, shotgun proteomics data. This tool has also been demonstrated for microarray data and can easily be extended to other high-throughput data types. DAnTE features selected normalization methods, missing value imputation algorithms, peptide to protein rollup methods, an extensive array of plotting functions, and a comprehensive ANOVA scheme that can handle unbalanced data and random effects. The Graphical User Interface (GUI) is designed to be very intuitive and user friendly.
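A minimal sketch of one of the simple normalizations a tool like DAnTE offers, median centering of log abundances in the presence of missing peptides; this is an illustration of the idea, not DAnTE's actual implementation:

```python
import numpy as np

def median_center(abundances):
    """Median-center each sample (column) of a log-abundance matrix.

    Subtracting each sample's median gives all samples a common median of
    zero, removing global loading differences between runs. NaNs mark
    missing peptide observations and are ignored.
    """
    med = np.nanmedian(abundances, axis=0)   # per-sample medians
    return abundances - med

# Rows = peptides, columns = samples (hypothetical log2 abundances).
X = np.array([[10.0, 11.2],
              [12.0, 13.1],
              [14.0, np.nan]])
Xn = median_center(X)
```

After centering, every sample column of `Xn` has a (nan-aware) median of zero, so downstream ANOVA compares peptides rather than loading artifacts.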
NASA Astrophysics Data System (ADS)
Muttaqiin, A.; Sopandi, W.
2017-09-01
This research aimed to analyze the correlation between a pre-classroom reading activity and students' curiosity about science. Thirty-one participants were selected randomly from one of the junior high schools in Cimahi. Spearman's correlation was chosen since the data for the two variables were not normally distributed. The result shows a weak, statistically non-significant correlation between reading before learning and students' curiosity in the classroom. Several factors may explain this result; one of them was students' reluctance to read science content daily.
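The analysis pipeline (non-normal data, hence Spearman's rank correlation) can be reproduced in outline. The scores below are simulated stand-ins for the 31 students, not the study's data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
# Simulated stand-ins for 31 students: pre-class reading time (exponential,
# hence non-normal) and a curiosity score with only a weak monotonic link.
reading = rng.exponential(scale=20.0, size=31)
curiosity = 0.05 * stats.rankdata(reading) + rng.normal(0.0, 2.0, size=31)

# Spearman's rho depends only on ranks, so non-normality is no obstacle.
rho, p_value = stats.spearmanr(reading, curiosity)
```

With a weak underlying association and only 31 students, `p_value` will often exceed 0.05, mirroring the study's "weak and not significant" finding.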
47 CFR 1.1603 - Conduct of random selection.
Code of Federal Regulations, 2010 CFR
2010-10-01
... Title 47 (Telecommunication), Vol. 1, revised as of 2010-10-01: FEDERAL COMMUNICATIONS COMMISSION, GENERAL PRACTICE AND PROCEDURE, Random Selection Procedures for Mass Media Services, General Procedures; § 1.1603 Conduct of random selection. The...
47 CFR 1.1603 - Conduct of random selection.
Code of Federal Regulations, 2011 CFR
2011-10-01
... Title 47 (Telecommunication), Vol. 1, revised as of 2011-10-01: FEDERAL COMMUNICATIONS COMMISSION, GENERAL PRACTICE AND PROCEDURE, Random Selection Procedures for Mass Media Services, General Procedures; § 1.1603 Conduct of random selection. The...
Pressman, Abe; Moretti, Janina E; Campbell, Gregory W; Müller, Ulrich F; Chen, Irene A
2017-08-21
The emergence of catalytic RNA is believed to have been a key event during the origin of life. Understanding how catalytic activity is distributed across random sequences is fundamental to estimating the probability that catalytic sequences would emerge. Here, we analyze the in vitro evolution of triphosphorylating ribozymes and translate their fitnesses into absolute estimates of catalytic activity for hundreds of ribozyme families. The analysis efficiently identified highly active ribozymes and estimated catalytic activity with good accuracy. The evolutionary dynamics follow Fisher's Fundamental Theorem of Natural Selection and a corollary, permitting retrospective inference of the distribution of fitness and activity in the random sequence pool for the first time. The frequency distribution of rate constants appears to be log-normal, with a surprisingly steep dropoff at higher activity, consistent with a mechanism for the emergence of activity as the product of many independent contributions. © The Author(s) 2017. Published by Oxford University Press on behalf of Nucleic Acids Research.
NASA Astrophysics Data System (ADS)
Li, Zhe; Feng, Jinchao; Liu, Pengyu; Sun, Zhonghua; Li, Gang; Jia, Kebin
2018-05-01
Temperature is usually considered a nuisance fluctuation in near-infrared spectral measurement, and chemometric methods have been extensively studied to correct for temperature variations. However, temperature can also be treated as a constructive parameter that provides detailed chemical information when systematically changed during the measurement. Our group has researched the relationship between temperature-induced spectral variation (TSVC) and normalized squared temperature. In this study, we focused on the influence of the temperature distribution in the calibration set. A multi-temperature calibration set selection (MTCS) method was proposed to improve prediction accuracy by considering the temperature distribution of the calibration samples. Furthermore, a double-temperature calibration set selection (DTCS) method was proposed based on the MTCS method and the relationship between TSVC and normalized squared temperature. We compared the prediction performance of PLS models based on the random sampling method and the proposed methods. The results of experimental studies showed that prediction performance was improved by the proposed methods. Therefore, the MTCS and DTCS methods are alternative methods for improving prediction accuracy in near-infrared spectral measurement.
Assessment of pharmaceutical waste management at selected hospitals and homes in Ghana.
Sasu, Samuel; Kümmerer, Klaus; Kranert, Martin
2012-06-01
The use and disposal of pharmaceutical waste compromises the safety of the environment and represents a serious health risk, as pharmaceuticals may accumulate and stay active for a long time in the aquatic environment. This article therefore presents the outcome of a study on pharmaceutical waste management practices at homes and hospitals in Ghana. The study was conducted at five randomly selected healthcare institutions in Ghana, namely two teaching hospitals (hospital A, hospital B), one regional hospital (hospital C), one district hospital (hospital D) and one quasi-governmental hospital (hospital E). Apart from hospital E, which currently has a pharmaceutical waste separation programme as well as a drug return programme called DUMP (Disposal of Unused Medicines Program), none of the hospitals visited had any separate collection and disposal programme for pharmaceutical waste. A survey was also carried out among the general public, involving the questioning of randomly selected participants in order to investigate the household disposal of unused and expired pharmaceuticals. The results from the survey showed that more than half of the respondents confirmed having unused, left-over or expired medicines at home, and over 75% disposed of pharmaceutical waste through the normal waste bins, which end up in landfills or dump sites.
Random Evolutionary Dynamics Driven by Fitness and House-of-Cards Mutations: Sampling Formulae
NASA Astrophysics Data System (ADS)
Huillet, Thierry E.
2017-07-01
We first revisit the multi-allelic mutation-fitness balance problem, especially when mutations obey a house of cards condition, where the discrete-time deterministic evolutionary dynamics of the allelic frequencies derives from a Shahshahani potential. We then consider multi-allelic Wright-Fisher stochastic models whose deviation to neutrality is from the Shahshahani mutation/selection potential. We next focus on the weak selection, weak mutation cases and, making use of a Gamma calculus, we compute the normalizing partition functions of the invariant probability densities appearing in their Wright-Fisher diffusive approximations. Using these results, generalized Ewens sampling formulae (ESF) from the equilibrium distributions are derived. We start treating the ESF in the mixed mutation/selection potential case and then we restrict ourselves to the ESF in the simpler house-of-cards mutations only situation. We also address some issues concerning sampling problems from infinitely-many alleles weak limits.
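The classical Ewens sampling formula (ESF) generalized above can be checked numerically. This sketch evaluates the standard neutral ESF for the allelic partitions of a small sample and verifies that the probabilities sum to one:

```python
from math import prod, factorial

def esf_probability(a, theta):
    """Ewens sampling formula: probability of the allelic partition a,
    where a[j-1] is the number of allele types carried by exactly j of
    the n sampled genes."""
    n = sum(j * aj for j, aj in enumerate(a, start=1))
    rising = prod(theta + k for k in range(n))            # theta^(n), rising factorial
    num = factorial(n) * prod(
        (theta / j) ** aj / factorial(aj) for j, aj in enumerate(a, start=1)
    )
    return num / rising

def partitions_as_counts(n):
    """All allelic partitions of n, encoded as count vectors a of length n."""
    def gen(remaining, max_part):
        if remaining == 0:
            yield [0] * n
            return
        for j in range(min(remaining, max_part), 0, -1):
            for tail in gen(remaining - j, j):
                tail = tail[:]
                tail[j - 1] += 1
                yield tail
    return list(gen(n, n))

# Sanity check: ESF probabilities over all partitions of n = 5 sum to 1.
total = sum(esf_probability(a, theta=1.7) for a in partitions_as_counts(5))
```

The paper's generalized ESFs add a selection potential on top of this neutral baseline; the normalization property checked here is what the computed partition functions guarantee.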
Zhu, Qiaohao; Carriere, K C
2016-01-01
Publication bias can significantly limit the validity of meta-analysis when trying to draw conclusions about a research question from independent studies. Most research on detection and correction for publication bias in meta-analysis focuses mainly on funnel-plot-based methodologies or selection models. In this paper, we formulate publication bias as a truncated distribution problem and propose new parametric solutions. We develop methodologies for estimating the underlying overall effect size and the severity of publication bias. We distinguish two major situations in which publication bias may be induced by: (1) small effect size or (2) large p-value. We consider both fixed and random effects models, and derive estimators for the overall mean and the truncation proportion. These estimators are obtained using maximum likelihood estimation and the method of moments under fixed- and random-effects models, respectively. We carried out extensive simulation studies to evaluate the performance of our methodology and to compare it with the non-parametric trim-and-fill method based on the funnel plot. We find that our methods based on the truncated normal distribution perform consistently well, both in detecting and in correcting publication bias under various situations.
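The effect-size-truncation situation (1) admits a compact numerical illustration. The sketch below is a toy version under strong simplifying assumptions (one common known sigma, a hard publication cutoff c, and a bisection inverse), not the estimators developed in the paper:

```python
import math
import random

def phi(z):
    """Standard normal density."""
    return math.exp(-z * z / 2) / math.sqrt(2 * math.pi)

def Phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def truncated_mean(mu, sigma, c):
    """E[X | X > c] for X ~ N(mu, sigma^2)."""
    a = (c - mu) / sigma
    return mu + sigma * phi(a) / (1 - Phi(a))

def estimate_mu(published_mean, sigma, c):
    """Recover mu by bisection: truncated_mean is increasing in mu,
    and the root must lie below the (upward-biased) published mean."""
    lo, hi = c - 8 * sigma, published_mean
    for _ in range(100):
        mid = (lo + hi) / 2
        if truncated_mean(mid, sigma, c) < published_mean:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Simulate publication bias: only effects above the cutoff get published.
random.seed(1)
effects = [random.gauss(0.3, 1.0) for _ in range(200000)]
published = [x for x in effects if x > 0.5]
naive = sum(published) / len(published)    # biased upward
corrected = estimate_mu(naive, 1.0, 0.5)   # close to the true 0.3
```

The naive mean of the published effects is biased upward; inverting the truncated-mean relation recovers the underlying overall effect size.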
NASA Technical Reports Server (NTRS)
Falls, L. W.; Crutcher, H. L.
1976-01-01
Transformation of statistics from one dimensional set to another involves linear functions of the original set of statistics. Similarly, linear functions will transform statistics within a dimensional set such that the new statistics are relevant to a new set of coordinate axes. A restricted case of the latter is the rotation of axes in a coordinate system involving any two correlated random variables. A special case is the transformation for horizontal wind distributions. Wind statistics are usually provided in terms of wind speed and direction (measured clockwise from north) or in east-west and north-south components. A direct application of this technique allows the determination of appropriate wind statistics parallel and normal to any preselected flight path of a space vehicle. Among the constraints for launching space vehicles are critical values selected from the distribution of the expected winds parallel to and normal to the flight path. These procedures are applied to space vehicle launches at Cape Kennedy, Florida.
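The rotation described here amounts to transforming the first and second moments of the wind components: the mean vector as m' = Rm and the covariance matrix as Sigma' = R Sigma R^T. A minimal sketch (function and variable names are illustrative, not the report's notation):

```python
import math

def rotate_wind_stats(mean, cov, theta):
    """Rotate a 2-D wind mean vector and covariance matrix into axes
    parallel and normal to a flight path at angle theta (radians):
    m' = R m,  Sigma' = R Sigma R^T."""
    c, s = math.cos(theta), math.sin(theta)
    mx, my = mean
    (sxx, sxy), (_, syy) = cov
    mean_r = (c * mx + s * my, -s * mx + c * my)
    off = c * s * (syy - sxx) + (c * c - s * s) * sxy
    cov_r = ((c * c * sxx + 2 * c * s * sxy + s * s * syy, off),
             (off, s * s * sxx - 2 * c * s * sxy + c * c * syy))
    return mean_r, cov_r
```

Setting theta to the flight-path heading yields the along-path and cross-path wind statistics from which launch-constraint critical values can be read.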
NASA Astrophysics Data System (ADS)
Rodríguez-Climent, Sílvia; Alcaraz, Carles; Caiola, Nuno; Ibáñez, Carles; Nebra, Alfonso; Muñoz-Camarillo, Gloria; Casals, Frederic; Vinyoles, Dolors; de Sostoa, Adolfo
2012-12-01
Multimesh nylon gillnets were set in three Ebro Delta (north-east Spain) lagoons to determine mesh selectivity for the inhabiting fish community. Each gillnet consisted of a series of twelve panels of different mesh size (ranging from 5.0 to 55.0 mm bar length), randomly distributed. The SELECT method (Share Each Length's Catch Total) was used to estimate retention curves through five models: normal location, normal scale, gamma, lognormal and inverse Gaussian. Each model was fitted twice, under the assumptions of fishing effort equal and proportional to mesh size, but no differences were found between approaches. A possible situation of overfishing in the lagoons, where artisanal fisheries are carried out with a low surveillance effort, was assessed using a vulnerable species inhabiting these brackish waters as a case study: the sand smelt, Atherina boyeri. The minimum size for its fishery has not been established, so it remains under uncontrolled exploitation. Therefore, a Minimum Landing Size (MLS) is proposed based on sexual maturity data. The importance of establishing an adequate MLS and regulating mesh sizes in order to respect natural maturation length is discussed, as well as other measures to improve A. boyeri fishery management.
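As an illustration of the normal-location model named above, retention peaks at a modal length proportional to mesh size (geometric similarity); the parameter values below are made up for illustration, not estimates from the paper:

```python
import math

def retention(length, mesh, k, sigma):
    """Normal-location retention curve: the probability that a fish of the
    given length is retained by a panel of the given mesh size. The modal
    length is k * mesh (proportional to mesh size); sigma is the spread
    shared across panels."""
    return math.exp(-((length - k * mesh) ** 2) / (2 * sigma ** 2))
```

Fitting under the SELECT framework would maximize the multinomial likelihood of each length class's catch shared across panels; here only the curve's shape is shown.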
Orlando, Paul A; Gatenby, Robert A; Brown, Joel S
2013-01-01
We apply competition colonization tradeoff models to tumor growth and invasion dynamics to explore the hypothesis that varying selection forces will result in predictable phenotypic differences in cells at the tumor invasive front compared to those in the core. Spatially, ecologically, and evolutionarily explicit partial differential equation models of tumor growth confirm that spatial invasion produces selection pressure for motile phenotypes. The effects of the invasive phenotype on normal adjacent tissue determine the patterns of growth and phenotype distribution. If tumor cells do not destroy their environment, colonizer and competitive phenotypes coexist, with the former localized at the invasion front and the latter at the tumor interior. If tumor cells do destroy their environment, then cell motility is strongly selected, resulting in accelerated invasion speed with time. Our results suggest that the widely observed genetic heterogeneity within cancers may not be the stochastic effect of random mutations. Rather, it may be the consequence of predictable variations in environmental selection forces and corresponding phenotypic adaptations.
NASA Astrophysics Data System (ADS)
Davies, Paul; Demetrius, Lloyd A.; Tuszynski, Jack A.
2012-03-01
Empirical studies give increased support for the hypothesis that the sporadic form of cancer is an age-related metabolic disease characterized by: (a) metabolic dysregulation with random abnormalities in mitochondrial DNA, and (b) metabolic alteration - the compensatory upregulation of glycolysis to offset mitochondrial impairments. This paper appeals to the theory of Quantum Metabolism and the principles of natural selection to formulate a conceptual framework for a quantitative analysis of the origin and proliferation of the disease. Quantum Metabolism, an analytical theory of energy transduction in cells inspired by the methodology of the quantum theory of solids, elucidates the molecular basis for differences in metabolic rate between normal cells, utilizing predominantly oxidative phosphorylation, and cancer cells utilizing predominantly glycolysis. The principles of natural selection account for the outcome of competition between the two classes of cells. Quantum Metabolism and the principles of natural selection give an ontogenic and evolutionary rationale for cancer proliferation and furnish a framework for effective therapeutic strategies to impede the spread of the disease.
Development of Auditory Selective Attention: Why Children Struggle to Hear in Noisy Environments
2015-01-01
Children’s hearing deteriorates markedly in the presence of unpredictable noise. To explore why, 187 school-age children (4–11 years) and 15 adults performed a tone-in-noise detection task, in which the masking noise varied randomly between every presentation. Selective attention was evaluated by measuring the degree to which listeners were influenced by (i.e., gave weight to) each spectral region of the stimulus. Psychometric fits were also used to estimate levels of internal noise and bias. Levels of masking were found to decrease with age, becoming adult-like by 9–11 years. This change was explained by improvements in selective attention alone, with older listeners better able to ignore noise similar in frequency to the target. Consistent with this, age-related differences in masking were abolished when the noise was made more distant in frequency to the target. This work offers novel evidence that improvements in selective attention are critical for the normal development of auditory judgments. PMID:25706591
Searching for patterns in remote sensing image databases using neural networks
NASA Technical Reports Server (NTRS)
Paola, Justin D.; Schowengerdt, Robert A.
1995-01-01
We have investigated a method, based on a successful neural network multispectral image classification system, of searching for single patterns in remote sensing databases. While defining the pattern to search for and the feature to be used for that search (spectral, spatial, temporal, etc.) is challenging, a more difficult task is selecting competing patterns to train against the desired pattern. Schemes for competing pattern selection, including random selection and human interpreted selection, are discussed in the context of an example detection of dense urban areas in Landsat Thematic Mapper imagery. When applying the search to multiple images, a simple normalization method can alleviate the problem of inconsistent image calibration. Another potential problem, that of highly compressed data, was found to have a minimal effect on the ability to detect the desired pattern. The neural network algorithm has been implemented using the PVM (Parallel Virtual Machine) library and nearly-optimal speedups have been obtained that help alleviate the long process of searching through imagery.
Computer simulation of the probability that endangered whales will interact with oil spills
DOE Office of Scientific and Technical Information (OSTI.GOV)
Reed, M.; Jayko, K.; Bowles, A.
1987-03-01
A numerical model system was developed to assess quantitatively the probability that endangered bowhead and gray whales will encounter spilled oil in Alaskan waters. Bowhead and gray whale migration and diving-surfacing models, and an oil-spill trajectory model comprise the system. The migration models were developed from conceptual considerations, then calibrated with and tested against observations. The movement of a whale point is governed by a random walk algorithm which stochastically follows a migratory pathway. The oil-spill model, developed under a series of other contracts, accounts for transport and spreading behavior in open water and in the presence of sea ice. Historical wind records and heavy, normal, or light ice cover data sets are selected at random to provide stochastic oil-spill scenarios for whale-oil interaction simulations.
Xiong, Menghua; Bao, Yan; Xu, Xin; Wang, Hua; Han, Zhiyuan; Wang, Zhiyu; Liu, Yeqing; Huang, Songyin; Song, Ziyuan; Chen, Jinjing; Peek, Richard M.; Yin, Lichen; Chen, Lin-Feng; Cheng, Jianjun
2017-01-01
Current clinical treatment of Helicobacter pylori infection, the main etiological factor in the development of gastritis, gastric ulcers, and gastric carcinoma, requires a combination of at least two antibiotics and one proton pump inhibitor. However, such triple therapy suffers from progressively decreased therapeutic efficacy due to drug resistance, and from undesired killing of commensal bacteria due to poor selectivity. Here, we report the development of an antimicrobial polypeptide-based monotherapy, which can specifically kill H. pylori under acidic pH in the stomach while inducing minimal toxicity to commensal bacteria under physiological pH. Specifically, we designed a class of pH-sensitive, helix-coil conformation transitionable antimicrobial polypeptides (HCT-AMPs) (PGA)m-r-(PHLG-MHH)n, bearing randomly distributed negatively charged glutamic acid and positively charged poly(γ-6-N-(methyldihexylammonium)hexyl-l-glutamate) (PHLG-MHH) residues. The HCT-AMPs showed negligible toxicity at physiological pH, at which they adopted a random-coil conformation. Under acidic conditions in the stomach, they transformed to a helical structure and exhibited potent antibacterial activity against H. pylori, including clinically isolated drug-resistant strains. After oral gavage, the HCT-AMPs afforded H. pylori killing efficacy comparable to the triple-therapy approach while inducing minimal toxicity against normal tissues and commensal bacteria; by comparison, triple therapy killed 65% and 86% of the commensal bacteria in the ileal contents and feces, respectively. This strategy provides an effective approach to specifically target and kill H. pylori in the stomach while sparing the commensal bacteria and normal tissues. PMID:29133389
Koloušková, Pavla; Stone, James D.
2017-01-01
Accurate gene expression measurements are essential in studies of both crop and wild plants. Reverse transcription quantitative real-time PCR (RT-qPCR) has become a preferred tool for gene expression estimation. A selection of suitable reference genes for the normalization of transcript levels is an essential prerequisite of accurate RT-qPCR results. We evaluated the expression stability of eight candidate reference genes across roots, leaves, flower buds and pollen of Silene vulgaris (bladder campion), a model plant for the study of gynodioecy. As random priming of cDNA is recommended for the study of organellar transcripts and poly(A) selection is indicated for nuclear transcripts, we estimated gene expression with both random-primed and oligo(dT)-primed cDNA. Accordingly, we determined reference genes that perform well with oligo(dT)- and random-primed cDNA, making it possible to estimate levels of nucleus-derived transcripts in the same cDNA samples as used for organellar transcripts, a key benefit in studies of cyto-nuclear interactions. Gene expression variance was estimated by RefFinder, which integrates four different analytical tools. The SvACT and SvGAPDH genes were the most stable candidates across various organs of S. vulgaris, regardless of whether pollen was included or not. PMID:28817728
Tips and Tricks for Successful Application of Statistical Methods to Biological Data.
Schlenker, Evelyn
2016-01-01
This chapter discusses experimental design and the use of statistics to describe characteristics of data (descriptive statistics) and inferential statistics that test the hypothesis posed by the investigator. Inferential statistics, based on probability distributions, depend upon the type and distribution of the data. For data that are continuous, randomly and independently selected, and normally distributed, more powerful parametric tests such as Student's t test and analysis of variance (ANOVA) can be used. For non-normally distributed or skewed data, transformation of the data (using logarithms) may normalize the data, allowing the use of parametric tests. Alternatively, with skewed data, nonparametric tests can be utilized, some of which rely on data that are ranked prior to statistical analysis. Experimental designs and analyses need to balance between committing type 1 errors (false positives) and type 2 errors (false negatives). For a variety of clinical studies that determine risk or benefit, relative risk ratios (randomized clinical trials and cohort studies) or odds ratios (case-control studies) are utilized. Although both use 2 × 2 tables, their premise and calculations differ. Finally, special statistical methods are applied to microarray and proteomics data, since the large number of genes or proteins evaluated increases the likelihood of false discoveries. Additional studies in separate samples are used to verify microarray and proteomic data. Examples in this chapter and references are available to help continued investigation of experimental designs and appropriate data analysis.
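The chapter's point that relative risk and odds ratios are computed differently from the same 2 × 2 table can be made concrete; the counts below are invented for illustration:

```python
def relative_risk(a, b, c, d):
    """2x2 table: a, b = exposed with/without event;
    c, d = unexposed with/without event. RR = risk ratio."""
    return (a / (a + b)) / (c / (c + d))

def odds_ratio(a, b, c, d):
    """Cross-product (odds) ratio of the same 2x2 table."""
    return (a * d) / (b * c)

# Hypothetical counts: 10/100 exposed and 5/100 unexposed had the event.
rr = relative_risk(10, 90, 5, 95)
or_ = odds_ratio(10, 90, 5, 95)
```

For rare events the two measures nearly coincide, but here (10% risk in the exposed group) the odds ratio already overstates the relative risk.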
Kaitaniemi, Pekka
2008-04-09
Allometric equations are widely used in many branches of biological science. The potential information content of the normalization constant b in allometric equations of the form Y = bX^a has, however, remained largely neglected. To demonstrate the potential for utilizing this information, I generated a large number of artificial datasets that resembled those that are frequently encountered in biological studies, i.e., relatively small samples including measurement error or uncontrolled variation. The value of X was allowed to vary randomly within the limits describing different data ranges, and a was set to a fixed theoretical value. The constant b was set to a range of values describing the effect of a continuous environmental variable. In addition, a normally distributed random error was added to the values of both X and Y. Two different approaches were then used to model the data. The traditional approach estimated both a and b using a regression model, whereas an alternative approach set the exponent a at its theoretical value and only estimated the value of b. Both approaches produced virtually the same model fit with less than 0.3% difference in the coefficient of determination. Only the alternative approach was able to precisely reproduce the effect of the environmental variable, which was largely lost among noise variation when using the traditional approach. The results show how the value of b can be used as a source of valuable biological information if an appropriate regression model is selected.
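The alternative approach described above, fixing the exponent a at its theoretical value and estimating only the normalization constant b, reduces to an intercept-only regression on the log scale. A simulation sketch under assumed parameter values (not those of the paper):

```python
import math
import random

random.seed(0)
a_theory = 0.75   # exponent fixed at its assumed theoretical value
b_true = 2.0      # normalization constant carrying the signal of interest

# Simulate Y = b * X^a with multiplicative log-normal noise.
xs = [random.uniform(1.0, 10.0) for _ in range(5000)]
ys = [b_true * x ** a_theory * math.exp(random.gauss(0.0, 0.1)) for x in xs]

# With a fixed, b is the geometric mean of Y / X^a, i.e. an
# intercept-only least-squares fit of log Y - a * log X.
logs = [math.log(y) - a_theory * math.log(x) for x, y in zip(xs, ys)]
b_hat = math.exp(sum(logs) / len(logs))
```

Because no degrees of freedom are spent estimating the exponent, all the systematic variation falls on b, which is the paper's argument for this model choice.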
Kim, Sungduk; Chen, Ming-Hui; Ibrahim, Joseph G.; Shah, Arvind K.; Lin, Jianxin
2013-01-01
In this paper, we propose a class of Box-Cox transformation regression models with multidimensional random effects for analyzing multivariate responses for individual patient data (IPD) in meta-analysis. Our modeling formulation uses a multivariate normal response meta-analysis model with multivariate random effects, in which each response is allowed to have its own Box-Cox transformation. Prior distributions are specified for the Box-Cox transformation parameters as well as the regression coefficients in this complex model, and the Deviance Information Criterion (DIC) is used to select the best transformation model. Since the model is quite complex, a novel Monte Carlo Markov chain (MCMC) sampling scheme is developed to sample from the joint posterior of the parameters. This model is motivated by a very rich dataset comprising 26 clinical trials involving cholesterol lowering drugs where the goal is to jointly model the three dimensional response consisting of Low Density Lipoprotein Cholesterol (LDL-C), High Density Lipoprotein Cholesterol (HDL-C), and Triglycerides (TG) (LDL-C, HDL-C, TG). Since the joint distribution of (LDL-C, HDL-C, TG) is not multivariate normal and in fact quite skewed, a Box-Cox transformation is needed to achieve normality. In the clinical literature, these three variables are usually analyzed univariately: however, a multivariate approach would be more appropriate since these variables are correlated with each other. A detailed analysis of these data is carried out using the proposed methodology. PMID:23580436
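A minimal, non-Bayesian sketch of the Box-Cox machinery: for a single skewed sample, λ can be chosen by maximizing the standard profile log-likelihood over a grid (the paper instead places priors on the transformation parameters and selects models by DIC):

```python
import math
import random

def boxcox(y, lam):
    """Box-Cox transform of a positive value y."""
    return math.log(y) if lam == 0 else (y ** lam - 1) / lam

def profile_loglik(ys, lam):
    """Profile log-likelihood of lambda under a normal model for the
    transformed data (the standard Box-Cox criterion, including the
    Jacobian term (lam - 1) * sum(log y))."""
    n = len(ys)
    z = [boxcox(y, lam) for y in ys]
    m = sum(z) / n
    var = sum((v - m) ** 2 for v in z) / n
    return -0.5 * n * math.log(var) + (lam - 1) * sum(math.log(y) for y in ys)

# Log-normal (right-skewed) data: the chosen lambda should sit near 0,
# i.e. the log transform.
random.seed(2)
skewed = [math.exp(random.gauss(0.0, 0.5)) for _ in range(2000)]
grid = [i / 10 for i in range(-20, 21)]
best = max(grid, key=lambda lam: profile_loglik(skewed, lam))
```

For log-normal data the selected λ lands near 0, mirroring the normalization sought for the skewed lipid responses.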
Missing Value Imputation Approach for Mass Spectrometry-based Metabolomics Data.
Wei, Runmin; Wang, Jingye; Su, Mingming; Jia, Erik; Chen, Shaoqiu; Chen, Tianlu; Ni, Yan
2018-01-12
Missing values exist widely in mass spectrometry (MS)-based metabolomics data. Various methods have been applied for handling missing values, but the choice can significantly affect subsequent data analyses. Typically, there are three types of missing values: missing not at random (MNAR), missing at random (MAR), and missing completely at random (MCAR). Our study comprehensively compared eight imputation methods (zero, half minimum (HM), mean, median, random forest (RF), singular value decomposition (SVD), k-nearest neighbors (kNN), and quantile regression imputation of left-censored data (QRILC)) for different types of missing values using four metabolomics datasets. Normalized root mean squared error (NRMSE) and NRMSE-based sum of ranks (SOR) were applied to evaluate imputation accuracy. Principal component analysis (PCA)/partial least squares (PLS)-Procrustes analysis were used to evaluate the overall sample distribution. Student's t-test followed by correlation analysis was conducted to evaluate the effects on univariate statistics. Our findings demonstrated that RF performed best for MCAR/MAR and QRILC was favored for left-censored MNAR. Finally, we proposed a comprehensive strategy and developed a publicly accessible web-tool for the application of missing value imputation in metabolomics ( https://metabolomics.cc.hawaii.edu/software/MetImp/ ).
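Two of the ingredients above are easy to sketch: half-minimum (HM) imputation, a common default for left-censored MNAR values, and the NRMSE used to score imputation accuracy. Function shapes are assumptions of this sketch, not the paper's code:

```python
import math

def half_min_impute(values):
    """Replace None (missing) with half the minimum observed value --
    a simple choice for left-censored (below detection limit) values."""
    fill = min(v for v in values if v is not None) / 2
    return [fill if v is None else v for v in values]

def nrmse(truth, imputed, mask):
    """Root mean squared error over the imputed entries (mask=True),
    normalized by the standard deviation of the true values."""
    errs = [(t - i) ** 2 for t, i, m in zip(truth, imputed, mask) if m]
    mse = sum(errs) / len(errs)
    mu = sum(truth) / len(truth)
    var = sum((t - mu) ** 2 for t in truth) / len(truth)
    return math.sqrt(mse / var)

# Toy example: the first value is missing, and its true value happens to
# equal half the observed minimum, so the imputation is exact.
truth = [1.0, 2.0, 3.0, 4.0, 10.0]
observed = [None, 2.0, 3.0, 4.0, 10.0]
imputed = half_min_impute(observed)
mask = [True, False, False, False, False]
score = nrmse(truth, imputed, mask)
```

Lower NRMSE is better; the paper ranks the eight methods by NRMSE-based sum of ranks across datasets.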
Effects of different sleep deprivation protocols on sleep perception in healthy volunteers.
Goulart, Leonardo I; Pinto, Luciano R; Perlis, Michael L; Martins, Raquel; Caboclo, Luis Otavio; Tufik, Sergio; Andersen, Monica L
2014-10-01
To investigate whether different protocols of sleep deprivation modify sleep perception. The effects of total sleep deprivation (TD) and selective rapid eye movement (REM) sleep deprivation (RD) on sleep perception were analyzed in normal volunteers. Thirty-one healthy males with normal sleep were randomized to one of three conditions: (i) normal uninterrupted sleep; (ii) four nights of RD; or (iii) two nights of TD. Morning perception of total sleep time was evaluated for each condition. Sleep perception was estimated using total sleep time (in hours) as perceived by the volunteer divided by the total sleep time (in hours) measured by polysomnography (PSG). The final value of this calculation was defined as the perception index (PI). There were no significant differences among the three groups of volunteers in the total sleep time measured by PSG or in the perception of total sleep time at baseline condition. Volunteers submitted to RD exhibited lower sleep PI scores as compared with controls during the sleep deprivation period (P <0.05). Both RD and TD groups showed PI similar to controls during the recovery period. Selective REM sleep deprivation reduced the ability of healthy young volunteers to perceive their total sleep time when compared with time measured by PSG. The data reinforce the influence of sleep deprivation on sleep perception. Copyright © 2014 Elsevier B.V. All rights reserved.
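The perception index defined above is a simple ratio; a minimal sketch:

```python
def perception_index(perceived_hours, psg_hours):
    """PI = self-reported total sleep time / polysomnography-measured
    total sleep time. PI < 1 means the sleeper underestimates how long
    they actually slept."""
    return perceived_hours / psg_hours

# e.g. a volunteer who reports 6 h of sleep against 7.5 h on PSG
pi = perception_index(6.0, 7.5)
```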
Selecting promising treatments in randomized Phase II cancer trials with an active control.
Cheung, Ying Kuen
2009-01-01
The primary objective of Phase II cancer trials is to evaluate the potential efficacy of a new regimen in terms of its antitumor activity in a given type of cancer. Due to advances in oncology therapeutics and heterogeneity in the patient population, such evaluation can be interpreted objectively only in the presence of a prospective control group of an active standard treatment. This paper deals with the design problem of Phase II selection trials in which several experimental regimens are compared to an active control, with an objective to identify an experimental arm that is more effective than the control or to declare futility if no such treatment exists. Conducting a multi-arm randomized selection trial is a useful strategy to prioritize experimental treatments for further testing when many candidates are available, but the sample size required in such a trial with an active control could raise feasibility concerns. In this study, we extend the sequential probability ratio test for normal observations to the multi-arm selection setting. The proposed methods, allowing frequent interim monitoring, offer high likelihood of early trial termination, and as such enhance enrollment feasibility. The termination and selection criteria have closed form solutions and are easy to compute with respect to any given set of error constraints. The proposed methods are applied to design a selection trial in which combinations of sorafenib and erlotinib are compared to a control group in patients with non-small-cell lung cancer using a continuous endpoint of change in tumor size. The operating characteristics of the proposed methods are compared to that of a single-stage design via simulations: The sample size requirement is reduced substantially and is feasible at an early stage of drug development.
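The building block being extended, Wald's sequential probability ratio test for normal observations with known σ, can be sketched as follows (the multi-arm selection criteria of the paper are not reproduced here):

```python
import math

def sprt(stream, mu0, mu1, sigma, alpha=0.05, beta=0.1):
    """Wald's sequential probability ratio test for H0: mu = mu0 vs
    H1: mu = mu1, with normal observations of known sigma.
    Returns (decision, number of observations used)."""
    upper = math.log((1 - beta) / alpha)   # crossing -> accept H1
    lower = math.log(beta / (1 - alpha))   # crossing -> accept H0
    llr = 0.0
    n = 0
    for x in stream:
        n += 1
        # Log-likelihood-ratio increment for one normal observation.
        llr += (mu1 - mu0) * (x - (mu0 + mu1) / 2) / sigma ** 2
        if llr >= upper:
            return ('accept H1', n)
        if llr <= lower:
            return ('accept H0', n)
    return ('no decision', n)
```

Because the boundaries have closed forms, each interim look costs only a comparison, which is what makes the frequent monitoring and early termination emphasized in the abstract cheap.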
Singh, Siddharth; Facciorusso, Antonio; Singh, Abha G; Casteele, Niels Vande; Zarrinpar, Amir; Prokop, Larry J; Grunvald, Eduardo L; Curtis, Jeffrey R; Sandborn, William J
2018-01-01
We sought to evaluate the association between obesity and response to anti-tumor necrosis factor-α (TNF) agents through a systematic review and meta-analysis. Through a systematic search through January 24, 2017, we identified randomized controlled trials (RCTs) or observational studies in adults with select immune-mediated inflammatory diseases - inflammatory bowel diseases (IBD), rheumatoid arthritis (RA), spondyloarthropathies (SpA), psoriasis and psoriatic arthritis (PsA) - treated with anti-TNF agents and reporting outcomes stratified by body mass index (BMI) categories or weight. The primary outcome was failure to achieve clinical remission or response, or treatment modification. We performed random-effects meta-analysis and estimated odds ratios (OR) and 95% confidence intervals (CI). Based on 54 cohorts including 19,372 patients (23% obese), patients with obesity had 60% higher odds of failing therapy (OR 1.60; 95% CI 1.39-1.83; I² = 71%). A dose-response relationship was observed (obese vs. normal BMI: OR 1.87 [1.39-2.52]; overweight vs. normal BMI: OR 1.38 [1.11-1.74], p = 0.11); a 1 kg/m² increase in BMI was associated with 6.5% higher odds of failure (OR 1.065 [1.043-1.087]). These effects were observed in patients with rheumatic diseases, but not in patients with IBD. The effect was consistent across dosing regimen/route, study design, exposure definition, and outcome measures. Fewer than 10% of eligible RCTs reported outcomes stratified by BMI. Obesity is an under-reported predictor of inferior response to anti-TNF agents in patients with select immune-mediated inflammatory diseases. A thorough evaluation of obesity as an effect modifier in clinical trials is warranted, and intentional weight loss may serve as adjunctive treatment in patients with obesity failing anti-TNF therapy.
Gutnik, L; Lee, C; Msosa, J
2017-06-01
Breast cancer awareness and early detection are limited in Sub-Saharan Africa. Resource limitations make screening mammography or clinical breast examination (CBE) by physicians or nurses impractical in many settings. Four laywomen were trained to deliver breast cancer educational talks and conduct CBE. After training, screening was implemented in diverse urban health clinics. Eligible women were 30 years old, with no prior breast cancer or breast surgery, and clinic attendance for reasons other than a breast concern. Women with abnormal CBE were referred to a study surgeon. All palpable masses confirmed by surgeon examination were pathologically sampled. Patients with abnormal screening CBE but normal surgeon examination underwent breast ultrasound confirmation. In addition, 50 randomly selected women with normal screening CBE underwent breast ultrasound, and 45 different women with normal CBE were randomly assigned to surgeon examination. Among 1220 eligible women, 1000 (82%) agreed to CBE. Lack of time (69%) was the commonest reason for refusal. Educational talk attendance was associated with higher CBE participation (83% versus 77%, P = 0.012). Among 1000 women screened, 7% had abnormal CBE. Of 45 women with normal CBE randomised to physician examination, 43 had normal examinations and two had axillary lymphadenopathy not detected by CBE. Sixty of 67 women (90%) with abnormal CBE attended the referral visit. Of these, 29 (48%) had concordant abnormal physician examination. Thirty-one women (52%) had discordant normal physician examination, all of whom also had normal breast ultrasounds. Compared with physician examination, sensitivity of CBE by laywomen was 94% (confidence interval [CI] 79%-99%), specificity 58% (CI 46%-70%), positive predictive value 48% (CI 35%-62%), and negative predictive value 96% (CI 85%-100%). Of 13 women who underwent recommended pathologic sampling of a breast lesion, two had cytologic dysplasia and all others had benign results.
CBE uptake in Lilongwe clinics was high. CBE by laywomen compared favourably with physician examination, and follow-up was good. Our intervention can serve as a model for wider implementation. Performance in rural areas, effects on cancer stage and mortality, and cost-effectiveness require evaluation.
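The reported accuracy figures are consistent with the 2 × 2 table implied by the abstract, taking physician examination as the reference standard (29 concordant abnormal, 31 discordant normal, 2 missed lymphadenopathies, 43 concordant normal):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard screening-accuracy measures from a 2x2 table."""
    return {
        'sensitivity': tp / (tp + fn),
        'specificity': tn / (tn + fp),
        'ppv': tp / (tp + fp),
        'npv': tn / (tn + fn),
    }

# Counts implied by the abstract (an interpretation, not stated as a table
# in the source): TP=29, FP=31, FN=2, TN=43.
m = diagnostic_metrics(tp=29, fp=31, fn=2, tn=43)
```

Rounding these reproduces the reported 94% sensitivity, 58% specificity, 48% PPV, and 96% NPV.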
Zeyneloglu, H B; Baltaci, V; Ege, S; Haberal, A; Batioglu, S
2000-04-01
If randomly selected immotile spermatozoa are used for intracytoplasmic sperm injection (ICSI), pregnancy rates are significantly decreased. The hypo-osmotic swelling test (HOST) is the only method available to detect viable but immotile spermatozoa for ICSI. However, evidence is still lacking on chromosomal abnormalities in normal-looking but immotile spermatozoa that test positive on HOST. Sperm samples from 20 infertile men with normal chromosomal constitution were obtained. After Percoll separation, morphologically normal but immotile spermatozoa were transported individually into HOST solution for 1 min using micropipettes. Cells that showed tail curling with swelling in HOST were then transferred back into human tubal fluid solution to allow reversal of swelling. These sperm cells were fixed and processed for multi-colour fluorescence in-situ hybridization (FISH) for chromosomes X, Y and 18. The same FISH procedure was applied to motile spermatozoa from the same cohort, which formed the control group. The average aneuploidy rates, determined by FISH for each patient, were 1.70% in 1000 HOST-positive immotile spermatozoa and 1.54% in 1000 motile spermatozoa. Our results indicate that morphologically normal, immotile but viable spermatozoa have an aneuploidy rate similar to that of normal motile spermatozoa.
Burger, C W; Korsen, T; van Kessel, H; van Dop, P A; Caron, F J; Schoemaker, J
1985-12-01
To characterize the oscillations of plasma LH in normally cycling and amenorrheic women, three groups of women were studied: I, normal women during the follicular phase of the cycle (n = 9); II, women with polycystic ovarian disease (PCOD; n = 11); and III, women with non-PCOD secondary amenorrhea (n = 12). Blood samples were obtained at 10-min intervals for 6 h on 2 separate days. A pulse was defined as an increase in LH at least 20% over the preceding lowest value (nadir). Since LHRH release immediately follows the nadir of the LH levels, the nadir interval (NI) was used for analysis. For analysis, the results from 1 day were selected at random from each subject, and from each day, the same number of NIs also were randomly selected. When two NIs from each patient were selected, the median NI was 75 min in group I, 45 min in group II, and 45 min in group III. When three or four NIs were chosen, the median NI was 60 min in group I, 50 min in group II, and 40 min in group III. The differences between the groups were statistically significant. When three NIs were selected, the mean of the corresponding LH amplitudes was 2.8 U/liter in group I, 6.0 U/liter in group II, and 1.5 U/liter in group III. The differences between these groups were statistically significant. Thus, the NI in PCOD patients was shorter than that during the follicular phase of the cycle, but this short NI is not unique for PCOD, since the NI in non-PCOD secondary amenorrhea patients was even smaller. The LH amplitude was higher in PCOD and lower in non-PCOD secondary amenorrhea compared to that during the follicular phase of the cycle. The decrease in NI in PCOD and/or non-PCOD secondary amenorrhea vs. the NI of the follicular phase could be explained by either a higher frequency of LHRH pulses from the hypothalamus or an increased sensitivity of the pituitary leading to a greater response of the pituitary to LHRH pulses.
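The pulse definition above (an increase of at least 20% over the preceding nadir) lends itself to a simple scan over the sampled series. The following is a minimal sketch with made-up LH values, not the study's data or algorithm:

```python
def detect_pulses(lh, rise=0.20):
    """Flag a pulse whenever a value exceeds the preceding nadir
    (lowest value since the last pulse) by at least `rise` (20%)."""
    pulses = []
    nadir = lh[0]
    for i, v in enumerate(lh[1:], start=1):
        if v >= nadir * (1.0 + rise):
            pulses.append(i)        # pulse detected at sample i
            nadir = v               # restart the nadir search after the pulse
        else:
            nadir = min(nadir, v)   # track the running nadir
    return pulses

# illustrative 10-min samples (arbitrary units)
series = [5.0, 4.8, 4.6, 6.0, 5.5, 5.2, 6.5, 6.2]
print(detect_pulses(series))   # [3, 6]
```

The gaps between successive detected nadirs would then give the nadir intervals (NIs) analysed in the study.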
Beggs, Peter W; Clark, David WJ; Williams, Sheila M; Coulter, David M
1999-01-01
Aims: Because of the importance of treating dyslipidaemia in the prevention of ischaemic heart disease, and because patient selection criteria and outcomes in clinical trials do not necessarily reflect what happens in normal clinical practice, we compared outcomes from bezafibrate, gemfibrozil and simvastatin therapy under conditions of normal use. Methods: A random sample of 200 patients was selected from the New Zealand Intensive Medicines Monitoring Programme’s (IMMP) patient cohorts for each drug. Questionnaires sent to prescribers requested information on indications, risk factors for ischaemic heart disease, lipid profiles with changes during treatment, and reasons for stopping therapy. Results: 80% of prescribers replied and 83% of the replies contained useful information. The three groups were similar for age, sex and geographical region, but significantly more patients on bezafibrate had diabetes and/or hypertension than those on gemfibrozil or simvastatin. After treatment and taking the initial measure into account, the changes in serum lipid values were consistent with those generally observed, but with gemfibrozil being significantly less effective than expected. More patients (15.8%) stopped gemfibrozil because of an inadequate response compared with bezafibrate (5.4%) and simvastatin (1.6%). Gemfibrozil treatment was also withdrawn significantly more frequently due to a possible adverse reaction compared with the other two drugs. Conclusions: In normal clinical practice in New Zealand gemfibrozil appears less effective and more frequently causes adverse effects leading to withdrawal of treatment than either bezafibrate or simvastatin. PMID:10073746
Cano, Maya E; Class, Quetzal A; Polich, John
2009-01-01
Pictures from the International Affective Picture System (IAPS) were selected to manipulate affective valence (unpleasant, neutral, pleasant) while keeping arousal level the same. The pictures were presented in an oddball paradigm, with a visual pattern used as the standard stimulus. Subjects pressed a button whenever a target was detected. Experiment 1 presented normal pictures in color and black/white. Control stimuli were constructed for both the color and black/white conditions by randomly rearranging 1 cm square fragments of each original picture to produce a "scrambled" image. Experiment 2 presented the same normal color pictures with large, medium, and small scrambled condition (2, 1, and 0.5 cm squares). The P300 event-related brain potential demonstrated larger amplitudes over frontal areas for positive compared to negative or neutral images for normal color pictures in both experiments. Attenuated and nonsignificant valence effects were obtained for black/white images. Scrambled stimuli in each study yielded no valence effects but demonstrated typical P300 topography that increased from frontal to parietal areas. The findings suggest that P300 amplitude is sensitive to affective picture valence in the absence of stimulus arousal differences, and that stimulus color contributes to ERP valence effects.
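The "scrambled" control construction described above (randomly rearranging square fragments of each picture) can be sketched with a simple block shuffle; the synthetic image, block size in pixels, and seed below are illustrative assumptions, not the study's materials:

```python
import numpy as np

def scramble(img, block=8, seed=0):
    """Randomly rearrange square blocks of an image, preserving its
    overall luminance statistics while destroying the content."""
    h, w = img.shape[:2]
    h, w = h - h % block, w - w % block        # crop to a multiple of block
    img = img[:h, :w]
    blocks = [img[r:r + block, c:c + block]
              for r in range(0, h, block)
              for c in range(0, w, block)]
    rng = np.random.default_rng(seed)
    rng.shuffle(blocks)                         # permute the fragments
    rows = [np.hstack(blocks[i:i + w // block])
            for i in range(0, len(blocks), w // block)]
    return np.vstack(rows)

img = np.arange(64 * 64, dtype=float).reshape(64, 64)
out = scramble(img)
print(out.shape, np.isclose(out.sum(), img.sum()))   # (64, 64) True
```

Because only positions change, total luminance is preserved, which matches the goal of a control stimulus with the same low-level properties.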
Pulsed Nd:YAG laser selective ablation of surface enamel caries: II. Histology and clinical trials
NASA Astrophysics Data System (ADS)
Harris, David M.; Goodis, Harold E.; White, Joel M.; Arcoria, Charles J.; Simon, James; Burkart, John; Yessik, Michael J.; Myers, Terry D.
2000-03-01
High intensity infrared light from the pulsed Nd:YAG dental laser is absorbed by pigmented carious enamel and not by normal enamel; the system is therefore capable of selectively removing surface enamel caries. Safety and efficacy of the clinical procedure were evaluated in two sets of clinical trials at three dental schools. Carious lesions were randomized to drill or laser treatment. Pulp vitality, surface condition, preparations and restorations were evaluated by blinded evaluators. In Study 1, surface caries were removed from 104 third molars scheduled for extraction. One week post-treatment the teeth were extracted and the pulp was examined histologically. In Study 2, 90 patients with 422 lesions on 376 teeth were randomized to laser or drill and followed for six months. There were no adverse events, and both clinical and histological evaluations of pulp vitality showed no abnormalities. Caries were removed in all conditions. A significantly greater number of preparations in the drill groups vs. laser groups entered dentin (drill = 11, laser = 1, p < 0.001). This indicates that the more conservative laser treatment removed the caries but not the sound enamel below the lesion.
Pritikin, Joshua N; Brick, Timothy R; Neale, Michael C
2018-04-01
A novel method for the maximum likelihood estimation of structural equation models (SEM) with both ordinal and continuous indicators is introduced using a flexible multivariate probit model for the ordinal indicators. A full information approach ensures unbiased estimates for data missing at random. Exceeding the capability of prior methods, up to 13 ordinal variables can be included before integration time increases beyond 1 s per row. The method relies on the axiom of conditional probability to split apart the distribution of continuous and ordinal variables. Due to the symmetry of the axiom, two similar methods are available. A simulation study provides evidence that the two similar approaches offer equal accuracy. A further simulation is used to develop a heuristic to automatically select the most computationally efficient approach. Joint ordinal continuous SEM is implemented in OpenMx, free and open-source software.
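The split by the axiom of conditional probability can be illustrated numerically. In this hedged toy example (not OpenMx's implementation), a standard bivariate normal with correlation rho underlies one continuous indicator and one thresholded ordinal indicator, and the conditional factorization of the joint likelihood is checked against a Monte Carlo estimate:

```python
import math
import numpy as np

def phi_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

rho, x = 0.5, 0.7      # correlation; observed continuous value
# Joint likelihood of (X = x, Y > 0) split via conditional probability:
# f(x, Y > 0) = f(x) * P(Y > 0 | X = x), with Y | X=x ~ N(rho*x, 1 - rho^2).
f_x = math.exp(-x * x / 2.0) / math.sqrt(2.0 * math.pi)
p_cond = 1.0 - phi_cdf((0.0 - rho * x) / math.sqrt(1.0 - rho ** 2))
analytic = f_x * p_cond

# Monte Carlo check of the same joint quantity over a window around x.
rng = np.random.default_rng(1)
n, eps = 2_000_000, 0.05
xs = rng.standard_normal(n)
ys = rho * xs + math.sqrt(1.0 - rho ** 2) * rng.standard_normal(n)
mc = np.mean((np.abs(xs - x) < eps) & (ys > 0.0)) / (2.0 * eps)
print(round(analytic, 3), abs(analytic - mc) < 0.01)   # 0.205 True
```

The symmetry mentioned in the abstract corresponds to the alternative factorization P(Y > 0) * f(x | Y > 0), which by the same axiom yields the identical joint value.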
Sun, Beibei; Zhang, Xiaohuan; Yin, Yanyan; Sun, Hualei; Ge, Huina; Li, Wenjie
2017-12-01
To investigate the effects of sulforaphane (SFN) and vitamin E (VE) on spatial learning and memory ability and oxidative damage of the hippocampus in lead-exposed mice during lactation. A total of 18 adult Kunming mice were used; the 12 female mice were randomly divided into two groups by body weight: 10 drank water containing 0.2% lead acetate during lactation, and the other 2 drank lead-free deionized water and served as the normal group. They were then mated at a 1:2 male-to-female ratio. After weaning, the pups were randomly divided by weight into 5 groups (10 per group): normal saline (NS), corn oil (CO), SFN, VE and SFN+VE. They were gavaged daily for four weeks; the gavage doses of SFN and VE were 25 mg/kg and 30 IU/kg respectively. Meanwhile, 10 pups of the normal group were randomly selected as the control (C) group and raised normally for 4 weeks. Spatial learning and memory ability was evaluated by the Morris water maze test, and the blood lead level was determined by polarography. Superoxide dismutase (SOD) activity and malondialdehyde (MDA) level in the hippocampus were measured with assay kits. Compared with the NS and CO groups, the blood lead level of the SFN and SFN+VE groups decreased significantly. In the water maze test, mice treated with SFN and/or VE performed better than mice in the NS and CO groups. In addition, a remarkable decrease in MDA level was found in mice treated with SFN and/or VE compared with the NS and CO groups. Moreover, SOD activity in the SFN group did not differ significantly from that in the NS group, whereas a significant increase in SOD activity was observed in the VE and SFN+VE groups compared with the CO group. Sulforaphane and vitamin E could ameliorate cognitive decline and oxidative damage in pups exposed to lead through maternal milk during lactation. Copyright © 2017 Elsevier GmbH. All rights reserved.
Dissecting protein:protein interactions between transcription factors with an RNA aptamer.
Tian, Y; Adya, N; Wagner, S; Giam, C Z; Green, M R; Ellington, A D
1995-01-01
Nucleic acid aptamers isolated from random sequence pools have generally proven useful at inhibiting the interactions of nucleic acid binding proteins with their cognate nucleic acids. In order to develop reagents that could also be used to study protein:protein interactions, we have used in vitro selection to search for RNA aptamers that could interact with the transactivating protein Tax from human T-cell leukemia virus. Tax does not normally bind to nucleic acids, but instead stimulates transcription by interacting with a variety of cellular transcription factors, including the cyclic AMP-response element binding protein (CREB), NF-kappa B, and the serum response factor (SRF). Starting from a pool of greater than 10^13 different RNAs with a core of 120 random sequence positions, RNAs were selected for their ability to be co-retained on nitrocellulose filters with Tax. After five cycles of selection and amplification, a single nucleic acid species remained. This aptamer was found to bind Tax with high affinity and specificity, and could disrupt complex formation between Tax and NF-kappa B, but not with SRF. The differential effects of our aptamer probe on protein:protein interactions suggest a model for how the transcription factor binding sites on the surface of the Tax protein are organized. This model is consistent with data from a variety of other studies. PMID:7489503
Kaye, T.N.; Pyke, David A.
2003-01-01
Population viability analysis is an important tool for conservation biologists, and matrix models that incorporate stochasticity are commonly used for this purpose. However, stochastic simulations may require assumptions about the distribution of matrix parameters, and modelers often select a statistical distribution that seems reasonable without sufficient data to test its fit. We used data from long-term (5-10 year) studies with 27 populations of five perennial plant species to compare seven methods of incorporating environmental stochasticity. We estimated stochastic population growth rate (a measure of viability) using a matrix-selection method, in which whole observed matrices were selected at random at each time step of the model. In addition, we drew matrix elements (transition probabilities) at random using various statistical distributions: beta, truncated-gamma, truncated-normal, triangular, uniform, or discontinuous/observed. Recruitment rates were held constant at their observed mean values. Two methods of constraining stage-specific survival to ≤100% were also compared. Different methods of incorporating stochasticity and constraining matrix column sums interacted in their effects and resulted in different estimates of stochastic growth rate (differing by up to 16%). Modelers should be aware that when constraining stage-specific survival to 100%, different methods may introduce different levels of bias in transition element means, and when this happens, different distributions for generating random transition elements may result in different viability estimates. There was no species effect on the results and the growth rates derived from all methods were highly correlated with one another. We conclude that the absolute value of population viability estimates is sensitive to model assumptions, but the relative ranking of populations (and management treatments) is robust.
Furthermore, these results are applicable to a range of perennial plants and possibly other life histories.
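The matrix-selection method described above (drawing a whole observed projection matrix at random at each time step) can be sketched as follows; the two matrices and the starting stage vector are hypothetical, not the study's data:

```python
import numpy as np

def stochastic_log_growth(matrices, n0, steps=5000, seed=0):
    """Estimate the stochastic log growth rate by drawing one whole
    observed projection matrix at random at each time step."""
    rng = np.random.default_rng(seed)
    n = np.asarray(n0, dtype=float)
    log_lambdas = []
    for _ in range(steps):
        A = matrices[rng.integers(len(matrices))]   # whole-matrix selection
        n = A @ n
        total = n.sum()
        log_lambdas.append(np.log(total))
        n /= total                                  # renormalize to avoid overflow
    return float(np.mean(log_lambdas))

# two hypothetical 2-stage matrices observed in a "good" and a "bad" year
good = np.array([[0.2, 1.5], [0.4, 0.8]])
bad = np.array([[0.1, 0.6], [0.2, 0.5]])
r = stochastic_log_growth([good, bad], n0=[10, 5])
print(round(r, 3))
```

The element-selection alternatives in the abstract would instead draw each transition probability from a fitted distribution (beta, truncated-gamma, etc.) at every step, subject to the column-sum constraint.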
[Corifollitropin alfa in women stimulated for the first time in in vitro fertilization programme].
Vraná-Mardešićová, N; Vobořil, J; Melicharová, L; Jelínková, L; Vilímová, Š; Mardešić, T
2017-01-01
To compare results after stimulation with corifollitropin alfa (Elonva) in an unselected group of women entering an in vitro fertilization (IVF) programme for the first time with results from Phase III randomized trials in selected groups of women. Prospective study. Sanatorium Pronatal, Praha. 40 unselected women with adequate ovarian reserve entering an IVF programme for the first time were stimulated with corifollitropin alfa and GnRH antagonists. The average age in the study group was 32.8 years (29-42 years); women younger than 36 years and under 60 kg received Elonva 100 µg, all others (age > 36 years, weight > 60 kg) Elonva 150 µg. Five days after egg retrieval one blastocyst was transferred (single embryo transfer - eSET). Our results were compared with the results in highly selected groups of women from Phase III randomized trials. After stimulation with corifollitropin alfa and GnRH antagonists, on average 10.6 (9.2 ± 4.2) eggs were retrieved, of which 7.3 (6.6 ± 3.9) were M II oocytes (68.9%), and the fertilisation rate was 84.6%. After the first embryo transfer ("fresh" embryos and embryos from "freeze all" cycles) 14 pregnancies were achieved (37.8%), and three further pregnancies were achieved from transfer of frozen-thawed embryos (cumulative pregnancy rate 45.9%). There were three abortions. No severe ovarian hyperstimulation syndrome occurred. Our results in an unselected group of women stimulated for the first time in an IVF programme with corifollitropin alfa are fully comparable with results published in randomized trials with selected groups of patients. Corifollitropin alfa in combination with a daily GnRH antagonist can be successfully used in normal-responder patients stimulated for the first time in an IVF programme. Keywords: corifollitropin alfa, GnRH antagonists, ovarian stimulation, pregnancy.
Semiparametric Bayesian classification with longitudinal markers
De la Cruz-Mesía, Rolando; Quintana, Fernando A.; Müller, Peter
2013-01-01
Summary We analyse data from a study involving 173 pregnant women. The data are observed values of the β human chorionic gonadotropin hormone measured during the first 80 days of gestational age, including from one up to six longitudinal responses for each woman. The main objective in this study is to predict normal versus abnormal pregnancy outcomes from data that are available at the early stages of pregnancy. We achieve the desired classification with a semiparametric hierarchical model. Specifically, we consider a Dirichlet process mixture prior for the distribution of the random effects in each group. The unknown random-effects distributions are allowed to vary across groups but are made dependent by using a design vector to select different features of a single underlying random probability measure. The resulting model is an extension of the dependent Dirichlet process model, with an additional probability model for group classification. The model is shown to perform better than an alternative model which is based on independent Dirichlet processes for the groups. Relevant posterior distributions are summarized by using Markov chain Monte Carlo methods. PMID:24368871
Optimal Scaling of Digital Transcriptomes
Glusman, Gustavo; Caballero, Juan; Robinson, Max; Kutlu, Burak; Hood, Leroy
2013-01-01
Deep sequencing of transcriptomes has become an indispensable tool for biology, enabling expression levels for thousands of genes to be compared across multiple samples. Since transcript counts scale with sequencing depth, counts from different samples must be normalized to a common scale prior to comparison. We analyzed fifteen existing and novel algorithms for normalizing transcript counts, and evaluated the effectiveness of the resulting normalizations. For this purpose we defined two novel and mutually independent metrics: (1) the number of “uniform” genes (genes whose normalized expression levels have a sufficiently low coefficient of variation), and (2) the (desirably low) Spearman correlation between normalized expression profiles of gene pairs. We also defined four novel algorithms, one of which explicitly maximizes the number of uniform genes, and compared the performance of all fifteen algorithms. The two most commonly used methods (scaling to a fixed total value, or equalizing the expression of certain ‘housekeeping’ genes) yielded particularly poor results, surpassed even by normalization based on randomly selected gene sets. Conversely, seven of the algorithms approached what appears to be optimal normalization. Three of these algorithms rely on the identification of “ubiquitous” genes: genes expressed in all the samples studied, but never at very high or very low levels. We demonstrate that these include a “core” of genes expressed in many tissues in a mutually consistent pattern, which is suitable for use as an internal normalization guide. The new methods yield robustly normalized expression values, which is a prerequisite for the identification of differentially expressed and tissue-specific genes as potential biomarkers. PMID:24223126
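The first metric, counting "uniform" genes after normalization, can be sketched as below. The count matrix, the counts-per-million scaling, and the CV threshold of 0.25 are illustrative assumptions; the toy example also hints at why scaling to a fixed total performs poorly when a single gene spikes:

```python
import numpy as np

def count_uniform_genes(counts, cv_max=0.25):
    """Normalize each sample (column) to counts per million, then count
    genes (rows) whose normalized expression has a coefficient of
    variation below cv_max: the notion of uniform genes sketched here."""
    counts = np.asarray(counts, dtype=float)
    scaled = counts / counts.sum(axis=0) * 1e6   # fixed-total (CPM) scaling
    with np.errstate(invalid="ignore", divide="ignore"):
        cv = scaled.std(axis=1) / scaled.mean(axis=1)
    return int(np.sum(cv < cv_max))

# genes x samples; gene 1 spikes in sample 3, distorting every CPM value
toy = [[100, 200, 150],
       [10, 20, 300],
       [50, 100, 80]]
print(count_uniform_genes(toy))      # 0: the spike drags every gene's CV up
stable = [[100, 200, 150],
          [10, 20, 15],
          [50, 100, 80]]
print(count_uniform_genes(stable))   # 3: proportions are consistent
```

A better normalization would rescale samples so that the stable genes, not the grand total, line up across columns.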
The way to uncover community structure with core and diversity
NASA Astrophysics Data System (ADS)
Chang, Y. F.; Han, S. K.; Wang, X. D.
2018-07-01
Communities are ubiquitous in nature and society. Individuals that share common properties often self-organize to form communities. Avoiding the shortcomings of high computational complexity, reliance on pre-given information and unstable results across different runs, in this paper we propose a simple and efficient method to deepen our understanding of the emergence and diversity of communities in complex systems. By introducing rational random selection, our method reveals the hidden deterministic and the normal diverse community states of community structure. To demonstrate this method, we test it on real-world systems. The results show that our method can not only detect community structure with high sensitivity and reliability, but also provide instructive information about the hidden deterministic community world and the real normal diverse community world by giving the core-community, the real-community, the tide and the diversity. This is of paramount importance in understanding, predicting, and controlling a variety of collective behaviors in complex systems.
On the generation of log-Lévy distributions and extreme randomness
NASA Astrophysics Data System (ADS)
Eliazar, Iddo; Klafter, Joseph
2011-10-01
The log-normal distribution is prevalent across the sciences, as it emerges from the combination of multiplicative processes and the central limit theorem (CLT). The CLT, beyond yielding the normal distribution, also yields the class of Lévy distributions. The log-Lévy distributions are the Lévy counterparts of the log-normal distribution, they appear in the context of ultraslow diffusion processes, and they are categorized by Mandelbrot as belonging to the class of extreme randomness. In this paper, we present a natural stochastic growth model from which both the log-normal distribution and the log-Lévy distributions emerge universally—the former in the case of deterministic underlying setting, and the latter in the case of stochastic underlying setting. In particular, we establish a stochastic growth model which universally generates Mandelbrot’s extreme randomness.
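The route from multiplicative processes to the log-normal via the CLT can be illustrated with a short simulation; the factor distribution and the sample sizes below are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
n_paths, n_steps = 50_000, 400
# multiplicative growth: X_n is a product of positive i.i.d. factors
factors = rng.uniform(0.9, 1.1, size=(n_paths, n_steps))
log_x = np.log(factors).sum(axis=1)      # the CLT applies to the log

# By the CLT the log should be approximately normal, so roughly 68%
# of the values should fall within one standard deviation of the mean.
m, s = log_x.mean(), log_x.std()
within_1sd = np.mean(np.abs(log_x - m) < s)
print(round(within_1sd, 2))              # ~0.68 for a normal distribution
```

Replacing the i.i.d. sum of logs with a heavy-tailed, infinite-variance step distribution would instead push the log toward a Lévy stable law, i.e. the log-Lévy case discussed above.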
Sehgal, Vasudha; Seviour, Elena G; Moss, Tyler J; Mills, Gordon B; Azencott, Robert; Ram, Prahlad T
2015-01-01
MicroRNAs (miRNAs) play a crucial role in the maintenance of cellular homeostasis by regulating the expression of their target genes. As such, the dysregulation of miRNA expression has been frequently linked to cancer. With rapidly accumulating molecular data linked to patient outcome, the need for identification of robust multi-omic molecular markers is critical in order to provide clinical impact. While previous bioinformatic tools have been developed to identify potential biomarkers in cancer, these methods do not allow for rapid classification of oncogenes versus tumor suppressors taking into account robust differential expression, cutoffs, p-values and non-normality of the data. Here, we propose a methodology, Robust Selection Algorithm (RSA) that addresses these important problems in big data omics analysis. The robustness of the survival analysis is ensured by identification of optimal cutoff values of omics expression, strengthened by p-value computed through intensive random resampling taking into account any non-normality in the data and integration into multi-omic functional networks. Here we have analyzed pan-cancer miRNA patient data to identify functional pathways involved in cancer progression that are associated with selected miRNA identified by RSA. Our approach demonstrates the way in which existing survival analysis techniques can be integrated with a functional network analysis framework to efficiently identify promising biomarkers and novel therapeutic candidates across diseases.
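The core idea of optimizing a cutoff and then guarding the p-value by re-running the same optimization on resampled data can be sketched as below. This is a simplified stand-in (binary outcome, mean-difference statistic), not the authors' RSA implementation:

```python
import numpy as np

def best_cutoff_stat(expr, outcome):
    """Scan candidate cutoffs on `expr` and return the cutoff maximizing
    the absolute difference in outcome rate between high/low groups."""
    best = (0.0, None)
    for c in np.unique(expr)[1:-1]:          # keep both groups non-empty
        hi, lo = outcome[expr >= c], outcome[expr < c]
        stat = abs(hi.mean() - lo.mean())
        if stat > best[0]:
            best = (stat, c)
    return best

def permutation_p(expr, outcome, n_perm=500, seed=0):
    """P-value that corrects for having optimized the cutoff: the same
    maximization is re-run on each permuted outcome vector."""
    rng = np.random.default_rng(seed)
    obs, _ = best_cutoff_stat(expr, outcome)
    null = [best_cutoff_stat(expr, rng.permutation(outcome))[0]
            for _ in range(n_perm)]
    return float(np.mean([s >= obs for s in null]))

expr = np.arange(40.0)
outcome = (expr >= 20).astype(float)      # perfectly separated toy data
stat, cut = best_cutoff_stat(expr, outcome)
print(stat, cut, permutation_p(expr, outcome) < 0.05)
```

Because the null statistics are themselves maxima over cutoffs, the resampling accounts for the selection step and makes no normality assumption about the data.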
Neutron monitor generated data distributions in quantum variational Monte Carlo
NASA Astrophysics Data System (ADS)
Kussainov, A. S.; Pya, N.
2016-08-01
We have assessed the potential application of neutron monitor hardware as a random number generator for normal and uniform distributions. Data tables from the acquisition channels with no extreme changes in the signal level were chosen as the retrospective model. The stochastic component was extracted by fitting the raw data with splines and then subtracting the fit. Scaling the extracted data to zero mean and unit variance is sufficient to obtain a stable standard normal random variate. The distributions under consideration pass all available normality tests. Inverse transform sampling is suggested as a source of uniform random numbers. The variational Monte Carlo method for the quantum harmonic oscillator was used to test the quality of our random numbers. If the data delivery rate is important and the conventional one-minute-resolution neutron count is insufficient, one could settle for an efficient seed generator feeding a faster algorithmic random number generator, or create a buffer.
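The pipeline described above can be sketched as follows; a polynomial fit stands in for the splines, and the drifting counts are simulated rather than real neutron monitor data:

```python
import math
import numpy as np

rng = np.random.default_rng(42)
t = np.arange(1440.0)                       # e.g. one day of 1-min counts
raw = 1000 + 0.05 * t + 8 * rng.standard_normal(t.size)   # drifting counts

# 1. Fit the slow trend (a polynomial here; the paper used splines) and
#    subtract it to keep only the stochastic component.
trend = np.polyval(np.polyfit(t, raw, deg=3), t)
resid = raw - trend

# 2. Scale to zero mean and unit variance -> standard normal variates.
z = (resid - resid.mean()) / resid.std()

# 3. Inverse transform via the normal CDF -> uniform variates on (0, 1).
u = 0.5 * (1.0 + np.vectorize(math.erf)(z / math.sqrt(2.0)))
print(round(z.std(), 3), u.min() > 0.0, u.max() < 1.0)   # 1.0 True True
```

Step 3 uses the probability integral transform: if z is standard normal, Phi(z) is uniform on (0, 1), which is the inverse-transform route to uniforms mentioned in the abstract.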
[Effects of Liangxue Jiedu Decoction in treating psoriasis in a mouse psoriasis model].
Gu, Min-Jie; Gao, Shang-Pu; Li, Yong-Mei
2009-06-01
To study the effects of Liangxue Jiedu Decoction, a compound traditional Chinese herbal medicine with blood-cooling and detoxicating functions, in treating psoriasis in mice and to explore its mechanism. (1) Sixty mice were randomly divided into a Liangxue Jiedu Decoction group, a compound Indigo Naturalis capsule group, an acitretin capsule group and a normal saline group. Another 10 mice were selected as a blank control. After 2-week administration, mice were sacrificed to obtain samples. After hematoxylin and eosin (HE) staining, tail scales with granular layers were counted under an optical microscope. (2) Except for the ten mice in the blank group, sixty female mice were injected intraperitoneally with diethylstilbestrol once daily. After 3-day injection, mice were randomly divided into four groups and treated as described above. After 2-week treatment, all mice were injected intraperitoneally with colchicine (2 mg/kg) and sacrificed 6 h after the injection. The mitotic rate in vaginal epithelium was calculated after HE staining. Compared with normal saline, Liangxue Jiedu Decoction significantly inhibited the mitosis of mouse vaginal epithelium (P < 0.01) and promoted the formation of granular layers in mouse tail-scale epidermis (P < 0.01). The mechanism of Liangxue Jiedu Decoction in treating psoriasis may be related to promoting granular cell growth and inhibiting proliferation of epidermal cells.
Quantiles for Finite Mixtures of Normal Distributions
ERIC Educational Resources Information Center
Rahman, Mezbahur; Rahman, Rumanur; Pearson, Larry M.
2006-01-01
Quantiles for finite mixtures of normal distributions are computed. The difference between a linear combination of independent normal random variables and a linear combination of independent normal densities is emphasized. (Contains 3 tables and 1 figure.)
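Since a finite normal mixture has a CDF equal to the weighted sum of the component CDFs, its quantiles can be computed by inverting that CDF numerically, for example by bisection. The weights and components below are illustrative:

```python
import math

def mix_cdf(x, weights, means, sds):
    """CDF of a finite normal mixture: a weighted sum of normal CDFs."""
    return sum(w * 0.5 * (1 + math.erf((x - m) / (s * math.sqrt(2))))
               for w, m, s in zip(weights, means, sds))

def mix_quantile(p, weights, means, sds, lo=-50.0, hi=50.0, tol=1e-10):
    """Invert the (monotone) mixture CDF by bisection."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if mix_cdf(mid, weights, means, sds) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# 50/50 mixture of N(-2, 1) and N(2, 1): by symmetry the median is 0.
w, m, s = [0.5, 0.5], [-2.0, 2.0], [1.0, 1.0]
print(abs(mix_quantile(0.5, w, m, s)) < 1e-6)   # True
# Contrast: the random variable 0.5*X1 + 0.5*X2 for independent normals
# is itself normal (a combination of variables), whereas the mixture
# above (a combination of densities) is bimodal; quantiles differ.
```

The closing comment mirrors the distinction emphasized in the abstract between a linear combination of independent normal random variables and a linear combination of normal densities.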
Clustering of galaxies near damped Lyman-alpha systems with (z) = 2.6
NASA Technical Reports Server (NTRS)
Wolfe, A. M.
1993-01-01
The galaxy two-point correlation function, xi, at (z) = 2.6 is determined by comparing the number of Ly-alpha-emitting galaxies in narrowband CCD fields selected for the presence of damped Ly-alpha absorption to their number in randomly selected control fields. Comparisons between the presented determination of (xi), a density-weighted volume average of xi, and model predictions for (xi) at large redshifts show that models in which the clustering pattern is fixed in proper coordinates are highly unlikely, while better agreement is obtained if the clustering pattern is fixed in comoving coordinates. Therefore, clustering of Ly-alpha-emitting galaxies around damped Ly-alpha systems at large redshifts is strong. It is concluded that the faint blue galaxies are drawn from a parent population different from normal galaxies, the presumed offspring of damped Ly-alpha systems.
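The field-comparison logic (galaxy counts in target fields versus randomly selected control fields) reduces, in its simplest form, to a density-contrast estimate; the counts below are illustrative, not the survey's, and this sketch omits the volume weighting used in the actual determination:

```python
def xi_bar(n_target_fields, n_control_fields, n_target_gals, n_control_gals):
    """Simplest density-contrast estimate of the average correlation:
    the fractional excess of galaxies per field around the absorbers
    relative to randomly selected control fields, (n_t / n_c) - 1."""
    per_target = n_target_gals / n_target_fields
    per_control = n_control_gals / n_control_fields
    return per_target / per_control - 1.0

# Illustrative counts: target fields twice as rich as controls -> 1.0
print(xi_bar(10, 10, 20, 10))   # 1.0
```

A positive value indicates clustering of emitters around the damped Ly-alpha systems; zero means no excess over the random control fields.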
High-dose immunoglobulin during pregnancy for recurrent neonatal haemochromatosis.
Whitington, Peter F; Hibbard, Judith U
Neonatal haemochromatosis is a rare disease of gestation that results in severe fetal liver injury. We hypothesised an alloimmune aetiology for the disorder on the basis of its high recurrence rate in sibships. In this study, we assessed the effectiveness in preventing or changing the severity of recurrent neonatal haemochromatosis of administering during pregnancy high-dose intravenous immunoglobulin (IVIG) derived from pooled serum of multiple donors. Women whose most recent pregnancy ended in documented neonatal haemochromatosis were treated with IVIG, 1 g/kg bodyweight, weekly from the 18th week until the end of gestation in their subsequent pregnancy. The outcomes of treated pregnancies were compared with those of randomly selected previous affected pregnancies for each woman, which were used as historical controls. 15 women were treated through 16 pregnancies. All pregnancies progressed uneventfully and resulted in live babies with normal physical examinations and birthweights that were appropriate for gestational age. 12 babies had evidence of liver involvement with neonatal haemochromatosis: 11 had higher than normal concentrations of serum alpha-fetoprotein and ferritin or serum alpha-fetoprotein alone, including four with coagulopathy (international normalised ratio >1.5), and one had coagulopathy alone. All babies survived with medical or no treatment and were healthy at follow-up within the past 6 months. In analysis on a per-mother basis comparing outcomes of treated gestations with those of randomly selected previous affected gestations, gestational IVIG therapy was associated with better infant survival (15 good outcomes vs two in previous pregnancies; p=0.0009). Treatment with high-dose IVIG during gestation appears to have modified recurrent neonatal haemochromatosis so that it was not lethal to the fetus or neonate. These results further support an alloimmune mechanism for recurrent neonatal haemochromatosis.
Ringed Seal Search for Global Optimization via a Sensitive Search Model.
Saadi, Younes; Yanto, Iwan Tri Riyadi; Herawan, Tutut; Balakrishnan, Vimala; Chiroma, Haruna; Risnumawan, Anhar
2016-01-01
The efficiency of a metaheuristic algorithm for global optimization is based on its ability to search and find the global optimum. However, a good search often requires a balance between exploration and exploitation of the search space. In this paper, a new metaheuristic algorithm called Ringed Seal Search (RSS) is introduced. It is inspired by the natural behavior of the seal pup. This algorithm mimics the seal pup movement behavior and its ability to search and choose the best lair to escape predators. The scenario starts once the seal mother gives birth to a new pup in a birthing lair that is constructed for this purpose. The seal pup strategy consists of searching for and selecting the best lair by performing a random walk to find a new lair. Affected by the sensitive nature of seals to external noise emitted by predators, the random walk of the seal pup takes two different search states, a normal state and an urgent state. In the normal state, the pup performs an intensive search between closely adjacent lairs; this movement is modeled via a Brownian walk. In the urgent state, the pup leaves the proximity area and performs an extensive search to find a new lair from sparse targets; this movement is modeled via a Lévy walk. The switch between these two states is triggered by the random noise emitted by predators. The algorithm keeps switching between the normal and urgent states until the global optimum is reached. Tests and validations were performed using fifteen benchmark test functions to compare the performance of RSS with other baseline algorithms. The results show that RSS is more efficient than Genetic Algorithm, Particle Swarm Optimization and Cuckoo Search in terms of convergence rate to the global optimum. The RSS shows an improvement in terms of balance between exploration (extensive) and exploitation (intensive) of the search space.
The RSS can efficiently mimic seal pup behavior in finding the best lair and provides a new algorithm for use in global optimization problems.
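A hedged sketch of the two-state walk described above (Brownian steps in the normal state, heavy-tailed steps in the urgent state, with switching driven by random "predator noise") might look like this; the step distributions and noise rate are illustrative choices, not the published parameterization:

```python
import numpy as np

def rss_walk(steps=1000, noise_rate=0.1, seed=0):
    """Toy ringed-seal-style search path in one dimension: Brownian
    (local, intensive) steps in the normal state, switching to
    heavy-tailed (long-range, extensive) steps when a random noise
    event simulates predator disturbance."""
    rng = np.random.default_rng(seed)
    pos, path = 0.0, []
    for _ in range(steps):
        urgent = rng.random() < noise_rate       # predator noise event
        if urgent:
            # Levy-like heavy-tailed jump (Pareto tail, random direction)
            step = rng.pareto(1.5) * rng.choice([-1.0, 1.0])
        else:
            step = rng.normal(0.0, 0.1)          # Brownian local move
        pos += step
        path.append(pos)
    return np.array(path)

path = rss_walk()
print(path.size, np.isfinite(path).all())   # 1000 True
```

In the full algorithm each position would be scored against the objective function and the best lair (candidate solution) retained; this sketch only shows the exploration/exploitation switching mechanism.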
Pareto genealogies arising from a Poisson branching evolution model with selection.
Huillet, Thierry E
2014-02-01
We study a class of coalescents derived from a sampling procedure out of N i.i.d. Pareto(α) random variables, normalized by their sum, including β-size-biasing on total length effects (β < α). Depending on the range of α we derive the large-N limit coalescent structure, leading either to a discrete-time Poisson-Dirichlet (α, -β) Ξ-coalescent (α ∈ [0, 1)), or to a family of continuous-time Beta (2 - α, α - β) Λ-coalescents (α ∈ [1, 2)), or to the Kingman coalescent (α ≥ 2). We indicate that this class of coalescent processes (and their scaling limits) may be viewed as the genealogical processes of some forward-in-time evolving branching population models including selection effects. In such constant-size population models, the reproduction step, which is based on a fitness-dependent Poisson point process with scaling power-law(α) intensity, is coupled to a selection step consisting of sorting out the N fittest individuals issued from the reproduction step.
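The sampling construction the coalescents are derived from (N i.i.d. Pareto(α) variables normalized by their sum) is straightforward to simulate; the values of α and N below are arbitrary:

```python
import numpy as np

def pareto_weights(n, alpha, seed=0):
    """N i.i.d. Pareto(alpha) variables normalized by their sum, giving
    the random sampling weights underlying the coalescent construction."""
    rng = np.random.default_rng(seed)
    x = 1.0 + rng.pareto(alpha, size=n)   # standard Pareto on [1, inf)
    return x / x.sum()

w = pareto_weights(1000, alpha=0.8)
# For alpha < 1 the sum is dominated by a few huge terms, so the largest
# normalized weight tends to remain macroscopic even as N grows, which
# is what produces multiple-merger (rather than Kingman) genealogies.
print(np.isclose(w.sum(), 1.0), w.max() > w.mean())   # True True
```

For α ≥ 2 the weights concentrate near 1/N and pairwise mergers dominate, consistent with the Kingman limit stated above.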
Chu, Haitao; Nie, Lei; Cole, Stephen R; Poole, Charles
2009-08-15
In a meta-analysis of diagnostic accuracy studies, the sensitivities and specificities of a diagnostic test may depend on the disease prevalence since the severity and definition of disease may differ from study to study due to the design and the population considered. In this paper, we extend the bivariate nonlinear random effects model on sensitivities and specificities to jointly model the disease prevalence, sensitivities and specificities using trivariate nonlinear random-effects models. Furthermore, as an alternative parameterization, we also propose jointly modeling the test prevalence and the predictive values, which reflect the clinical utility of a diagnostic test. These models allow investigators to study the complex relationship among the disease prevalence, sensitivities and specificities; or among test prevalence and the predictive values, which can reveal hidden information about test performance. We illustrate the proposed two approaches by reanalyzing the data from a meta-analysis of radiological evaluation of lymph node metastases in patients with cervical cancer and a simulation study. The latter illustrates the importance of carefully choosing an appropriate normality assumption for the disease prevalence, sensitivities and specificities, or the test prevalence and the predictive values. In practice, it is recommended to use model selection techniques to identify a best-fitting model for making statistical inference. In summary, the proposed trivariate random effects models are novel and can be very useful in practice for meta-analysis of diagnostic accuracy studies. Copyright 2009 John Wiley & Sons, Ltd.
This report is a description of field work and data analysis results comparing a design comparable to systematic site selection with one based on random selection of sites. The report is expected to validate the use of random site selection in the bioassessment program for the O...
Huang, Sui
2012-09-01
Current investigation of cancer progression towards increasing malignancy focuses on the molecular pathways that produce the various cancerous traits of cells. Their acquisition is explained by the somatic mutation theory: tumor progression is the result of a neo-Darwinian evolution in the tissue. Herein cells are the units of selection. Random genetic mutations permanently affecting these pathways create malignant cell phenotypes that are selected for in the disturbed tissue. However, could it be that the capacity of the genome and its gene regulatory network to generate the vast diversity of cell types during development, i.e., to produce inheritable phenotypic changes without mutations, is harnessed by tumorigenesis to propel a directional change towards malignancy? Here we take an encompassing perspective, transcending the orthodoxy of molecular carcinogenesis and review mechanisms of somatic evolution beyond the Neo-Darwinian scheme. We discuss the central concept of "cancer attractors" - the hidden stable states of gene regulatory networks normally not occupied by cells. Noise-induced transitions into such attractors provide a source for randomness (chance) and regulatory constraints (necessity) in the acquisition of novel expression profiles that can be inherited across cell divisions, and hence, can be selected for. But attractors can also be reached in response to environmental signals - thus offering the possibility for inheriting acquired traits that can also be selected for. Therefore, we face the possibility of non-genetic (mutation-independent) equivalents to both Darwinian and Lamarckian evolution which may jointly explain the arrow of change pointing toward increasing malignancy. Copyright © 2012 Elsevier Ltd. All rights reserved.
Osawa, Masaki
2018-01-01
It is difficult to target and kill cancer cells. One possible approach is to mutate bacteria to enhance their binding to cancer cells. In the present study, Gram-negative Escherichia coli and Gram-positive Bacillus subtilis were randomly mutated and then positively and negatively selected for binding to cancer vs normal cells. With repeated mutation and selection, both bacteria successfully evolved to increase affinity for the pancreatic cancer cell line (Mia PaCa-2) but not normal cells (HPDE: immortalized human pancreatic ductal epithelial cells). The mutant E. coli and B. subtilis strains bound to Mia PaCa-2 cells about 10 and 25 times more, respectively, than to HPDE cells. The selected E. coli strain had mutations in biofilm-related genes and the regulatory region of a type I pilus gene. Consistent with type I pili involvement, mannose could inhibit the binding to cells. The results suggest that weak but specific binding is involved in the initial step of adhesion. To test the ability to kill Mia PaCa-2 cells, hemolysin was expressed in the mutant strain. The hemolysin released from the mutant strain was active and could kill Mia PaCa-2 cells. In the case of B. subtilis, the initial binding to the cells was a weak interaction of the leading pole of the motile bacteria. The frequency of this interaction with Mia PaCa-2 cells dramatically increased in the evolved mutant strain. This mutant strain could also specifically invade beneath Mia PaCa-2 cells and settle there. This type of mutation/selection strategy may be applicable to other combinations of cancer cells and bacterial species.
Randomizing Roaches: Exploring the "Bugs" of Randomization in Experimental Design
ERIC Educational Resources Information Center
Wagler, Amy; Wagler, Ron
2014-01-01
Understanding the roles of random selection and random assignment in experimental design is a central learning objective in most introductory statistics courses. This article describes an activity, appropriate for a high school or introductory statistics course, designed to teach the concepts, values and pitfalls of random selection and assignment…
Liu, Zhiming; Luo, Jiawei
2017-08-01
Associating protein complexes with human inherited diseases is critical for better understanding of biological processes and the functional mechanisms of disease. Many protein complexes have been identified and functionally annotated by computational and purification methods so far; however, the particular roles they play in causing disease have not yet been well determined. In this study, we present a novel method to identify associations between protein complexes and diseases. First, we construct a disease-protein heterogeneous network based on data integration and Laplacian normalization. Second, we apply a random walk with restart on heterogeneous network (RWRH) algorithm to this network to quantify the strength of the association between proteins and the query disease. Third, we sum the scores of member proteins to obtain a summary score for each candidate protein complex, and then rank all candidate protein complexes according to their scores. With a series of leave-one-out cross-validation experiments, we found that our method not only possesses high performance but also demonstrates robustness with respect to the parameters and the network structure. We tested our approach on breast cancer and selected the top 20 highly ranked protein complexes; 17 of these are evidenced to be connected with breast cancer. Our proposed method is effective in identifying disease-related protein complexes based on data integration and Laplacian normalization. Copyright © 2017. Published by Elsevier Ltd.
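The core of the second step, a random walk with restart, is compact to state: iterate p ← (1−r)·W·p + r·p0 on a column-normalized adjacency matrix until convergence, then score a complex by summing its members. The sketch below is a minimal illustration with an invented 4-protein toy network; the restart probability 0.7 and the candidate complex membership are assumptions, not values from the study.

```python
import numpy as np

def rwr(W, p0, restart=0.7, tol=1e-10, max_iter=1000):
    """Random walk with restart: iterate p <- (1-r)*W@p + r*p0 to a fixed point.
    W must be column-normalized (every column sums to 1)."""
    p = p0.copy()
    for _ in range(max_iter):
        p_new = (1 - restart) * W @ p + restart * p0
        if np.abs(p_new - p).sum() < tol:
            return p_new
        p = p_new
    return p

# Toy protein network: adjacency matrix, then column normalization
A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
W = A / A.sum(axis=0, keepdims=True)

p0 = np.array([1.0, 0.0, 0.0, 0.0])   # seed: protein 0 linked to the query disease
scores = rwr(W, p0)

# Score a hypothetical candidate complex by summing its member proteins' scores
complex_members = [1, 2]
complex_score = scores[complex_members].sum()
```

Because the columns of W each sum to 1, the score vector stays a probability distribution at every iteration, so complex scores are directly comparable across candidates.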
IS THE SUICIDE RATE A RANDOM WALK?
Yang, Bijou; Lester, David; Lyke, Jennifer; Olsen, Robert
2015-06-01
The yearly suicide rates for the period 1933-2010 and the daily suicide numbers for 1990 and 1991 were examined for whether the distribution of difference scores (from year to year and from day to day) fitted a normal distribution, a characteristic of stochastic processes that follow a random walk. If the suicide rate were a random walk, then any disturbance to the suicide rate would have a permanent effect and national suicide prevention efforts would likely fail. The distribution of difference scores from day to day (but not the difference scores from year to year) fitted a normal distribution and, therefore, were consistent with a random walk.
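The diagnostic used here, checking whether the first differences of a series fit a normal distribution, is easy to illustrate. The sketch below uses synthetic data, not the suicide series: it simulates a pure Gaussian random walk and computes the sample skewness and excess kurtosis of its difference scores, both of which should be near zero when the increments are i.i.d. normal.

```python
import numpy as np

rng = np.random.default_rng(0)
walk = np.cumsum(rng.normal(0.0, 1.0, 5000))   # simulated random walk
diffs = np.diff(walk)                          # day-to-day style difference scores

# For a Gaussian random walk the differences are i.i.d. normal, so the
# sample skewness and excess kurtosis should both be close to zero.
centered = diffs - diffs.mean()
skew = (centered ** 3).mean() / diffs.std() ** 3
excess_kurt = (centered ** 4).mean() / diffs.std() ** 4 - 3.0
```

In practice a formal normality test (e.g. Shapiro-Wilk) would replace these simple moment checks.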
Teh, Seng Khoon; Zheng, Wei; Lau, David P; Huang, Zhiwei
2009-06-01
In this work, we evaluated the diagnostic ability of near-infrared (NIR) Raman spectroscopy combined with an ensemble recursive partitioning algorithm based on random forests for identifying cancer from normal tissue in the larynx. A rapid-acquisition NIR Raman system was utilized for tissue Raman measurements at 785 nm excitation, and 50 human laryngeal tissue specimens (20 normal; 30 malignant tumors) were used for NIR Raman studies. The random forests method was introduced to develop effective diagnostic algorithms for classification of Raman spectra of different laryngeal tissues. High-quality Raman spectra in the range of 800-1800 cm(-1) could be acquired from laryngeal tissue within 5 seconds. Raman spectra differed significantly between normal and malignant laryngeal tissues. Classification results obtained from the random forests algorithm on tissue Raman spectra yielded a diagnostic sensitivity of 88.0% and specificity of 91.4% for laryngeal malignancy identification. The random forests technique also provided variable importance measures that facilitate correlation of significant Raman spectral features with cancer transformation. This study shows that NIR Raman spectroscopy in conjunction with the random forests algorithm has great potential for the rapid diagnosis and detection of malignant tumors in the larynx.
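As an illustration of the classification step, the sketch below trains a random forest on synthetic stand-ins for the tissue spectra: random vectors over 50 "wavenumber bins" in which three bins are shifted for the malignant class. The data, bin indices, and forest settings are all invented for the sketch, and scikit-learn is assumed to be available.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)

# Synthetic stand-in for Raman spectra: 100 "normal" and 100 "malignant"
# samples over 50 bins, with three discriminative peak intensities.
normal = rng.normal(1.0, 0.1, (100, 50))
malignant = rng.normal(1.0, 0.1, (100, 50))
malignant[:, [10, 25, 40]] += 0.5              # shifted peaks in tumour spectra

X = np.vstack([normal, malignant])
y = np.array([0] * 100 + [1] * 100)

clf = RandomForestClassifier(n_estimators=100, oob_score=True, random_state=0)
clf.fit(X, y)

# Variable importance highlights which spectral bins drive the classification
top_bins = np.argsort(clf.feature_importances_)[-3:]
```

Here `feature_importances_` plays the role of the variable importance used in the study to correlate spectral features with cancer transformation, and the out-of-bag score gives a quick internal accuracy estimate.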
Normal aging delays and compromises early multifocal visual attention during object tracking.
Störmer, Viola S; Li, Shu-Chen; Heekeren, Hauke R; Lindenberger, Ulman
2013-02-01
Declines in selective attention are one of the sources contributing to age-related impairments in a broad range of cognitive functions. Most previous research on mechanisms underlying older adults' selection deficits has studied the deployment of visual attention to static objects and features. Here we investigate neural correlates of age-related differences in spatial attention to multiple objects as they move. We used a multiple object tracking task, in which younger and older adults were asked to keep track of moving target objects that moved randomly in the visual field among irrelevant distractor objects. By recording the brain's electrophysiological responses during the tracking period, we were able to delineate neural processing for targets and distractors at early stages of visual processing (~100-300 msec). Older adults showed less selective attentional modulation in the early phase of the visual P1 component (100-125 msec) than younger adults, indicating that early selection is compromised in old age. However, with a 25-msec delay relative to younger adults, older adults showed distinct processing of targets (125-150 msec), that is, a delayed yet intact attentional modulation. The magnitude of this delayed attentional modulation was related to tracking performance in older adults. The amplitude of the N1 component (175-210 msec) was smaller in older adults than in younger adults, and the target amplification effect of this component was also smaller in older relative to younger adults. Overall, these results indicate that normal aging affects the efficiency and timing of early visual processing during multiple object tracking.
An algorithm to compute the sequency ordered Walsh transform
NASA Technical Reports Server (NTRS)
Larsen, H.
1976-01-01
A fast sequency-ordered Walsh transform algorithm is presented; it is complementary to the sequency-ordered fast Walsh transform introduced by Manz (1972), which eliminated Gray code reordering through a modification of the basic fast Hadamard transform structure. The new algorithm retains the advantages of its complement (it is in place and is its own inverse), while differing in having a decimation-in-time structure, accepting data in normal order, and returning the coefficients in bit-reversed sequency order. Applications include estimation of Walsh power spectra for a random process, sequency filtering, computing logical autocorrelations, and selective bit reversing.
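For reference, the basic fast Hadamard butterfly that both algorithms modify can be written in a few lines. This sketch produces coefficients in natural (Hadamard) order, not the bit-reversed sequency order of the algorithm described above; it only shows the underlying in-place structure.

```python
def fwht(a):
    """In-place fast Walsh-Hadamard transform (natural/Hadamard order).
    Input length must be a power of two."""
    a = list(a)
    h = 1
    while h < len(a):
        for i in range(0, len(a), h * 2):
            for j in range(i, i + h):
                x, y = a[j], a[j + h]
                a[j], a[j + h] = x + y, x - y   # butterfly: sum and difference
        h *= 2
    return a

coeffs = fwht([1, 0, 1, 0, 0, 1, 1, 0])
```

Since H_n·H_n = n·I, applying the transform twice returns the input scaled by n, which is the "its own inverse" property the abstract refers to (up to normalization).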
NASA Astrophysics Data System (ADS)
Hashimoto, Noriaki; Suzuki, Kenji; Liu, Junchi; Hirano, Yasushi; MacMahon, Heber; Kido, Shoji
2018-02-01
Consolidation and ground-glass opacity (GGO) are two major types of opacities associated with diffuse lung diseases. Accurate detection and classification of such opacities are crucially important in the diagnosis of lung diseases, but the process is subjective and suffers from interobserver variability. Our study purpose was to develop a deep neural network convolution (NNC) system for distinguishing among consolidation, GGO, and normal lung tissue in high-resolution CT (HRCT). We developed an ensemble of two deep NNC models, each of which was composed of neural network regression (NNR) with an input layer, a convolution layer, a fully connected hidden layer, and a fully connected output layer followed by a thresholding layer. The output layer of each NNC provided a map of the likelihood of the corresponding lung opacity of interest. The two NNC models in the ensemble were connected in a class-selection layer. We trained our NNC ensemble with pairs of input 2D axial slices and "teaching" probability maps for the corresponding lung opacity, which were obtained by combining three radiologists' annotations. We randomly selected 10 and 40 slices from HRCT scans of 172 patients for each class as a training and test set, respectively. Our NNC ensemble achieved an area under the receiver-operating-characteristic (ROC) curve (AUC) of 0.981 and 0.958 in distinguishing consolidation and GGO, respectively, from normal opacity, yielding a classification accuracy of 93.3% among the 3 classes. Thus, our deep-NNC-based system for classifying diffuse lung diseases achieved high accuracies for classification of consolidation, GGO, and normal opacity.
Rapid mapping of chromosomal breakpoints: from blood to BAC in 20 days.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lu, Chun-Mei; Kwan, Johnson; Weier, Jingly F.
2009-02-25
Structural chromosome aberrations and associated segmental or chromosomal aneusomies are major causes of reproductive failure in humans. Despite the fact that carriers of a reciprocal balanced translocation often have no other clinical symptoms or disease, impaired chromosome homologue pairing in meiosis and karyokinesis errors lead to over-representation of translocation carriers in the infertile population and in recurrent pregnancy loss patients. At present, clinicians have no means to select healthy germ cells or balanced zygotes in vivo, but in vitro fertilization (IVF) followed by preimplantation genetic diagnosis (PGD) offers translocation carriers a chance to select balanced or normal embryos for transfer. Although a combination of telomeric and centromeric probes can differentiate unbalanced embryos from normal or balanced ones, the seemingly random position of breakpoints in these IVF patients poses a serious obstacle to differentiating between normal and balanced embryos, which, for most translocation couples, is desirable. Using a carrier with reciprocal translocation t(4;13) as an example, we describe our state-of-the-art approach to the preparation of patient-specific DNA probes that span or 'extend' the breakpoints. With the techniques and resources described here, most breakpoints can be accurately mapped in a matter of days using carrier lymphocytes, and a few extra days are allowed for PGD-probe optimization. The optimized probes will then be suitable for interphase cell analysis, a prerequisite for PGD since blastomeres are biopsied from normally growing day-3 embryos regardless of their position in the mitotic cell cycle. Furthermore, routine application of these rapid methods should make PGD even more affordable for translocation carriers enrolled in IVF programs.
NASA Astrophysics Data System (ADS)
Wang, Huiqin; Wang, Xue; Lynette, Kibe; Cao, Minghua
2018-06-01
The performance of multiple-input multiple-output wireless optical communication systems that adopt Q-ary pulse position modulation over a spatially correlated log-normal fading channel is analyzed in terms of un-coded bit error rate and ergodic channel capacity. The analysis is based on Wilkinson's method, which approximates the distribution of a sum of correlated log-normal random variables by a single log-normal random variable. The analytical and simulation results corroborate that increasing the correlation coefficients among sub-channels leads to system performance degradation. Moreover, receiver diversity performs better in resisting the channel fading caused by spatial correlation.
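Wilkinson's method itself is a short moment-matching computation: the first two moments of S = Σᵢ exp(Xᵢ), with X jointly Gaussian, are matched to those of a single log-normal exp(Z). The sketch below is a generic illustration; the two-channel parameters and the correlation of 0.5 are invented, not taken from the paper.

```python
import numpy as np

def wilkinson(mu, sigma, R):
    """Wilkinson's moment matching: approximate S = sum_i exp(X_i), where
    X ~ N(mu, diag(sigma) R diag(sigma)), by exp(Z) with Z ~ N(mu_z, sigma_z^2),
    by matching E[S] and E[S^2]."""
    mu, sigma = np.asarray(mu, float), np.asarray(sigma, float)
    m1 = np.exp(mu + sigma**2 / 2).sum()                      # E[S]
    C = np.outer(sigma, sigma) * np.asarray(R, float)         # covariance of X
    m2 = np.exp(np.add.outer(mu, mu)
                + (np.add.outer(sigma**2, sigma**2) + 2 * C) / 2).sum()  # E[S^2]
    sigma_z2 = np.log(m2 / m1**2)
    mu_z = np.log(m1) - sigma_z2 / 2
    return mu_z, np.sqrt(sigma_z2)

# Two correlated log-normal sub-channels with correlation coefficient 0.5
R = np.array([[1.0, 0.5], [0.5, 1.0]])
mu_z, sigma_z = wilkinson([0.0, 0.0], [0.5, 0.5], R)
```

By construction the fitted log-normal reproduces E[S] and E[S²] exactly; higher moments, and hence the tails, are only approximated.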
Application of random effects to the study of resource selection by animals
Gillies, C.S.; Hebblewhite, M.; Nielsen, S.E.; Krawchuk, M.A.; Aldridge, Cameron L.; Frair, J.L.; Saher, D.J.; Stevens, C.E.; Jerde, C.L.
2006-01-01
1. Resource selection estimated by logistic regression is used increasingly in studies to identify critical resources for animal populations and to predict species occurrence. 2. Most frequently, individual animals are monitored and pooled to estimate population-level effects without regard to group or individual-level variation. Pooling assumes that both observations and their errors are independent, and resource selection is constant given individual variation in resource availability. 3. Although researchers have identified ways to minimize autocorrelation, variation between individuals caused by differences in selection or available resources, including functional responses in resource selection, have not been well addressed. 4. Here we review random-effects models and their application to resource selection modelling to overcome these common limitations. We present a simple case study of an analysis of resource selection by grizzly bears in the foothills of the Canadian Rocky Mountains with and without random effects. 5. Both categorical and continuous variables in the grizzly bear model differed in interpretation, both in statistical significance and coefficient sign, depending on how a random effect was included. We used a simulation approach to clarify the application of random effects under three common situations for telemetry studies: (a) discrepancies in sample sizes among individuals; (b) differences among individuals in selection where availability is constant; and (c) differences in availability with and without a functional response in resource selection. 6. We found that random intercepts accounted for unbalanced sample designs, and models with random intercepts and coefficients improved model fit given the variation in selection among individuals and functional responses in selection.
Our empirical example and simulations demonstrate how including random effects in resource selection models can aid interpretation and address difficult assumptions limiting their generality. This approach will allow researchers to appropriately estimate marginal (population) and conditional (individual) responses, and account for complex grouping, unbalanced sample designs and autocorrelation.
Weight misperception and psychosocial health in normal weight Chinese adolescents.
Lo, Wing-Sze; Ho, Sai-Yin; Mak, Kwok-Kei; Lai, Hak-Kan; Lai, Yuen-Kwan; Lam, Tai-Hing
2011-06-01
To investigate the association between weight misperception and psychosocial health problems among normal weight Chinese adolescent boys and girls. In the Youth Smoking Survey 2003-04, 20 677 normal weight students aged 11-18 years from 85 randomly selected schools throughout Hong Kong were analysed. Students who perceived themselves as very thin, thin, fat or very fat were classified as having weight misperception in contrast to the reference group who correctly perceived themselves as normal weight. Psychosocial health outcomes included headache, feeling stressful, feeling depressed, poorer appetite, sleepless at night, having nightmares and less confidence in getting along with friends. Logistic regression yielded adjusted odds ratios (ORs) for each outcome by weight misperception in boys and girls separately. In girls, misperceived fatness was associated with all outcomes, while misperceived thinness was associated with poorer appetite and less confidence. Boys who misperceived themselves as very thin or fat had greater odds of all outcomes except having nightmares. In general, greater ORs were observed for misperceived fatness than thinness in girls, but similar ORs were observed in boys. Misperceived thinness and fatness accounted for 0.6% to 45.1% of the psychosocial health problems in adolescents. Normal weight adolescents with weight misperception were more likely to have psychosocial health problems, and the associations were stronger for extreme misperceptions (i.e., very fat or very thin) in both boys and girls.
Baron-Cohen, S; Wheelwright, S; Skinner, R; Martin, J; Clubley, E
2001-02-01
Currently there are no brief, self-administered instruments for measuring the degree to which an adult with normal intelligence has the traits associated with the autistic spectrum. In this paper, we report on a new instrument to assess this: the Autism-Spectrum Quotient (AQ). Individuals score in the range 0-50. Four groups of subjects were assessed: Group 1: 58 adults with Asperger syndrome (AS) or high-functioning autism (HFA); Group 2: 174 randomly selected controls. Group 3: 840 students in Cambridge University; and Group 4: 16 winners of the UK Mathematics Olympiad. The adults with AS/HFA had a mean AQ score of 35.8 (SD = 6.5), significantly higher than Group 2 controls (M = 16.4, SD = 6.3). 80% of the adults with AS/HFA scored 32+, versus 2% of controls. Among the controls, men scored slightly but significantly higher than women. No women scored extremely highly (AQ score 34+) whereas 4% of men did so. Twice as many men (40%) as women (21%) scored at intermediate levels (AQ score 20+). Among the AS/HFA group, male and female scores did not differ significantly. The students in Cambridge University did not differ from the randomly selected control group, but scientists (including mathematicians) scored significantly higher than both humanities and social sciences students, confirming an earlier study that autistic conditions are associated with scientific skills. Within the sciences, mathematicians scored highest. This was replicated in Group 4, the Mathematics Olympiad winners scoring significantly higher than the male Cambridge humanities students. 6% of the student sample scored 32+ on the AQ. On interview, 11 out of 11 of these met three or more DSM-IV criteria for AS/HFA, and all were studying sciences/mathematics, and 7 of the 11 met threshold on these criteria. Test-retest and interrater reliability of the AQ was good. 
The AQ is thus a valuable instrument for rapidly quantifying where any given individual is situated on the continuum from autism to normality. Its potential for screening for autism spectrum conditions in adults of normal intelligence remains to be fully explored.
Stratton, Gareth; Ridgers, Nicola D; Fairclough, Stuart J; Richardson, David J
2007-06-01
This study aimed to compare moderate-to-vigorous physical activity (MVPA) and vigorous physical activity (VPA) in normal-weight and overweight boys and girls during school recess. Four hundred twenty children, age 6 to 10 years, were randomly selected from 25 schools in England. Three hundred seventy-seven children completed the study. BMI was calculated from height and weight measurements, and heart rate reserve thresholds of 50% and 75% reflected children's engagement in MVPA and VPA, respectively. There was a significant main effect for sex and a significant interaction between BMI category and sex for the percent of recess time spent in MVPA and VPA. Normal-weight girls were the least active group, compared with overweight boys and girls who were equally active. Fifty-one boys and 24 girls of normal weight achieved the 40% threshold; of these, 30 boys and 10 girls exceeded 50% of recess time in MVPA. Eighteen overweight boys and 22 overweight girls exceeded the 40% threshold, whereas 8 boys and 8 girls exceeded the 50% threshold. Overweight boys were significantly less active than their normal-weight male counterparts; this difference did not hold true for girls. Even though nearly double the number of normal-weight children achieved the 40% of MVPA during recess compared with overweight children, physical activity promotion in school playgrounds needs to be targeted not only at overweight but at other health parameters, as 40 overweight children met the 40% MVPA target proposed for recess.
Endocidal Regulation of Secondary Metabolites in the Producing Organisms
Li, Shiyou; Wang, Ping; Yuan, Wei; Su, Zushang; Bullard, Steven H.
2016-01-01
Secondary metabolites are defined as organic compounds that are not directly involved in the normal growth, development, and reproduction of an organism. They are widely believed to be responsible for interactions between the producing organism and its environment, with the producer avoiding their toxicities. In our experiments, however, none of the 44 randomly selected species representing different groups of plants and insects could avoid autotoxicity from its endogenous metabolites once they were made available. We coined the term endocides (endogenous biocides) to describe such metabolites that can poison or inhibit the parent via induced biosynthesis or external applications. Dosage-dependent endocides can selectively induce morphological mutations in the parent organism (e.g., shrubbiness/dwarfism, pleiocotyly, abnormal leaf morphogenesis, disturbed phyllotaxis, fasciated stems, and variegation in plants), and inhibit its growth, development, and reproduction and cause death more effectively than in non-closely related species. The propagule, as well as the organism itself, contains or produces adequate endocides to kill itself. PMID:27389069
Wang, Yongming; Lin, Xiuyun; Dong, Bo; Wang, Yingdian; Liu, Bao
2004-01-01
RAPD (randomly amplified polymorphic DNA) and ISSR (inter-simple sequence repeat) fingerprinting on HpaII/MspI-digested genomic DNA of nine elite japonica rice cultivars implies inter-cultivar DNA methylation polymorphism. Using both DNA fragments isolated from RAPD or ISSR gels and selected low-copy sequences as probes, methylation-sensitive Southern blot analysis confirms the existence of extensive DNA methylation polymorphism in both genes and DNA repeats among the rice cultivars. The cultivar-specific methylation patterns are stably maintained, and can be used as reliable molecular markers. Transcriptional analysis of four selected sequences (RdRP, AC9, HSP90 and MMR) on leaves and roots from normal and 5-azacytidine-treated seedlings of three representative cultivars shows an association between the transcriptional activity of one of the genes, the mismatch repair (MMR) gene, and its CG methylation patterns.
Meyring, M; Chankvetadze, B; Blaschke, G
1999-09-01
The separation of thalidomide (TD) and its hydroxylated metabolites including their simultaneous enantioseparation was studied in capillary electrophoresis (CE) using four different randomly substituted charged cyclodextrin (CD) derivatives, the combinations of some of them with each other, and beta-CD. TD, as well as two metabolites recently found in incubations of human liver microsomes and human blood, 5-hydroxythalidomide (5-OH-TD) and one of the diastereomeric 5'-hydroxythalidomides (5'-OH-TD), are neutral compounds. Therefore, they were resolved using charged chiral selectors in CE. Two different separation modes (normal polarity and carrier mode) and two different capillaries (fused-silica and polyacrylamide-coated) were tested. Based on the behavior of the individual CDs, their designed combinations were selected in order to improve the separation selectivity and enantioselectivity. Under optimized conditions all three chiral compounds and their enantiomers were resolved simultaneously.
NASA Astrophysics Data System (ADS)
Sung, S.; Kim, H. G.; Lee, D. K.; Park, J. H.; Mo, Y.; Kil, S.; Park, C.
2016-12-01
The impact of climate change has been observed throughout the globe. Ecosystems are experiencing rapid changes such as vegetation shifts and species extinctions. In this context, the species distribution model (SDM) is a popular method for projecting the impact of climate change on ecosystems. An SDM is based on the niche of a given species, which means that presence point data are essential for running it. Running an SDM for plants requires certain considerations regarding the characteristics of vegetation. Normally, remote sensing techniques are used to map vegetation over large areas. As a consequence, the exact presence points carry high uncertainty, because presence data are selected from polygon and raster datasets. Thus, sampling methods for selecting vegetation presence data should be chosen carefully. In this study, we used three different sampling methods to select vegetation presence data: random sampling, stratified sampling, and site-index-based sampling. We used the R package BIOMOD2 to assess the uncertainty arising from the modeling, and included BioCLIM variables and other environmental variables as input data. As a result, despite differences among the 10 SDMs, the sampling methods differed in their ROC values: random sampling showed the lowest ROC value while site-index-based sampling showed the highest. This study shows that the uncertainties arising from presence-data sampling methods and SDMs can be quantified.
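The sampling schemes compared here differ only in how cells are drawn from the vegetation layer. The sketch below illustrates the first two, random versus stratified sampling, on an invented toy raster whose cell values stand in for a site-index class; it is not the study's data or code.

```python
import random

# Toy presence raster: cells keyed by (row, col); the value is a stand-in
# site-index class, giving two strata (rows 0-4 and rows 5-9).
raster = {(r, c): r // 5 for r in range(10) for c in range(10)}

def random_sample(cells, n, seed=0):
    """Simple random sample of n cells, ignoring any stratification."""
    return random.Random(seed).sample(sorted(cells), n)

def stratified_sample(raster, n_per_stratum, seed=0):
    """Draw the same number of cells from every stratum."""
    rng = random.Random(seed)
    strata = {}
    for cell, s in raster.items():
        strata.setdefault(s, []).append(cell)
    out = []
    for s in sorted(strata):
        out += rng.sample(sorted(strata[s]), n_per_stratum)
    return out

pts_random = random_sample(raster, 10)
pts_strat = stratified_sample(raster, 5)
```

Stratified sampling guarantees every class is represented in the presence set, which is one reason the sampling scheme can change the fitted niche and hence the model's ROC value.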
Sorensen, Julie A; May, John; Ostby-Malling, Ronne; Lehmen, Tom; Strand, John; Stenlund, Hans; Weinehall, Lars; Einehall, Lars W; Emmelin, Maria
2008-11-01
Increasing the percentage of rollover protective structure (ROPS) equipped tractors has been the focus of many agricultural safety campaigns. Traditionally, efforts have attempted to persuade farmers through education or community awareness interventions. These efforts have led to marginal change. In response, a social marketing approach was tested as a means of increasing interest in ROPS retrofitting in New York. An initial phone survey was conducted with a random sample of New York farmers to identify a potential target population. Following target selection, in-depth interviews were conducted to isolate barriers and motivators to retrofitting. This information was used to develop message prototypes, which were tested in small focus group discussions. Selected and revised messages, as well as various other incentives developed in response to feedback from interviews, were then tested in a prospective, quasi-randomized controlled trial. Small crop and livestock farms were selected as the intervention target since they represent 86% of New York farms with no or only one ROPS-protected tractor. The barriers to retrofitting identified in interviews were: 1) constant exposures normalize risk, 2) risk is modeled by significant others, and 3) safety in general, and retrofitting in particular, requires too much time and money. The piloting of ROPS incentives led to a marked increase in ROPS sales in New York. Social marketing provides a promising framework for the design of agricultural injury prevention programs. The potential implications for other health initiatives seeking to promote behaviour change are also discussed.
Ueberall, Michael A; Mueller-Schwefe, Gerhard H H
2016-01-01
To evaluate the benefit-risk profile (BRP) of oxycodone/naloxone (OXN) and tapentadol (TAP) in patients with chronic low back pain (cLBP) with a neuropathic component (NC) in routine clinical practice. This was a blinded end point analysis of randomly selected 12-week routine/open-label data of the German Pain Registry on adult patients with cLBP-NC who initiated an index treatment in compliance with the current German prescribing information between 1st January and 31st October 2015 (OXN/TAP, n=128/133). The primary end point was defined as a composite of three efficacy components (≥30% improvement of pain, pain-related disability, and quality of life, each at the end of observation vs baseline) and three tolerability components (normal bowel function, absence of central nervous system side effects, and absence of treatment-emergent adverse event [TEAE]-related treatment discontinuation during the observation period), adopted to reflect BRP assessments under real-life conditions. Demographic as well as baseline and pretreatment characteristics were comparable for the randomly selected data sets of both index groups, without any indicators of critical selection biases. Treatment with OXN resulted formally in a BRP noninferior to that of TAP and showed a significantly higher primary end point response vs TAP (39.8% vs 25.6%, odds ratio: 1.93; P=0.014), due to superior analgesic effects. Between-group differences increased with stricter response definitions for all three efficacy components in favor of OXN: ≥30%/≥50%/≥70% response rates for OXN vs TAP were seen for pain intensity in 85.2%/67.2%/39.1% vs 83.5%/54.1%/15.8% (P=ns/0.031/<0.001), for pain-related disability in 78.1%/64.8%/43.8% vs 66.9%/50.4%/24.8% (P=0.043/0.018/0.001), and for quality of life in 76.6%/68.0%/50.0% vs 63.9%/54.1%/34.6% (P=0.026/0.022/0.017).
Overall, OXN vs TAP treatments were well tolerated, and the proportions of patients who either maintained normal bowel function (68.0% vs 72.2%), reported no central nervous system side effects (91.4% vs 89.5%), or completed the 12-week evaluation period without any TEAE-related treatment discontinuations (93.0% vs 92.5%) were similar for both index medications (P=ns for each comparison). In daily practice, the BRP of OXN proved to be noninferior to that of TAP in patients with cLBP-NC, but showed superior efficacy when stricter analgesic response definitions were evaluated.
An Analysis of Depression, Self-Harm, and Suicidal Ideation Content on Tumblr.
Cavazos-Rehg, Patricia A; Krauss, Melissa J; Sowles, Shaina J; Connolly, Sarah; Rosas, Carlos; Bharadwaj, Meghana; Grucza, Richard; Bierut, Laura J
2017-01-01
Social networking about depression can be indicative of self-reported depression and/or can normalize risk behaviors such as self-harm and suicidal ideation. This study aimed to gain a better understanding of the depression, self-harm, and suicide-related content being shared on Tumblr. From April 16 to May 10, 2014, 17 popular depression-related Tumblr accounts were monitored for new posts and engagement with other Tumblr users. A total of 3,360 posts were randomly selected from all historical posts from these accounts and coded based on themes ascertained by the research team. The 17 Tumblr accounts posted a median of 185 posts (range = 0-2,954). Content was engaged with (i.e., re-blogged or liked) a median of 1,677,362 times (range = 0-122,186,504). Of the 3,360 randomly selected posts, 2,739 (82%) were related to depression, suicide, or self-harm. Common themes were self-loathing (412, 15%), loneliness/feeling unloved (405, 15%), self-harm (407, 15%), and suicide (372, 14%). This study takes an important first step toward better understanding the depression-related references displayed on Tumblr. The findings signal a need for suicide prevention efforts to intervene on Tumblr and use this platform in a strategic way, given the depression and suicidal content that was readily observed there.
Hutcheon, Jennifer A; Platt, Robert W; Abrams, Barbara; Himes, Katherine P; Simhan, Hyagriv N; Bodnar, Lisa M
2013-05-01
To establish the unbiased relation between maternal weight gain in pregnancy and perinatal health, a classification for maternal weight gain is needed that is uncorrelated with gestational age. The goal of this study was to create a weight-gain-for-gestational-age percentile and z score chart to describe the mean, SD, and selected percentiles of maternal weight gain throughout pregnancy in a contemporary cohort of US women. The study population was drawn from normal-weight women with uncomplicated, singleton pregnancies who delivered at the Magee-Womens Hospital in Pittsburgh, PA, 1998-2008. Analyses were based on a randomly selected subset of 648 women for whom serial prenatal weight measurements were available through medical chart record abstraction (6727 weight measurements). The pattern of maternal weight gain throughout gestation was estimated by using a random-effects regression model. The estimates were used to create a chart with the smoothed means, percentiles, and SDs of gestational weight gain for each week of pregnancy. This chart allows researchers to express total weight gain as an age-standardized z score, which can be used in epidemiologic analyses to study the association between pregnancy weight gain and adverse or physiologic pregnancy outcomes independent of gestational age.
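Once the week-specific means and SDs are charted, expressing a total weight gain as an age-standardized z score is a one-line calculation. The chart values below are illustrative placeholders, not figures from the Magee-Womens cohort:

```python
def weight_gain_z(observed_kg, week, chart):
    """z score of observed gain against the chart's week-specific mean/SD."""
    mean, sd = chart[week]
    return (observed_kg - mean) / sd

# Hypothetical chart entries: gestational week -> (mean gain in kg, SD)
chart = {30: (10.0, 3.0), 40: (14.5, 4.0)}

z = weight_gain_z(16.0, 30, chart)  # 2 SDs above the illustrative week-30 mean
```

Because the standardization is done against the week-specific distribution, the resulting z score is uncorrelated with gestational age, which is the property the chart was built to provide.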
Curvature correction of retinal OCTs using graph-based geometry detection
NASA Astrophysics Data System (ADS)
Kafieh, Raheleh; Rabbani, Hossein; Abramoff, Michael D.; Sonka, Milan
2013-05-01
In this paper, we present a new algorithm as an enhancement and preprocessing step for acquired optical coherence tomography (OCT) images of the retina. The proposed method is composed of two steps: the first is a denoising algorithm using wavelet diffusion based on a circular symmetric Laplacian model, and the second is graph-based geometry detection and curvature correction according to the hyper-reflective complex layer in the retina. The proposed denoising algorithm improved the contrast-to-noise ratio from 0.89 to 1.49 and increased the signal-to-noise ratio (OCT image SNR) from 18.27 to 30.43 dB. By applying the proposed method to estimate the interpolated curve in a fully automatic manner, the mean ± SD unsigned border positioning error was calculated for normal and abnormal cases. Error values of 2.19 ± 1.25 and 8.53 ± 3.76 µm were found for 200 randomly selected slices without pathological curvature and 50 randomly selected slices with pathological curvature, respectively. An important aspect of this algorithm is its ability to detect curvature in strongly pathological images, which surpasses previously introduced methods; the method is also fast compared with the relatively low speed of similar methods.
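The reported CNR and SNR gains can be reproduced on any image with formulas along the following lines. Conventions vary across the OCT literature, so the exact definitions here are an assumption:

```python
import numpy as np

def cnr(region_a, region_b):
    """Contrast-to-noise ratio between two image regions."""
    return abs(region_a.mean() - region_b.mean()) / np.sqrt(
        (region_a.var() + region_b.var()) / 2.0)

def snr_db(signal_region, background_region):
    """Image SNR in dB: peak signal over background noise SD
    (one common convention among several)."""
    return 20.0 * np.log10(signal_region.max() / background_region.std())
```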
Qinna, Nidal A; Badwan, Adnan A
2015-01-01
Streptozotocin (STZ) is currently the most widely used diabetogenic agent for testing insulin and new antidiabetic drugs in animals. Because STZ is toxic to and disrupts organs other than the pancreas that are involved in preserving the body's normal glucose homeostasis, this study aims to reassess the action of STZ in inducing different glucose response states in diabetic rats while testing insulin. Diabetic Sprague-Dawley rats induced with STZ were classified into stages according to their initial blood glucose levels. The effect of randomizing rats in such a manner was investigated with respect to the severity of interruption of normal liver, pancreas, and kidney functions. Pharmacokinetic and pharmacodynamic actions of subcutaneously injected insulin in diabetic and nondiabetic rats were compared. Interruption of glucose homeostasis by STZ was challenged by single and repeated administrations of injected insulin and oral glucose to diabetic rats. In diabetic rats with high glucose (451–750 mg/dL), noticeable changes were seen in liver and kidney functions compared with rats with lower basal glucose levels. Increased serum levels of recombinant human insulin were clearly indicated by a significant increase in the calculated maximum serum concentration and the area under the concentration–time curve. Reversion of serum glucose to normal levels pre- and post-administration of insulin and oral glucose to STZ diabetic rats was found to be variable. In conclusion, diabetic animals were more responsive to insulin than nondiabetic animals. STZ was capable of inducing different levels of disruption of normal glucose homeostasis in rats. Both pharmacokinetic and pharmacodynamic actions of insulin were altered when different initial blood glucose levels of STZ diabetic rats were selected for testing.
Such findings emphasize the importance of selecting predefined and unified glucose levels when using STZ as a diabetogenic agent in experimental protocols evaluating new antidiabetic agents and insulin delivery systems. PMID:26005328
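The pharmacokinetic summaries mentioned above (maximum serum concentration and AUC) are computed from serial serum samples; AUC is typically obtained with the linear trapezoidal rule, sketched here:

```python
def auc_trapezoid(times, conc):
    """Area under the concentration-time curve by the linear
    trapezoidal rule; Cmax is simply max(conc)."""
    return sum((t1 - t0) * (c0 + c1) / 2.0
               for t0, t1, c0, c1 in zip(times, times[1:], conc, conc[1:]))
```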
SU-E-T-647: Quality Assurance of VMAT by Gamma Analysis Dependence On Low-Dose Threshold
DOE Office of Scientific and Technical Information (OSTI.GOV)
Song, J; Kim, M; Lee, S
2015-06-15
Purpose: The AAPM TG-119 instructed institutions to use a low-dose threshold (LDT) of 10% or an ROI determined by the jaw when collecting gamma analysis QA data of planar dose distributions. Also, based on a survey by Nelms and Simon, more than 70% of institutions use an LDT between 0% and 10% for gamma analysis. However, there are no clinical data that quantitatively demonstrate the impact of the LDT on the gamma index. Therefore, we performed a gamma analysis with LDTs of 0% to 15% according to both global and local normalization and different acceptance criteria: 3%/3 mm, 2%/2 mm, and 1%/1 mm. Methods: A total of 30 treatment plans (10 head and neck, 10 brain, and 10 prostate cancer cases) were randomly selected from the Varian Eclipse TPS, retrospectively. For the gamma analysis, a predicted portal image was acquired through a portal dose calculation algorithm in the Eclipse TPS, and a measured portal image was obtained using a Varian Clinac iX and an EPID. The gamma analysis was then performed using the Portal Dosimetry software. Results: For global normalization, the gamma passing rate (%GP) decreased as the LDT increased, and all low-dose thresholds exhibited a %GP above 95% for both the 3%/3 mm and 2%/2 mm criteria. However, for local normalization, the %GP increased as the LDT increased. The gamma passing rate with an LDT of 10% increased by 6.86%, 9.22%, and 6.14% compared with 0% for the head and neck, brain, and prostate cases under the 3%/3 mm criteria, respectively. Conclusion: Applying the LDT with global normalization does not have a critical impact on judging patient-specific QA results. However, the LDT for local normalization should be carefully selected, because applying it can cause the average %GP to increase rapidly.
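A simplified one-dimensional version of the gamma analysis shows exactly where the LDT and the global/local normalization choice enter. This is an illustrative sketch, not the Portal Dosimetry implementation:

```python
import numpy as np

def gamma_pass_rate(ref, ev, dta_mm=3.0, dd_pct=3.0, spacing_mm=1.0,
                    ldt_pct=10.0, normalization="global"):
    """Simplified 1D gamma analysis. ref and ev are dose profiles on the
    same grid. Points below the LDT are excluded; the dose-difference
    tolerance is scaled by the global maximum or by the local dose."""
    dmax = ref.max()
    gammas = []
    for i, d_ref in enumerate(ref):
        if d_ref < ldt_pct / 100.0 * dmax:
            continue  # point excluded by the low-dose threshold
        norm = dmax if normalization == "global" else d_ref
        tol = dd_pct / 100.0 * norm
        best = np.inf
        for j, d_ev in enumerate(ev):
            dist = (i - j) * spacing_mm
            g2 = (dist / dta_mm) ** 2 + ((d_ev - d_ref) / tol) ** 2
            best = min(best, g2)
        gammas.append(best ** 0.5)
    return 100.0 * np.mean(np.array(gammas) <= 1.0)
```

Local normalization shrinks the dose tolerance in low-dose regions, which is why the LDT matters far more there than under global normalization.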
Khosla, Amrit; Maini, Anuj Paul; Wangoo, Anuj; Singh, Sukhman; Mehar, Damanpreet Kaur
2017-01-01
The success of a restoration depends on accurate shade matching of teeth, which has led to studies evaluating the factors affecting the perception of shades. Colour vision anomalies, including colour blindness, exist in the population and have been thought to be a potential factor affecting colour perception ability. The present study was done to evaluate the prevalence of colour vision anomalies and their effect on the matching of tooth shades. A total of 147 dental professionals were randomly selected for the study and were first tested for visual acuity using Snellen's eye chart, so that the study could proceed with only those operators who had 6/6 vision. Ishihara's colour charts were then used to test the operators for colour vision handicap. In the last stage of the study, accuracy of shade selection was tested using the Vitapan Classical shade guide; the shade guide tabs were covered to avoid bias. Percentages were used to calculate the prevalence of colour vision handicap, and its effect on shade matching relative to normal vision was evaluated using the Chi-square test. Of one hundred operators, nineteen had colour vision anomalies and only two presented with colour blindness. Colour vision anomaly was more prevalent than colour blindness, and it was also more prevalent in males than in females. The difference in the accuracy of shade matching between operators with normal vision and those with a colour vision defect, and between operators with normal vision and those with colour blindness, was statistically not significant. Colour blindness and colour vision handicap are rare conditions, the latter being the more common in the population. According to our study, no statistically significant difference existed between operators with normal vision and those with colour vision anomaly or colour blindness during the matching of tooth shades.
Pan, Xiaoyong; Hu, Xiaohua; Zhang, Yu Hang; Feng, Kaiyan; Wang, Shao Peng; Chen, Lei; Huang, Tao; Cai, Yu Dong
2018-04-12
Atrioventricular septal defect (AVSD) is a clinically significant subtype of congenital heart disease (CHD) that severely affects the health of babies at birth and is associated with Down syndrome (DS). Thus, exploring the differences in functional genes between DS samples with and without AVSD is a critical way to investigate the complex association between AVSD and DS. In this study, we present a computational method to distinguish DS patients with AVSD from those without AVSD using the newly proposed self-normalizing neural network (SNN). First, each patient was encoded using the copy numbers of probes on chromosome 21. The encoded features were ranked by the reliable Monte Carlo feature selection (MCFS) method to obtain a ranked feature list. Based on this feature list, we used two-stage incremental feature selection to construct two series of feature subsets and applied SNNs to build classifiers and identify optimal features. The results show that 2737 optimal features were obtained, and the corresponding optimal SNN classifier constructed on these features yielded a Matthews correlation coefficient (MCC) value of 0.748. For comparison, random forest was also used to build classifiers and uncover optimal features; this method achieved an optimal MCC value of 0.582 when the top 132 features were utilized. Finally, we analyzed some key features derived from the optimal features in the SNN and found literature support that further reveals their essential roles.
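The MCC values quoted above summarize binary classification quality in a single number that is robust to class imbalance; a minimal implementation:

```python
import numpy as np

def mcc(y_true, y_pred):
    """Matthews correlation coefficient for binary labels (0/1).
    Returns 1 for perfect prediction, -1 for total disagreement,
    and 0 when any marginal is degenerate."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))
    tn = np.sum((y_true == 0) & (y_pred == 0))
    fp = np.sum((y_true == 0) & (y_pred == 1))
    fn = np.sum((y_true == 1) & (y_pred == 0))
    denom = np.sqrt(float((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)))
    return (tp * tn - fp * fn) / denom if denom else 0.0
```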
Genomic selection for slaughter age in pigs using the Cox frailty model.
Santos, V S; Martins Filho, S; Resende, M D V; Azevedo, C F; Lopes, P S; Guimarães, S E F; Glória, L S; Silva, F F
2015-10-19
The aim of this study was to compare genomic selection methodologies using a linear mixed model and the Cox survival model. We used data from an F2 population of pigs, in which the response variable was the time in days from birth to the culling of the animal and the covariates were 238 markers [237 single nucleotide polymorphisms (SNPs) plus the halothane gene]. The data were corrected for fixed effects, and the accuracy of each method was determined based on the correlation of the ranks of predicted genomic breeding values (GBVs) in both models with the corrected phenotypic values. The analysis was repeated with a subset of the SNP markers with the largest absolute effects. The two models agreed in GBV prediction and in the estimation of marker effects for uncensored data under normality. However, when censored data were considered, the Cox model with a normal random effect (S1) was more appropriate. Since there was no agreement with the linear mixed model applied to imputed data (L2) for the prediction of genomic values and the estimation of marker effects, model S1 was considered superior, as it took the latent variable and the censored data into account. Marker selection increased the correlations between the corrected phenotypic values and the ranks of GBVs predicted by the linear and Cox frailty models, and 120 markers were required to increase the predictive ability for the trait analyzed.
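The accuracy criterion above, correlation between the ranks of predicted GBVs and the corrected phenotypes, is the Spearman correlation. A dependency-free sketch, with ties handled by average ranks:

```python
def rank(values):
    """Average ranks (1-based); tied values share the mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2.0 + 1.0
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman correlation: Pearson correlation of the ranks."""
    rx, ry = rank(x), rank(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5
```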
DOE Office of Scientific and Technical Information (OSTI.GOV)
Antonysamy, A.A., E-mail: alphons.antonysamy@GKNAerospace.com; Meyer, J., E-mail: jonathan.meyer@eads.com; Prangnell, P.B., E-mail: philip.prangnell@manchester.ac.uk
With titanium alloys, the solidification conditions in Additive Manufacturing (AM) frequently lead to coarse columnar β-grain structures. The effect of geometry on the variability in the grain structure and texture, seen in Ti-6Al-4V alloy components produced by Selective Electron Beam Melting (SEBM), has been investigated. Reconstruction of the primary β-phase from α-phase EBSD data has confirmed that in bulk sections where in-fill "hatching" is employed, growth selection favours columnar grains aligned with an <001>_β direction normal to the deposited powder layers; this results in a coarse β-grain structure with a strong <001>_β fibre texture (up to 8 × random) that can oscillate between a near-random distribution around the fibre axis and cube reinforcement with build height. It is proposed that this behaviour is related to the highly elongated melt pool and the raster directions alternating between two orthogonal directions every layer, which on average favours grains with cube alignment. In contrast, the outline, or "contour", pass produces a distinctly different grain structure and texture, resulting in a skin layer on wall surfaces, where nucleation occurs off the surrounding powder and growth follows the curved surface of the melt pool. This structure becomes increasingly important in thin sections. Local heterogeneities have also been found within different section transitions, resulting from the growth of skin grain structures into thicker sections. Texture simulations have shown that the far weaker α-texture (∼3 × random) seen in the final product arises from the transformation on cooling occurring with a near-random distribution of α-plates across the 12 variants possible from the Burgers relationship.
Highlights:
• Distinctly different skin and bulk structures are produced by the contour and hatching passes.
• Bulk sections contain coarse β-grains with a <001> fibre texture in the build direction.
• This oscillates between a random distribution around the axis and cube reinforcement.
• In the skin layer, nucleation occurs off the surrounding powder bed and growth occurs inwards.
• Simulations show that a weak α-texture results from a random distribution across habit variants.
da Costa Monini, André; Júnior, Luiz Gonzaga Gandini; Martins, Renato Parsekian; Vianna, Alexandre Protásio
2014-09-01
To evaluate the velocity of canine retraction, anchorage loss, and changes in canine and first molar inclinations using self-ligating and conventional brackets. Twenty-five adults with Class I malocclusion and a treatment plan involving extraction of four first premolars were selected for this randomized split-mouth controlled trial. Conventional or self-ligating brackets were randomly assigned for bonding to each patient's maxillary canines. Retraction was accomplished using 100-g nickel-titanium closed coil springs, which were reactivated every 4 weeks. Oblique radiographs were taken before and after canine retraction was completed, and the cephalograms were superimposed on stable structures of the maxilla. Cephalometric points were digitized twice by a blinded operator for error control, and the following landmarks were collected: canine cusp and apex horizontal changes, molar cusp and apex horizontal changes, and angulation changes in canines and molars. The blinded data, which were normally distributed, were analyzed with paired t-tests for group differences. No differences were found between the two groups for any variable tested. Both brackets showed the same velocity of canine retraction and the same loss of anteroposterior anchorage of the molars. No differences were found between brackets regarding the inclination of canines and first molars.
Analysis of landslide hazard area in Ludian earthquake based on Random Forests
NASA Astrophysics Data System (ADS)
Xie, J.-C.; Liu, R.; Li, H.-W.; Lai, Z.-L.
2015-04-01
With the development of machine learning theory, more and more algorithms are being evaluated for seismic landslide assessment. After the Ludian earthquake, the research team combined the special geological structure of the Ludian area with the results of seismic field exploration, selecting slope (PODU), river distance (HL), fault distance (DC), seismic intensity (LD), the digital elevation model (DEM), and the normalized difference vegetation index (NDVI) derived from remote sensing images as evaluation factors. The relationships among these factors are fuzzy, however, and the data are high-dimensional with heavy noise, so we introduced the random forest algorithm to tolerate these difficulties and obtained an evaluation of the Ludian landslide areas. To verify the accuracy of the result, ROC graphs were used as the evaluation standard; the AUC covers an area of 0.918. Meanwhile, the random forest's generalization error rate decreased to an ideal 0.08 as the number of classification trees increased, as estimated with Out-Of-Bag (OOB) estimation. Studying the final landslide inversion results, the paper reaches the statistical conclusion that nearly 80% of all landslides and dilapidations lie in areas of high or moderate susceptibility, showing that the forecast results are reasonable.
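The AUC of 0.918 cited above can be computed directly from susceptibility scores and observed landslide labels via the rank (Mann-Whitney) formulation of ROC AUC:

```python
def roc_auc(labels, scores):
    """ROC AUC as the probability that a randomly chosen positive
    (landslide) pixel scores higher than a randomly chosen negative
    one, with ties counted as half. labels are 0/1."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```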
Demaerschalk, Bart M; Brown, Robert D; Roubin, Gary S; Howard, Virginia J; Cesko, Eldina; Barrett, Kevin M; Longbottom, Mary E; Voeks, Jenifer H; Chaturvedi, Seemant; Brott, Thomas G; Lal, Brajesh K; Meschia, James F; Howard, George
2017-09-01
Multicenter clinical trials attempt to select sites that can move rapidly to randomization and enroll sufficient numbers of patients. However, there are few assessments of the success of site selection. In the CREST-2 (Carotid Revascularization and Medical Management for Asymptomatic Carotid Stenosis Trials), we assess factors associated with the time between site selection and authorization to randomize, the time between authorization to randomize and the first randomization, and the average number of randomizations per site per month. Potential factors included characteristics of the site, specialty of the principal investigator, and site type. For 147 sites, the median time between site selection to authorization to randomize was 9.9 months (interquartile range, 7.7, 12.4), and factors associated with early site activation were not identified. The median time between authorization to randomize and a randomization was 4.6 months (interquartile range, 2.6, 10.5). Sites with authorization to randomize in only the carotid endarterectomy study were slower to randomize, and other factors examined were not significantly associated with time-to-randomization. The recruitment rate was 0.26 (95% confidence interval, 0.23-0.28) patients per site per month. By univariate analysis, factors associated with faster recruitment were authorization to randomize in both trials, principal investigator specialties of interventional radiology and cardiology, pre-trial reported performance >50 carotid angioplasty and stenting procedures per year, status in the top half of recruitment in the CREST trial, and classification as a private health facility. Participation in StrokeNet was associated with slower recruitment as compared with the non-StrokeNet sites. Overall, selection of sites with high enrollment rates will likely require customization to align the sites selected to the factor under study in the trial. URL: http://www.clinicaltrials.gov. Unique identifier: NCT02089217. 
© 2017 American Heart Association, Inc.
Informational masking and musical training
NASA Astrophysics Data System (ADS)
Oxenham, Andrew J.; Fligor, Brian J.; Mason, Christine R.; Kidd, Gerald
2003-09-01
The relationship between musical training and informational masking was studied for 24 young adult listeners with normal hearing. The listeners were divided into two groups based on musical training. In one group, the listeners had little or no musical training; the other group consisted of highly trained, currently active musicians. The hypothesis was that musicians may be less susceptible to informational masking, which is thought to reflect central, rather than peripheral, limitations on the processing of sound. Masked thresholds were measured in two conditions, similar to those used by Kidd et al. [J. Acoust. Soc. Am. 95, 3475-3480 (1994)]. In both conditions the signal consisted of a series of repeated tone bursts at 1 kHz. The masker consisted of a series of multitone bursts, gated with the signal. In one condition the frequencies of the masker were selected randomly for each burst; in the other condition the masker frequencies were selected randomly for the first burst of each interval and then remained constant throughout the interval. The difference in thresholds between the two conditions was taken as a measure of informational masking. Frequency selectivity, using the notched-noise method, was also estimated in the two groups. The results showed no difference in frequency selectivity between the two groups, but a large and significant difference in the amount of informational masking between musically trained and untrained listeners. This informational masking task, which requires no knowledge specific to musical training (such as note or interval names) and is generally not susceptible to systematic short- or medium-term training effects, may provide a basis for further studies of analytic listening abilities in different populations.
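The two masker conditions can be sketched as follows. The frequency range and tone count are illustrative assumptions, not the values used by Kidd et al., and a real implementation would also exclude a protected region around the 1 kHz signal:

```python
import random

def masker_frequencies(n_bursts, n_tones, lo_khz=0.2, hi_khz=5.0,
                       per_burst=True, seed=0):
    """Sketch of the two masker conditions: tone frequencies redrawn
    for every burst (high informational masking) vs drawn once and
    held fixed across the whole interval (low informational masking)."""
    rng = random.Random(seed)

    def draw():
        return [rng.uniform(lo_khz, hi_khz) for _ in range(n_tones)]

    if per_burst:
        return [draw() for _ in range(n_bursts)]
    fixed = draw()
    return [list(fixed) for _ in range(n_bursts)]
```

The threshold difference between the per-burst and fixed conditions is the informational masking measure described above.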
Katayev, Alexander; Zebelman, Arthur M; Sharp, Thomas M; Samantha Flynn; Bernstein, Richard K
2017-04-01
Isolated non-albumin proteinuria (NAP) is a condition in which urine total protein concentrations are elevated without elevation of urine albumin. This study assessed the prevalence of NAP in the US population tested for both urine total protein and albumin. The database of a US nationwide laboratory network was queried for test results in which random urine albumin was ordered together with urine total protein, and also in which timed 24-hour urine albumin was ordered together with urine total protein. The total prevalence of NAP in the US population tested for both urine total protein and albumin was calculated for patient groups having normal and low-normal urine albumin (random and timed) with elevated and severely increased urine total protein (random and timed). The prevalence of NAP was also calculated for patients with normal urine albumin to assess the probability of missing proteinuria if only urine albumin is measured. The prevalence of NAP in the random samples group was 10.1% (15.2% for females and 4.7% for males). Among patients with normal random albumin, 20.0% (27.3% of females and 10.7% of males) had NAP. The prevalence of NAP in the timed samples group was 24.6% (29.8% for females and 18.5% for males). Among patients with normal timed urine albumin, 36.2% (40.0% of females and 30.8% of males) had NAP. There was a strong positive association between female gender and NAP in most patient groups. Testing for only urine (micro)albumin can miss up to 40% of females and 30.8% of males with gross proteinuria. Copyright © 2016 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
Random walks exhibiting anomalous diffusion: elephants, urns and the limits of normality
NASA Astrophysics Data System (ADS)
Kearney, Michael J.; Martin, Richard J.
2018-01-01
A random walk model is presented which exhibits a transition from standard to anomalous diffusion as a parameter is varied. The model is a variant on the elephant random walk and differs in respect of the treatment of the initial state, which in the present work consists of a given number N of fixed steps. This also links the elephant random walk to other types of history dependent random walk. As well as being amenable to direct analysis, the model is shown to be asymptotically equivalent to a non-linear urn process. This provides fresh insights into the limiting form of the distribution of the walker’s position at large times. Although the distribution is intrinsically non-Gaussian in the anomalous diffusion regime, it gradually reverts to normal form when N is large under quite general conditions.
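The model described above is straightforward to simulate. The sketch below implements an elephant-type walk whose history begins with N fixed steps; the memory parameter p, the value of N, and the choice of all-positive fixed steps are illustrative assumptions, not values taken from the paper:

```python
import random

def elephant_walk(T, p=0.75, N=5, seed=0):
    """Elephant random walk preceded by N fixed steps (here all +1).
    At each step the walker recalls a uniformly random past step and
    repeats it with probability p, otherwise reverses it."""
    rng = random.Random(seed)
    steps = [1] * N                        # fixed initial history of N steps
    for _ in range(T):
        remembered = rng.choice(steps)     # recall a uniformly random past step
        steps.append(remembered if rng.random() < p else -remembered)
    positions, pos = [], 0
    for s in steps:                        # cumulative position after each step
        pos += s
        positions.append(pos)
    return positions

traj = elephant_walk(1000)
```

Varying p and N in a sketch like this is one way to observe the crossover from anomalous back toward normal diffusion that the paper analyses.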
NASA Astrophysics Data System (ADS)
Ebrahimi, R.; Zohren, S.
2018-03-01
In this paper we extend the orthogonal polynomials approach for extreme value calculations of Hermitian random matrices, developed by Nadal and Majumdar (J. Stat. Mech. P04001 arXiv:1102.0738), to normal random matrices and 2D Coulomb gases in general. Firstly, we show that this approach provides an alternative derivation of results in the literature. More precisely, we show convergence of the rescaled eigenvalue with largest modulus of a normal Gaussian ensemble to a Gumbel distribution, as well as universality for an arbitrary radially symmetric potential. Secondly, it is shown that this approach can be generalised to obtain convergence of the eigenvalue with smallest modulus and its universality for ring distributions. Most interestingly, the techniques presented here are used to compute all slowly varying finite-N corrections of the above distributions, which is important for practical applications, given the slow convergence. Another interesting aspect of this work is the fact that we can use standard techniques from Hermitian random matrices to obtain the extreme value statistics of non-Hermitian random matrices, resembling the large-N expansion used in the context of the double scaling limit of Hermitian matrix models in string theory.
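The circular-law picture behind these extreme value statistics is easy to probe numerically. The following Monte Carlo sketch (N and the seed are illustrative choices) only checks that the eigenvalue of largest modulus of a complex Ginibre matrix concentrates near the unit circle; it does not reproduce the Gumbel fluctuation law or the finite-N corrections derived in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100

# Complex Ginibre matrix with entries of variance 1/N; its eigenvalues
# fill the unit disk for large N (circular law), and the eigenvalue of
# largest modulus sits just at (or slightly outside) the unit circle.
A = (rng.standard_normal((N, N)) + 1j * rng.standard_normal((N, N))) / np.sqrt(2 * N)
r_max = np.max(np.abs(np.linalg.eigvals(A)))
```

Repeating this over many draws and rescaling r_max is the numerical route to the Gumbel limit discussed in the abstract, though convergence is slow, which is exactly why the finite-N corrections matter.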
The Quest for Evidence for Proton Therapy: Model-Based Approach and Precision Medicine
DOE Office of Scientific and Technical Information (OSTI.GOV)
Widder, Joachim, E-mail: j.widder@umcg.nl; Schaaf, Arjen van der; Lambin, Philippe
Purpose: Reducing dose to normal tissues is the advantage of protons versus photons. We aimed to describe a method for translating this reduction into a clinically relevant benefit. Methods and Materials: Dutch scientific and health care governance bodies have recently issued landmark reports regarding generation of relevant evidence for new technologies in health care, including proton therapy. An approach based on normal tissue complication probability (NTCP) models has been adopted to select patients who are most likely to experience fewer (serious) adverse events achievable by state-of-the-art proton treatment. Results: By analogy with biologically targeted therapies, the technology needs to be tested in enriched cohorts of patients exhibiting the decisive predictive marker: difference in normal tissue dosimetric signatures between proton and photon treatment plans. Expected clinical benefit is then estimated by virtue of multifactorial NTCP models. In this sense, high-tech radiation therapy falls under precision medicine. As a consequence, randomizing nonenriched populations between photons and protons is predictably inefficient and likely to produce confusing results. Conclusions: Validating NTCP models in appropriately composed cohorts treated with protons should be the primary research agenda leading to urgently needed evidence for proton therapy.
Chen, Jiang-Ming; Geng, Wei; Xie, Sheng-Xue; Liu, Fu-Bao; Zhao, Yi-Jun; Yu, Li-Quan; Geng, Xiao-Ping
2015-01-01
The aim of this article was to compare the advantages and disadvantages of single-incision laparoscopic appendectomy (SILA) and conventional three-port laparoscopic appendectomy (CTLA). A meta-analysis was performed by analyzing all randomized controlled trials (RCTs) published in English that compared SILA and CTLA for appendicitis in adults and children. These studies compared the two methods from different angles, including outcomes of interest, patient characteristics, operative time, pain visual analogue scale (VAS) scores, length of hospital stay, time to return to full activity, resumption of diet, postoperative complications and cosmetic results. The risk ratios (RR) and mean differences (MD) with 95% confidence intervals (CIs) were employed to assess the outcomes. Seven recent RCTs encompassing 1170 patients (586 SILA and 584 CTLA cases) were included in this meta-analysis. The pooled results demonstrated that conversion rate, drain insertion, reoperation, length of hospital stay, resumption of normal diet and postoperative complications were statistically comparable between the two groups. The postoperative abdominal pain score within 24 h was -0.57 in favor of the SILA technique (p = 0.05). Compared with CTLA, SILA showed a better cosmetic satisfaction score (SMD, 0.58; 95% CI, 0.32 to 0.83; p < 0.0001) and a shorter time to recover normal activity (WMD, -0.69; 95% CI, -1.11 to -0.26; p = 0.001). However, SILA has a longer operative time (WMD, 5.38; 95% CI, 2.94 to 7.83; p < 0.0001). In selected patients, SILA was confirmed to be as safe and effective as CTLA. Despite the longer operative time, SILA has higher cosmetic satisfaction and a shorter recovery time to normal activity. Due to the limitations of the available data, further research is needed.
[Differential expression genes of bone tissues surrounding implants in diabetic rats by gene chip].
Wang, Xin-xin; Ma, Yue; Li, Qing; Jiang, Bao-qi; Lan, Jing
2012-10-01
To compare mRNA expression profiles of bone tissues surrounding implants between normal rats and rats with diabetes using microarray technology. Six Wistar rats were randomly selected and divided into a normal control group and a diabetic group. The diabetic model was established by injecting streptozotocin into the peritoneal space. Titanium implants were implanted into the epiphyseal end of the rats' tibia. Bone tissues surrounding the implants were harvested and sampled after 3 months to perform comprehensive RNA gene expression profiling, including 17,983 genes for genome-wide association study. GO analysis was used to compare differential gene expression, and real-time PCR was used to confirm the results on core samples. The results indicated 1084 differentially expressed genes: in the diabetic model, 352 genes showed enhanced expression and 732 showed suppressed expression. GO analysis involved 1154 different functional types. Osteoblast-related gene expression in bone tissue samples of diabetic rats was decreased, and lipid metabolism pathway-related gene expression was increased.
NASA Astrophysics Data System (ADS)
Marrufo-Hernández, Norma Alejandra; Hernández-Guerrero, Maribel; Nápoles-Duarte, José Manuel; Palomares-Báez, Juan Pedro; Chávez-Rojo, Marco Antonio
2018-03-01
We present a computational model that describes the diffusion of a hard spheres colloidal fluid through a membrane. The membrane matrix is modeled as a series of flat parallel planes with circular pores of different sizes and random spatial distribution. This model was employed to determine how the size distribution of the colloidal filtrate depends on the size distributions of both the particles in the feed and the pores of the membrane, as well as to describe the filtration kinetics. A Brownian dynamics simulation study considering normal distributions was developed in order to determine empirical correlations between the parameters that characterize these distributions. The model can also be extended to other distributions such as log-normal. This study could, therefore, facilitate the selection of membranes for industrial or scientific filtration processes once the size distribution of the feed is known and the expected characteristics in the filtrate have been defined.
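The qualitative mechanism, size-selective passage of normally distributed particles through normally distributed pores, can be illustrated with a toy Monte Carlo model. This is not the authors' Brownian dynamics simulation; all distribution parameters below are invented for illustration:

```python
import random

def filtrate_sizes(n=10000, feed_mu=1.0, feed_sigma=0.2,
                   pore_mu=1.1, pore_sigma=0.1, seed=0):
    """Toy filtration model: each particle drawn from a normal feed
    distribution passes only if it is smaller than the (normally
    distributed) pore it encounters. Returns the filtrate sizes."""
    rng = random.Random(seed)
    passed = []
    for _ in range(n):
        d = rng.gauss(feed_mu, feed_sigma)       # particle diameter
        pore = rng.gauss(pore_mu, pore_sigma)    # pore diameter it meets
        if 0 < d < pore:
            passed.append(d)
    return passed
```

Because large particles are preferentially blocked, the filtrate distribution is shifted and truncated relative to the feed, the kind of feed-to-filtrate relationship the paper correlates empirically.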
Dukic, T; Hanson, L; Falkmer, T
2006-01-15
The study examined the effects of manual control locations on two groups of randomly selected young and old drivers in relation to visual time off road, steering wheel deviation and safety perception. Measures of visual time off road, steering wheel deviations and safety perception were performed with young and old drivers during real traffic. The results showed an effect of both driver's age and button location on the dependent variables. Older drivers spent longer visual time off road when pushing the buttons and had larger steering wheel deviations. Moreover, the greater the eccentricity between the normal line of sight and the button locations, the longer the visual time off road and the larger the steering wheel deviations. No interaction effect between button location and age was found with regard to visual time off road. Button location had an effect on perceived safety: the further away from the normal line of sight the lower the rating.
Effect of hypoxic breathing on cutaneous temperature recovery in man
NASA Astrophysics Data System (ADS)
Fahim, Mohammad
1992-03-01
Effect of hypoxia (12% O2) on skin temperature recovery was studied in healthy young men. Forty male volunteers free of any respiratory disorder were randomly selected to participate in the study. Skin temperature, peripheral blood flow, heart rate, and end-expiratory PO2 and PCO2 were measured. During hypoxic ventilation the peripheral blood flow was reduced and a corresponding drop in skin temperature occurred. This was partly due to hyperventilation associated with hypoxic ventilation. The recovery of skin temperature after cooling the hand for 2 min in cold water (10-12 °C) took 5.5±0.1 min during normal air breathing; during hypoxic ventilation, even after 9.1±0.3 min, when the skin temperature recovery curve plateaued, the skin temperature remained about 2 °C below control. The results of the present investigation indicate that hypoxia interferes with the normal functioning of the thermoregulatory mechanism in man. Hyperventilation associated with hypoxic ventilation is also partly responsible for incomplete recovery of skin temperature.
Turner, Gareth D. H.; Dudka-Ruszkowska, Wioleta; Taylor, Stephen; Meyts, Ewa Rajpert-De; Goriely, Anne; Wilkie, Andrew O. M.
2012-01-01
The dominant congenital disorders Apert syndrome, achondroplasia and multiple endocrine neoplasia (caused by specific missense mutations in the FGFR2, FGFR3 and RET proteins, respectively) represent classical examples of paternal age-effect mutation, a class that arises at particularly high frequencies in the sperm of older men. Previous analyses of DNA from randomly selected cadaveric testes showed that the levels of the corresponding FGFR2, FGFR3 and RET mutations exhibit very uneven spatial distributions, with localised hotspots surrounded by large mutation-negative areas. These studies imply that normal testes are mosaic for clusters of mutant cells: these clusters are predicted to have altered growth and signalling properties leading to their clonal expansion (selfish spermatogonial selection), but DNA extraction eliminates the possibility to study such processes at a tissue level. Using a panel of antibodies optimised for the detection of spermatocytic seminoma, a rare tumour of spermatogonial origin, we demonstrate that putative clonal events are frequent within normal testes of elderly men (mean age: 73.3 yrs) and can be classed into two broad categories. We found numerous small (less than 200 cells) cellular aggregations with distinct immunohistochemical characteristics, localised to a portion of the seminiferous tubule, which are of uncertain significance. However, more infrequently we identified additional regions where entire seminiferous tubules had a circumferentially altered immunohistochemical appearance that extended through multiple serial sections that were physically contiguous (up to 1 mm in length), and exhibited enhanced staining for antibodies both to FGFR3 and a marker of downstream signal activation, pAKT.
These findings support the concept that populations of spermatogonia in individual seminiferous tubules in the testes of older men are clonal mosaics with regard to their signalling properties and activation, thus fulfilling one of the specific predictions of selfish spermatogonial selection. PMID:22879958
Choosing the Allometric Exponent in Covariate Model Building.
Sinha, Jaydeep; Al-Sallami, Hesham S; Duffull, Stephen B
2018-04-27
Allometric scaling is often used to describe the covariate model linking total body weight (WT) to clearance (CL); however, there is no consensus on how to select its value. The aims of this study were to assess the influence of between-subject variability (BSV) and study design on (1) the power to correctly select the exponent from a priori choices, and (2) the power to obtain unbiased exponent estimates. The influence of WT distribution range (randomly sampled from the Third National Health and Nutrition Examination Survey, 1988-1994 [NHANES III] database), sample size (N = 10, 20, 50, 100, 200, 500, 1000 subjects), and BSV on CL (low 20%, normal 40%, high 60%) were assessed using stochastic simulation estimation. A priori exponent values used for the simulations were 0.67, 0.75, and 1. For normal to high BSV drugs, it is almost impossible to correctly select the exponent from an a priori set of exponents, i.e. 1 vs. 0.75, 1 vs. 0.67, or 0.75 vs. 0.67, in regular studies involving < 200 adult participants. On the other hand, such regular study designs are sufficient to appropriately estimate the exponent. However, regular studies with < 100 patients risk potential bias in estimating the exponent. Study designs with limited sample size and a narrow range of WT (e.g. < 100 adult participants) potentially risk either selection of a false value or a biased estimate of the allometric exponent; however, such bias is only relevant in cases of extrapolating the value of CL outside the studied population, e.g. analysis of a study of adults that is used to extrapolate to children.
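The core of such a stochastic simulation-estimation study can be sketched in a few lines. The toy version below uses ordinary least squares on the log scale rather than the authors' population modelling software, and the weight range, BSV value, and replicate count are all illustrative assumptions:

```python
import math
import random
import statistics

def estimate_exponent(n, bsv, theta=0.75, seed=0):
    """Simulate CL_i = (WT_i / 70)^theta * exp(eta_i), with eta ~ N(0, bsv^2),
    then recover theta as the least-squares slope of log CL on log(WT/70).
    The uniform 50-100 kg weight range is an illustrative assumption."""
    rng = random.Random(seed)
    x, y = [], []
    for _ in range(n):
        wt = rng.uniform(50.0, 100.0)
        eta = rng.gauss(0.0, bsv)             # between-subject variability
        x.append(math.log(wt / 70.0))
        y.append(theta * math.log(wt / 70.0) + eta)
    mx, my = statistics.mean(x), statistics.mean(y)
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / \
        sum((a - mx) ** 2 for a in x)

# Spread of the estimate across replicates at n = 30 and 40% BSV
ests = [estimate_exponent(30, 0.4, seed=s) for s in range(200)]
```

The wide spread of the replicate estimates at small n is the mechanism behind the paper's warning: an unbiased estimator can still yield an individual estimate far from the true exponent.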
47 CFR 1.1604 - Post-selection hearings.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 47 Telecommunication 1 2010-10-01 2010-10-01 false Post-selection hearings. 1.1604 Section 1.1604 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL PRACTICE AND PROCEDURE Random Selection Procedures for Mass Media Services General Procedures § 1.1604 Post-selection hearings. (a) Following the random...
47 CFR 1.1604 - Post-selection hearings.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 47 Telecommunication 1 2011-10-01 2011-10-01 false Post-selection hearings. 1.1604 Section 1.1604 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL PRACTICE AND PROCEDURE Random Selection Procedures for Mass Media Services General Procedures § 1.1604 Post-selection hearings. (a) Following the random...
Manjunath, Kavyashree; Uthayakumar, M.; Kanaujia, Shankar P.; Kaul, Sunil C.; Sekar, Kanagaraj; Wadhwa, Renu
2012-01-01
Background and Purpose: Withanolides are naturally occurring chemical compounds. They are secondary metabolites produced via oxidation of steroids and structurally consist of a steroid-backbone bound to a lactone or its derivatives. They are known to protect plants against herbivores and have medicinal value including anti-inflammation, anti-cancer, adaptogenic and anti-oxidant effects. Withaferin A (Wi-A) and Withanone (Wi-N) are two structurally similar withanolides isolated from Withania somnifera, also known as Ashwagandha in Indian Ayurvedic medicine. Ashwagandha alcoholic leaf extract (i-Extract), rich in Wi-N, was shown to kill cancer cells selectively. Furthermore, the two closely related purified phytochemicals, Wi-A and Wi-N, showed differential activity in normal and cancer human cells in vitro and in vivo. We had earlier identified several genes involved in cytotoxicity of i-Extract in human cancer cells by loss-of-function assays using either siRNA or randomized ribozyme library. Methodology/Principal Findings: In the present study, we have employed bioinformatics tools on four genes, i.e., mortalin, p53, p21 and Nrf2, identified by loss-of-function screenings. We examined the docking efficacy of Wi-N and Wi-A to each of the four targets and found that the two closely related phytochemicals have differential binding properties to the selected cellular targets that can potentially instigate differential molecular effects. We validated these findings by undertaking parallel experiments on specific gene responses to either Wi-N or Wi-A in human normal and cancer cells. We demonstrate that Wi-A, which binds strongly to the selected targets, acts as a strong cytotoxic agent both for normal and cancer cells. Wi-N, on the other hand, has a weak binding to the targets; it showed milder cytotoxicity towards cancer cells and was safe for normal cells.
The present molecular docking analyses and experimental evidence revealed important insights to the use of Wi-A and Wi-N for cancer treatment and development of new anti-cancer phytochemical cocktails. PMID:22973447
Application of the LSQR algorithm in non-parametric estimation of aerosol size distribution
NASA Astrophysics Data System (ADS)
He, Zhenzong; Qi, Hong; Lew, Zhongyuan; Ruan, Liming; Tan, Heping; Luo, Kun
2016-05-01
Based on the Least Squares QR decomposition (LSQR) algorithm, the aerosol size distribution (ASD) is retrieved in a non-parametric approach. The direct problem is solved by the Anomalous Diffraction Approximation (ADA) and the Lambert-Beer law. An optimal wavelength selection method is developed to improve the retrieval accuracy of the ASD. The optimal wavelength set is selected by a method that makes the measurement signals sensitive to wavelength and effectively reduces the ill-conditioning of the coefficient matrix of the linear system, enhancing the anti-interference ability of the retrieval results. Two common kinds of monomodal and bimodal ASDs, log-normal (L-N) and Gamma distributions, are estimated, respectively. Numerical tests show that the LSQR algorithm can be successfully applied to retrieve the ASD with high stability in the presence of random noise and low susceptibility to the shape of distributions. Finally, the experimentally measured ASD over Harbin, China is recovered reasonably. All the results confirm that the LSQR algorithm combined with the optimal wavelength selection method is an effective and reliable technique for non-parametric estimation of the ASD.
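A minimal sketch of the retrieval step, using SciPy's implementation of LSQR with a small damping term. The Gaussian kernel, size and wavelength grids, and noise level below are invented stand-ins for the ADA/Lambert-Beer forward model, not the paper's actual matrices:

```python
import numpy as np
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(0)

# Toy forward model: K maps the discretized size distribution to spectral
# "measurements" (a Gaussian kernel standing in for the ADA/Lambert-Beer matrix).
sizes = np.linspace(0.1, 2.0, 50)            # particle radius grid (assumed)
wavelengths = np.linspace(0.4, 1.2, 30)      # selected wavelengths (assumed)
K = np.exp(-((wavelengths[:, None] - sizes[None, :]) ** 2) / 0.1)

# Smooth monomodal "true" ASD and a noisy synthetic measurement
f_true = np.exp(-((sizes - 0.8) ** 2) / (2 * 0.2 ** 2))
g = K @ f_true + 0.01 * rng.standard_normal(len(wavelengths))

# LSQR with damping regularizes the ill-conditioned inversion K f = g
f_est = lsqr(K, g, damp=1e-2)[0]
```

The damp parameter plays the stabilizing role that the paper assigns to careful wavelength selection: both limit how strongly measurement noise is amplified by the ill-conditioned kernel.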
A Random Variable Transformation Process.
ERIC Educational Resources Information Center
Scheuermann, Larry
1989-01-01
Provides a short BASIC program, RANVAR, which generates random variates for various theoretical probability distributions. The seven variates include: uniform, exponential, normal, binomial, Poisson, Pascal, and triangular. (MVL)
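A Python analogue of such a generator might look as follows; the distribution parameters are arbitrary examples, and the discrete variates are built from first principles since the standard library covers only the continuous ones. This is a sketch in the spirit of RANVAR, not a port of the BASIC program:

```python
import math
import random

rng = random.Random(42)

def poisson(lam):
    """Poisson variate via Knuth's product-of-uniforms method."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def binomial(n, p):
    """Binomial variate as a sum of n Bernoulli trials."""
    return sum(rng.random() < p for _ in range(n))

def pascal(r, p):
    """Pascal (negative binomial) variate: trials until the r-th success."""
    trials = successes = 0
    while successes < r:
        trials += 1
        if rng.random() < p:
            successes += 1
    return trials

# One draw from each of the seven families mentioned in the abstract
samples = {
    "uniform": rng.uniform(0.0, 1.0),
    "exponential": rng.expovariate(1.0),
    "normal": rng.gauss(0.0, 1.0),
    "triangular": rng.triangular(0.0, 1.0, 0.5),
    "binomial": binomial(10, 0.3),
    "poisson": poisson(4.0),
    "pascal": pascal(3, 0.5),
}
```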
Normal-incidence quantum cascade detector coupled by nanopore structure
NASA Astrophysics Data System (ADS)
Liu, Jianqi; Wang, Fengjiao; Zhai, Shenqiang; Zhang, Jinchuan; Liu, Shuman; Liu, Junqi; Wang, Lijun; Liu, Fengqi; Wang, Zhanguo
2018-04-01
A normal-incidence quantum cascade detector coupled by a nanopore array structure (NPS) is demonstrated. The NPS is fabricated on top of an In0.53Ga0.47As contact layer by inductively coupled plasma etching using anodic aluminum oxide as a mask. Because of the nonuniform volume fraction at different areas of the device mesa, the NPS acts as subwavelength random gratings. Normal-incidence light can be scattered into random oblique directions for inter-sub-band transition absorption. With normal incidence, the responsivities of the device reach 24 mA/W at 77 K and 15.7 mA/W at 300 K, which are enhanced 2.23 and 1.96 times, respectively, compared with that of the 45°-edge device.
Moeini, Mahdi; Khaleghi, Ali; Amiri, Nasrin; Niknam, Zahra
2014-10-01
The aim of this study was to achieve a better understanding of schizoaffective disorder. Therefore, we obtained electroencephalogram (EEG) signals from patients with schizoaffective disorder and analyzed them in comparison to normal subjects. Forty patients with schizoaffective disorder and 40 normal subjects were selected randomly and their electroencephalogram signals were recorded based on 10-20 international system by 23 electrodes in open- and closed-eyes while they were sitting on a chair comfortably. After preprocessing for noise removal and artifact reduction, we took 60- second segments from each recorded signals. Then, the absolute and relative powers of these segments were evaluated in all channels and in 4 frequency bands (i.e., delta, theta, alpha and beta waves). Finally, Data were analyzed by independent t-test using SPSS software. A significant decrease in relative power in the alpha band, a significant decrease in power spectra in the alpha band and a significant increase in power spectra in the beta band were found in patients compared to normal subjects (P < 0.05). The predominant wave in the centro-parietal region was the beta wave in patients, but it was the alpha band in normal subjects (P = 0.048). Also, the predominant wave of the occipital region in patients was the delta wave, while it was the alpha wave in normal subjects (P = 0.038). Considering the findings, particularly based on the significant decrease of the alpha waves in schizoaffective patients, it can be concluded that schizoaffective disorder can be seen in schizophrenia spectrum.
Pengpid, Supa; Peltzer, Karl; Skaal, Linda
2014-06-06
In persons 15 years and above in South Africa, the prevalence of pre-diabetes and diabetes has been estimated at 9.1% and 9.6%, respectively, and the prevalence of systolic prehypertension and hypertension at 38.2% and 24.6%, respectively. Elevated blood glucose and elevated blood pressure are prototypical preventable chronic cardiovascular disease risk factors. Lifestyle interventions have been shown to control high normal blood pressure and/or high normal blood glucose. This study proposes to evaluate the efficacy of a community (church)-based lifestyle intervention programme to control high normal blood pressure and/or high normal blood glucose in church members in a randomized controlled trial in Gauteng, South Africa. The objectives are to: (1) measure the non-communicable disease profile, including hypertension and diabetes, health behaviours, weight management and psychological distress of church members; (2) measure the reduction of blood glucose and blood pressure levels after the intervention; (3) prevent the development of impaired glucose tolerance; (4) compare health behaviours, weight management and psychological distress, blood glucose and blood pressure levels between intervention and control groups, and within groups at 6, 12, 24 and 36 months during and post intervention. The study will use a group-randomized design, recruiting 300 church members from 12 churches. Churches will be randomly assigned to experimental and control conditions. Lifestyle interventions may prevent the development of high blood pressure and/or diabetes. The findings will impact public health and will enable the health ministry to formulate policy related to lifestyle interventions to control blood pressure and glucose. PACTR201105000297151.
Automatic learning-based beam angle selection for thoracic IMRT
DOE Office of Scientific and Technical Information (OSTI.GOV)
Amit, Guy; Marshall, Andrea; Purdie, Thomas G., E-mail: tom.purdie@rmp.uhn.ca
Purpose: The treatment of thoracic cancer using external beam radiation requires an optimal selection of the radiation beam directions to ensure effective coverage of the target volume and to avoid unnecessary treatment of normal healthy tissues. Intensity modulated radiation therapy (IMRT) planning is a lengthy process, which requires the planner to iterate between choosing beam angles, specifying dose-volume objectives and executing IMRT optimization. In thorax treatment planning, where there are no class solutions for beam placement, beam angle selection is performed manually, based on the planner's clinical experience. The purpose of this work is to propose and study a computationally efficient framework that utilizes machine learning to automatically select treatment beam angles. Such a framework may be helpful for reducing the overall planning workload. Methods: The authors introduce an automated beam selection method, based on learning the relationships between beam angles and anatomical features. Using a large set of clinically approved IMRT plans, a random forest regression algorithm is trained to map a multitude of anatomical features into an individual beam score. An optimization scheme is then built to select and adjust the beam angles, considering the learned interbeam dependencies. The validity and quality of the automatically selected beams were evaluated using the manually selected beams from the corresponding clinical plans as the ground truth. Results: The analysis included 149 clinically approved thoracic IMRT plans. For a randomly selected test subset of 27 plans, IMRT plans were generated using automatically selected beams and compared to the clinical plans. The comparison of the predicted and the clinical beam angles demonstrated a good average correspondence between the two (angular distance 16.8° ± 10°, correlation 0.75 ± 0.2).
The dose distributions of the semiautomatic and clinical plans were equivalent in terms of primary target volume coverage and organ at risk sparing, and were superior to plans produced with fixed sets of common beam angles. The great majority of the automatic plans (93%) were approved as clinically acceptable by three radiation therapy specialists. Conclusions: The results demonstrated the feasibility of utilizing a learning-based approach for automatic selection of beam angles in thoracic IMRT planning. The proposed method may assist in reducing the manual planning workload, while sustaining plan quality.
Correlated randomness: Some examples of exotic statistical physics
NASA Astrophysics Data System (ADS)
Stanley, H. Eugene
2005-05-01
One challenge of biology, medicine, and economics is that the systems treated by these sciences have no perfect metronome in time and no perfect spatial architecture -- crystalline or otherwise. Nonetheless, as if by magic, out of nothing but randomness one finds remarkably fine-tuned processes in time and remarkably fine-tuned structures in space. To understand this `miracle', one might consider placing aside the human tendency to see the universe as a machine. Instead, one might address the challenge of uncovering how, through randomness (albeit, as we shall see, strongly correlated randomness), one can arrive at many spatial and temporal patterns in biology, medicine, and economics. Inspired by principles developed by statistical physics over the past 50 years -- scale invariance and universality -- we review some recent applications of correlated randomness to fields that might startle Boltzmann if he were alive today.
Ringed Seal Search for Global Optimization via a Sensitive Search Model
Saadi, Younes; Yanto, Iwan Tri Riyadi; Herawan, Tutut; Balakrishnan, Vimala; Chiroma, Haruna; Risnumawan, Anhar
2016-01-01
The efficiency of a metaheuristic algorithm for global optimization is based on its ability to search for and find the global optimum. However, a good search requires a balance between exploration and exploitation of the search space. In this paper, a new metaheuristic algorithm called Ringed Seal Search (RSS) is introduced. It is inspired by the natural behavior of the seal pup. This algorithm mimics the seal pup's movement behavior and its ability to search for and choose the best lair to escape predators. The scenario starts once the seal mother gives birth to a new pup in a birthing lair that is constructed for this purpose. The seal pup strategy consists of searching for and selecting the best lair by performing a random walk to find a new lair. Reflecting the sensitivity of seals to external noise emitted by predators, the random walk of the seal pup takes two different search states, a normal state and an urgent state. In the normal state, the pup performs an intensive search between closely adjacent lairs; this movement is modeled via a Brownian walk. In the urgent state, the pup leaves the proximity area and performs an extensive search to find a new lair among sparse targets; this movement is modeled via a Levy walk. The switch between these two states is triggered by the random noise emitted by predators. The algorithm keeps switching between the normal and urgent states until the global optimum is reached. Tests and validations were performed using fifteen benchmark test functions to compare the performance of RSS with other baseline algorithms. The results show that RSS is more efficient than Genetic Algorithm, Particle Swarm Optimization and Cuckoo Search in terms of convergence rate to the global optimum. The RSS shows an improvement in terms of balance between exploration (extensive) and exploitation (intensive) of the search space.
The RSS can efficiently mimic seal pups behavior to find best lair and provide a new algorithm to be used in global optimization problems. PMID:26790131
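The two-state search described above can be sketched in a few lines. The 1-D minimiser below is a loose illustration of the idea, with greedy acceptance, a Cauchy distribution standing in for the Levy walk, and all parameter values invented; it is not the benchmarked RSS algorithm:

```python
import random

def rss_minimize(f, x0, iters=3000, urgent_prob=0.5, seed=1):
    """Two-state random search: a normal state of small Brownian steps
    (intensive local search) and an urgent state of heavy-tailed jumps
    (extensive search), with random switching standing in for predator
    noise. All parameters are illustrative."""
    rng = random.Random(seed)
    x = x0
    for _ in range(iters):
        if rng.random() < urgent_prob:
            # urgent state: Cauchy-like jump (ratio of two normals)
            step = rng.gauss(0.0, 1.0) / max(abs(rng.gauss(0.0, 1.0)), 1e-9)
        else:
            # normal state: Brownian step around the current "lair"
            step = rng.gauss(0.0, 0.1)
        candidate = x + step
        if f(candidate) < f(x):      # move only to a better lair
            x = candidate
    return x, f(x)

best_x, best_f = rss_minimize(lambda x: (x - 3.0) ** 2, x0=-10.0)
```

The heavy-tailed urgent state lets the walker escape regions far from the optimum, while the Brownian normal state refines the solution locally, the exploration/exploitation balance the abstract emphasizes.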
Le Boedec, Kevin
2016-12-01
According to international guidelines, parametric methods must be chosen for RI construction when the sample size is small and the distribution is Gaussian. However, normality tests may not be accurate at small sample size. The purpose of the study was to evaluate normality test performance to properly identify samples extracted from a Gaussian population at small sample sizes, and to assess the consequences on RI accuracy of applying parametric methods to samples that falsely identified the parent population as Gaussian. Samples of n = 60 and n = 30 values were randomly selected 100 times from simulated Gaussian, lognormal, and asymmetric populations of 10,000 values. The sensitivity and specificity of 4 normality tests were compared. Reference intervals were calculated using 6 different statistical methods from samples that falsely identified the parent population as Gaussian, and their accuracy was compared. Shapiro-Wilk and D'Agostino-Pearson tests were the best performing normality tests. However, their specificity was poor at sample size n = 30 (specificity for P < .05: .51 and .50, respectively). The best significance levels identified when n = 30 were 0.19 for the Shapiro-Wilk test and 0.18 for the D'Agostino-Pearson test. Using parametric methods on samples extracted from a lognormal population but falsely identified as Gaussian led to clinically relevant inaccuracies. At small sample size, normality tests may lead to erroneous use of parametric methods to build RIs. Using nonparametric methods (or alternatively Box-Cox transformation) on all samples regardless of their distribution, or adjusting the significance level of normality tests depending on sample size, would limit the risk of constructing inaccurate RIs. © 2016 American Society for Veterinary Clinical Pathology.
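The sampling experiment at the heart of the study is easy to reproduce in outline. The sketch below uses SciPy's Shapiro-Wilk test; the replicate count and the lognormal shape parameter are illustrative choices, not the paper's simulation settings:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n, reps, alpha = 30, 500, 0.05

# How often is a truly Gaussian sample of size 30 rejected?
# (This false-rejection rate should sit near the significance level alpha.)
gauss_reject = np.mean([stats.shapiro(rng.normal(0.0, 1.0, n))[1] < alpha
                        for _ in range(reps)])

# ...versus the power to detect a lognormal parent at the same small n.
lognorm_detect = np.mean([stats.shapiro(rng.lognormal(0.0, 0.5, n))[1] < alpha
                          for _ in range(reps)])
```

The gap between these two rates at small n illustrates the paper's point: a non-rejection at n = 30 is weak evidence that the parent population is actually Gaussian, so choosing a parametric RI method on that basis is risky.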
Coupled vibrations of rectangular buildings subjected to normally-incident random wind loads
Safak, E.; Foutch, D.A.
1987-01-01
A method for analyzing the three-directional coupled dynamic response of wind-excited buildings is presented. The method is based on a random vibration concept and is parallel to those currently used for analyzing alongwind response. Only buildings with rectangular cross-section and normally-incident wind are considered. The alongwind pressures and their correlations are represented by the well-known expressions that are available in the literature. The acrosswind forces are assumed to be mainly due to vortex shedding. The torque acting on the building is taken as the sum of the torque due to random alongwind forces plus the torque due to asymmetric acrosswind forces. The study shows the following: (1) the amplitude of acrosswind vibrations can be several times greater than that of alongwind vibrations; (2) torsional vibrations are significant if the building has large frontal width, and/or it is asymmetric, and/or its torsional natural frequency is low; (3) even a perfectly symmetric structure with normally incident wind can experience significant torsional vibrations due to the randomness of wind pressures. © 1987.
NASA Astrophysics Data System (ADS)
Yan, Wang-Ji; Ren, Wei-Xin
2016-12-01
Recent advances in signal processing and structural dynamics have spurred the adoption of transmissibility functions in academia and industry alike. Due to the inherent randomness of measurement and variability of environmental conditions, uncertainty impacts its applications. This study is focused on statistical inference for raw scalar transmissibility functions modeled as complex ratio random variables. The goal is achieved through companion papers. This paper (Part I) is dedicated to dealing with a formal mathematical proof. New theorems on multivariate circularly-symmetric complex normal ratio distribution are proved on the basis of principle of probabilistic transformation of continuous random vectors. The closed-form distributional formulas for multivariate ratios of correlated circularly-symmetric complex normal random variables are analytically derived. Afterwards, several properties are deduced as corollaries and lemmas to the new theorems. Monte Carlo simulation (MCS) is utilized to verify the accuracy of some representative cases. This work lays the mathematical groundwork to find probabilistic models for raw scalar transmissibility functions, which are to be expounded in detail in Part II of this study.
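The distributional results proved in the paper can be probed numerically. A hedged Monte Carlo sketch (not the paper's derivation): draw ratios of correlated circularly-symmetric complex normal variables and check that the sample medians of the real and imaginary parts sit near the complex correlation coefficient, since the ratio's heavy tails make moment-based summaries unreliable. All names and parameter values here are illustrative:

```python
import random
import statistics

def ccn_pair(rho, rng):
    """One draw of two correlated circularly-symmetric complex normals
    (unit variance, complex correlation rho, |rho| < 1)."""
    def ccn():
        # real/imag parts i.i.d. N(0, 1/2) -> unit-variance complex normal
        return complex(rng.gauss(0, 0.5 ** 0.5), rng.gauss(0, 0.5 ** 0.5))
    x = ccn()
    y = rho * x + (1 - abs(rho) ** 2) ** 0.5 * ccn()
    return x, y

def ratio_samples(rho=0.6, n=20000, seed=7):
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        x, y = ccn_pair(rho, rng)
        out.append(y / x)  # complex ratio random variable
    return out

def median_ratio(rho=0.6, n=20000, seed=7):
    """Robust location estimate of the complex ratio y/x."""
    rs = ratio_samples(rho, n, seed)
    return (statistics.median(z.real for z in rs),
            statistics.median(z.imag for z in rs))
```

Because y/x = rho + (noise independent of x)/x, and the noise term is isotropic in the complex plane, the ratio's distribution is centred at rho, which is what the medians recover.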
Analytical and Experimental Random Vibration of Nonlinear Aeroelastic Structures.
1987-01-28
first-order differential equations. In view of the system complexity, an attempt is made to close the infinite hierarchy by using a Gaussian scheme. This sc...year of this project-. When the first normal mode is externally excited by a band-limited random excitation, the system mean square response is found...governed mainly by the internal detuning parameter and the system damping ratios. The results are completely different when the second normal mode is
Corneal Epithelium Thickness Profile in 614 Normal Chinese Children Aged 7-15 Years Old.
Ma, Yingyan; He, Xiangui; Zhu, Xiaofeng; Lu, Lina; Zhu, Jianfeng; Zou, Haidong
2016-03-23
The purpose of the study is to describe the values and distribution of corneal epithelium thickness (CET) in normal Chinese school-aged children, and to explore factors associated with CET. CET maps were measured by Fourier-domain optical coherence tomography (FD-OCT) in normal Chinese children aged 7 to 15 years old from two randomly selected schools in Shanghai, China. Children with normal intraocular pressure were further examined for cycloplegic autorefraction, corneal curvature radius (CCR), and axial length. Central (2-mm diameter area), para-central (2- to 5-mm diameter area), and peripheral (5- to 6-mm diameter area) CET in the superior, superotemporal, temporal, inferotemporal, inferior, inferonasal, nasal, and superonasal cornea, as well as the minimum, maximum, range, and standard deviation of CET within the 5-mm diameter area, were recorded. The CET was thinner in the superior than in the inferior cornea, and thinner in the temporal than in the nasal cornea. The maximum CET was located in the inferior zone, and the minimum CET was in the superior zone. A thicker central CET was associated with male gender (p = 0.009) and older age (p = 0.037) but not with CCR (p = 0.061), axial length (p = 0.253), or refraction (p = 0.351) in the multiple regression analyses. CCR, age, and gender were correlated with para-central and peripheral CET.
Mechanical analysis of the roundhouse kick according to height and distance in taekwondo.
Estevan, I; Falco, C
2013-12-01
Competition regulation in taekwondo has experienced several changes during the last few years, for example, kicks to the head score more points than kicks to the chest. In addition, some external factors such as the height of target and execution distance seem to affect the kick performance. The aim of this study was to analyse selected biomechanical parameters (impact force, reaction time, and execution time) according to the height and execution distance in two different male groups (experts (n = 12) and novices (n = 21)). Athletes kicked twice from every execution distance (short, normal and long) and towards two different heights of target (chest and head) in a random order. Novices kicked to the head with a longer reaction time than to the chest (p < 0.05) but experts were able to kick with similar performance for both heights. From short and normal distances experts kicked with similar performance; whereas from the normal distance novices had longer reaction and execution time than from the short distance (p < 0.05). In conclusion, in counterattacking situations, experts should perform the roundhouse kick to the head instead of to the chest, because it produces better scores with similar performance; whereas novice athletes should avoid kicking to the head because they are not able to kick with similar performance. Moreover, it is recommended that during counterattacks higher-level taekwondo athletes should intend to kick from normal distances.
McCaw, J; Ellis, M; Brewer, M S; McKeith, F K
1997-06-01
Pigs (n = 18) were selected to represent three different muscle conditions (six pigs per condition): normal; dark, firm, and dry; and halothane carrier. A 45-cm-long longissimus section was excised from each side of the carcass at 30 min postmortem and cut into six sections. Right side sections were assigned to the intermediate temperature incubation (23 degrees C), and left side sections were designated high temperature incubation (40 degrees C). Sections were randomly assigned to incubation times (0, 1, 2, 4, 6, or 8 h). The 0 h section from each incubation treatment was designated as a control and was placed directly into a 4 degrees C cooler. Temperature and pH were evaluated on the control section and for each loin section at the end of the incubation time. Color (L*, a*, and b* values), percentage of purge loss, water-holding capacity, and drip loss were determined. Incubation treatment did not alter pH decline in dark, firm, and dry muscle; however, high temperature increased pH decline in normal and halothane carrier samples. Results suggest that there is a strong interaction between pH and temperature that affects pork quality attributes. High incubation temperature had a negative effect on most quality variables; however, muscle condition (normal or halothane carrier) had limited effects on muscle quality.
Physical activity and its related motivational attributes in adolescents with different BMI.
Hwang, J; Kim, Y H
2013-03-01
A number of obesity studies have been focused on identifying the relationships between socioeconomic status and physical activity involvement. In behavioral medicine, the limited data are available on obese people's physical activity and its related psychological predictors based on psychological theories. To identify the differences in physical activity and its related motivational attributes among normal weight, overweight, and obese adolescents and to find the effect of body mass index (BMI) and the Self-Determination Theory (SDT) constructs in predicting physical activity. One thousand seventy-one students ranging from seventh to ninth grades were randomly selected from three junior high schools in Seoul (359 normal weight students, 468 overweight students, and 244 obese students). A Korean version of Behavioral Regulation in Exercise Questionnaire-2 and Leisure Time Exercise Questionnaire were applied to measure the participants' motivational attributes and physical activity. Overweight and obese adolescents showed higher scores on amotivation and externally motivated regulations for physical activity than their normal weight counterparts. Internal regulation was more significant for physical activity in normal weight adolescent. However, there was no difference in physical activity among the three groups. Additionally, the findings identified that BMI and the SDT constructs were significant to explain physical activity. This study offers fundamental knowledge in gaining a clearer understanding of the types of motivation most likely to contribute to the initiation and promotion of physical activity in overweight and obese adolescents.
2012-01-01
Background Single embryo transfer (SET) remains underutilized as a strategy to reduce multiple gestation risk in IVF, and its overall lower pregnancy rate underscores the need for improved techniques to select one embryo for fresh transfer. This study explored use of comprehensive chromosomal screening by array CGH (aCGH) to provide this advantage and improve pregnancy rate from SET. Methods First-time IVF patients with a good prognosis (age <35, no prior miscarriage) and normal karyotype seeking elective SET were prospectively randomized into two groups: In Group A, embryos were selected on the basis of morphology and comprehensive chromosomal screening via aCGH (from d5 trophectoderm biopsy) while Group B embryos were assessed by morphology only. All patients had a single fresh blastocyst transferred on d6. Laboratory parameters and clinical pregnancy rates were compared between the two groups. Results For patients in Group A (n = 55), 425 blastocysts were biopsied and analyzed via aCGH (7.7 blastocysts/patient). Aneuploidy was detected in 191/425 (44.9%) of blastocysts in this group. For patients in Group B (n = 48), 389 blastocysts were microscopically examined (8.1 blastocysts/patient). Clinical pregnancy rate was significantly higher in the morphology + aCGH group compared to the morphology-only group (70.9 and 45.8%, respectively; p = 0.017); ongoing pregnancy rate for Groups A and B were 69.1 vs. 41.7%, respectively (p = 0.009). There were no twin pregnancies. Conclusion Although aCGH followed by frozen embryo transfer has been used to screen at risk embryos (e.g., known parental chromosomal translocation or history of recurrent pregnancy loss), this is the first description of aCGH fully integrated with a clinical IVF program to select single blastocysts for fresh SET in good prognosis patients. The observed aneuploidy rate (44.9%) among biopsied blastocysts highlights the inherent imprecision of SET when conventional morphology is used alone. 
Embryos randomized to the aCGH group implanted with greater efficiency, resulted in clinical pregnancy more often, and yielded a lower miscarriage rate than those selected without aCGH. Additional studies are needed to verify our pilot data and confirm a role for on-site, rapid aCGH for IVF patients contemplating fresh SET. PMID:22551456
Pandis, Nikolaos; Polychronopoulou, Argy; Eliades, Theodore
2011-12-01
Randomization is a key step in reducing selection bias during the treatment allocation phase in randomized clinical trials. The process of randomization follows specific steps, which include generation of the randomization list, allocation concealment, and implementation of randomization. The phenomenon in the dental and orthodontic literature of characterizing treatment allocation as random is frequent; however, often the randomization procedures followed are not appropriate. Randomization methods assign, at random, treatment to the trial arms without foreknowledge of allocation by either the participants or the investigators thus reducing selection bias. Randomization entails generation of random allocation, allocation concealment, and the actual methodology of implementing treatment allocation randomly and unpredictably. Most popular randomization methods include some form of restricted and/or stratified randomization. This article introduces the reasons, which make randomization an integral part of solid clinical trial methodology, and presents the main randomization schemes applicable to clinical trials in orthodontics.
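One of the restricted schemes mentioned above, permuted-block randomization, is easy to sketch: allocation lists are built from shuffled blocks so that arm sizes stay balanced throughout enrollment. A minimal illustration (function and arm names are illustrative, not from the article; in practice the list is generated per stratum and concealed from investigators):

```python
import random

def permuted_blocks(n, arms=("A", "B"), block_size=4, seed=2024):
    """Permuted-block randomization list: each block contains equal
    numbers of every arm in random order, so allocation never drifts
    more than half a block out of balance."""
    assert block_size % len(arms) == 0, "block must divide evenly across arms"
    rng = random.Random(seed)
    seq = []
    while len(seq) < n:
        block = list(arms) * (block_size // len(arms))
        rng.shuffle(block)  # random order within the block
        seq.extend(block)
    return seq[:n]
```

Small block sizes keep arms balanced but make upcoming assignments more predictable, which is why block size itself is often varied and concealed.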
Significance of Random Bladder Biopsies in Non-Muscle Invasive Bladder Cancer
Kumano, Masafumi; Miyake, Hideaki; Nakano, Yuzo; Fujisawa, Masato
2013-01-01
Background/Aims To evaluate retrospectively the clinical outcome of random bladder biopsies in patients with non-muscle invasive bladder cancer (NMIBC) undergoing transurethral resection (TUR). Patients and Method This study included 234 consecutive patients with NMIBC who underwent random biopsies from normal-appearing urothelium of the bladder, including the anterior wall, posterior wall, right wall, left wall, dome, trigone and/or prostatic urethra, during TUR. Result Thirty-seven patients (15.8%) were diagnosed by random biopsies as having urothelial cancer. Among several factors available prior to TUR, preoperative urinary cytology appeared to be independently related to the detection of urothelial cancer in random biopsies on multivariate analysis. Urinary cytology prior to TUR gave 50.0% sensitivity, 91.7% specificity, 56.8% positive predictive value and 89.3% negative predictive value for predicting the findings of the random biopsies. Conclusion Biopsies of normal-appearing urothelium resulted in the additional detection of urothelial cancer in a definite proportion of NMIBC patients, and it remains difficult to find a reliable alternative to random biopsies. Collectively, these findings suggest that it would be beneficial to perform random biopsies as part of the routine management of NMIBC. PMID:24917759
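The cytology figures quoted above (sensitivity, specificity, positive and negative predictive value) are the standard 2x2-table metrics; a small helper shows how they are computed. The counts in the usage check are illustrative only, not reconstructed from the study:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV, and NPV from a 2x2 table of
    true/false positives and negatives."""
    return {
        "sensitivity": tp / (tp + fn),  # positives correctly flagged
        "specificity": tn / (tn + fp),  # negatives correctly cleared
        "ppv": tp / (tp + fp),          # P(disease | test positive)
        "npv": tn / (tn + fn),          # P(no disease | test negative)
    }
```

Note that PPV and NPV, unlike sensitivity and specificity, depend on disease prevalence in the cohort, which is why they differ so sharply here.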
NASA Astrophysics Data System (ADS)
Tian, Xiange; Xi Gu, James; Rehab, Ibrahim; Abdalla, Gaballa M.; Gu, Fengshou; Ball, A. D.
2018-02-01
Envelope analysis is a widely used method for rolling element bearing fault detection. To obtain high detection accuracy, it is critical to determine an optimal frequency narrowband for the envelope demodulation. However, many of the schemes which are used for the narrowband selection, such as the Kurtogram, can produce poor detection results because they are sensitive to random noise and aperiodic impulses which normally occur in practical applications. To achieve the purposes of denoising and frequency band optimisation, this paper proposes a novel modulation signal bispectrum (MSB) based robust detector for bearing fault detection. Because of its inherent noise suppression capability, the MSB allows effective suppression of both stationary random noise and discrete aperiodic noise. The high magnitude features that result from the use of the MSB also enhance the modulation effects of a bearing fault and can be used to provide optimal frequency bands for fault detection. The Kurtogram is generally accepted as a powerful means of selecting the most appropriate frequency band for envelope analysis, and as such it has been used as the benchmark comparator for performance evaluation in this paper. Both simulated and experimental data analysis results show that the proposed method produces more accurate and robust detection results than Kurtogram based approaches for common bearing faults under a range of representative scenarios.
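Envelope analysis itself can be sketched compactly: form the analytic signal of the vibration record, take its magnitude, and look for the fault frequency in the envelope's spectrum. A pure-Python toy on a synthetic amplitude-modulated signal (naive DFT; the paper's modulation signal bispectrum and Kurtogram are not reproduced here, and the frequencies are illustrative):

```python
import cmath
import math

def dft(x, inverse=False):
    """Naive DFT; O(N^2) but fine for short demo signals."""
    N = len(x)
    s = 1 if inverse else -1
    out = [sum(x[n] * cmath.exp(s * 2j * math.pi * k * n / N) for n in range(N))
           for k in range(N)]
    return [v / N for v in out] if inverse else out

def envelope(signal):
    """Envelope via the analytic signal: zero the negative frequencies,
    double the positive ones, inverse-transform, take magnitudes."""
    N = len(signal)
    X = dft([complex(v) for v in signal])
    for k in range(N):
        if 0 < k < N // 2:
            X[k] *= 2
        elif k > N // 2:
            X[k] = 0
    return [abs(v) for v in dft(X, inverse=True)]

def peak_envelope_bin(fault_hz=5, carrier_hz=40, N=256):
    """Simulated fault: a carrier resonance modulated at the fault rate.
    Returns the dominant bin of the envelope spectrum, which matches the
    fault frequency when the record spans exactly one second."""
    sig = [(1 + math.cos(2 * math.pi * fault_hz * n / N)) *
           math.cos(2 * math.pi * carrier_hz * n / N) for n in range(N)]
    env = envelope(sig)
    mean = sum(env) / N
    spec = dft([e - mean for e in env])
    return max(range(1, N // 2), key=lambda k: abs(spec[k]))
```

The sensitivity of this simple scheme to noise and aperiodic impulses in the demodulation band is precisely what motivates the MSB-based detector proposed in the paper.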
Ceiling effect of online user interests for the movies
NASA Astrophysics Data System (ADS)
Ni, Jing; Zhang, Yi-Lu; Hu, Zhao-Long; Song, Wen-Jun; Hou, Lei; Guo, Qiang; Liu, Jian-Guo
2014-05-01
Online users' collective interests play an important role for analyzing online social networks and personalized recommendations. In this paper, we introduce the information entropy to measure the diversity of user interests. We empirically analyze the information entropy of the objects selected by users with the same degree in both the MovieLens and Netflix datasets. The results show that as the user degree increases, the entropy rises from its lowest value to its highest value and then begins to fall, which indicates that the interests of small-degree and large-degree users are more centralized, while the interests of normal users are more diverse. Furthermore, a null model is proposed to compare with the empirical results. In the null model, we keep the number of users and objects as well as the user degrees unchanged, but the selection behaviors are totally random in both datasets. Results show that the diversity of the majority of users in the real datasets is higher than in the random case, with the exception of a fraction of small-degree users. That may be because new users at first select mainly popular objects, while with increasing experience they quickly become users of broad interests. Therefore, small-degree users' interests are much easier to predict than other users', which may shed some light on the cold-start problem.
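The diversity measure used above is the Shannon entropy of a user's selection frequencies; a minimal sketch (the item names are illustrative, not MovieLens or Netflix data):

```python
import math
from collections import Counter

def selection_entropy(items):
    """Shannon entropy (bits) of a user's selected objects: 0 when all
    selections fall on one object, log2(k) when spread evenly over k."""
    counts = Counter(items)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total)
                for c in counts.values())
```

Computing this per degree class and averaging reproduces the kind of entropy-versus-degree curve the paper analyzes.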
An Analysis of Depression, Self-Harm, and Suicidal Ideation Content on Tumblr
Cavazos-Rehg, Patricia A.; Krauss, Melissa J.; Sowles, Shaina J.; Connolly, Sarah; Rosas, Carlos; Bharadwaj, Meghana; Grucza, Richard; Bierut, Laura J.
2016-01-01
Background Social networking about depression can be indicative of self-reported depression and/or can normalize risk behaviors such as self-harm and suicidal ideation. Aim To gain a better understanding of the depression, self-harm, and suicidal content that is being shared on Tumblr. Method From April 16 to May 10, 2014, 17 popular depression-related Tumblr accounts were monitored for new posts and engagement with other Tumblr users. A total of 3,360 posts were randomly selected from all historical posts from these accounts and coded based on themes ascertained by the research team. Results The 17 Tumblr accounts posted a median number of 185 posts (range = 0–2,954). Content was engaged with (i.e., re-blogged or liked) a median number of 1,677,362 times (range = 0–122,186,504). Of the 3,360 randomly selected posts, 2,739 (82%) were related to depression, suicide, or self-harm. Common themes were self-loathing (412, 15%), loneliness/feeling unloved (405, 15%), self-harm (407, 15%), and suicide (372, 14%). Conclusion This study takes an important first step at better understanding the displayed depression-related references on Tumblr. The findings signal a need for suicide prevention efforts to intervene on Tumblr and use this platform in a strategic way, given the depression and suicidal content that was readily observed on Tumblr. PMID:27445014
The infinitesimal model: Definition, derivation, and implications.
Barton, N H; Etheridge, A M; Véber, A
2017-12-01
Our focus here is on the infinitesimal model. In this model, one or several quantitative traits are described as the sum of a genetic and a non-genetic component, the first being distributed within families as a normal random variable centred at the average of the parental genetic components, and with a variance independent of the parental traits. Thus, the variance that segregates within families is not perturbed by selection, and can be predicted from the variance components. This does not necessarily imply that the trait distribution across the whole population should be Gaussian, and indeed selection or population structure may have a substantial effect on the overall trait distribution. One of our main aims is to identify some general conditions on the allelic effects for the infinitesimal model to be accurate. We first review the long history of the infinitesimal model in quantitative genetics. Then we formulate the model at the phenotypic level in terms of individual trait values and relationships between individuals, but including different evolutionary processes: genetic drift, recombination, selection, mutation, population structure, …. We give a range of examples of its application to evolutionary questions related to stabilising selection, assortative mating, effective population size and response to selection, habitat preference and speciation. We provide a mathematical justification of the model as the limit as the number M of underlying loci tends to infinity of a model with Mendelian inheritance, mutation and environmental noise, when the genetic component of the trait is purely additive. We also show how the model generalises to include epistatic effects. We prove in particular that, within each family, the genetic components of the individual trait values in the current generation are indeed normally distributed with a variance independent of ancestral traits, up to an error of order 1/M. Simulations suggest that in some cases the convergence may be as fast as 1/M. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.
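The model's core property, a within-family segregation variance independent of the parental values, is easy to check in a hedged simulation of the phenotypic-level description (parameter values and function names are illustrative):

```python
import random
import statistics

def offspring_genetic(parent1, parent2, v_seg, rng):
    """Infinitesimal model: an offspring's genetic value is normal
    around the parental midpoint with fixed segregation variance."""
    return rng.gauss((parent1 + parent2) / 2.0, v_seg ** 0.5)

def within_family_variance(p1, p2, v_seg=0.5, n=20000, seed=3):
    """Empirical variance of a large sibship from parents p1, p2;
    should recover v_seg regardless of the parental values."""
    rng = random.Random(seed)
    kids = [offspring_genetic(p1, p2, v_seg, rng) for _ in range(n)]
    return statistics.pvariance(kids)
```

Running this for families with very different (even extreme, selected) parental values returns the same variance, which is exactly why selection does not perturb the segregating variance in this model.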
Hua, Alexandra; Major, Nili
2016-02-01
Selective mutism is a disorder in which an individual fails to speak in certain social situations though speaks normally in other settings. Most commonly, this disorder initially manifests when children fail to speak in school. Selective mutism results in significant social and academic impairment in those affected by it. This review will summarize the current understanding of selective mutism with regard to diagnosis, epidemiology, cause, prognosis, and treatment. Studies over the past 20 years have consistently demonstrated a strong relationship between selective mutism and anxiety, most notably social phobia. These findings have led to the recent reclassification of selective mutism as an anxiety disorder in the Diagnostic and Statistical Manual of Mental Disorders, 5th Edition. In addition to anxiety, several other factors have been implicated in the development of selective mutism, including communication delays and immigration/bilingualism, adding to the complexity of the disorder. In the past few years, several randomized studies have supported the efficacy of psychosocial interventions based on a graduated exposure to situations requiring verbal communication. Less data are available regarding the use of pharmacologic treatment, though there are some studies that suggest a potential benefit. Selective mutism is a disorder that typically emerges in early childhood and is currently conceptualized as an anxiety disorder. The development of selective mutism appears to result from the interplay of a variety of genetic, temperamental, environmental, and developmental factors. Although little has been published about selective mutism in the general pediatric literature, pediatric clinicians are in a position to play an important role in the early diagnosis and treatment of this debilitating condition.
NASA Astrophysics Data System (ADS)
Gharekhan, Anita H.; Biswal, Nrusingh C.; Gupta, Sharad; Pradhan, Asima; Sureshkumar, M. B.; Panigrahi, Prasanta K.
2008-02-01
The statistical and characteristic features of the polarized fluorescence spectra from cancer, normal and benign human breast tissues are studied through wavelet transform and singular value decomposition. The discrete wavelets enabled one to isolate high and low frequency spectral fluctuations, which revealed substantial randomization in the cancerous tissues, not present in the normal cases. In particular, the fluctuations fitted well with a Gaussian distribution for the cancerous tissues in the perpendicular component. One finds non-Gaussian behavior for normal and benign tissues' spectral variations. The study of the difference of intensities in parallel and perpendicular channels, which is free from the diffusive component, revealed weak fluorescence activity in the 630nm domain, for the cancerous tissues. This may be ascribable to porphyrin emission. The role of both scatterers and fluorophores in the observed minor intensity peak for the cancer case is experimentally confirmed through tissue-phantom experiments. Continuous Morlet wavelet also highlighted this domain for the cancerous tissue fluorescence spectra. Correlation in the spectral fluctuation is further studied in different tissue types through singular value decomposition. Apart from identifying different domains of spectral activity for diseased and non-diseased tissues, we found random matrix support for the spectral fluctuations. The small eigenvalues of the perpendicular polarized fluorescence spectra of cancerous tissues fitted remarkably well with random matrix prediction for Gaussian random variables, confirming our observations about spectral fluctuations in the wavelet domain.
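The wavelet separation used above, low-frequency spectral trends versus high-frequency fluctuations, can be illustrated with a single Haar analysis step (a sketch with made-up numbers; the study's actual wavelet family and tissue spectra are not reproduced):

```python
def haar_step(xs):
    """One Haar analysis step: pairwise averages capture the
    low-frequency trend, pairwise half-differences capture the
    high-frequency fluctuations."""
    approx = [(xs[i] + xs[i + 1]) / 2 for i in range(0, len(xs) - 1, 2)]
    detail = [(xs[i] - xs[i + 1]) / 2 for i in range(0, len(xs) - 1, 2)]
    return approx, detail
```

Iterating the step on the approximation coefficients gives the multi-level decomposition; the statistics of the detail coefficients (Gaussian versus non-Gaussian) are what discriminate tissue types in the study.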
NASA Astrophysics Data System (ADS)
Shea, Thomas; Krimer, Daniel; Costa, Fidel; Hammer, Julia
2014-05-01
One of the achievements in recent years in volcanology is the determination of time-scales of magmatic processes via diffusion in minerals and its addition to the petrologists' and volcanologists' toolbox. The method typically requires one-dimensional modeling of randomly cut crystals from two-dimensional thin sections. Here we address the question of whether using 1D (traverse) or 2D (surface) datasets exploited from randomly cut 3D crystals introduces a bias or dispersion in the estimated time-scales, and how this error can be reduced or eliminated. Computational simulations were performed using a concentration-dependent, finite-difference solution to the diffusion equation in 3D. The starting numerical models involved simple geometries (spheres, parallelepipeds), Mg/Fe zoning patterns (either normal or reverse), and isotropic diffusion coefficients. Subsequent models progressively incorporated more complexity: 3D olivines possessing representative polyhedral morphologies, diffusion anisotropy along the different crystallographic axes, and more intricate core-rim zoning patterns. Sections and profiles used to compare 1, 2 and 3D diffusion models were selected to be (1) parallel to the crystal axes, (2) randomly oriented but passing through the olivine center, or (3) randomly oriented and sectioned. Results show that time-scales estimated on randomly cut traverses (1D) or surfaces (2D) can be widely distributed around the actual durations of 3D diffusion (~0.2 to 10 times the true diffusion time). The magnitude of over- or underestimation of duration is a complex function of the geometry of the crystal, the zoning pattern, the orientation of the cuts with respect to the crystallographic axes, and the degree of diffusion anisotropy. Errors on estimated time-scales retrieved from such models may thus be significant. Drastic reductions in the uncertainty of calculated diffusion times can be obtained by following some simple guidelines during the course of data collection (i.e. selection of crystals and concentration profiles, acquisition of crystallographic orientation data), thus allowing derivation of robust time-scales.
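A hedged sketch of the 1-D building block behind such models, an explicit finite-difference step for the diffusion equation with a constant coefficient (the paper's models are concentration-dependent and 3-D, and the no-flux boundary handling here is illustrative):

```python
def diffuse_1d(profile, d_coef, dx, dt, steps):
    """Explicit (FTCS) finite-difference solution of the 1-D diffusion
    equation c_t = D * c_xx with zero-gradient ends; stable only when
    r = D*dt/dx**2 <= 0.5."""
    r = d_coef * dt / dx ** 2
    assert r <= 0.5, "violates the explicit stability criterion"
    c = list(profile)
    for _ in range(steps):
        new = c[:]
        for i in range(1, len(c) - 1):
            # discrete Laplacian: weighted average of the neighbours
            new[i] = c[i] + r * (c[i - 1] - 2 * c[i] + c[i + 1])
        new[0], new[-1] = new[1], new[-2]  # zero-gradient boundaries
        c = new
    return c
```

Fitting such a modeled profile to a measured zoning traverse yields the diffusion time; the paper's point is that the traverse's random orientation through the 3-D crystal can bias that fit by up to an order of magnitude.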
Benzaquen, M; Galvão, K N; Coleman, A E; Santos, J E P; Goff, J P; Risco, C A
2015-05-01
The objectives of this study were to determine the effect of mineral/energy supplementation of dairy cows with dystocia on blood mineral concentrations, energetic and inflammatory profiles, and milk yield. Multiparous Holstein cows with dystocia were randomly assigned into two groups: (1) treated with a mineral/energy supplement (DME, n = 18) and (2) not treated (DNT, n = 22). A group of cows with normal parturition were randomly selected and were left untreated (NNT, n = 25). Cows in DME received an oral drench of 110 g of calcium and 400 g of propionate as calcium propionate plus 110 g potassium chloride and 150 g of magnesium sulfate administered within 6 h of calving and again 3 days post-partum. Compared to cows with a normal parturition, dystocic cows had decreased plasma calcium concentrations, increased plasma haptoglobin, decreased milk yield at 1 day post-partum, and tended to have increased rectal temperatures from 1 to 12 days post-partum. Compared with cows in DNT, those in DME had decreased plasma calcium concentrations and increased plasma magnesium concentrations 2 and 3 days post-partum, and a tendency for an increase in rectal temperature from 1 to 12 days post-partum. Dystocia is detrimental to calcium homeostasis post-partum, but mineral/energy supplementation as undertaken in this study is not recommended for use in cows with dystocia. Copyright © 2015 Elsevier Ltd. All rights reserved.
Amini, Nazanin; Rezaei, Korosh; Yazdannik, Ahmadreza
2016-01-01
Background: Formation of biofilm and bacterial colonization within the endotracheal tube (ETT) are significant sources of airway contamination and play a role in the development of ventilator-associated pneumonia (VAP). This study was conducted to examine the effect of nebulized eucalyptus (NE) on bacterial colonization of ETT biofilm. Materials and Methods: We performed a randomized clinical trial in three intensive care units (ICUs) of an educational hospital. Seventy intubated patients were selected and randomly divided into intervention (n = 35) and control (n = 35) groups. The intervention group received 4 ml (5%) of eucalyptus in 6 ml normal saline every 8 h. The placebo group received only 10 ml of normal saline in the same way. On extubation, the interior of the tube was immediately sampled using a sterile swab for standard microbiological analysis. Chi-square and Fisher's exact tests were used for statistical analysis in SPSS. P values less than 0.05 were considered statistically significant. Results: In both samples, Klebsiella pneumoniae and Acinetobacter baumannii were the most frequently isolated bacteria. In the control group, heavy colonization was greater than in the intervention group (P = 0.002). The frequency of isolation of K. pneumoniae in the intervention group was lower than in the control group (P < 0.001). However, there was no difference between the two groups in other isolated bacteria. Conclusions: NE can reduce microbial contamination of the endotracheal tube biofilm in ventilated patients. Moreover, K. pneumoniae was the most sensitive to NE. PMID:27095990
NASA Astrophysics Data System (ADS)
Li, Hai; Kumavor, Patrick; Salman Alqasemi, Umar; Zhu, Quing
2015-01-01
A composite set of ovarian tissue features extracted from photoacoustic spectral data, the beam envelope, and co-registered ultrasound and photoacoustic images is used to characterize malignant and normal ovaries using logistic and support vector machine (SVM) classifiers. Normalized power spectra were calculated from the Fourier transform of the photoacoustic beamformed data, from which the spectral slopes and 0-MHz intercepts were extracted. Five features were extracted from the beam envelope and another 10 features were extracted from the photoacoustic images. These 17 features were ranked by their p-values from t-tests, and a filter-type feature selection method was used to determine the optimal number of features for final classification. A total of 169 samples from 19 ex vivo ovaries were randomly distributed into training and testing groups. Both classifiers achieved a minimum value of the mean misclassification error when the seven features with the lowest p-values were selected. Using these seven features, the logistic and SVM classifiers obtained sensitivities of 96.39±3.35% and 97.82±2.26%, and specificities of 98.92±1.39% and 100%, respectively, for the training group. For the testing group, the logistic and SVM classifiers achieved sensitivities of 92.71±3.55% and 92.64±3.27%, and specificities of 87.52±8.78% and 98.49±2.05%, respectively.
Hu, Xiaoxin; Jiang, Luan; Li, Qiang; Gu, Yajia
2017-02-07
The objective of this study was to evaluate the association between the quantitative assessment of background parenchymal enhancement rate (BPER) and breast cancer. From 14,033 consecutive patients who underwent breast MRI in our center, we randomly selected 101 normal controls. Then, we selected 101 women with benign breast lesions and 101 women with breast cancer who were matched for age and menstruation status. We evaluated BPER at early (2 minutes), medium (4 minutes) and late (6 minutes) enhanced time phases of breast MRI for quantitative assessment. Odds ratios (ORs) for risk of breast cancer were calculated using the receiver operating characteristic (ROC) curve. The BPER increased in a time-dependent manner after enhancement in both premenopausal and postmenopausal women. Premenopausal women had higher BPER than postmenopausal women at early, medium and late enhanced phases. In the normal population, the OR for probability of breast cancer for premenopausal women with high BPER was 4.1 (95% CI: 1.7-9.7) and 4.6 (95% CI: 1.7-12.0) for postmenopausal women. The OR of breast cancer morbidity in premenopausal women with high BPER was 2.6 (95% CI: 1.1-6.4) and 2.8 (95% CI: 1.2-6.1) for postmenopausal women. The BPER was found to be a predictive factor of breast cancer morbidity. Different time phases should be used to assess BPER in premenopausal and postmenopausal women.
Pullicino, Patrick; Thompson, John L P; Barton, Bruce; Levin, Bruce; Graham, Susan; Freudenberger, Ronald S
2006-02-01
Warfarin is widely prescribed for patients with heart failure without level 1 evidence, and an adequately powered randomized study is needed. The Warfarin versus Aspirin in Reduced Cardiac Ejection Fraction study is a National Institutes of Health-funded, randomized, double-blind clinical trial with a target enrollment of 2860 patients. It is designed to test with 90% power the 2-sided primary null hypothesis of no difference between warfarin (International Normalized Ratio 2.5-3) and aspirin (325 mg) in 3- to 5-year event-free survival for the composite endpoint of death, or stroke (ischemic or hemorrhagic) among patients with cardiac ejection fraction < or =35% who do not have atrial fibrillation or mechanical prosthetic heart valves. Secondary analyses will compare warfarin and aspirin for reduction of all-cause mortality, ischemic stroke, and myocardial infarction (MI), balanced against the risk of intracerebral hemorrhage, among women and African Americans; and compare warfarin and aspirin for prevention of stroke alone. Randomization is stratified by site, New York Heart Association (NYHA) heart class (I vs II-IV), and stroke or transient ischemic attack (TIA) within 1 year before randomization versus no stroke or TIA in that period. NYHA class I patients will not exceed 20%, and the study has a target of 20% (or more) patients with stroke or TIA within 12 months. Randomized patients receive active warfarin plus placebo or active aspirin plus placebo, double-blind. The results should help guide the selection of optimum antithrombotic therapy for patients with left ventricular dysfunction.
A Bayesian Nonparametric Meta-Analysis Model
ERIC Educational Resources Information Center
Karabatsos, George; Talbott, Elizabeth; Walker, Stephen G.
2015-01-01
In a meta-analysis, it is important to specify a model that adequately describes the effect-size distribution of the underlying population of studies. The conventional normal fixed-effect and normal random-effects models assume a normal effect-size population distribution, conditionally on parameters and covariates. For estimating the mean overall…
NASA Astrophysics Data System (ADS)
Eckert, Sandra
2016-08-01
The SPOT-5 Take 5 campaign provided SPOT time series data of an unprecedented spatial and temporal resolution. We analysed 29 scenes acquired between May and September 2015 of a semi-arid region in the foothills of Mount Kenya, with two aims: first, to distinguish rainfed from irrigated cropland and cropland from natural vegetation covers, which show similar reflectance patterns; and second, to identify individual crop types. We tested several input data sets in different combinations: the spectral bands and the normalized difference vegetation index (NDVI) time series, principal components of NDVI time series, and selected NDVI time series statistics. For the classification we used random forests (RF). In the test differentiating rainfed cropland, irrigated cropland, and natural vegetation covers, the best classification accuracies were achieved using spectral bands. For the differentiation of crop types, we analysed the phenology of selected crop types based on NDVI time series. First results are promising.
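A minimal sketch of the classification step, feeding NDVI time-series statistics to a random forest, might look like the following. The three cover classes, their NDVI profiles, and the feature set are invented for illustration and are not the study's actual data; scikit-learn's RandomForestClassifier stands in for the RF implementation used.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_scenes, n_pixels = 29, 300  # 29 acquisitions, May-September

# Synthetic NDVI time series for three hypothetical cover classes:
# 0 = irrigated cropland (high, stable NDVI), 1 = rainfed cropland
# (single mid-season peak), 2 = natural vegetation (low NDVI).
t = np.linspace(0.0, 1.0, n_scenes)
labels = rng.integers(0, 3, size=n_pixels)
ndvi = 0.05 * rng.normal(size=(n_pixels, n_scenes))
ndvi[labels == 0] += 0.7
ndvi[labels == 1] += 0.3 + 0.4 * np.exp(-((t - 0.5) ** 2) / 0.02)
ndvi[labels == 2] += 0.25

# Simple NDVI time-series statistics as classification features.
features = np.column_stack([ndvi.mean(axis=1), ndvi.std(axis=1),
                            ndvi.max(axis=1), ndvi.min(axis=1),
                            ndvi.argmax(axis=1)])

rf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(rf, features, labels, cv=5)
print("mean cross-validated accuracy:", round(scores.mean(), 2))
```

The same loop extends naturally to the other input sets the study compared (raw spectral bands, NDVI principal components) by swapping the `features` matrix.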
Zhang, Jingchao; Wang, Guoliang; Zhang, Fangxiang; Zhao, Qian
2018-03-01
The protective effect of dexmedetomidine on cognitive dysfunction and decreased attention network function of patients with ischemic cerebrovascular disease after stenting was investigated. Fifty-eight patients with ischemic cerebrovascular disease undergoing stenting in Guizhou Provincial People's Hospital were selected and randomly divided into a control group (n=29) and a dexmedetomidine group (n=29). The dexmedetomidine group was treated with dexmedetomidine before induced anesthesia, while the control group was given the same dose of normal saline; normal volunteers of the same age were selected as the normal group (n=29). At 3 days after operation, the levels of serum S100B and nerve growth factor (NGF) in each group were detected using the enzyme-linked immunosorbent assay, and the level of brain-derived neurotrophic factor (BDNF) was detected via western blotting. The Montreal cognitive assessment (MoCA) and attention network test (ANT) were performed. Moreover, the cognitive function and attention network function, and the effects of dexmedetomidine on them, were evaluated. The concentrations of serum S100B and NGF in the dexmedetomidine group were lower than those in the control group (P<0.01). The results of western blotting showed that the levels of serum BDNF in the control group and dexmedetomidine group were significantly lower than that in the normal group (P<0.01), and the level in the dexmedetomidine group was higher than that in the control group (P<0.01). Besides, both MoCA and ANT results revealed that the visual space and executive function scores, attention scores, delayed memory scores, targeted network efficiency and executive control network efficiency in the dexmedetomidine group were obviously higher than those in the control group (P<0.01). 
The cognitive function and attention network function of patients with ischemic cerebrovascular disease show a certain degree of impairment, and preoperative administration of dexmedetomidine can effectively improve patients' postoperative cognitive dysfunction and attention network function.
Karabatsos, George
2017-02-01
Most applied statistics involves regression analysis of data. In practice, it is important to specify a regression model that has minimal assumptions which are not violated by the data, to ensure that statistical inferences from the model are informative and not misleading. This paper presents a stand-alone and menu-driven software package, Bayesian Regression: Nonparametric and Parametric Models, constructed with the MATLAB Compiler. Currently, this package gives the user a choice from 83 Bayesian models for data analysis. They include 47 Bayesian nonparametric (BNP) infinite-mixture regression models; 5 BNP infinite-mixture models for density estimation; and 31 normal random effects models (HLMs), including normal linear models. Each of the 78 regression models handles either a continuous, binary, or ordinal dependent variable, and can handle multi-level (grouped) data. All 83 Bayesian models can handle the analysis of weighted observations (e.g., for meta-analysis), and the analysis of left-censored, right-censored, and/or interval-censored data. Each BNP infinite-mixture model has a mixture distribution assigned one of various BNP prior distributions, including priors defined by either the Dirichlet process, Pitman-Yor process (including the normalized stable process), beta (two-parameter) process, normalized inverse-Gaussian process, geometric weights prior, dependent Dirichlet process, or the dependent infinite-probits prior. The software user can mouse-click to select a Bayesian model and perform data analysis via Markov chain Monte Carlo (MCMC) sampling. After the sampling completes, the software automatically opens text output that reports MCMC-based estimates of the model's posterior distribution and of the model's predictive fit to the data. Additional text and/or graphical output can be generated by mouse-clicking other menu options. 
This includes output of MCMC convergence analyses, and estimates of the model's posterior predictive distribution, for selected functionals and values of covariates. The software is illustrated through the BNP regression analysis of real data.
An Overview of Randomization and Minimization Programs for Randomized Clinical Trials
Saghaei, Mahmoud
2011-01-01
Randomization is an essential component of sound clinical trials, which prevents selection biases and helps in blinding the allocations. Randomization is a process by which subsequent subjects are enrolled into trial groups only by chance, which essentially eliminates selection biases. A possible serious consequence of randomization is severe imbalance among the treatment groups with respect to some prognostic factors, which can invalidate the trial results or necessitate complex and usually unreliable secondary analyses to eradicate the source of the imbalances. Minimization, on the other hand, tends to allocate in such a way as to minimize the differences among groups with respect to prognostic factors. Pure minimization is therefore completely deterministic, that is, one can predict the allocation of the next subject by knowing the factor levels of previously enrolled subjects and the characteristics of the next subject. To eliminate the predictability of minimization, it is necessary to include some elements of randomness in the minimization algorithms. In this article, brief descriptions of randomization and minimization are presented, followed by an introduction to selected randomization and minimization programs. PMID:22606659
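A minimization scheme with an added element of randomness, as described above, can be sketched roughly as follows. This is a generic Pocock-Simon-style illustration, not the algorithm of any specific program surveyed in the article; the two groups, the two prognostic factors, and the 0.8 "follow the minimizing choice" probability are all assumptions.

```python
import random

_rng = random.Random(0)  # fixed seed so the sketch is reproducible

def assign_by_minimization(counts, subject, p_best=0.8, rng=_rng):
    """Illustrative minimization. counts[g][f][level] is the number of
    subjects already in group g with factor f at that level; subject
    maps factor -> level for the incoming subject."""
    groups = list(counts)
    # Imbalance score per group: total count, over all factors, of
    # enrolled subjects sharing this subject's factor levels.
    scores = {g: sum(counts[g][f].get(level, 0) for f, level in subject.items())
              for g in groups}
    best = min(groups, key=lambda g: scores[g])
    # Element of randomness: follow the minimizing choice only with
    # probability p_best, otherwise allocate completely at random.
    choice = best if rng.random() < p_best else rng.choice(groups)
    for f, level in subject.items():
        counts[choice][f][level] = counts[choice][f].get(level, 0) + 1
    return choice

counts = {"A": {"sex": {}, "age": {}}, "B": {"sex": {}, "age": {}}}
for i in range(40):
    assign_by_minimization(counts, {"sex": i % 2, "age": (i // 2) % 2})
print(counts)
```

With `p_best = 1.0` the scheme degenerates to pure, fully predictable minimization; with `p_best = 0.5` (two groups) it degenerates to simple randomization.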
Cooperation evolution in random multiplicative environments
NASA Astrophysics Data System (ADS)
Yaari, G.; Solomon, S.
2010-02-01
Most real life systems have a random component: the multitude of endogenous and exogenous factors influencing them results in stochastic fluctuations of the parameters determining their dynamics. These empirical systems are in many cases subject to noise of a multiplicative nature. The special properties of multiplicative noise, as opposed to additive noise, have long been recognized. Even though formally the difference between free additive and multiplicative random walks consists in just a move from normal to log-normal distributions, in practice the implications are much more far reaching. While in an additive context the emergence and survival of cooperation requires special conditions (especially some level of reward, punishment, reciprocity), we find that in the multiplicative random context the emergence of cooperation is much more natural and effective. We study the various implications of this observation and its applications in various contexts.
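The move from normal to log-normal outcomes can be illustrated with a small simulation; the step size (0.1), walk length (100), and number of walks are arbitrary choices for illustration only.

```python
import numpy as np

rng = np.random.default_rng(42)
# 10,000 independent walks of 100 Gaussian steps each (std 0.1, so the
# growth factors 1 + step stay positive in practice).
steps = rng.normal(0.0, 0.1, size=(10000, 100))

additive = steps.sum(axis=1)                    # sum of increments
multiplicative = np.prod(1.0 + steps, axis=1)   # product of growth factors

# The log of the multiplicative walk is a sum of independent terms, so it
# is approximately normal, making the walk itself approximately log-normal:
# additive outcomes are symmetric, multiplicative ones strongly right-skewed.
print("additive mean/median:", round(additive.mean(), 3),
      round(float(np.median(additive)), 3))
print("multiplicative mean/median:", round(multiplicative.mean(), 3),
      round(float(np.median(multiplicative)), 3))
```

The additive mean and median coincide near zero, while the multiplicative mean sits well above its median, the hallmark of the log-normal regime the abstract contrasts with the additive case.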
2010-01-01
Background: Cluster analysis, and in particular hierarchical clustering, is widely used to extract information from gene expression data. The aim is to discover new classes, or sub-classes, of either individuals or genes. Performing a cluster analysis commonly involves decisions on how to handle missing values, standardize the data and select genes. In addition, pre-processing, involving various types of filtration and normalization procedures, can have an effect on the ability to discover biologically relevant classes. Here we consider cluster analysis in a broad sense and perform a comprehensive evaluation that covers several aspects of cluster analyses, including normalization. Results: We evaluated 2780 cluster analysis methods on seven publicly available 2-channel microarray data sets with common reference designs. Each cluster analysis method differed in data normalization (5 normalizations were considered), missing value imputation (2), standardization of data (2), gene selection (19) or clustering method (11). The cluster analyses are evaluated using known classes, such as cancer types, and the adjusted Rand index. The performances of the different analyses vary between the data sets and it is difficult to give general recommendations. However, normalization, gene selection and clustering method are all variables that have a significant impact on the performance. In particular, gene selection is important and it is generally necessary to include a relatively large number of genes in order to get good performance. Selecting genes with high standard deviation or using principal component analysis are shown to be the preferred gene selection methods. Hierarchical clustering using Ward's method, k-means clustering and Mclust are the clustering methods considered in this paper that achieve the highest adjusted Rand index. 
Normalization can have a significant positive impact on the ability to cluster individuals, and there are indications that background correction is preferable, in particular if the gene selection is successful. However, this is an area that needs to be studied further in order to draw any general conclusions. Conclusions: The choice of cluster analysis, and in particular gene selection, has a large impact on the ability to cluster individuals correctly based on expression profiles. Normalization has a positive effect, but the relative performance of different normalizations is an area that needs more research. In summary, although clustering, gene selection and normalization are considered standard methods in bioinformatics, our comprehensive analysis shows that selecting the right methods, and the right combinations of methods, is far from trivial and that much is still unexplored in what is considered to be the most basic analysis of genomic data. PMID:20937082
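The evaluation loop described above (select genes, cluster, score against known classes with the adjusted Rand index) can be sketched as follows. The synthetic "expression" matrix, the high-SD selection of 20 genes, and the two clustering methods shown are illustrative assumptions, not the paper's data or full 2780-method grid.

```python
import numpy as np
from sklearn.cluster import AgglomerativeClustering, KMeans
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(0)
# Synthetic expression matrix: 60 samples from 3 known classes, 50 genes,
# 10 of which actually separate the classes.
classes = np.repeat([0, 1, 2], 20)
X = rng.normal(size=(60, 50))
X[:, :10] += classes[:, None] * 3.0

# Gene selection by high standard deviation (one of the preferred methods
# in the study), then clustering, then evaluation against known classes.
top = np.argsort(X.std(axis=0))[-20:]
aris = []
for model in (AgglomerativeClustering(n_clusters=3, linkage="ward"),
              KMeans(n_clusters=3, n_init=10, random_state=0)):
    labels = model.fit_predict(X[:, top])
    aris.append(adjusted_rand_score(classes, labels))
    print(type(model).__name__, round(aris[-1], 2))
```

The adjusted Rand index corrects raw pair-agreement for chance, so a random labelling scores near 0 and a perfect recovery of the known classes scores 1.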
Hui, Amy Leung; Back, Lisa; Ludwig, Sora; Gardiner, Phillip; Sevenhuysen, Gustaaf; Dean, Heather J; Sellers, Elisabeth; McGavock, Jonathan; Morris, Margaret; Jiang, Depeng; Shen, Garry X
2014-09-24
The objectives of this study were to assess the efficacy of lifestyle intervention on gestational weight gain in pregnant women with normal and above normal body mass index (BMI) in a randomized controlled trial. A total of 116 pregnant women (<20 weeks of pregnancy) without diabetes were enrolled and 113 pregnant women completed the program. Participants were randomized into intervention and control groups. Women in the intervention group received weekly trainer-led group exercise sessions, instructed home exercise 3-5 times/week during 20-36 weeks of gestation, and dietary counseling twice during pregnancy. Participants in the control group did not receive the intervention. All participants completed a physical activity questionnaire and a 3-day food record at enrolment and 2 months after enrolment. The participants in the intervention group with normal pre-pregnancy BMI (≤24.9 kg/m2, n = 30) had lower gestational weight gain (GWG), lower offspring birth weight, and less excessive gestational weight gain (EGWG) compared to the control group (n = 27, p < 0.05). Those weight-related changes were not detected between the intervention (n = 27) and control group (n = 29) in the above-normal pre-pregnancy BMI participants. Reduced total calorie, total fat, saturated fat and cholesterol intakes were detected in the intervention group in women with normal or above-normal pre-pregnancy BMI compared to the control group (p < 0.05 or 0.01). Increased physical activity and reduced carbohydrate intake were detected in women with normal (p < 0.05), but not above-normal, pre-pregnancy BMI at 2 months after the onset of the intervention compared to the control group. The results of the present study demonstrated that the lifestyle intervention program decreased EGWG, GWG, and offspring birth weight in pregnant women with normal, but not above-normal, pre-pregnancy BMI, which was associated with increased physical activity and decreased carbohydrate intake. NCT00486629.
Levy, Jason A; Bachur, Richard G; Monuteaux, Michael C; Waltzman, Mark
2013-03-01
We seek to determine whether an initial intravenous bolus of 5% dextrose in normal saline solution compared with normal saline solution will lead to a lower proportion of hospitalized patients and a greater reduction in serum ketone levels in children with gastroenteritis and dehydration. We enrolled children aged 6 months to 6 years in a double-blind, randomized controlled trial of patients presenting to a pediatric emergency department. Subjects were randomized to receive a 20 mL/kg infusion of either 5% dextrose in normal saline solution or normal saline solution. Serum ketone levels were measured before and at 1- and 2-hour intervals after the initial study fluid bolus administration. Primary outcome was the proportion of children hospitalized. Secondary outcome was change in serum ketone levels over time. One hundred eighty-eight children were enrolled. The proportion of children hospitalized did not differ between groups (35% in the 5% dextrose in normal saline solution group versus 44% in the normal saline solution group; risk difference 9%; 95% confidence interval [CI] -5% to 22%). Compared with children who received normal saline solution, those who received 5% dextrose in normal saline solution had a greater reduction in mean serum ketone levels at both 1 hour (mean Δ 1.2 versus 0.1 mmol/L; mean difference 1.1 mmol/L; 95% CI 0.4 to 1.9 mmol/L) and 2 hours (mean Δ 1.9 versus 0.3 mmol/L; mean difference 1.6 mmol/L; 95% CI 0.9 to 2.3 mmol/L). Administration of a dextrose-containing bolus compared with normal saline did not lead to a lower rate of hospitalization for children with gastroenteritis and dehydration. There was, however, a greater reduction in serum ketone levels in patients who received 5% dextrose in normal saline solution. Copyright © 2012. Published by Mosby, Inc.
The RANDOM computer program: A linear congruential random number generator
NASA Technical Reports Server (NTRS)
Miles, R. F., Jr.
1986-01-01
The RANDOM Computer Program is a FORTRAN program for generating random number sequences and testing linear congruential random number generators (LCGs). The linear congruential form of random number generator is discussed, and the selection of parameters of an LCG for a microcomputer is described. This document describes the following: (1) The RANDOM Computer Program; (2) RANDOM.MOD, the computer code needed to implement an LCG in a FORTRAN program; and (3) The RANCYCLE and the ARITH Computer Programs that provide computational assistance in the selection of parameters for an LCG. The RANDOM, RANCYCLE, and ARITH Computer Programs are written in Microsoft FORTRAN for the IBM PC microcomputer and its compatibles. With only minor modifications, the RANDOM Computer Program and its LCG can be run on most microcomputers or mainframe computers.
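A linear congruential generator is compact enough to sketch directly. In Python rather than FORTRAN, and with the classic Park-Miller "minimal standard" parameters assumed for illustration (not necessarily those selected by the RANDOM program):

```python
# Minimal linear congruential generator, x_{n+1} = (a * x_n + c) mod m,
# yielding values scaled into [0, 1). Parameters are the Park-Miller
# choice: a = 16807, m = 2**31 - 1, c = 0.
def lcg(seed, a=16807, c=0, m=2**31 - 1):
    x = seed
    while True:
        x = (a * x + c) % m
        yield x / m

gen = lcg(seed=1)
sample = [next(gen) for _ in range(5)]
print(sample)
```

Poor parameter choices shorten the period or introduce lattice structure in the output, which is why the document devotes separate helper programs (RANCYCLE, ARITH) to parameter selection.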
NASA Astrophysics Data System (ADS)
Ando, K.; Fujita, S.; Ito, J.; Yuasa, S.; Suzuki, Y.; Nakatani, Y.; Miyazaki, T.; Yoda, H.
2014-05-01
Most parts of present computer systems are made of volatile devices, and the power supplied to them to avoid information loss causes huge energy losses. We can eliminate this meaningless energy loss by utilizing the non-volatile function of advanced spin-transfer torque magnetoresistive random-access memory (STT-MRAM) technology and create a new type of computer, i.e., normally off computers. Critical tasks to achieve normally off computers are implementations of STT-MRAM technologies in the main memory and low-level cache memories. STT-MRAM technology for applications to the main memory has been successfully developed by using perpendicular STT-MRAMs, and faster STT-MRAM technologies for applications to the cache memory are now being developed. The present status of STT-MRAMs and challenges that remain for normally off computers are discussed.
NASA Technical Reports Server (NTRS)
Peters, C. (Principal Investigator)
1980-01-01
A general theorem is given which establishes the existence and uniqueness of a consistent solution of the likelihood equations given a sequence of independent random vectors whose distributions are not identical but have the same parameter set. In addition, it is shown that the consistent solution is an MLE and that it is asymptotically normal and efficient. Two applications are discussed: one in which independent observations of a normal random vector have missing components, and the other in which the parameters in a mixture from an exponential family are estimated using independent homogeneous sample blocks of different sizes.
Hansen, Adam G.; Beauchamp, David A.
2014-01-01
Most predators eat only a subset of possible prey. However, studies evaluating diet selection rarely measure prey availability in a manner that accounts for temporal–spatial overlap with predators, the sensory mechanisms employed to detect prey, and constraints on prey capture. We evaluated the diet selection of cutthroat trout (Oncorhynchus clarkii) feeding on a diverse planktivore assemblage in Lake Washington to test the hypothesis that the diet selection of piscivores would reflect random (opportunistic) as opposed to non-random (targeted) feeding, after accounting for predator–prey overlap, visual detection and capture constraints. Diets of cutthroat trout were sampled in autumn 2005, when the abundance of transparent, age-0 longfin smelt (Spirinchus thaleichthys) was low, and 2006, when the abundance of smelt was nearly seven times higher. Diet selection was evaluated separately using depth-integrated and depth-specific (accounted for predator–prey overlap) prey abundance. The abundance of different prey was then adjusted for differences in detectability and vulnerability to predation to see whether these factors could explain diet selection. In 2005, cutthroat trout fed non-randomly by selecting against the smaller, transparent age-0 longfin smelt, but for the larger age-1 longfin smelt. After adjusting prey abundance for visual detection and capture, cutthroat trout fed randomly. In 2006, depth-integrated and depth-specific abundance explained the diets of cutthroat trout well, indicating random feeding. Feeding became non-random after adjusting for visual detection and capture. Cutthroat trout selected strongly for age-0 longfin smelt, but against similar sized threespine stickleback (Gasterosteus aculeatus) and larger age-1 longfin smelt in 2006. Overlap with juvenile sockeye salmon (O. nerka) was minimal in both years, and sockeye salmon were rare in the diets of cutthroat trout. The direction of the shift between random and non-random selection depended on the presence of a weak versus a strong year class of age-0 longfin smelt. These fish were easy to catch, but hard to see. When their density was low, poor detection could explain their rarity in the diet. When their density was high, poor detection was compensated by higher encounter rates with cutthroat trout, sufficient to elicit a targeted feeding response. The nature of the feeding selectivity of a predator can be highly dependent on fluctuations in the abundance and suitability of key prey.
Group Counseling With Emotionally Disturbed School Children in Taiwan.
ERIC Educational Resources Information Center
Chiu, Peter
The application of group counseling to emotionally disturbed school children in Chinese culture was examined. Two junior high schools located in Tao-Yuan Province were randomly selected with two eighth-grade classes randomly selected from each school. Ten emotionally disturbed students were chosen from each class and randomly assigned to two…
Sample Selection in Randomized Experiments: A New Method Using Propensity Score Stratified Sampling
ERIC Educational Resources Information Center
Tipton, Elizabeth; Hedges, Larry; Vaden-Kiernan, Michael; Borman, Geoffrey; Sullivan, Kate; Caverly, Sarah
2014-01-01
Randomized experiments are often seen as the "gold standard" for causal research. Despite the fact that experiments use random assignment to treatment conditions, units are seldom selected into the experiment using probability sampling. Very little research on experimental design has focused on how to make generalizations to well-defined…
On Measuring and Reducing Selection Bias with a Quasi-Doubly Randomized Preference Trial
ERIC Educational Resources Information Center
Joyce, Ted; Remler, Dahlia K.; Jaeger, David A.; Altindag, Onur; O'Connell, Stephen D.; Crockett, Sean
2017-01-01
Randomized experiments provide unbiased estimates of treatment effects, but are costly and time consuming. We demonstrate how a randomized experiment can be leveraged to measure selection bias by conducting a subsequent observational study that is identical in every way except that subjects choose their treatment--a quasi-doubly randomized…
General Framework for Effect Sizes in Cluster Randomized Experiments
ERIC Educational Resources Information Center
VanHoudnos, Nathan
2016-01-01
Cluster randomized experiments are ubiquitous in modern education research. Although a variety of modeling approaches are used to analyze these data, perhaps the most common methodology is a normal mixed effects model where some effects, such as the treatment effect, are regarded as fixed, and others, such as the effect of group random assignment…
NASA Technical Reports Server (NTRS)
Leybold, H. A.
1971-01-01
Random numbers were generated with the aid of a digital computer and transformed such that the probability density function of a discrete random load history composed of these random numbers had one of the following non-Gaussian distributions: Poisson, binomial, log-normal, Weibull, and exponential. The resulting random load histories were analyzed to determine their peak statistics and were compared with cumulative peak maneuver-load distributions for fighter and transport aircraft in flight.
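One standard way to transform uniform computer-generated random numbers into non-Gaussian distributions such as those listed above is inverse-transform sampling, in which a uniform variate is pushed through the inverse CDF of the target distribution. The sketch below shows the exponential and Weibull cases; the parameter values are arbitrary illustrations, and the report's actual transformation method is not specified in the abstract.

```python
import math
import random

rng = random.Random(0)  # seeded for reproducibility

# Inverse-transform sampling: for u uniform on [0, 1), F_inv(u) has CDF F.
def exponential(rate):
    # Exponential: F_inv(u) = -ln(1 - u) / rate
    return -math.log(1.0 - rng.random()) / rate

def weibull(shape, scale):
    # Weibull: F_inv(u) = scale * (-ln(1 - u)) ** (1 / shape)
    return scale * (-math.log(1.0 - rng.random())) ** (1.0 / shape)

loads = [exponential(rate=0.5) for _ in range(20000)]
print("sample mean:", round(sum(loads) / len(loads), 2))  # theory: 1/rate = 2
```

A discrete load history built from such draws can then be analyzed for its peak statistics, as in the study.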
Maini, Anuj Paul; Wangoo, Anuj; Singh, Sukhman; Mehar, Damanpreet Kaur
2017-01-01
Introduction: The success of a restoration is dependent on accurate shade matching of teeth, leading to studies evaluating the factors affecting the perception of shades. Colour vision anomalies including colour blindness have been found to exist in the population, and colour vision has been thought to be a potential factor affecting colour perception ability. Aim: The present study was done to evaluate the prevalence of colour vision anomalies and its effect on matching of shades of teeth. Materials and Methods: A total of 147 dental professionals were randomly selected for the study and were first tested for visual acuity using the Snellen's Eye Chart, so as to carry on the study with only those operators who had a vision of 6/6. Then, the Ishihara's colour charts were used to test the operators for colour vision handicap. In the last stage of the study, a test for accuracy of shade selection was done using the Vitapan Classical shade guide. The shade guide tabs were covered to avoid bias. Percentages were used to calculate the prevalence of colour vision handicap and its effect on matching of shades of teeth as compared to normal vision, which was evaluated using the Chi square test. Results: Nineteen out of one hundred operators had colour vision anomalies and only two operators presented with colour blindness. Colour vision anomaly was more prevalent than colour blindness, and it was also found that it was more prevalent in males than females. The difference in the accuracy of shade matching between the operators with normal vision and colour vision defect, and between operators with normal vision and colour blindness, was statistically not significant. Conclusion: Colour blindness and colour vision handicap are rare conditions, with the latter being more common in the population. 
According to our study, it was concluded that no statistically significant difference existed between operators with normal vision and those with a colour vision anomaly, or between operators with normal vision and those with colour blindness, in the matching of shades of teeth. PMID:28274040
Scharre, Douglas W; Chang, Shu-Ing; Murden, Robert A; Lamb, James; Beversdorf, David Q; Kataki, Maria; Nagaraja, Haikady N; Bornstein, Robert A
2010-01-01
To develop a self-administered cognitive assessment instrument to facilitate the screening of mild cognitive impairment (MCI) and early dementia and determine its association with gold standard clinical assessments including neuropsychologic evaluation. Adults aged above 59 years with sufficient vision and English literacy were recruited from geriatric and memory disorder clinics, educational talks, independent living facilities, senior centers, and memory screens. After Self-administered Gerocognitive Examination (SAGE) screening, subjects were randomly selected to complete a clinical evaluation, neurologic examination, neuropsychologic battery, functional assessment, and mini-mental state examination (MMSE). Subjects were identified as dementia, MCI, or normal based on standard clinical criteria and neuropsychologic testing. Two hundred fifty-four participants took the SAGE screen and 63 subjects completed the extensive evaluation (21 normal, 21 MCI, and 21 dementia subjects). Spearman rank correlation between SAGE and neuropsychologic battery was 0.84 (0.76 for MMSE). SAGE receiver operating characteristics on the basis of clinical diagnosis showed 95% specificity (90% for MMSE) and 79% sensitivity (71% for MMSE) in detecting those with cognitive impairment from normal subjects. This study suggests that SAGE is a reliable instrument for detecting cognitive impairment and compares favorably with the MMSE. The self-administered feature may promote cognitive testing by busy clinicians prompting earlier diagnosis and treatment.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Siegel, A.J.; Silverman, L.M.; Holman, B.L.
1985-10-01
Elevated cardiac enzyme values in asymptomatic marathon runners after competition can arise from skeletal muscle through exertional rhabdomyolysis, silent injury to the myocardium, or a combined tissue source. Peak post-race levels of the MB isoenzyme of creatine kinase are similar to values in patients with acute myocardial infarction. Previously reported normal results of infarct-avid myocardial scintigraphy with technetium 99m pyrophosphate in runners after competition suggest a non-cardiac source but cannot exclude silent injury to the myocardium. Therefore, thallium 201 myocardial perfusion imaging was performed in runners immediately after competition together with determination of sequential cardiac enzyme levels. Among 15 runners tested, the average peak in serum MB creatine kinase 24 hours after the race was 128 IU/liter with a cumulative MB creatine kinase release of 117 IU/liter; these values are comparable to those in patients with acute transmural myocardial infarction. Thallium 201 myocardial scintigraphic results were normal in five runners randomly selected from those who volunteered for determination of sequential blood levels. It is concluded that elevations of serum MB creatine kinase in marathon runners arise from a skeletal muscle source and that thallium 201 myocardial scintigraphy is useful to assess runners for myocardial injury when clinical questions arise.
Wang, Hui; Wang, Yiping; Zhou, Zhenwei; Wang, Shuo; Yin, Hongyin; Xie, Keqin
2015-06-01
To determine the normal reference value of pyrrole adducts in urine in young people in a university in Shandong, China, and to provide a reliable basis for the clinical diagnosis of n-hexane poisoning. A total of 240 college students were randomly selected. After excluding 32 ineligible students, 208 subjects were included in this study, consisting of 104 males and 104 females, with a mean age of 21±3 years (range: 18 to 24 years). Morning urine was collected from each subject. The content of pyrrole adducts was determined by chromatometry. The content of pyrrole adducts in both males and females followed a positively skewed distribution. The median level of pyrrole adducts in male subjects was 0.88 nmol/ml, and the reference value was 0.14-3.92 nmol/ml. The median level of pyrrole adducts in female subjects was 0.93 nmol/ml, and the reference value was 0.09-3.27 nmol/ml. Student's t test identified no statistical difference in pyrrole adduct level between male and female subjects (t=0.15, P>0.05). The median level of pyrrole adducts in normal young people is 0.91 nmol/ml, and the reference value is 0.11-3.95 nmol/ml.
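Reference intervals of the kind reported above are commonly taken as the central 95% (the 2.5th to 97.5th percentiles) of the observed distribution, which is well suited to positively skewed data. A sketch follows; the log-normal sample and its parameters are purely illustrative stand-ins, not fitted to the study's measurements.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical positively skewed urinary pyrrole-adduct levels (nmol/ml),
# modelled as log-normal for illustration, with n = 208 as in the study.
levels = rng.lognormal(mean=-0.1, sigma=0.6, size=208)

median = np.median(levels)
low, high = np.percentile(levels, [2.5, 97.5])
print(f"median {median:.2f} nmol/ml, reference interval {low:.2f}-{high:.2f}")
```

Because the distribution is right-skewed, percentile-based limits are asymmetric about the median, unlike the mean ± 2 SD limits appropriate for normally distributed analytes.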
Adams, James B; Baral, Matthew; Geis, Elizabeth; Mitchell, Jessica; Ingram, Julie; Hensley, Andrea; Zappia, Irene; Newmark, Sanford; Gehn, Eva; Rubin, Robert A; Mitchell, Ken; Bradstreet, Jeff; El-Dahr, Jane
2009-01-01
Background This study investigated the effect of oral dimercaptosuccinic acid (DMSA) therapy for children with autism spectrum disorders aged 3-8 years. Methods Phase 1 involved 65 children who received one round of DMSA (3 days). Participants who had high urinary excretion of toxic metals were selected to continue on to Phase 2. In Phase 2, 49 participants were randomly assigned in a double-blind design to receive an additional 6 rounds of either DMSA or placebo. Results DMSA greatly increased the excretion of lead, substantially increased excretion of tin and bismuth, and somewhat increased the excretion of thallium, mercury, antimony, and tungsten. There was some increase in urinary excretion of essential minerals, especially potassium and chromium. The Phase 1 single round of DMSA led to a dramatic normalization of RBC glutathione in almost all cases, and greatly improved abnormal platelet counts, suggesting a significant decrease in inflammation. Conclusion Overall, DMSA therapy seems to be reasonably safe, effective in removing several toxic metals (especially lead), dramatically effective in normalizing RBC glutathione, and effective in normalizing platelet counts. Only 1 round (3 days) was sufficient to improve glutathione and platelets. Additional rounds increased excretion of toxic metals. PMID:19852789
Sheridan, Rebecca; van Rooijen, Maaike; Giles, Oscar; Mushtaq, Faisal; Steenbergen, Bert; Mon-Williams, Mark; Waterman, Amanda
2017-10-01
Mathematics is often conducted with a writing implement. But is there a relationship between numerical processing and sensorimotor 'pen' control? We asked participants to move a stylus so it crossed an unmarked line at a location specified by a symbolic number (1-9), where number colour indicated whether the line ran left-right ('normal') or vice versa ('reversed'). The task could be simplified through the use of a 'mental number line' (MNL). Many modern societies use number lines in mathematical education and the brain's representation of number appears to follow a culturally determined spatial organisation (so better task performance is associated with this culturally normal orientation-the MNL effect). Participants (counter-balanced) completed two consistent blocks of trials, 'normal' and 'reversed', followed by a mixed block where line direction varied randomly. Experiment 1 established that the MNL effect was robust, and showed that the cognitive load associated with reversing the MNL not only affected response selection but also the actual movement execution (indexed by duration) within the mixed trials. Experiment 2 showed that an individual's motor abilities predicted performance in the difficult (mixed) condition but not the easier blocks. These results suggest that numerical processing is not isolated from motor capabilities-a finding with applied consequences.
Dietary pattern among schoolchildren with normal nutritional status in Navarre, Spain.
Durá-Travé, Teodoro; Gallinas-Victoriano, Fidel
2014-04-11
A nutrition survey was carried out (food intake registration over three consecutive school days) in a randomly selected group of 353 schoolchildren (188 males and 165 females) with normal nutritional status. The average age of the surveyed students was 10.5 years (95% CI: 10.3-11.7). There were no significant differences between the sexes in mean calorie intake (males: 2072.7 ± 261.7 and females: 2060.9 ± 250.6) or in intake of macronutrients, minerals and vitamins. Cereals (34%), dairy products (19%) and meats (17%) accounted for approximately 70% of total calorie intake. Protein accounted for 20.3% of energy intake, carbohydrates for 48.8%, total fat for 30.9%, and saturated fat for 12.6%. Cholesterol intake was excessive and over two-thirds of protein intake was from animal sources. The mean intakes of calcium, iodine and vitamins A, D and E were below recommended levels. The dietary patterns of the schoolchildren with normal nutritional status differed from the Mediterranean diet: intake of meat was too high, consumption of dairy products and cereals was relatively limited, and that of vegetables, legumes, fruits and fish was insufficient, leading to excessive protein and fat intake from animal sources and insufficient intake of minerals (calcium and iodine) and vitamins A, D and E.
Non-ignorable missingness item response theory models for choice effects in examinee-selected items.
Liu, Chen-Wei; Wang, Wen-Chung
2017-11-01
Examinee-selected item (ESI) design, in which examinees are required to respond to a fixed number of items in a given set, always yields incomplete data (i.e., when only the selected items are answered, data are missing for the others) that are likely non-ignorable in likelihood inference. Standard item response theory (IRT) models become infeasible when ESI data are missing not at random (MNAR). To solve this problem, the authors propose a two-dimensional IRT model that posits one unidimensional IRT model for observed data and another for nominal selection patterns. The two latent variables are assumed to follow a bivariate normal distribution. In this study, the mirt freeware package was adopted to estimate parameters. The authors conduct an experiment to demonstrate that ESI data are often non-ignorable and to determine how to apply the new model to the data collected. Two follow-up simulation studies are conducted to assess the parameter recovery of the new model and the consequences for parameter estimation of ignoring MNAR data. The results of the two simulation studies indicate good parameter recovery of the new model and poor parameter recovery when non-ignorable missing data were mistakenly treated as ignorable. © 2017 The British Psychological Society.
Xu, Cui-Ping; Zhu, Qing-Jun; Song, Jie; Li, Zhen; Zhang, Dan
2013-02-01
To explore the effects of Jingui Shenqi Pill (JSP) on testis telomerase activity in mice with Shen-yang deficiency syndrome (SYDS). The SYDS model was prepared in 30 mice by over-fatigue and sexual overstrain. They were randomly divided into the model group and the JSP group, 15 in each group. Another 15 normal male mice were selected as the normal group. Mice in the normal group were fed routinely, with distilled water administered intragastrically at a daily dose of 0.1 mL/10 g. Mice in the model group were also administered distilled water intragastrically at a daily dose of 0.1 mL/10 g during model establishment. Mice in the JSP group were administered JSP suspension intragastrically at 0.1 mL/10 g (concentration 0.241 g/mL). The intervention lasted for 4 weeks. Four weeks later, testis telomerase activity was detected in the three groups by ELISA. The SYDS model was replicated successfully by over-fatigue and sexual overstrain. JSP could improve the signs of mice with SYDS. Compared with the normal group, testis telomerase activity was decreased in the model group (P < 0.01). Compared with the model group, testis telomerase activity was markedly increased in the JSP group (P < 0.01). Testis telomerase activity in mice with SYDS caused by over-fatigue and sexual overstrain was obviously decreased compared with that in normal mice, and JSP could restore its activity.
SNP selection and classification of genome-wide SNP data using stratified sampling random forests.
Wu, Qingyao; Ye, Yunming; Liu, Yang; Ng, Michael K
2012-09-01
For high-dimensional genome-wide association (GWA) case-control data of complex disease, there is usually a large portion of single-nucleotide polymorphisms (SNPs) that are irrelevant to the disease. A simple random sampling method in a random forest, using the default mtry parameter to choose the feature subspace, will select too many subspaces without informative SNPs. Exhaustively searching for an optimal mtry is often required in order to include useful and relevant SNPs and discard the vast number of non-informative SNPs. However, this is too time-consuming and impractical for high-dimensional GWA data. The main aim of this paper is to propose a stratified sampling method for feature subspace selection to generate decision trees in a random forest for GWA high-dimensional data. Our idea is to design an equal-width discretization scheme for informativeness to divide SNPs into multiple groups. In feature subspace selection, we randomly select the same number of SNPs from each group and combine them to form a subspace to generate a decision tree. This stratified sampling procedure ensures that each subspace contains enough useful SNPs, while avoiding the very high computational cost of an exhaustive search for an optimal mtry and maintaining the randomness of a random forest. We employ two genome-wide SNP data sets (Parkinson case-control data comprising 408 803 SNPs and Alzheimer case-control data comprising 380 157 SNPs) to demonstrate that the proposed stratified sampling method is effective, and that it can generate a better random forest with higher accuracy and a lower error bound than Breiman's random forest generation method. For the Parkinson data, we also show some interesting genes identified by the method, which may be associated with neurological disorders and merit further biological investigation.
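The subspace-construction step described above (equal-width binning of SNPs by informativeness, then drawing the same number from each bin) might be sketched as follows; the group count and per-group sample size are illustrative parameters, not values from the paper:

```python
import random

def stratified_subspace(snp_scores, n_groups=5, per_group=3, rng=None):
    """Build one feature subspace: bin SNPs into equal-width groups by
    informativeness score, then draw the same number from each group."""
    rng = rng or random.Random()
    lo = min(snp_scores.values())
    hi = max(snp_scores.values())
    width = (hi - lo) / n_groups or 1.0  # guard against all-equal scores
    groups = [[] for _ in range(n_groups)]
    for snp, score in snp_scores.items():
        idx = min(int((score - lo) / width), n_groups - 1)
        groups[idx].append(snp)
    subspace = []
    for group in groups:
        if group:  # a sparse group contributes what it has
            subspace.extend(rng.sample(group, min(per_group, len(group))))
    return subspace
```

Each decision tree in the forest would call this once, so every subspace is guaranteed a share of informative SNPs without tuning mtry, while the per-tree draws remain random.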
Antypa, Niki; Smelt, August H M; Strengholt, Annette; Van der Does, A J Willem
2012-05-01
Beneficial effects of omega-3 fatty acids have been reported for several psychiatric disorders, particularly for depression. Association studies show a relationship between omega-3 intake and depression risk. Meta-analyses of clinical trials have shown a moderate effect of supplementation on depressive symptoms, but not on normal mood states. Few studies have investigated effects on cognition. The purpose of this study was to examine effects of omega-3 supplements on cognition and mood of recovered depressed individuals. Seventy-one participants were randomized to receive either omega-3 or placebo for four weeks in a randomized double-blind design. Results showed small effects of omega-3 supplementation on aspects of emotional decision-making and on self-reported states of depression and tension. Some of the effects were confounded by learning effects. No significant effects were observed on memory, attention, cognitive reactivity and depressive symptoms. While inconclusive, the present findings may indicate that omega-3 supplementation has selective effects on emotional cognition and mood in recovered depressed participants.
Feeding Vitamin C during Neonatal and Juvenile Growth Improves Learning and Memory of Rats.
Hosseini, Mahmoud; Beheshti, Farimah; Sohrabi, Farzaneh; Vafaee, Farzaneh; Shafei, Mohammad Naser; Reza Sadeghnia, Hamid
2018-09-03
We investigated the effects of feeding vitamin C (Vit C) during neonatal and juvenile growth on learning and memory of rats. Rats after delivery were randomly divided into four groups and treated. Group 1, control group, received normal drinking water. Groups 2-4 received Vit C 10, 100, and 500 mg/kg, respectively, from the first day. After 8 weeks, 10 male offspring of each group were randomly selected and tested in the Morris water maze (MWM) and passive avoidance (PA) tests. Finally, the brains were removed for biochemical measurement. In MWM, 10-500 mg/kg Vit C reduced the latency and traveled distance and increased time spent in the target quadrant. In PA, 10 and 100 mg/kg of Vit C increased the latency; 10-500 mg/kg of Vit C decreased the malondialdehyde (MDA) in the brain tissues and increased thiol and catalase (CAT) activity compared to the control group. We showed that feeding rats Vit C during neonatal and juvenile growth has positive effects on learning and memory.
Engen, Steinar; Saether, Bernt-Erik
2014-03-01
We analyze the stochastic components of the Robertson-Price equation for the evolution of quantitative characters that enables decomposition of the selection differential into components due to demographic and environmental stochasticity. We show how these two types of stochasticity affect the evolution of multivariate quantitative characters by defining demographic and environmental variances as components of individual fitness. The exact covariance formula for selection is decomposed into three components, the deterministic mean value, as well as stochastic demographic and environmental components. We show that demographic and environmental stochasticity generate random genetic drift and fluctuating selection, respectively. This provides a common theoretical framework for linking ecological and evolutionary processes. Demographic stochasticity can cause random variation in selection differentials independent of fluctuating selection caused by environmental variation. We use this model of selection to illustrate that the effect on the expected selection differential of random variation in individual fitness is dependent on population size, and that the strength of fluctuating selection is affected by how environmental variation affects the covariance in Malthusian fitness between individuals with different phenotypes. Thus, our approach enables us to partition out the effects of fluctuating selection from the effects of selection due to random variation in individual fitness caused by demographic stochasticity. © 2013 The Author(s). Evolution © 2013 The Society for the Study of Evolution.
NASA Astrophysics Data System (ADS)
Wang, Xiao; Burghardt, Dirk
2018-05-01
This paper presents a new strategy for the generalization of discrete area features using a stroke grouping method and polarization transportation selection. The strokes are constructed from a refined proximity graph of the area features, and the refinement is controlled by four constraints to meet different grouping requirements. Area features that belong to the same stroke are assigned to the same group. The stroke-based strategy decomposes the generalization process into two sub-processes according to whether or not the area features are related to strokes. Area features belonging to the same stroke normally present a linear-like pattern, and in order to preserve this kind of pattern, typification is chosen as the generalization operator. The remaining area features, which are not related by strokes, are still distributed randomly and discretely, and selection is chosen to conduct the generalization operation. For the purpose of retaining their original distribution characteristics, a Polarization Transportation (PT) method is introduced to implement the selection operation. Buildings and lakes are selected as representatives of artificial and natural area features, respectively, for the experiments. The generalized results indicate that by adopting the proposed strategy, the original distribution characteristics of the building and lake data can be preserved, and the visual perception is preserved as before.
The effect of science learning integrated with local potential to improve science process skills
NASA Astrophysics Data System (ADS)
Rahardini, Riris Riezqia Budy; Suryadarma, I. Gusti Putu; Wilujeng, Insih
2017-08-01
This research aimed to determine the effectiveness of science learning integrated with local potential in improving students' science process skills. The research was a quasi-experiment using a non-equivalent control group design. The research involved all grade VII students of Muhammadiyah Imogiri Junior High School as the population. The sample was selected through cluster random sampling, namely VII B (experiment group) and VII C (control group). The instrument used was a non-test instrument (a science process skill observation form) adapted from Desak Megawati's research (2016). The science process skill aspects assessed were making observations and communication. The data were analyzed using univariate ANOVA at the 0.05 significance level, and a normalized gain score was used to categorize the increase in science process skills. The result is that science learning integrated with local potential was effective in improving students' science process skills (Sig. 0.00). This learning can increase science process skills, as shown by normalized gain scores of 0.63 (medium category) in the experiment group and 0.29 (low category) in the control group.
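The gain categories quoted above follow the Hake normalized gain formula, g = (post - pre) / (max - pre), with the common convention g < 0.3 low, 0.3 <= g < 0.7 medium, g >= 0.7 high. A minimal sketch; the pre/post scores below are hypothetical, chosen only to illustrate the reported 0.63 and 0.29 values:

```python
def normalized_gain(pre, post, max_score=100.0):
    """Hake normalized gain: fraction of the possible improvement achieved."""
    return (post - pre) / (max_score - pre)

def gain_category(g):
    """Common convention: g < 0.3 low, 0.3 <= g < 0.7 medium, else high."""
    return "low" if g < 0.3 else "medium" if g < 0.7 else "high"

# Hypothetical pretest/posttest means reproducing the reported categories:
experiment = normalized_gain(40.0, 77.8)   # 0.63 -> medium
control = normalized_gain(40.0, 57.4)      # 0.29 -> low
```

Normalizing by the remaining headroom (max - pre) makes gains comparable across groups that start from different pretest levels.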
Qualitative analysis of mycotoxins using micellar electrokinetic capillary chromatography
DOE Office of Scientific and Technical Information (OSTI.GOV)
Holland, R.D.; Sepaniak, M.J.
1993-05-01
Naturally occurring mycotoxins are separated using micellar electrokinetic capillary chromatography. Trends in the retention of these toxins, resulting from changes in mobile-phase composition and pH, are reported and presented as a means of alleviating coelution problems. Two sets of mobile-phase conditions are determined that provide unique separation selectivity. The facile manner by which mobile-phase conditions can be altered, without changes in instrumental configuration, allowed the acquisition of two distinctive, fully resolved chromatograms of 10 mycotoxins in a period of approximately 45 min. By adjusting retention times, using indigenous or added components in mycotoxin samples as normalization standards, it is possible to obtain coefficients of variation in retention time that average less than 1%. The qualitative capabilities of this methodology are evaluated by separating randomly generated mycotoxin-interferent mixtures. In this study, the utilization of normalized retention times applied to separations obtained with two sets of mobile-phase conditions permitted the identification of all the mycotoxins in five unknown samples without any misidentifications. 24 refs., 3 figs., 2 tabs.
Interactome Analysis of Microtubule-targeting Agents Reveals Cytotoxicity Bases in Normal Cells.
Gutiérrez-Escobar, Andrés Julián; Méndez-Callejas, Gina
2017-12-01
Cancer causes millions of deaths annually and microtubule-targeting agents (MTAs) are the most commonly-used anti-cancer drugs. However, the high toxicity of MTAs on normal cells raises great concern. Due to the non-selectivity of MTA targets, we analyzed the interaction network in a non-cancerous human cell. Subnetworks of fourteen MTAs were reconstructed and the merged network was compared against a randomized network to evaluate the functional richness. We found that 71.4% of the MTA interactome nodes are shared, which affects cellular processes such as apoptosis, cell differentiation, cell cycle control, stress response, and regulation of energy metabolism. Additionally, possible secondary targets were identified as client proteins of interphase microtubules. MTAs affect apoptosis signaling pathways by interacting with client proteins of interphase microtubules, suggesting that their primary targets are non-tumor cells. The paclitaxel and doxorubicin networks share essential topological axes, suggesting synergistic effects. This may explain the exacerbated toxicity observed when paclitaxel and doxorubicin are used in combination for cancer treatment. Copyright © 2017 The Authors. Production and hosting by Elsevier B.V. All rights reserved.
Magenes, G; Bellazzi, R; Malovini, A; Signorini, M G
2016-08-01
The onset of fetal pathologies can be screened during pregnancy by means of Fetal Heart Rate (FHR) monitoring and analysis. Noticeable advances in understanding FHR variations were obtained in the last twenty years, thanks to the introduction of quantitative indices extracted from the FHR signal. This study aims to discriminate between normal and Intra-Uterine Growth Restricted (IUGR) fetuses by applying data mining techniques to FHR parameters obtained from recordings in a population of 122 fetuses (61 healthy and 61 IUGR), through standard CTG non-stress tests. We computed N=12 indices (N=4 related to time-domain FHR analysis, N=4 to the frequency domain and N=4 to non-linear analysis) and normalized them with respect to the gestational week. We compared, through a 10-fold cross-validation procedure, 15 data mining techniques in order to select the most reliable approach for identifying IUGR fetuses. The results of this comparison highlight that two techniques (Random Forest and Logistic Regression) show the best classification accuracy and that both outperform the best single parameter in terms of mean AUROC on the test sets.
The relationship between consanguineous marriage and death in fetus and infants.
Mohammadi, Majid Mehr; Hooman, Heidar Ali; Afrooz, Gholam Ali; Daramadi, Parviz Sharifi
2012-05-01
Given the high prevalence of consanguineous marriages in rural and urban areas of Iran, the aim of this study was to identify their role in increasing fetal and infant deaths. This was a cross-sectional study in which 494 mothers with more than one exceptional child (mentally retarded and physically-dynamically disabled) or with normal children were selected using a multi-stage random sampling method. Data were gathered using the 'features of parents with more than one exceptional child' questionnaire, whose validity and reliability were acceptable. The hierarchical log-linear method was used for statistical analysis. Consanguineous marriage significantly increased the number of births of exceptional children. Moreover, there was a significant relation between the history of fetal/infant death and group membership. There was a significant relation between consanguineous marriage and the history of fetal/infant death, meaning that consanguineous marriage increased the prevalence of fetal/infant death in parents of exceptional children relative to parents of normal children. The rate of fetal/infant death in exceptional births from consanguineous marriages was higher than that from non-consanguineous marriages.
Long latency postural responses are functionally modified by cognitive set.
Beckley, D J; Bloem, B R; Remler, M P; Roos, R A; Van Dijk, J G
1991-10-01
We examined how cognitive set influences the long latency components of normal postural responses in the legs. We disturbed the postural stability of standing human subjects with sudden toe-up ankle rotations. To influence the subjects' cognitive set, we varied the rotation amplitude either predictably (serial 4 degrees versus serial 10 degrees) or unpredictably (random mixture of 4 degrees and 10 degrees). The subjects' responses to these ankle rotations were assessed from the EMG activity of the tibialis anterior, the medial gastrocnemius, and the vastus lateralis muscles of the left leg. The results indicate that, when the rotation amplitude is predictable, only the amplitude of the long latency (LL) response in tibialis anterior and vastus lateralis varied directly with perturbation size. Furthermore, when the rotation amplitude is unpredictable, the central nervous system selects a default amplitude for the LL response in the tibialis anterior. When normal subjects are exposed to 2 perturbation amplitudes which include the potential risk of falling, the default LL response in tibialis anterior appropriately anticipates the larger amplitude perturbation rather than the smaller or an intermediate one.
Bakes, Katherine; Haukoos, Jason S; Deakyne, Sara J; Hopkins, Emily; Easter, Josh; McFann, Kim; Brent, Alison; Rewers, Arleta
2016-04-01
The optimal rate of fluid administration in pediatric diabetic ketoacidosis (DKA) is unknown. Our aim was to determine whether the volume of fluid administration in children with DKA influences the rate of metabolic normalization. We performed a randomized controlled trial conducted in a tertiary pediatric emergency department from December 2007 until June 2010. The primary outcome was time to metabolic normalization; secondary outcomes were time to bicarbonate normalization, pH normalization, overall length of hospital treatment, and adverse outcomes. Children between 0 and 18 years of age were eligible if they had type 1 diabetes mellitus and DKA. Patients were randomized to receive intravenous (IV) fluid at low volume (10 mL/kg bolus + 1.25 × maintenance rate) or high volume (20 mL/kg bolus + 1.5 × maintenance rate) (n = 25 in each). After adjusting for initial differences in bicarbonate levels, time to metabolic normalization was significantly faster in the higher-volume infusion group than in the low-volume infusion group (hazard ratio [HR] = 2.0; 95% confidence interval [CI] 1.0-3.9; p = 0.04). Higher-volume IV fluid infusion appeared to hasten normalization of pH (HR = 2.5; 95% CI 1.2-5.0; p = 0.01) to a greater extent than normalization of serum bicarbonate (HR = 1.2; 95% CI 0.6-2.3; p = 0.6). The length of hospital treatment (HR = 0.8; 95% CI 0.4-1.5; p = 0.5) and time to discharge (HR = 0.8; 95% CI 0.4-1.5; p = 0.5) did not differ between treatment groups. Higher-volume fluid infusion in the treatment of pediatric DKA patients significantly shortened metabolic normalization time, but did not change overall length of hospital treatment. ClinicalTrials.gov ID NCT01701557. Copyright © 2016 Elsevier Inc. All rights reserved.
Valeri, A; Mianné, D; Merouze, F; Bujan, L; Altobelli, A; Masson, J
1993-06-01
Scrotal hyperthermia can induce certain alterations in spermatogenesis. The basal scrotal temperature used to define hyperthermia is usually 33 degrees C. However, no study conducted according to a strict methodology has validated this mean measurement. We therefore randomly selected 258 men between the ages of 18 and 23 years from a population of 2,000 young French men seen at the National Service Selection Centre in order to measure the scrotal temperature over each testis and in the median raphe, and to determine the mean and median values for these temperatures. For a mean room temperature of 23 +/- 0.5 degrees C with a range of 18 to 31 degrees C, the mean right and left scrotal temperature was 34.2 +/- 0.1 degree C and the mean medioscrotal temperature was 34.4 +/- 0.1 degree C. Scrotal temperature was very significantly correlated with room temperature and its variations. It was therefore impossible to define a normal value for scrotal temperature. Only measurement of scrotal temperature at a neutral room temperature, between 21 and 25 degrees C, is able to provide a reference value for scrotal temperature. In this study, the mean scrotal temperature under these conditions was 34.4 +/- 0.2 degree C, i.e. 2.5 degrees C less than body temperature. In the 12.9% of cases with left varicocele, left scrotal temperature was significantly higher than in the absence of varicocele and was also higher than right scrotal temperature. The authors also determined the dimensions of the testes. (ABSTRACT TRUNCATED AT 250 WORDS)
Epidemiology of diabetes among Arab Americans.
Jaber, Linda A; Brown, Morton B; Hammad, Adnan; Nowak, Sandra N; Zhu, Qian; Ghafoor, Anisa; Herman, William H
2003-02-01
To examine the prevalence of diabetes and glucose intolerance by age and sex in the Arab-American community of Dearborn, Michigan. Participants were randomly selected adult Arab Americans, 20-75 years of age, from randomly selected households in Dearborn, Michigan. Demographic and anthropometric data were recorded. Glucose tolerance was assessed with 2-h 75-g oral glucose tolerance tests and classified according to 1997 American Diabetes Association and 1998 World Health Organization criteria. A total of 626 eligible adults were selected, and 542 participated (87% response rate). Because prevalence increases with age and the overall response rate for women (328/352; 93%) was higher than that for men (214/274; 78%), prevalence rates were adjusted for age and sex. The overall prevalence of diabetes was 15.5% (95% CI 12.2-18.7%) in women and 20.1% (15.0-25.2%) in men (P = 0.13). The prevalence of previously diagnosed diabetes was similar to that of undiagnosed diabetes. Impaired glucose tolerance (IGT) and/or impaired fasting glucose (IFG) were present in 16.8% (12.8-20.8%) of women and 29.7% (23.4-35.9%) of men (P = 0.0007). The combined rates of glucose intolerance (diabetes, IGT, and IFG) were 32.3% (27.8-36.7%) for women and 49.8% (43.1-56.4%) for men (P < 0.0001). Among younger adults, the prevalence in men was higher than that in women. As expected, subjects with diabetes or IGT/IFG were older and had greater BMI and waist-to-hip ratios than subjects with normal glucose tolerance. The prevalence of diabetes and glucose intolerance is extremely high among adult Arab Americans in Michigan and represents a major clinical and public health problem. Community-based intervention programs to prevent and treat diabetes are urgently needed.
The Coalescent Process in Models with Selection
Kaplan, N. L.; Darden, T.; Hudson, R. R.
1988-01-01
Statistical properties of the process describing the genealogical history of a random sample of genes are obtained for a class of population genetics models with selection. For models with selection, in contrast to models without selection, the distribution of this process, the coalescent process, depends on the distribution of the frequencies of alleles in the ancestral generations. If the ancestral frequency process can be approximated by a diffusion, then the mean and the variance of the number of segregating sites due to selectively neutral mutations in random samples can be numerically calculated. The calculations are greatly simplified if the frequencies of the alleles are tightly regulated. If the mutation rates between alleles maintained by balancing selection are low, then the number of selectively neutral segregating sites in a random sample of genes is expected to substantially exceed the number predicted under a neutral model. PMID:3066685
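The neutral baseline against which the abstract's selection-inflated counts are compared is the standard coalescent expectation for segregating sites, E[S] = theta * sum_{i=1}^{n-1} 1/i for a sample of n genes. A short sketch of that reference calculation follows (the inflation under balancing selection itself requires the ancestral-frequency diffusion and is not reproduced here):

```python
def expected_segregating_sites(n, theta):
    """Neutral-coalescent expectation E[S] = theta * sum_{i=1}^{n-1} 1/i,
    where theta is the scaled mutation rate and n the sample size."""
    return theta * sum(1.0 / i for i in range(1, n))

# For a sample of 10 genes with theta = 5, the neutral expectation is
# 5 * (1 + 1/2 + ... + 1/9):
baseline = expected_segregating_sites(10, 5.0)
```

Because the harmonic sum grows only logarithmically in n, an observed site count far above this baseline is a strong signal of the excess variation the abstract predicts under balancing selection.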
DOE Office of Scientific and Technical Information (OSTI.GOV)
Webb-Robertson, Bobbie-Jo M.; Matzke, Melissa M.; Jacobs, Jon M.
2011-12-01
Quantification of LC-MS peak intensities assigned during peptide identification in a typical comparative proteomics experiment will deviate from run-to-run of the instrument due to both technical and biological variation. Thus, normalization of peak intensities across an LC-MS proteomics dataset is a fundamental step in pre-processing. However, the downstream analysis of LC-MS proteomics data can be dramatically affected by the normalization method selected. Current normalization procedures for LC-MS proteomics data are presented in the context of normalization values derived from subsets of the full collection of identified peptides. The distribution of these normalization values is unknown a priori. If they are not independent from the biological factors associated with the experiment, the normalization process can introduce bias into the data, which will affect downstream statistical biomarker discovery. We present a novel approach to evaluate normalization strategies, where a normalization strategy includes the peptide selection component associated with the derivation of normalization values. Our approach evaluates the effect of normalization on the between-group variance structure in order to identify candidate normalization strategies that improve the structure of the data without introducing bias into the normalized peak intensities.
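As one concrete example of the kind of normalization strategy being evaluated, median centering aligns each run's peptide intensities at a common level. A minimal sketch, assuming log-scale intensities; median centering is a common choice for illustration, not necessarily one of the strategies assessed in this work:

```python
import statistics

def median_center(runs):
    """Median-center each LC-MS run: subtract the run's median log-intensity
    so run medians align at zero, removing a global per-run offset."""
    return [[x - statistics.median(run) for x in run] for run in runs]

# Two hypothetical runs with different global offsets:
centered = median_center([[5.0, 6.0, 7.0], [8.0, 9.0, 10.0]])
```

After centering, a systematic instrument drift between runs is removed, but any correction derived from a biased peptide subset would, as the abstract warns, leak bias into downstream group comparisons.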
Williams, Leanne M; Korgaonkar, Mayuresh S; Song, Yun C; Paton, Rebecca; Eagles, Sarah; Goldstein-Piekarski, Andrea; Grieve, Stuart M; Harris, Anthony W F; Usherwood, Tim; Etkin, Amit
2015-09-01
Although the cost of poor treatment outcomes of depression is staggering, we do not yet have clinically useful methods for selecting the most effective antidepressant for each depressed person. Emotional brain activation is altered in major depressive disorder (MDD) and implicated in treatment response. Identifying which aspects of emotional brain activation are predictive of general and specific responses to antidepressants may help clinicians and patients when making treatment decisions. We examined whether amygdala activation probed by emotion stimuli is a general or differential predictor of response to three commonly prescribed antidepressants, using functional magnetic resonance imaging (fMRI). A test-retest design was used to assess patients with MDD in an academic setting as part of the International Study to Predict Optimized Treatment in Depression. A total of 80 MDD outpatients were scanned prior to treatment and 8 weeks after randomization to the selective serotonin reuptake inhibitors escitalopram and sertraline and the serotonin-norepinephrine reuptake inhibitor, venlafaxine-extended release (XR). A total of 34 matched controls were scanned at the same timepoints. We quantified the blood oxygen level-dependent signal of the amygdala during subliminal and supraliminal viewing of facial expressions of emotion. Response to treatment was defined by ⩾50% symptom improvement on the 17-item Hamilton Depression Rating Scale. Pre-treatment amygdala hypo-reactivity to subliminal happy and threat was a general predictor of treatment response, regardless of medication type (Cohen's d effect size 0.63 to 0.77; classification accuracy, 75%). Responders showed hypo-reactivity compared to controls at baseline, and an increase toward 'normalization' post-treatment. Pre-treatment amygdala reactivity to subliminal sadness was a differential moderator of non-response to venlafaxine-XR (Cohen's d effect size 1.5; classification accuracy, 81%). 
Non-responders to venlafaxine-XR showed pre-treatment hyper-reactivity, which progressed to hypo-reactivity rather than normalization post-treatment, and hypo-reactivity post-treatment was abnormal compared to controls. Impaired amygdala activation has not previously been highlighted in the general vs differential prediction of antidepressant outcomes. Amygdala hypo-reactivity to emotions signaling reward and threat predicts the general capacity to respond to antidepressants. Amygdala hyper-reactivity to sad emotion is involved in a specific non-response to a serotonin-norepinephrine reuptake inhibitor. The findings suggest amygdala probes may help inform the personal selection of antidepressant treatments.
Effects of Selected Meditative Asanas on Kinaesthetic Perception and Speed of Movement
ERIC Educational Resources Information Center
Singh, Kanwaljeet; Bal, Baljinder S.; Deol, Nishan S.
2009-01-01
Study aim: To assess the effects of selected meditative "asanas" on kinesthetic perception and movement speed. Material and methods: Thirty randomly selected male students aged 18-24 years volunteered to participate in the study. They were randomly assigned to two groups: A (meditative) and B (control). The Nelson's movement speed and…
Model Selection with the Linear Mixed Model for Longitudinal Data
ERIC Educational Resources Information Center
Ryoo, Ji Hoon
2011-01-01
Model building or model selection with linear mixed models (LMMs) is complicated by the presence of both fixed effects and random effects. The fixed effects structure and random effects structure are codependent, so selection of one influences the other. Most presentations of LMM in psychology and education are based on a multilevel or…
Kronberg, J.W.
1993-04-20
An apparatus for selecting at random one item of N items on the average comprising counter and reset elements for counting repeatedly between zero and N, a number selected by the user, a circuit for activating and deactivating the counter, a comparator to determine if the counter stopped at a count of zero, an output to indicate an item has been selected when the count is zero or not selected if the count is not zero. Randomness is provided by having the counter cycle very often while varying the relatively longer duration between activation and deactivation of the count. The passive circuit components of the activating/deactivating circuit and those of the counter are selected for the sensitivity of their response to variations in temperature and other physical characteristics of the environment so that the response time of the circuitry varies. Additionally, the items themselves, which may be people, may vary in shape or the time they press a pushbutton, so that, for example, an ultrasonic beam broken by the item or person passing through it will add to the duration of the count and thus to the randomness of the selection.
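The counter-based selection principle described in this patent abstract can be sketched in a few lines as a simulation of the idea (not of the circuit): a counter cycles rapidly through 0..N-1, the stopping moment is effectively unpredictable, and an item is selected only when the stopped count reads zero, which yields a one-in-N chance on average. Function names and parameters here are illustrative.

```python
import random

def selected(n, duration_ticks):
    """Simulate the patent's scheme: a counter cycles 0..n-1 and is
    stopped after an unpredictable number of ticks; the item is
    selected only when the stopped counter reads zero."""
    return duration_ticks % n == 0

def average_selection_rate(n, trials=100_000, seed=42):
    """Estimate the long-run selection probability by drawing many
    unpredictable stopping durations."""
    rng = random.Random(seed)
    hits = sum(selected(n, rng.randrange(10 ** 6)) for _ in range(trials))
    return hits / trials

rate = average_selection_rate(5)
print(abs(rate - 1 / 5) < 0.01)  # True: roughly one-in-N selection on average
```

Any environmental jitter (temperature-sensitive components, a person breaking an ultrasonic beam) only needs to perturb the stopping time by more than one counter cycle for the residue modulo N to be effectively uniform.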
Kronberg, James W.
1993-01-01
An apparatus for selecting at random one item of N items on the average comprising counter and reset elements for counting repeatedly between zero and N, a number selected by the user, a circuit for activating and deactivating the counter, a comparator to determine if the counter stopped at a count of zero, an output to indicate an item has been selected when the count is zero or not selected if the count is not zero. Randomness is provided by having the counter cycle very often while varying the relatively longer duration between activation and deactivation of the count. The passive circuit components of the activating/deactivating circuit and those of the counter are selected for the sensitivity of their response to variations in temperature and other physical characteristics of the environment so that the response time of the circuitry varies. Additionally, the items themselves, which may be people, may vary in shape or the time they press a pushbutton, so that, for example, an ultrasonic beam broken by the item or person passing through it will add to the duration of the count and thus to the randomness of the selection.
Wang, Guang Heng; Tan, Tony Xing; Cheah, Charissa S L
We aimed to compare preschool-age Chinese children's weight status based on the WHO guidelines with parental ratings of their children's body type and with child/family demographic characteristics. The sample included 171 preschool-age children (M = 60.5 months, SD = 6.7; boys: 46.8%) randomly selected from 23 classrooms. Based on BMIs calculated from their height and weight at physical examinations, the children were divided into three groups using the 2006 WHO guidelines: underweight (n=46), normal weight (n=65), and overweight (n=60). Data on the parental ratings of children's current body type, ideal body type, and child/family demographic characteristics were collected with surveys. Parents accurately classified 91.1% of the underweight children, 52.3% of the normal weight children, and 61.7% of the overweight children. In terms of ideal body shape for their children, parents typically wanted their children to have normal weight or to remain underweight. Most of the child and family demographic characteristics did not differ across children who were underweight, had normal weight, or were overweight. Because parents tended to underestimate their children's weight status, it is important to increase Chinese parents' knowledge of what constitutes healthy weight, as well as the potential harm of overweight status for children's development. Training healthcare providers in kindergartens and pediatric clinics to work with parents to recognize unhealthy weight status in children is valuable. Copyright © 2016 Elsevier Inc. All rights reserved.
[Quantification of acetabular coverage in normal adult].
Lin, R M; Yang, C Y; Yu, C Y; Yang, C R; Chang, G L; Chou, Y L
1991-03-01
Quantification of acetabular coverage is important and can be expressed by superimposition of cartilage tracings on the maximum cross-sectional area of the femoral head. A practical AutoLISP program on PC AutoCAD has been developed by us to quantify the acetabular coverage through numerical expression of computed tomography images. Thirty adults (60 hips) with normal center-edge angle and acetabular index on plain X-ray were randomly selected for serial CT sectioning. These slices were prepared in a fixed coordinate system as continuous sections of 5 mm thickness. The contours of the cartilage of each section were digitized into a PC and processed by AutoCAD programs to quantify and characterize the acetabular coverage of normal and dysplastic adult hips. We found that a total coverage ratio greater than 80%, an anterior coverage ratio greater than 75%, and a posterior coverage ratio greater than 80% can be categorized as normal. Polar edge distance is a good indicator for the evaluation of preoperative and postoperative coverage conditions. For standardization and evaluation of acetabular coverage, the most suitable parameters are the total coverage ratio, anterior coverage ratio, posterior coverage ratio, and polar edge distance. However, the medial and lateral coverage ratios are indispensable in cases of dysplastic hip because variations between them are so great that acetabuloplasty may be impossible. This program can also be used to classify precisely the type of dysplastic hip.
Eucalyptus globulus (Eucalyptus) Treatment of Candidiasis in Normal and Diabetic Rats
Bokaeian, Mohammad; Nakhaee, Alireza; Moodi, Bita; Ali Khazaei, Hossein
2010-01-01
Background: The leaves of Eucalyptus globulus (eucalyptus) are used for the treatment of diabetes mellitus in traditional medicine. The aim of this study was to evaluate the effects of eucalyptus in the treatment of established systemic infection with Candida albicans in normal and streptozotocin-induced diabetic rats. Methods: Sixty normoglycemic male Wistar rats, weighing 200-250 g, were selected and randomly divided into six groups (n = 10): normal control, control + C. albicans, control + eucalyptus + C. albicans, diabetic control, diabetic + C. albicans, and diabetic + eucalyptus + C. albicans. Diabetes was induced by a single intraperitoneal injection of streptozotocin (60 mg/kg body weight), and eucalyptus was added to the diet (62.5 g/kg) and drinking water (2.5 g/L) of treated animals for 4 weeks. The relevant groups were inoculated with C. albicans 15 days after diabetes induction. At the end of the one-month experiment, fasted rats were killed by cervical decapitation. Blood was collected from the neck vein for estimation of glucose. C. albicans concentrations were estimated in the liver and kidneys using serial dilution culture of tissue homogenates. Results: Eucalyptus administration significantly improved the hyperglycemia, polydipsia, and polyphagia, and it also compensated the weight loss of diabetic rats (P<0.05). Moreover, eucalyptus caused a significant reduction in C. albicans concentration in liver and kidney homogenates (P<0.01). Conclusion: The results revealed that eucalyptus ameliorates Candida infection in normal and diabetic rats, which in some ways validates the traditional use of this plant in the treatment of diabetic patients. PMID:21079663
Paré, Josianne; Pasquier, Jean-Charles; Lewin, Antoine; Fraser, William; Bureau, Yves-André
2017-05-01
Prolonged labor is a significant cause of maternal and fetal morbidity, and very few interventions are known to shorten the course of labor. Skeletal muscle physiology suggests that glucose supplementation might improve muscle performance during prolonged exercise, a situation analogous to the gravid uterus during delivery. Therefore, it seemed imperative to evaluate the impact of adding carbohydrate supplements on the course of labor. We sought to provide evidence as to whether intravenous glucose supplementation during labor induction in nulliparous women can reduce the total duration of active labor. We performed a single-center prospective double-blind randomized controlled trial comparing parenteral intravenous dextrose 5% with normal saline against normal saline alone in induced nulliparous women. The study was conducted in a tertiary-level university hospital setting. Participants, caregivers, and those assessing the outcomes were blinded to group assignment. Inclusion criteria were singleton pregnancy at term with cephalic presentation and favorable cervix. Based on blocked randomization, patients were assigned to receive either 250 mL/h of intravenous dextrose 5% with normal saline or 250 mL/h of normal saline for the whole duration of induction, labor, and delivery. The primary outcome studied was the total length of active labor. Secondary outcomes included duration of the active phase of the second stage of labor, the mode of delivery, Apgar scores, and arterial cord pH. In all, 100 patients were randomized into each group. A total of 193 patients (96 in the dextrose with normal saline group and 97 in the normal saline group) were analyzed in the study. The median total duration of labor was significantly shorter in the dextrose with normal saline group than in the normal saline group (423 vs 499 minutes, P = .024).
The probabilities of a woman being delivered at 200 minutes and 450 minutes were 18.8% and 77.1% in the dextrose with normal saline group vs 8.2% and 59.8% in the normal saline group (Kolmogorov-Smirnov test P value = .027). There was no difference in the rate of cesarean delivery, instrumented delivery, Apgar score, or arterial cord pH. Glucose supplementation significantly reduces the total length of labor without increasing the rate of complication in induced nulliparous women. Given the low cost and the safety of this intervention, glucose should be used as the default solute during labor. Copyright © 2017 Elsevier Inc. All rights reserved.
Population differentiation in Pacific salmon: local adaptation, genetic drift, or the environment?
Adkison, Milo D.
1995-01-01
Morphological, behavioral, and life-history differences between Pacific salmon (Oncorhynchus spp.) populations are commonly thought to reflect local adaptation, and it is likewise common to assume that salmon populations separated by small distances are locally adapted. Two alternatives to local adaptation exist: random genetic differentiation owing to genetic drift and founder events, and genetic homogeneity among populations, in which differences reflect differential trait expression in differing environments. Population genetics theory and simulations suggest that both alternatives are possible. With selectively neutral alleles, genetic drift can result in random differentiation despite many strays per generation. Even weak selection can prevent genetic drift in stable populations; however, founder effects can result in random differentiation despite selective pressures. Overlapping generations reduce the potential for random differentiation. Genetic homogeneity can occur despite differences in selective regimes when straying rates are high. In sum, localized differences in selection should not always result in local adaptation. Local adaptation is favored when population sizes are large and stable, selection is consistent over large areas, selective differentials are large, and straying rates are neither too high nor too low. Consideration of alternatives to local adaptation would improve both biological research and salmon conservation efforts.
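The drift scenarios discussed above can be illustrated with a minimal Wright-Fisher simulation (hypothetical parameters, not the paper's models): binomial resampling implements drift, and an optional coefficient s implements selection favoring the tracked allele.

```python
import random

def wright_fisher(p0, pop_size, generations, s=0.0, seed=0):
    """Track an allele frequency under Wright-Fisher drift, with an
    optional selection coefficient s favoring the tracked allele."""
    rng = random.Random(seed)
    p = p0
    for _ in range(generations):
        # selection shifts the sampling probability before drift
        p_sel = p * (1 + s) / (p * (1 + s) + (1 - p))
        # binomial draw via pop_size Bernoulli trials (the drift step)
        count = sum(rng.random() < p_sel for _ in range(pop_size))
        p = count / pop_size
    return p

# identical founding frequencies drift to different fates in small populations
fates = [wright_fisher(0.5, pop_size=50, generations=200, seed=i) for i in range(10)]
print(all(0.0 <= p <= 1.0 for p in fates))  # True
```

With a population of 50 and no selection, most replicates fix at 0 or 1 within a few hundred generations, which is the "random differentiation despite many strays" regime the abstract describes.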
Application of Poisson random effect models for highway network screening.
Jiang, Ximiao; Abdel-Aty, Mohamed; Alamili, Samer
2014-02-01
In recent years, Bayesian random effect models that account for the temporal and spatial correlations of crash data have become popular in traffic safety research. This study employs random effect Poisson log-normal models for crash risk hotspot identification. Both the temporal and spatial correlations of crash data were considered. Potential for Safety Improvement (PSI) was adopted as a measure of crash risk. Using the fatal and injury crashes that occurred on urban 4-lane divided arterials from 2006 to 2009 in the Central Florida area, the random effect approaches were compared to the traditional Empirical Bayes (EB) method and the conventional Bayesian Poisson log-normal model. A series of method examination tests were conducted to evaluate the performance of the different approaches. These tests include the previously developed site consistency test, method consistency test, total rank difference test, and modified total score test, as well as the newly proposed total safety performance measure difference test. Results show that the Bayesian Poisson model accounting for both temporal and spatial random effects (PTSRE) outperforms the model with only a temporal random effect, and both are superior to the conventional Poisson log-normal model (PLN) and the EB model in fitting the crash data. Additionally, the method evaluation tests indicate that the PTSRE model is significantly superior to the PLN model and the EB model in consistently identifying hotspots during successive time periods. The results suggest that the PTSRE model is a superior alternative for road site crash risk hotspot identification. Copyright © 2013 Elsevier Ltd. All rights reserved.
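For intuition about the PSI measure, here is a sketch of its classical Empirical Bayes form (Hauer's formulation, one of the baselines in the abstract, not the Bayesian spatio-temporal model): the EB long-run estimate shrinks the observed count toward a safety-performance-function prediction, and PSI is the excess of that estimate over the prediction. The dispersion parameter k is assumed to come from a negative binomial fit.

```python
def eb_expected(observed, mu, k):
    """Empirical Bayes estimate of long-run crash frequency.
    mu: SPF-predicted crashes for similar sites; k: NB dispersion."""
    w = k / (k + mu)  # shrinkage weight toward the SPF prediction
    return w * mu + (1 - w) * observed

def psi(observed, mu, k):
    """Potential for Safety Improvement: EB estimate minus prediction."""
    return eb_expected(observed, mu, k) - mu

# a site with more crashes than its peers predict has positive PSI
print(psi(observed=12, mu=6.0, k=2.0))  # 4.5
```

Sites are then ranked by PSI, and the top of the ranking forms the hotspot list that the consistency tests in the abstract evaluate.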
Generalized energy detector for weak random signals via vibrational resonance
NASA Astrophysics Data System (ADS)
Ren, Yuhao; Pan, Yan; Duan, Fabing
2018-03-01
In this paper, the generalized energy (GE) detector is investigated for detecting weak random signals via vibrational resonance (VR). By artificially injecting high-frequency sinusoidal interferences into an array of GE statistics formed for the detector, we show that the normalized asymptotic efficacy can be maximized when the interference intensity takes an appropriate non-zero value. It is demonstrated that the normalized asymptotic efficacy of the dead-zone-limiter detector, aided by the VR mechanism, exceeds that of the GE detector without the help of high-frequency interferences. Moreover, the maximum normalized asymptotic efficacy of dead-zone-limiter detectors can approach a quarter of the second-order Fisher information for a wide range of non-Gaussian noise types.
Genotyping Brahman cattle for generalised glycogenosis.
Dennis, J A; Healy, P J; Reichmann, K G
2002-05-01
To develop procedures for genotyping Brahman cattle for loss-of-function alleles within the acidic alpha-glucosidase gene and to assess the risk of generalised glycogenosis in Australian Brahman cattle. PCR assays for three loss-of-function alleles were designed to exploit internal restriction sites within acidic alpha-glucosidase amplicons that are independent of allelic variants at the mutant sites. Genotyping 8529 clinically normal Brahmans between August 1996 and August 2001 revealed 16.4% were heterozygous for the more common of the two mutations (1057deltaTA, often referred to as the 'E7' mutation) that cause generalised glycogenosis in this breed. The less common 1783T mutation (often referred to as the 'E13' mutation) was restricted to descendants of one imported bull, and was not detected in 600 randomly selected Brahmans. Prior to definition of these two disease-causing mutations, 640 (18%), and 14 (0.4%), of 3559 clinically normal Brahmans analysed between January 1994 and December 1996, were heterozygous, and homozygous, respectively, for a silent polymorphism (2223G-->A) that is associated with generalised glycogenosis. In addition to the 1057deltaTA and 1783T mutations, approximately 15% of Brahmans were found to be heterozygous for a single base substitution in exon 9 (1351T, commonly referred to as the 'E9' mutation) that significantly reduces acidic alpha-glucosidase activity, but has not been associated with clinical disease. These three loss-of-function alleles were found in Brahmans imported, or selected for import, from the USA. The PCR procedures reported here represent a significant improvement in reliability and accuracy over previous published methods. Utilisation of these PCR/restriction enzyme based assays will facilitate precise selection against the 1057deltaTA and 1783T alleles, and consequently reduce the incidence of generalised glycogenosis in registered and commercial Brahman herds.
ERIC Educational Resources Information Center
Doerann-George, Judith
The Integrated Moving Average (IMA) model of time series, and the analysis of intervention effects based on it, assume random shocks which are normally distributed. To determine the robustness of the analysis to violations of this assumption, empirical sampling methods were employed. Samples were generated from three populations: normal,…
Lee, Kyoung Soon; Jeong, Hyeon Cheol; Yim, Jong Eun; Jeon, Mi Yang
2016-01-01
Stress is caused when a particular relationship between the individual and the environment emerges. Specifically, stress occurs when an individual's abilities are challenged or when one's well-being is threatened by excessive environmental demands. The aim of this study was to measure the effects of music therapy on stress in university students. Randomized controlled trial. Sixty-four students were randomly assigned to the experimental group (n = 33) or the control group (n = 31). Music therapy. Initial measurement included cardiovascular indicators (blood pressure and pulse), autonomic nervous activity (standard deviation of the normal-to-normal intervals [SDNN], normalized low frequency, normalized high frequency, low/high frequency), and subjective stress. After the first measurement, participants in both groups were exposed to a series of stressful tasks, and then a second measurement was conducted. The experimental group then listened to music for 20 minutes and the control group rested for 20 minutes. A third and final measurement was then taken. There were no significant differences between the two groups in the first or second measurement. However, after music therapy, the experimental group and the control group showed significant differences in all variables, including systolic blood pressure (p = .026), diastolic blood pressure (p = .037), pulse (p < .001), SDNN (p = .003), normalized low frequency (p < .001), normalized high frequency (p = .010), and subjective stress (p = .026). Classical music tends to relax the body and may stimulate the parasympathetic nervous system. These results suggest music therapy as an intervention for stress reduction.
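The autonomic measures reported above can be computed directly from heart-rate data. A minimal sketch, assuming the common definitions (SDNN is the sample standard deviation of normal-to-normal intervals; normalized LF and HF are each band's power divided by their sum) rather than the study's exact signal processing:

```python
import statistics

def sdnn(nn_intervals_ms):
    """Standard deviation of normal-to-normal (NN) intervals, in ms."""
    return statistics.stdev(nn_intervals_ms)

def normalized_bands(lf_power, hf_power):
    """Normalized LF and HF: each band as a fraction of (LF + HF)."""
    total = lf_power + hf_power
    return lf_power / total, hf_power / total

# illustrative NN intervals (ms) and band powers, not study data
nn = [812, 790, 845, 830, 801, 822, 795, 840]
print(round(sdnn(nn), 1))
nlf, nhf = normalized_bands(1200.0, 800.0)
print(nlf, nhf)  # 0.6 0.4
```

Higher SDNN and higher normalized HF are conventionally read as greater parasympathetic activity, which is the direction of change the study attributes to the music intervention.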
NASA Astrophysics Data System (ADS)
Duan, J.; Lu, X.; He, G.
2017-01-01
In this work, a co-culture system with the liver cancer cell line HepG2 and the normal cell line L02 is used to investigate the selective effect of plasma activated medium (PAM) on cancer and normal cells, which is closer to the real environment where cancer cells develop. Moreover, the co-culture system is a better model for studying the selective effect than the widely used separate culture systems, in which the cancer cell line and normal cell line are cultured independently. Using the co-culture system, it is found that there is an optimum dose of PAM that induces significant cancer cell apoptosis while keeping damage to normal cells at a minimum.
The Central Limit Theorem for Supercritical Oriented Percolation in Two Dimensions
NASA Astrophysics Data System (ADS)
Tzioufas, Achillefs
2018-04-01
We consider the cardinality of supercritical oriented bond percolation in two dimensions. We show that, whenever the origin is conditioned to percolate, the process appropriately normalized converges asymptotically in distribution to the standard normal law. This resolves a longstanding open problem pointed out in several instances in the literature. The result applies also to the continuous-time analog of the process, viz. the basic one-dimensional contact process. We also derive general random-indices central limit theorems for associated random variables as byproducts of our proof.
The Central Limit Theorem for Supercritical Oriented Percolation in Two Dimensions
NASA Astrophysics Data System (ADS)
Tzioufas, Achillefs
2018-06-01
We consider the cardinality of supercritical oriented bond percolation in two dimensions. We show that, whenever the origin is conditioned to percolate, the process appropriately normalized converges asymptotically in distribution to the standard normal law. This resolves a longstanding open problem pointed out in several instances in the literature. The result applies also to the continuous-time analog of the process, viz. the basic one-dimensional contact process. We also derive general random-indices central limit theorems for associated random variables as byproducts of our proof.
Itô and Stratonovich integrals on compound renewal processes: the normal/Poisson case
NASA Astrophysics Data System (ADS)
Germano, Guido; Politi, Mauro; Scalas, Enrico; Schilling, René L.
2010-06-01
Continuous-time random walks, or compound renewal processes, are pure-jump stochastic processes with several applications in insurance, finance, economics and physics. Based on heuristic considerations, a definition is given for stochastic integrals driven by continuous-time random walks, which includes the Itô and Stratonovich cases. It is then shown how the definition can be used to compute these two stochastic integrals by means of Monte Carlo simulations. Our example is based on the normal compound Poisson process, which in the diffusive limit converges to the Wiener process.
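The Monte Carlo computation the abstract describes can be sketched for the normal compound Poisson case: simulate exponential waiting times with standard normal jumps, then evaluate the integral of X dX with a left-point sum (Itô) and a midpoint sum (Stratonovich). All names and parameters here are illustrative.

```python
import random

def compound_poisson_path(rate, t_max, rng):
    """Jump times ~ Poisson(rate); jump sizes ~ Normal(0, 1)."""
    times, values, t, x = [0.0], [0.0], 0.0, 0.0
    while True:
        t += rng.expovariate(rate)
        if t > t_max:
            return times, values
        x += rng.gauss(0.0, 1.0)
        times.append(t)
        values.append(x)

def ito_integral(values):
    """Left-point (Ito) sum of X dX over the jump times."""
    return sum(values[i - 1] * (values[i] - values[i - 1])
               for i in range(1, len(values)))

def stratonovich_integral(values):
    """Midpoint (Stratonovich) sum of X dX over the jump times."""
    return sum(0.5 * (values[i - 1] + values[i]) * (values[i] - values[i - 1])
               for i in range(1, len(values)))

rng = random.Random(7)
_, xs = compound_poisson_path(rate=5.0, t_max=10.0, rng=rng)
# the midpoint rule telescopes: it always equals (X_T^2 - X_0^2) / 2
print(abs(stratonovich_integral(xs) - 0.5 * xs[-1] ** 2) < 1e-9)  # True
```

The two sums differ by half the sum of squared jumps, which is the pathwise analog of the familiar Itô correction term for Brownian motion.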
DeVoe, Jennifer E; Marino, Miguel; Angier, Heather; O'Malley, Jean P; Crawford, Courtney; Nelson, Christine; Tillotson, Carrie J; Bailey, Steffani R; Gallia, Charles; Gold, Rachel
2015-01-01
In the United States, health insurance is not universal. Observational studies show an association between uninsured parents and children. This association persisted even after expansions in child-only public health insurance. Oregon's randomized Medicaid expansion for adults, known as the Oregon Experiment, created a rare opportunity to assess causality between parent and child coverage. To estimate the effect on a child's health insurance coverage status when (1) a parent randomly gains access to health insurance and (2) a parent obtains coverage. Oregon Experiment randomized natural experiment assessing the results of Oregon's 2008 Medicaid expansion. We used generalized estimating equation models to examine the longitudinal effect of a parent randomly selected to apply for Medicaid on their child's Medicaid or Children's Health Insurance Program (CHIP) coverage (intent-to-treat analyses). We used per-protocol analyses to understand the impact on children's coverage when a parent was randomly selected to apply for and obtained Medicaid. Participants included 14,409 children aged 2 to 18 years whose parents participated in the Oregon Experiment. For intent-to-treat analyses, the date a parent was selected to apply for Medicaid was considered the date the child was exposed to the intervention. In per-protocol analyses, exposure was defined as whether a selected parent obtained Medicaid. Children's Medicaid or CHIP coverage was assessed monthly and in 6-month intervals relative to their parent's selection date. In the immediate period after selection, the number of covered children whose parents were selected to apply increased significantly, from 3830 (61.4%) to 4152 (66.6%), compared with a nonsignificant change from 5049 (61.8%) to 5044 (61.7%) among children whose parents were not selected to apply.
Children whose parents were randomly selected to apply for Medicaid had 18% higher odds of being covered in the first 6 months after parent's selection compared with children whose parents were not selected (adjusted odds ratio [AOR]=1.18; 95% CI, 1.10-1.27). The effect remained significant during months 7 to 12 (AOR=1.11; 95% CI, 1.03-1.19); months 13 to 18 showed a positive but not significant effect (AOR=1.07; 95% CI, 0.99-1.14). Children whose parents were selected and obtained coverage had more than double the odds of having coverage compared with children whose parents were not selected and did not gain coverage (AOR=2.37; 95% CI, 2.14-2.64). Children's odds of having Medicaid or CHIP coverage increased when their parents were randomly selected to apply for Medicaid. Children whose parents were selected and subsequently obtained coverage benefited most. This study demonstrates a causal link between parents' access to Medicaid coverage and their children's coverage.
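For intuition about the effect sizes reported above, an unadjusted odds ratio from a 2x2 table can be computed as follows. The study's AORs come from generalized estimating equation models with covariate adjustment, so this is only the raw form, and the counts below are made up for illustration.

```python
def odds_ratio(exposed_yes, exposed_no, control_yes, control_no):
    """Unadjusted odds ratio from a 2x2 table: the odds of the outcome
    in the exposed group divided by the odds in the control group."""
    return (exposed_yes / exposed_no) / (control_yes / control_no)

# purely illustrative counts, not the study's data
print(round(odds_ratio(60, 40, 40, 60), 2))  # 2.25
```

An odds ratio above 1 means the outcome (here, child coverage) is more likely when the exposure (parent selected/covered) is present; GEE adjustment additionally accounts for repeated measures on the same children.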
40 CFR 761.355 - Third level of sample selection.
Code of Federal Regulations, 2012 CFR
2012-07-01
... of sample selection further reduces the size of the subsample to 100 grams which is suitable for the... procedures in § 761.353 of this part into 100 gram portions. (b) Use a random number generator or random number table to select one 100 gram size portion as a sample for a procedure used to simulate leachate...
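The sampling step this rule describes, choosing one 100 gram portion uniformly at random, can be sketched with a hypothetical helper (not regulatory language):

```python
import random

def select_portion(num_portions, rng=None):
    """Choose one portion index uniformly at random, standing in for
    the rule's 'random number generator or random number table'."""
    rng = rng or random.SystemRandom()
    return rng.randrange(num_portions)

# e.g. a 2 kg subsample divided into twenty 100 g portions
idx = select_portion(20, rng=random.Random(0))
print(0 <= idx < 20)  # True
```

Uniform selection ensures every portion has the same chance of becoming the leachate-simulation sample, which is the statistical property the regulation relies on.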
40 CFR 761.355 - Third level of sample selection.
Code of Federal Regulations, 2011 CFR
2011-07-01
... of sample selection further reduces the size of the subsample to 100 grams which is suitable for the... procedures in § 761.353 of this part into 100 gram portions. (b) Use a random number generator or random number table to select one 100 gram size portion as a sample for a procedure used to simulate leachate...
40 CFR 761.355 - Third level of sample selection.
Code of Federal Regulations, 2013 CFR
2013-07-01
... of sample selection further reduces the size of the subsample to 100 grams which is suitable for the... procedures in § 761.353 of this part into 100 gram portions. (b) Use a random number generator or random number table to select one 100 gram size portion as a sample for a procedure used to simulate leachate...
40 CFR 761.355 - Third level of sample selection.
Code of Federal Regulations, 2010 CFR
2010-07-01
... of sample selection further reduces the size of the subsample to 100 grams which is suitable for the... procedures in § 761.353 of this part into 100 gram portions. (b) Use a random number generator or random number table to select one 100 gram size portion as a sample for a procedure used to simulate leachate...
40 CFR 761.355 - Third level of sample selection.
Code of Federal Regulations, 2014 CFR
2014-07-01
... of sample selection further reduces the size of the subsample to 100 grams which is suitable for the... procedures in § 761.353 of this part into 100 gram portions. (b) Use a random number generator or random number table to select one 100 gram size portion as a sample for a procedure used to simulate leachate...
Karbasi, Ashraf; Aliannejad, Rasoul; Ghanei, Mostafa; Sanamy, Mehran Noory; Alaeddini, Farshid; Harandi, Ali Amini
2015-07-01
There are no data on the prevalence and the association of gastroesophageal reflux disease (GERD) with toxic fume inhalation. Therefore, we aimed to evaluate the frequency distribution of GERD symptoms among individuals with mild respiratory disorder due to a past history of toxic fume exposure to sulfur mustard (SM). In a historical cohort study, subjects were randomly selected from 7000 patients in a database of all those who had a history of previous exposure to a single high dose of SM gas during war. The control group was randomly selected from adjacent neighbors of the patients, with two healthy male subjects chosen per patient. In this study, we used the validated Persian translation of the Mayo Gastroesophageal Reflux Questionnaire to assess the frequency distribution of reflux disease. The relative frequency of GERD symptoms was found to be significantly higher in the inhalation injury patients, with an odds ratio of 8.30 (95% confidence interval [CI]: 4.73-14.55); after adjustment for cigarette smoking, tea consumption, age, body mass index, aspirin, and chronic cough, the odds ratio was 4.41 (95% CI: 1.61-12.07). The most important finding of our study was that the prevalence of major GERD symptoms (heartburn and/or acid regurgitation once or more per week) among individuals with a past history of exposure to SM toxic gas is substantially higher (4.4-fold) than in the normal population.
Karbasi, Ashraf; Aliannejad, Rasoul; Ghanei, Mostafa; Sanamy, Mehran Noory; Alaeddini, Farshid; Harandi, Ali Amini
2015-01-01
Background: There are no data on the prevalence and the association of gastroesophageal reflux disease (GERD) with toxic fume inhalation. Therefore, we aimed to evaluate the frequency distribution of GERD symptoms among individuals with mild respiratory disorder due to a past history of toxic fume exposure to sulfur mustard (SM). Materials and Methods: In a historical cohort study, subjects were randomly selected from 7000 patients in a database of all those who had a history of previous exposure to a single high dose of SM gas during war. The control group was randomly selected from adjacent neighbors of the patients, with two healthy male subjects chosen per patient. In this study, we used the validated Persian translation of the Mayo Gastroesophageal Reflux Questionnaire to assess the frequency distribution of reflux disease. Results: The relative frequency of GERD symptoms was found to be significantly higher in the inhalation injury patients, with an odds ratio of 8.30 (95% confidence interval [CI]: 4.73-14.55); after adjustment for cigarette smoking, tea consumption, age, body mass index, aspirin, and chronic cough, the odds ratio was 4.41 (95% CI: 1.61-12.07). Conclusion: The most important finding of our study was that the prevalence of major GERD symptoms (heartburn and/or acid regurgitation once or more per week) among individuals with a past history of exposure to SM toxic gas is substantially higher (4.4-fold) than in the normal population. PMID:26622251
NASA Astrophysics Data System (ADS)
Yuvchenko, S. A.; Ushakova, E. V.; Pavlova, M. V.; Alonova, M. V.; Zimnyakov, D. A.
2018-04-01
We consider the practical realization of a new optical probing method for random media, defined as reference-free path-length interferometry with intensity-moment analysis. A peculiarity in the statistics of the spectrally selected fluorescence radiation in a laser-pumped dye-doped random medium is discussed. Previously established correlations between the second- and third-order moments of the intensity fluctuations in the random interference patterns, the coherence function of the probe radiation, and the path-difference probability density for the interfering partial waves in the medium are confirmed. The correlations were verified using statistical analysis of the spectrally selected fluorescence radiation emitted by a laser-pumped dye-doped random medium. An aqueous solution of Rhodamine 6G was applied as the doping fluorescent agent for the ensembles of densely packed silica grains, which were pumped by the 532 nm radiation of a solid-state laser. The mean path length spectrum for a random medium was reconstructed.
Gabere, Musa Nur; Hussein, Mohamed Aly; Aziz, Mohammad Azhar
2016-01-01
Purpose There has been considerable interest in using whole-genome expression profiles for the classification of colorectal cancer (CRC). The selection of important features is a crucial step before training a classifier. Methods In this study, we built a model that uses a support vector machine (SVM) to classify cancer and normal samples using Affymetrix exon microarray data obtained from 90 samples of 48 patients diagnosed with CRC. From the 22,011 genes, we selected the 20, 30, 50, 100, 200, 300, and 500 genes most relevant to CRC using the minimum-redundancy maximum-relevance (mRMR) technique. With these gene sets, an SVM model was designed using four kernel types (linear, polynomial, radial basis function [RBF], and sigmoid). Results The best model, which used 30 genes and the RBF kernel, outperformed the other combinations; it had an accuracy of 84% in both ten-fold and leave-one-out cross-validation in discriminating cancer samples from normal samples. With this 30-gene set from mRMR, six classifiers were trained using random forest (RF), Bayes net (BN), multilayer perceptron (MLP), naïve Bayes (NB), reduced error pruning tree (REPT), and SVM. Two hybrids, mRMR + SVM and mRMR + BN, were the best models when tested on other datasets, achieving prediction accuracies of 95.27% and 91.99%, respectively, compared to the other mRMR hybrid models (mRMR + RF, mRMR + NB, mRMR + REPT, and mRMR + MLP). Ingenuity pathway analysis was used to analyze the functions of the 30 genes selected for this model and their potential association with CRC: CDH3, CEACAM7, CLDN1, IL8, IL6R, MMP1, MMP7, and TGFB1 were predicted to be CRC biomarkers. Conclusion This model could be used to further develop a diagnostic tool for predicting CRC based on gene expression data from patient samples. PMID:27330311
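The select-then-classify pipeline above can be sketched with scikit-learn. This is a minimal illustration, not the study's code: univariate mutual-information ranking stands in for mRMR (mRMR additionally penalizes redundancy between the selected genes), and the data, shapes, and informative-feature count are synthetic assumptions.

```python
# Illustrative pipeline: mutual-information ranking (a stand-in for mRMR)
# keeps 30 features, then an RBF-kernel SVM is cross-validated ten-fold.
# All data below are synthetic; shapes mimic the study (90 samples).
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(90, 500))           # 90 samples x 500 expression features
y = rng.integers(0, 2, size=90)          # cancer vs. normal labels
X[y == 1, :10] += 1.5                    # make the first 10 features informative

model = make_pipeline(
    SelectKBest(mutual_info_classif, k=30),  # keep the 30 most relevant features
    SVC(kernel="rbf"),
)
scores = cross_val_score(model, X, y, cv=10)  # ten-fold cross-validation
print(round(float(scores.mean()), 2))
```

Selection happens inside the pipeline, so each cross-validation fold re-selects genes on its training split only, avoiding the selection bias that arises when features are chosen on the full data first.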
Haczeyni, Fahrettin; Wang, Hans; Barn, Vanessa; Mridha, Auvro R.; Yeh, Matthew M.; Haigh, W. Geoffrey; Ioannou, George N.; Choi, Yun‐Jung; McWherter, Charles A.; Teoh, Narcissus C.‐H.
2017-01-01
Lipotoxicity associated with insulin resistance is central to nonalcoholic steatohepatitis (NASH) pathogenesis. To date, only weight loss fully reverses NASH pathology, but mixed peroxisome proliferator–activated receptor‐alpha/delta (PPAR‐α/δ) agonists show some efficacy. Seladelpar (MBX‐8025), a selective PPAR‐δ agonist, improves atherogenic dyslipidemia. We therefore used this agent to test whether selective PPAR‐δ activation can reverse hepatic lipotoxicity and NASH in an obese, dyslipidemic, and diabetic mouse model. From weaning, female Alms1 mutant (foz/foz) mice and wild‐type littermates were fed an atherogenic diet for 16 weeks; groups (n = 8‐12) were then randomized to receive MBX‐8025 (10 mg/kg) or vehicle (1% methylcellulose) by gavage for 8 weeks. Despite minimally altering body weight, MBX‐8025 normalized hyperglycemia, hyperinsulinemia, and glucose disposal in foz/foz mice. Serum alanine aminotransferase ranged 300‐600 U/L in vehicle‐treated foz/foz mice; MBX‐8025 reduced alanine aminotransferase by 50%. In addition, MBX‐8025 normalized serum lipids and hepatic levels of free cholesterol and other lipotoxic lipids that were increased in vehicle‐treated foz/foz versus wild‐type mice. This abolished hepatocyte ballooning and apoptosis, substantially reduced steatosis and liver inflammation, and improved liver fibrosis. In vehicle‐treated foz/foz mice, the mean nonalcoholic fatty liver disease activity score was 6.9, indicating NASH; MBX‐8025 reversed NASH in all foz/foz mice (nonalcoholic fatty liver disease activity score 3.13). Conclusion: Seladelpar improves insulin sensitivity and reverses dyslipidemia and hepatic storage of lipotoxic lipids to improve NASH pathology in atherogenic diet–fed obese diabetic mice. Selective PPAR‐δ agonists act independently of weight reduction, but counter lipotoxicity related to insulin resistance, thereby providing a novel therapy for NASH. (Hepatology Communications 2017;1:663–674) PMID:29404484
Ma, Li; Fan, Suohai
2017-03-14
The random forests algorithm is a classifier with prominent universality, a wide application range, and robustness against overfitting, but it still has drawbacks. To improve the performance of random forests, this paper addresses imbalanced-data processing, feature selection, and parameter optimization. We propose the CURE-SMOTE algorithm for the imbalanced-data classification problem. Experiments on imbalanced UCI data reveal that combining Clustering Using Representatives (CURE) with the original synthetic minority oversampling technique (SMOTE) effectively improves classification results compared with those obtained on the original data using random sampling, Borderline-SMOTE1, safe-level SMOTE, C-SMOTE, and k-means-SMOTE. Additionally, a hybrid RF (random forests) algorithm is proposed for feature selection and parameter optimization, which uses the minimum out-of-bag (OOB) error as its objective function. Simulation results on binary and higher-dimensional data indicate that the proposed hybrid RF algorithms (hybrid genetic-random forests, hybrid particle swarm-random forests, and hybrid fish swarm-random forests) achieve the minimum OOB error and show the best generalization ability. The training set produced by the proposed CURE-SMOTE algorithm is closer to the original data distribution because it contains minimal noise, so this feasible and effective algorithm yields better classification results. Moreover, the hybrid algorithms' F-value, G-mean, AUC, and OOB scores surpass those of the original RF algorithm. Hence, the hybrid algorithm provides a new way to perform feature selection and parameter optimization.
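A minimal sketch of the oversample-then-train idea underlying CURE-SMOTE: plain SMOTE interpolation is shown here (the paper's variant first clusters the minority class with CURE to drop noise and outliers before interpolating), and the OOB error that the hybrid algorithms use as their objective is printed at the end. The data, shapes, and 10:1 imbalance are illustrative assumptions.

```python
# Minimal SMOTE sketch + random forest with OOB error as the quality measure.
# Note: this is plain SMOTE, not the paper's CURE-SMOTE variant.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import NearestNeighbors

def smote(X_min, n_new, k=5, seed=0):
    """Create n_new synthetic minority samples by interpolating each chosen
    sample toward one of its k nearest minority-class neighbors."""
    rng = np.random.default_rng(seed)
    nbrs = NearestNeighbors(n_neighbors=k + 1).fit(X_min)
    idx = nbrs.kneighbors(X_min, return_distance=False)[:, 1:]  # drop self
    base = rng.integers(0, len(X_min), n_new)                   # random base samples
    pick = idx[base, rng.integers(0, k, n_new)]                 # random neighbor each
    gap = rng.random((n_new, 1))                                # interpolation fraction
    return X_min[base] + gap * (X_min[pick] - X_min[base])

rng = np.random.default_rng(1)
X = rng.normal(size=(330, 8))
y = np.array([0] * 300 + [1] * 30)            # 10:1 class imbalance
X[y == 1] += 1.5                              # shift the minority class

X_new = smote(X[y == 1], n_new=270)           # synthesize to balance the classes
X_bal = np.vstack([X, X_new])
y_bal = np.concatenate([y, np.ones(270, dtype=int)])

clf = RandomForestClassifier(oob_score=True, random_state=1).fit(X_bal, y_bal)
print(round(1 - clf.oob_score_, 3))           # OOB error, the paper's tuning objective
```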
[Measurement of Water COD Based on UV-Vis Spectroscopy Technology].
Wang, Xiao-ming; Zhang, Hai-liang; Luo, Wei; Liu, Xue-mei
2016-01-01
Ultraviolet/visible (UV/Vis) spectroscopy was used to measure water COD. A total of 135 water samples were collected from Zhejiang province. Raw spectra under 3 different pretreatment methods (Multiplicative Scatter Correction (MSC), Standard Normal Variate (SNV), and 1st derivatives) were compared to determine the optimal pretreatment method for analysis. Spectral variable selection is an important strategy in spectrum modeling, because it tends toward parsimonious data representation and can lead to multivariate models with better performance. To simplify the calibration models, the preprocessed spectra were then used to select sensitive wavelengths by competitive adaptive reweighted sampling (CARS), Random Frog, and Genetic Algorithm (GA) methods. Different numbers of sensitive wavelengths were selected by the different variable selection methods under SNV preprocessing. Partial least squares (PLS) was used to build models with the full spectra, and an Extreme Learning Machine (ELM) was applied to build models with the selected wavelength variables. Overall, the ELM model performed better than the PLS model, and the ELM model with wavelengths selected by CARS obtained the best results, with a determination coefficient (R2) of 0.82, RMSEP of 14.48, and RPD of 2.34 on the prediction set. The results indicate that UV/Vis spectroscopy with characteristic wavelengths obtained by the CARS variable selection method, combined with ELM calibration, is feasible for the rapid and accurate determination of COD in aquaculture water. Moreover, this study lays the foundation for the further implementation of online analysis of aquaculture water and rapid determination of other water quality parameters.
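The SNV pretreatment the authors selected is simple to state: each spectrum is centered and scaled by its own mean and standard deviation, which removes per-sample baseline and multiplicative scatter effects. A sketch on synthetic spectra (the sample and wavelength counts are assumptions, not the study's data):

```python
# Standard Normal Variate (SNV): normalize each spectrum (row) by its own
# mean and standard deviation. Synthetic spectra for illustration only.
import numpy as np

def snv(spectra):
    mu = spectra.mean(axis=1, keepdims=True)   # per-spectrum mean
    sd = spectra.std(axis=1, keepdims=True)    # per-spectrum std
    return (spectra - mu) / sd

rng = np.random.default_rng(0)
spectra = rng.normal(loc=5.0, scale=2.0, size=(135, 200))  # 135 samples x 200 wavelengths
corrected = snv(spectra)
print(np.allclose(corrected.mean(axis=1), 0))  # prints True: each row is centered
```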
Sullivan, Katherine J; Knowlton, Barbara J; Dobkin, Bruce H
2002-05-01
To investigate the effect of practice paradigms that varied treadmill speed during step training with body weight support in subjects with chronic hemiparesis after stroke. Randomized, repeated-measures pilot study with 1- and 3-month follow-ups. Outpatient locomotor laboratory. Twenty-four individuals with hemiparetic gait deficits whose walking speeds were at least 50% below normal. Participants were stratified by locomotor severity based on initial walking velocity and randomly assigned to treadmill training at slow (0.5mph), fast (2.0mph), or variable (0.5, 1.0, 1.5, 2.0mph) speeds. Participants received 20 minutes of training per session for 12 sessions over 4 weeks. Self-selected overground walking velocity (SSV) was assessed at the onset, middle, and end of training, and 1 and 3 months later. SSV improved in all groups compared with baseline (P<.001). All groups increased SSV in the 1-month follow-up (P<.01) and maintained these gains at the 3-month follow-up (P=.77). The greatest improvement in SSV across training occurred with fast training speeds compared with the slow and variable groups combined (P=.04). Effect size (ES) was large between fast compared with slow (ES=.75) and variable groups (ES=.73). Training at speeds comparable with normal walking velocity was more effective in improving SSV than training at speeds at or below the patient's typical overground walking velocity. Copyright 2002 by the American Congress of Rehabilitation Medicine and the American Academy of Physical Medicine and Rehabilitation
Haghgoo, Roza; Asgary, Saeed; Mashhadi Abbas, Fatemeh; Montazeri Hedeshi, Roshanak
2015-01-01
Nano-hydroxyapatite (NHA) has been used for the regeneration of osseous defects. Calcium-enriched mixture (CEM) cement is also used for various dental treatments. This trial compared the efficacy of NHA and CEM cement for direct pulp capping (DPC) of sound primary teeth. In this randomized clinical trial with a split-mouth design, after obtaining informed consent, 20 sound primary canines scheduled for orthodontic extraction were selected. After mechanical pulp exposure, the exposed site was capped with either NHA or CEM cement and immediately restored with glass-ionomer and resin composite. The teeth were extracted after two months and examined histologically. Hard tissue bridge (HTB) formation, its type and quality, and pulpal inflammation scores were compared between the two experimental groups. The data were analyzed using the Mann-Whitney U and Fisher's exact tests. The level of significance was set at 0.001. All CEM specimens showed an inflammation score of 0 (less than 10%). In the NHA group, however, inflammation scores of 0 (less than 10%), 1 (10%-30%), and 2 (30%-50%) were observed in 2 (20%), 4 (40%), and 4 (40%) specimens, respectively (P<0.001). HTB formed in all CEM specimens but in only 2 NHA specimens (20%; P<0.001). All CEM specimens showed normal pulp; only two cases in the NHA group (20%) demonstrated uninflamed normal pulp. CEM cement was superior to NHA as a DPC agent in terms of HTB formation and pulp inflammation scores, and is a suitable material for the DPC of primary teeth.
Li, Wenhua; Zhao, Xiaoying; Wang, Huijie; Liu, Xin; Zhao, Xinmin; Huang, Mingzhu; Qiu, Lixin; Zhang, Wen; Chen, Zhiyu; Guo, Weijian; Li, Jin; Zhu, Xiaodong
2017-06-06
Maintenance therapy has proved effective in advanced lung and breast cancer after initial chemotherapy. The purpose of this phase II study was to evaluate the efficacy and safety of uracil-tegafur (UFT) maintenance in metastatic gastric cancer patients following first-line fluorouracil-based chemotherapy. Metastatic gastric cancer patients with stable disease or a better response after the completion of first-line chemotherapy were randomized to oral UFT (360 mg/m2 × 2 weeks) every 3 weeks until disease progression or intolerable toxicity, or to observation (OBS). The primary endpoint was progression-free survival (PFS); the secondary endpoints were overall survival (OS) and safety. The trial was closed after the interim analysis of the 58 enrolled (120 planned) patients. Median PFS was not improved in the UFT group compared with the OBS group (3.2 months versus 3.6 months, P = 0.752), nor was median OS (14.2 months for both, P = 0.983). However, subgroup analysis showed that low baseline hemoglobin (< 120 g/L) was associated with poorer PFS under maintenance therapy (P = 0.032), while patients with normal hemoglobin benefited from UFT treatment (P = 0.008). Grade 3 to 4 toxicities in the UFT group were anemia (3.4%), thrombocytopenia (3.4%), and diarrhea (6.9%). This trial did not show superiority of UFT maintenance in non-selected patients responding to fluorouracil-based first-line chemotherapy. A normal hemoglobin level at baseline is a predictive biomarker for the patient subsets favored by maintenance treatment.
Psilocybin impairs high-level but not low-level motion perception.
Carter, Olivia L; Pettigrew, John D; Burr, David C; Alais, David; Hasler, Felix; Vollenweider, Franz X
2004-08-26
The hallucinogenic serotonin(1A&2A) agonist psilocybin is known for its ability to induce illusions of motion in otherwise stationary objects or textured surfaces. This study investigated the effect of psilocybin on local and global motion processing in nine human volunteers. Using a forced choice direction of motion discrimination task we show that psilocybin selectively impairs coherence sensitivity for random dot patterns, likely mediated by high-level global motion detectors, but not contrast sensitivity for drifting gratings, believed to be mediated by low-level detectors. These results are in line with those observed within schizophrenic populations and are discussed in respect to the proposition that psilocybin may provide a model to investigate clinical psychosis and the pharmacological underpinnings of visual perception in normal populations.
Swaney, Kristen F.; Huang, Chuan-Hsiang; Devreotes, Peter N.
2015-01-01
Chemotaxis, the directed migration of cells in chemical gradients, is a vital process in normal physiology and in the pathogenesis of many diseases. Chemotactic cells display motility, directional sensing, and polarity. Motility refers to the random extension of pseudopodia, which may be driven by spontaneous actin waves that propagate through the cytoskeleton. Directional sensing is mediated by a system that detects temporal and spatial stimuli and biases motility toward the gradient. Polarity gives cells morphologically and functionally distinct leading and lagging edges by relocating proteins or their activities selectively to the poles. By exploiting the genetic advantages of Dictyostelium, investigators are working out the complex network of interactions between the proteins that have been implicated in the chemotactic processes of motility, directional sensing, and polarity. PMID:20192768
Tantilipikorn, Pongsakorn; Tunsuriyawong, Prayuth; Jareoncharsri, Perapun; Bedavanija, Anan; Assanasen, Paraya; Bunnag, Chaweewan; Metheetrairut, Choakchai
2012-01-01
To assess the efficacy of dexpanthenol nasal spray compared with normal saline spray in the postoperative treatment of patients with chronic rhinosinusitis (CRS) who underwent endoscopic sinus surgery (ESS). A prospective, randomized controlled study was conducted in CRS patients who underwent ESS. The enrolled patients had never been operated on intranasally. These patients received either dexpanthenol or normal saline nasal spray intranasally four times a day for six weeks post-operatively. Fifty CRS patients were recruited in the present study. Age ranged from 23 to 63 years (mean 43.4 +/- 11.2 years). Forty-four percent of patients were diagnosed as CRS without nasal polyps (NP) (CRSsNP) and 56% as CRS with NP (CRSwNP). Twenty-five cases were randomly assigned to use dexpanthenol nasal spray, whereas the other 25 cases used normal saline nasal spray. The preoperative severity of CRS, determined by the computerized tomography (CT) scan scoring system of Lund-McKay, was 13.9 +/- 6.2 in the dexpanthenol group and 13.6 +/- 6.9 in the normal saline group, which were not statistically different (p > 0.05). The endoscopic scoring was 10.2 +/- 2 in the dexpanthenol group and 10.7 +/- 3 in the normal saline group, which were not statistically different (p > 0.05). The mucociliary transit time improvement (time difference between pre- and post-treatment by nasal spray) was 8.4 +/- 3.3 minutes in the dexpanthenol group and 1.7 +/- 1.2 minutes in the normal saline group, which were statistically different (p < 0.05). The majority of the postoperative symptom scores and all of the endoscopic scores of the dexpanthenol group were not statistically different from those of the normal saline group. However, dexpanthenol nasal spray showed superior efficacy compared with normal saline nasal spray in improving mucociliary clearance and nasal discharge in the postoperative care of CRS patients after ESS.
A case study on dual forms of malnutrition among selected households in District 1, Tondo, Manila.
Angeles-Agdeppa, Imelda; Lana, Ruby D; Barba, Corazon V C
2003-01-01
The co-existence of under- and overnutrition in developing countries may result from a marked shift in the dietary and lifestyle practices of people, especially in urban areas. High-fat, high-calorie diets, inactive entertainment devices, and mechanized labour influence patterns of food demand and physical activity. This study identified factors associated with the occurrence of under/overweight or normal/normal nutritional status of child-mother pairs in one household. The study was conducted in two phases. The first phase was a survey of 376 child-mother pairs. The children, aged 33-83 months, were attending classes in government day care centres. For anthropometric indices, a weight-for-age Z score (WAZ) < or = -2SD was used to classify underweight in children, and a WAZ of +1 to -1SD was used to indicate normal nutritional status, specifically for this study, in order to establish a more homogeneous group. Body mass index (BMI) > or = 25 kg/m2 was used to classify overweight among mothers. Results showed that about 59% of the child-mother pairs were suffering from two different types of malnutrition. Of these, 31 (8.2%) child-mother pairs in the same household were experiencing underweight/overweight: the child was underweight and the mother was overweight. The second phase of the study was an in-depth study of these 31 under/overweight child-mother pairs and 30 randomly selected normal/normal pairs. Pre-tested questionnaires were used to gather socio-economic-demographic data, a 3-day 24-h food recall for dietary intake, and a 24-h activity recall for physical activity.
Results showed that the different factors associated with the existence of underweight child/overweight mother (UC/OM) or normal child/normal mother (NC/NM) in this study were: mother's educational level, mother's occupation, and number of children in the household; energy intake, the preference of meats, sweets and sugars among children or meats and fried foods among mothers; and mother's perception on body size. Physical activity of both mothers and children was higher in the UC/OM than in the NC/NM group. The problem of undernutrition and overnutrition in one household poses enormous challenges. Although this study cannot make an inference to the whole population, the results indicate that there is a need to consider whether public health programs should focus on healthy diet and lifestyle patterns that will lead to optimal health outcomes at both ends of the spectrum of nutritional status.
Conroy, Ronan M; Golden, Jeannette; Jeffares, Isabelle; O'Neill, Desmond; McGee, Hannah
2010-08-01
In this study, we use data from a population survey of persons aged 65 and over living in the Irish Republic to examine the relationship of cognitive impairment, assessed using the Abbreviated Mental Test, with loneliness, boredom-proneness, social relations, and depression. Participants were randomly selected community-dwelling Irish people aged 65+ years. An Abbreviated Mental Test score of 8 or 9 out of 10 was classified as 'low normal', and a score of less than 8 as 'possible cognitive impairment'. We used clustering around latent variables analysis (CLV) to identify families of variables associated with reduced cognitive function. The overall prevalence of possible cognitive impairment was 14.7% (95% CI 12.4-17.3%). Low normal scores had a prevalence of 30.5% (95% CI 27.2-33.7%). CLV analysis identified three groups of predictors: 'Low social support' (widowed, living alone, low social support), 'personal cognitive reserve' (low social activity, no leisure exercise, never having married, loneliness and boredom-proneness), and 'sociodemographic cognitive reserve' (primary education, rural domicile). In multivariate analysis, both cognitive reserve clusters, but not social support, were independently associated with cognitive function. Loneliness and boredom-proneness are associated with reduced cognitive function in older age, and cluster with other factors associated with cognitive reserve. Both may have a common underlying mechanism in the failure to select and maintain attention on particular features of the social environment (loneliness) or the non-social environment (boredom-proneness).
Metabolic Syndrome as a Cardiovascular Disease Risk Factor: Patients Evaluated in Primary Care
Cabré, Joan-Josep; Martín, Francisco; Costa, Bernardo; Piñol, Josep L; Llor, Josep L; Ortega, Yolanda; Basora, Josep; Baldrich, Marta; Solà, Rosa; Daniel, Jordi; Hernández, Josep Ma; Saumell, Judit; Bladé, Jordi; Sagarra, Ramon; Basora, Teresa; Montañés, Dolors; Frigola, Joan L; Donado-Mazarrón, Angel; García-Vidal, Maria Teresa; Sánchez-Oro, Isabel; de Magriñà, Josep M; Urbaneja, Ana; Barrio, Francisco; Vizcaíno, Jesús; Sabaté, Josep M; Pascual, Irene; Revuelta, Vanesa
2008-01-01
To estimate the prevalence of metabolic syndrome (MS) in a population receiving attention in primary care centers (PCC), we selected a random cohort of ostensibly normal subjects from the registers of 5 basic health area (BHA) PCCs. MS was diagnosed by the WHO, NCEP, and IDF criteria. Variables recorded were: socio-demographic data; CVD risk factors including lipids, obesity, diabetes, blood pressure, and smoking habit; and glucose tolerance test outcome. Of the 720 individuals selected (age 60.3 ± 11.5 years), 431 were female, 352 hypertensive, 142 diabetic, 233 pre-diabetic, 285 obese, 209 dyslipidemic, and 106 smokers. CVD risk according to the Framingham and REGICOR calculations was 13.8 ± 10% and 8.8 ± 9.8%, respectively. Using the WHO, NCEP, and IDF criteria, MS was diagnosed in 166, 210, and 252 subjects, respectively, and the relative risk of CVD complications in MS subjects was 2.56. Logistic regression analysis indicated that the MS components (WHO set), the MS components (IDF set), and female gender had increased odds ratios for CVD of 3.48 (95% CI: 2.26–5.37), 2.28 (95% CI: 1.84–4.90), and 2.26 (95% CI: 1.48–3.47), respectively. We conclude that MS and the concomitant CVD risk are high in the ostensibly normal population attending primary care clinics, which would necessarily impinge on resource allocation in primary care. PMID:18647383
A maximally selected test of symmetry about zero.
Laska, Eugene; Meisner, Morris; Wanderling, Joseph
2012-11-20
The problem of testing symmetry about zero has a long and rich history in the statistical literature. We introduce a new test that sequentially discards observations whose absolute value falls below increasing thresholds defined by the data. McNemar's statistic is obtained at each threshold, and the largest is used as the test statistic. We obtain the exact distribution of this maximally selected McNemar statistic and provide tables of critical values and a program for computing p-values. Power is compared with the t-test, the Wilcoxon signed-rank test, and the sign test. The new test, MM, is slightly less powerful than the t-test and Wilcoxon signed-rank test for symmetric normal distributions with nonzero medians, and substantially more powerful than all three tests for asymmetric mixtures of normal random variables with or without zero medians. The motivation for this test derives from the need to appraise the safety profile of new medications. If pre- and post-treatment safety measures are obtained, then under the null hypothesis the variables are exchangeable and the distribution of their difference is symmetric about a zero median. Large pre-post differences are the major concern of a safety assessment; the discarded small observations are not particularly relevant to safety and can reduce power to detect important asymmetry. The new test was applied to data from an on-road driving study performed to determine whether a hypnotic, a drug used to promote sleep, has next-day residual effects. Copyright © 2012 John Wiley & Sons, Ltd.
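The statistic itself is straightforward to compute from paired differences, and a sketch follows: the thresholds are the sorted absolute differences, at each threshold the McNemar chi-square (pos − neg)²/(pos + neg) is computed over the retained signs, and the largest value is kept. Critical values come from the authors' exact-distribution tables, which are not reproduced here, so this sketch returns only the statistic; the data are synthetic.

```python
# Sketch of the maximally selected McNemar statistic: discard observations
# with |d| below each data-driven threshold, compute the McNemar chi-square
# from the remaining signs, and keep the maximum over thresholds.
import numpy as np

def max_selected_mcnemar(d):
    d = np.asarray(d, dtype=float)
    d = d[d != 0]                              # zeros carry no sign information
    best = 0.0
    for c in np.sort(np.abs(d)):               # thresholds defined by the data
        kept = d[np.abs(d) >= c]
        pos, neg = (kept > 0).sum(), (kept < 0).sum()
        if pos + neg > 0:
            best = max(best, (pos - neg) ** 2 / (pos + neg))
    return best

rng = np.random.default_rng(0)
symmetric = rng.normal(0, 1, 200)              # symmetric about zero (null)
skewed = np.concatenate([rng.normal(0, 0.2, 180),
                         rng.normal(3, 1, 20)])  # small symmetric core + positive tail
print(max_selected_mcnemar(symmetric) < max_selected_mcnemar(skewed))
```

The skewed sample illustrates the motivating scenario: the many small pre-post differences would dilute an ordinary sign test, while the thresholding isolates the large one-sided differences.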
Thurison, Tine; Christensen, Ib J; Lund, Ida K; Nielsen, Hans J; Høyer-Hansen, Gunilla
2015-01-15
High levels of circulating forms of the urokinase-type plasminogen activator receptor (uPAR) are significantly associated to poor prognosis in cancer patients. Our aim was to determine biological variations and reference intervals of the uPAR forms in blood, and in addition, to test the clinical relevance of using these as cut-points in colorectal cancer (CRC) prognosis. uPAR forms were measured in citrated and EDTA plasma samples using time-resolved fluorescence immunoassays. Diurnal, intra- and inter-individual variations were assessed in plasma samples from cohorts of healthy individuals. Reference intervals were determined in plasma from healthy individuals randomly selected from a Danish multi-center cross-sectional study. A cohort of CRC patients was selected from the same cross-sectional study. The reference intervals showed a slight increase with age and women had ~20% higher levels. The intra- and inter-individual variations were ~10% and ~20-30%, respectively and the measured levels of the uPAR forms were within the determined 95% reference intervals. No diurnal variation was found. Applying the normal upper limit of the reference intervals as cut-point for dichotomizing CRC patients revealed significantly decreased overall survival of patients with levels above this cut-point of any uPAR form. The reference intervals for the different uPAR forms are valid and the upper normal limits are clinically relevant cut-points for CRC prognosis. Copyright © 2014 Elsevier B.V. All rights reserved.
Ability of Cirrus™ HD-OCT Optic Nerve Head Parameters to Discriminate Normal from Glaucomatous Eyes
Mwanza, Jean-Claude; Oakley, Jonathan D; Budenz, Donald L; Anderson, Douglas R
2010-01-01
Purpose To determine the ability of optic nerve head (ONH) parameters measured with spectral domain Cirrus™ HD-OCT to discriminate between normal and glaucomatous eyes and to compare them to the discriminating ability of peripapillary retinal nerve fiber layer (RNFL) thickness measurements performed with Cirrus™ HD-OCT. Design Evaluation of diagnostic test or technology. Participants Seventy-three subjects with glaucoma and one hundred and forty-six age-matched normal subjects. Methods Peripapillary ONH parameters and RNFL thickness were measured in one randomly selected eye of each participant within a 200×200 pixel A-scan acquired with Cirrus™ HD-OCT centered on the ONH. Main Outcome Measures ONH topographic parameters, peripapillary RNFL thickness, and the area under receiver operating characteristic curves (AUCs). Results For distinguishing normal from glaucomatous eyes, regardless of disease stage, the six best parameters (expressed as AUC) were vertical rim thickness (VRT, 0.963), rim area (RA, 0.962), RNFL thickness at clock-hour 7 (0.957), RNFL thickness of the inferior quadrant (0.953), vertical cup-to-disc ratio (VCDR, 0.951) and average RNFL thickness (0.950). The AUC for distinguishing between normal and eyes with mild glaucoma was greatest for RNFL thickness of clock-hour 7 (0.918), VRT (0.914), RA (0.912), RNFL thickness of inferior quadrant (0.895), average RNFL thickness (0.893) and VCDR (0.890). There were no statistically significant differences between AUCs for the best ONH parameters and RNFL thickness measurements (p > 0.05). Conclusions Cirrus™ HD-OCT ONH parameters are able to discriminate between eyes that are normal from those with glaucoma or even mild glaucoma. There is no difference in the ability of ONH parameters and RNFL thickness measurement, as measured with Cirrus™ OCT, to distinguish between normal and glaucomatous eyes. PMID:20920824
Method for distinguishing normal and transformed cells using G1 kinase inhibitors
Crissman, Harry A.; Gadbois, Donna M.; Tobey, Robert A.; Bradbury, E. Morton
1993-01-01
A G1 phase kinase inhibitor is applied in a low concentration to a population of normal and transformed mammalian cells. The concentration of G1 phase kinase inhibitor is selected to reversibly arrest normal mammalian cells in the G1 cell cycle without arresting growth of transformed cells. The transformed cells may then be selectively identified and/or cloned for research or diagnostic purposes. The transformed cells may also be selectively killed by therapeutic agents that do not affect normal cells in the G1 phase, suggesting that such G1 phase kinase inhibitors may form an effective adjuvant for use with chemotherapeutic agents in cancer therapy for optimizing the killing dose of chemotherapeutic agents while minimizing undesirable side effects on normal cells.
The selection of a national random sample of teachers for experimental curriculum evaluation.
ERIC Educational Resources Information Center
Welch, Wayne W.; and others
Members of the evaluation section of Harvard Project Physics, describing what is said to be the first attempt to select a national random sample of (high school physics) teachers, list the steps as (1) purchase of a list of physics teachers from the National Science Teachers Association (most complete available), (2) selection of 136 names by a…
Unbiased feature selection in learning random forests for high-dimensional data.
Nguyen, Thanh-Tung; Huang, Joshua Zhexue; Nguyen, Thuy Thi
2015-01-01
Random forests (RFs) have been widely used as a powerful classification method. However, with randomization in both bagging samples and feature selection, the trees in the forest tend to select uninformative features for node splitting, which gives RFs poor accuracy on high-dimensional data. RFs are also biased in the feature selection process, favoring multivalued features. To debias feature selection in RFs, we propose a new RF algorithm, called xRF, to select good features when learning RFs for high-dimensional data. We first remove uninformative features using p-value assessment, and a subset of unbiased features is then selected based on several statistical measures. This feature subset is partitioned into two subsets, and a feature-weighting sampling technique is used to sample features from the two subsets for building trees. This approach generates more accurate trees while reducing dimensionality and the amount of data needed for learning RFs. An extensive set of experiments was conducted on 47 high-dimensional real-world datasets, including image datasets. The experimental results show that RFs with the proposed approach outperformed existing random forests in both accuracy and AUC.
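The first step of the approach, discarding uninformative features by p-value before growing the forest, can be sketched as follows. The ANOVA F-test (`f_classif`) serves here as a generic p-value source, and the paper's additional statistical measures and feature-weighted sampling are not reproduced; the data and the 0.01 cutoff are illustrative assumptions.

```python
# Sketch of p-value-based feature filtering before random forest training:
# only 5 of 1000 synthetic features are informative, and the filter should
# retain them while discarding most of the noise features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import f_classif

rng = np.random.default_rng(2)
X = rng.normal(size=(120, 1000))        # high-dimensional: 1000 features
y = rng.integers(0, 2, size=120)
X[y == 1, :5] += 2.0                    # only the first 5 features carry signal

_, pvals = f_classif(X, y)              # univariate p-value per feature
keep = pvals < 0.01                     # drop uninformative features
clf = RandomForestClassifier(random_state=0).fit(X[:, keep], y)
print(int(keep.sum()), bool(keep[:5].all()))
```

Because 995 features are pure noise, a handful of false positives survive the 0.01 cutoff alongside the 5 informative features; the paper's later steps address exactly this residual noise.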
Applications of random forest feature selection for fine-scale genetic population assignment.
Sylvester, Emma V A; Bentzen, Paul; Bradbury, Ian R; Clément, Marie; Pearce, Jon; Horne, John; Beiko, Robert G
2018-02-01
Genetic population assignment used to inform wildlife management and conservation efforts requires panels of highly informative genetic markers and sensitive assignment tests. We explored the utility of machine-learning algorithms (random forest, regularized random forest and guided regularized random forest) compared with FST ranking for selection of single nucleotide polymorphisms (SNP) for fine-scale population assignment. We applied these methods to an unpublished SNP data set for Atlantic salmon (Salmo salar) and a published SNP data set for Alaskan Chinook salmon (Oncorhynchus tshawytscha). In each species, we identified the minimum panel size required to obtain a self-assignment accuracy of at least 90%, using each method to create panels of 50-700 markers. Panels of SNPs identified using random forest-based methods performed up to 7.8 and 11.2 percentage points better than FST-selected panels of similar size for the Atlantic salmon and Chinook salmon data, respectively. Self-assignment accuracy ≥90% was obtained with panels of 670 and 384 SNPs for each data set, respectively, a level of accuracy never reached for these species using FST-selected panels. Our results demonstrate a role for machine-learning approaches in marker selection across large genomic data sets to improve assignment for management and conservation of exploited populations.
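For contrast with the machine-learning rankings, the baseline FST ranking can be sketched per SNP. The allele frequencies below are simulated, and the simple (H_T - H_S)/H_T estimator is an assumption, not necessarily the exact FST estimator used in the study.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-SNP allele frequencies in two populations; the first
# 20 SNPs are made strongly differentiated, the rest are identical.
n_snps = 200
p1 = rng.uniform(0.05, 0.95, n_snps)
p2 = p1.copy()
p2[:20] = np.clip(p1[:20] + 0.4, 0.0, 1.0)

def fst(p1, p2):
    """Per-SNP F_ST = (H_T - H_S) / H_T for two equally sized populations."""
    p_bar = (p1 + p2) / 2
    h_t = 2 * p_bar * (1 - p_bar)                      # total heterozygosity
    h_s = (2 * p1 * (1 - p1) + 2 * p2 * (1 - p2)) / 2  # mean within-pop
    return (h_t - h_s) / h_t

ranked = np.argsort(fst(p1, p2))[::-1]   # highest-F_ST SNPs first
panel = ranked[:20]                      # candidate assignment panel
```

On this toy data the top-ranked panel recovers exactly the 20 differentiated SNPs, since undifferentiated loci have FST of zero.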
High-Tg Polynorbornene-Based Block and Random Copolymers for Butanol Pervaporation Membranes
NASA Astrophysics Data System (ADS)
Register, Richard A.; Kim, Dong-Gyun; Takigawa, Tamami; Kashino, Tomomasa; Burtovyy, Oleksandr; Bell, Andrew
Vinyl addition polymers of substituted norbornene (NB) monomers possess desirably high glass transition temperatures (Tg); however, until very recently, the lack of an applicable living polymerization chemistry has precluded the synthesis of such polymers with controlled architecture, or copolymers with controlled sequence distribution. We have recently synthesized block and random copolymers of NB monomers bearing hydroxyhexafluoroisopropyl and n-butyl substituents (HFANB and BuNB) via living vinyl addition polymerization with Pd-based catalysts. Both series of polymers were cast into the selective skin layers of thin film composite (TFC) membranes, and these organophilic membranes were investigated for the isolation of n-butanol from dilute aqueous solution (model fermentation broth) via pervaporation. The block copolymers show well-defined microphase-separated morphologies, both in bulk and as the selective skin layers on TFC membranes, while the random copolymers are homogeneous. Both block and random vinyl addition copolymers are effective as n-butanol pervaporation membranes, with the block copolymers showing a better flux-selectivity balance. While polyHFANB has much higher permeability and n-butanol selectivity than polyBuNB, incorporating BuNB units into the polymer (in either a block or random sequence) limits the swelling of the polyHFANB and thereby improves the n-butanol pervaporation selectivity.
Pulse oximetry in the evaluation of peripheral vascular disease.
Jawahar, D; Rachamalla, H R; Rafalowski, A; Ilkhani, R; Bharathan, T; Anandarao, N
1997-08-01
The role of pulse oximetry in the evaluation of peripheral vascular disease (PVD) was investigated. In addition, the value of elevating the limb to improve the sensitivity of detection of PVD by the pulse oximeter was also determined. Pulse oximetry readings in the toes were obtained in 40 young, healthy volunteers and in 40 randomly selected patients referred to the vascular investigation laboratory over a period of two months. All 40 healthy volunteers had normal pulse oximetry readings. A normal pulse oximetry reading in the toes was defined as > 95% O2 Sat and +/-2 of the finger pulse oximetry reading. In all 40 patients, pulse oximetry readings were either normal or not detected at all. Since the pulse oximetry reading did not decrease gradually with disease severity or with elevation of the patient's lower extremity, an absent reading was considered an abnormal test result. The frequency of abnormal pulse oximetry readings increased significantly in groups with an abnormal ankle-brachial pressure index (ABPI) and also varied significantly with elevation of the patients' lower limbs. In patients with no PVD detected by Doppler (ABPI > 0.9), pulse oximetry readings were all normal. However, in patients with moderate PVD (ABPI, 0.5-0.9), 84% of the patients' lower limbs had normal pulse oximetry readings and 16% had an abnormal reading at baseline level (flat). An additional 12% of the lower limbs in this group had an abnormal reading on elevation of the limb to 12 inches. In patients with severe PVD (ABPI < 0.5), 54% of the patients' lower limbs had an abnormal reading at baseline and an additional 23% had an abnormal reading on elevation of the limb to 12 inches. In conclusion, pulse oximetry was not a sensitive test for detecting early PVD.
Vázquez, Beatriz Y Salazar; Vázquez, Miguel A Salazar; Jáquez, Manuel Guajardo; Huemoeller, Antonio H Bracho; Intaglietta, Marcos; Cabrales, Pedro
2010-01-01
To determine the relationship between mean arterial blood pressure (MAP) and blood viscosity in type 1 diabetic children and healthy controls, and to investigate whether MAP is independent of blood viscosity in healthy children, and vice versa. Children with type 1 diabetes treated by insulin injection were studied. Controls were healthy children of both sexes. MAP was calculated from systolic and diastolic pressure measurements. Blood viscosity was determined indirectly by measuring blood hemoglobin (Hb) content. The relationship between Hb, hematocrit (Hct) and blood viscosity was determined in a subgroup of controls and diabetics selected at random. 21 type 1 diabetic children (10.6+/-2.5 years) treated with insulin and 25 healthy controls aged 9.6+/-1.7 years were studied. Hb was 13.8+/-0.8 g/dl in normal children vs. 14.3+/-0.9 g/dl in the diabetic group (p<0.05). MAP was 71.4+/-8.2 mmHg in the normal group vs. 82.9+/-7.2 mmHg in the diabetic group (p<0.001). Glucose was 89.3+/-10.6 vs. 202.4+/-87.4 mg/dl, respectively. Diabetics had a positive MAP/Hb correlation (p=0.007), while controls showed a nonsignificant negative correlation (p=0.2). The blood viscosity/Hb relationship was studied in a subgroup of 8 healthy controls and 8 type 1 diabetic children. There was no significant difference in Hb and Hct between groups. Diabetics showed a trend of increasing blood viscosity (+7%, p=0.15). Normal children compensate for the increase in vascular resistance due to increased blood viscosity (increased Hb and Hct) while diabetic children do not, probably due to endothelial dysfunction.
Refractive errors and ocular biometry components in thalassemia major patients.
Heydarian, Samira; Jafari, Reza; Karami, Hosein
2016-04-01
The aim of this study is to determine and compare biometric and refractive characteristics of thalassemia major patients and normal individuals. In this cross-sectional study, 54 thalassemia major patients were selected randomly as the case group, and 54 age- and sex-matched healthy subjects were regarded as the control group. Refractive errors, corneal curvature and ocular components were measured by autokeratorefractometry and A-scan ultrasonography, respectively. Mean spherical equivalent was -0.0093 ± 0.86 D in thalassemia patients and -0.22 ± 1.33 D in the normal group. The prevalence of myopia, hyperopia, and emmetropia among thalassemia patients was 16.7, 19.4, and 63.9 %, respectively, while in the control group, 26.9 % were myopic, 25 % were hyperopic, and 48.1 % were emmetropic. The prevalence of astigmatism in the case group was 22.2 %, which was not significantly different from that in the control group (27.8 %, p = 0.346). Mean axial length in thalassemia patients was 22.89 ± 0.70, significantly lower than that in the normal group (23.37 ± 0.91, p < 0.001). The flattest meridian of the cornea (R1) was significantly steeper in thalassemia patients (7.77 ± 0.24) than in normal individuals (7.85 ± 0.28). Although thalassemia patients had significantly smaller axial length and vitreous chamber depth than the normal group, which could be due to their abnormal physical growth, there was no significant difference in mean spherical equivalent between the two groups. This may be due to their steeper corneal curvature, which overcomes the refractive disadvantage of their shorter axial length.
[Oral health status of women with normal and high-risk pregnancies].
Chaloupka, P; Korečko, V; Turek, J; Merglová, V
2014-01-01
The aim of this study was to compare the oral health status of women with normal pregnancies and those with high-risk pregnancies. A total of 142 women in the third trimester of pregnancy were randomly selected for this study. The pregnant women were divided into two groups: a normal pregnancy group (group F, n = 61) and a high-risk pregnancy group (group R, n = 81). The following variables were recorded for each woman: age, general health status, DMF index, CPITN index, PBI index, amounts of Streptococcus mutans in the saliva and dental treatment needs. The data obtained were analysed statistically. The Mann-Whitney test, Kruskal-Wallis test and chi square test were used, and p-values less than 0.05 were considered statistically significant. The two-sided t-test was used to compare the two cohorts. Women with high-risk pregnancies showed increased values in all measured indices and tests, but there were no statistically significant differences between the two groups in the DMF index, CPITN index and amounts of Streptococcus mutans present in the saliva. Statistically significant differences were detected between the two groups for the PBI index and dental treatment needs. The maximum PBI index value was 2.9 in group F and 3.8 in group R. Significant differences were also found in mean PBI values. Out of the entire study cohort, 94 women (66.2%) required dental treatment, including 52% (n = 32) of the women with normal pregnancies and 77% (n = 62) of the women with high-risk pregnancies. This study found that women with complications during pregnancy had more severe gingivitis and needed more frequent dental treatment than women with normal pregnancies.
A Study of Cognitive Development and Performance in Children with Normal and Defective Hearing.
ERIC Educational Resources Information Center
Templin, Mildred C.
A comparative, longitudinal study was conducted to examine specific performance characteristics of deaf and normal children on selected cognitive tasks. The sample, distributed into 3 age categories, consisted of 72 normal and 60 deaf children. Measures were selected to assess the performance of subjects (1) in different areas of cognition, (2) by…
Alternate methods for FAAT S-curve generation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kaufman, A.M.
The FAAT (Foreign Asset Assessment Team) assessment methodology attempts to derive a probability of effect as a function of incident field strength. The probability of effect is the likelihood that the stress put on a system exceeds its strength. In the FAAT methodology, both the stress and strength are random variables whose statistical properties are estimated by experts. Each random variable has two components of uncertainty: systematic and random. The systematic uncertainty drives the confidence bounds in the FAAT assessment. Its variance can be reduced by improved information. The variance of the random uncertainty is not reducible. The FAAT methodology uses an assessment code called ARES to generate probability of effect curves (S-curves) at various confidence levels. ARES assumes log normal distributions for all random variables. The S-curves themselves are log normal cumulants associated with the random portion of the uncertainty. The placement of the S-curves depends on confidence bounds. The systematic uncertainty in both stress and strength is usually described by a mode and an upper and lower variance. Such a description is not consistent with the log normal assumption of ARES, and an unsatisfactory workaround is used to obtain the required placement of the S-curves at each confidence level. We have looked into this situation and found that significant errors are introduced by this workaround. These errors are at least several dB-W/cm^2 at all confidence levels, and they are especially bad in the estimate of the median. In this paper, we suggest two alternate solutions for the placement of S-curves. To compare these calculation methods, we have tabulated the common combinations of upper and lower variances and generated the relevant S-curve offsets from the mode difference of stress and strength.
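Under the ARES-style assumption that stress and strength are both log normal, the S-curve has a closed form: ln(stress) - ln(strength) is normal, so the probability of effect is a normal CDF in log space. A minimal sketch, with purely illustrative parameter values:

```python
import numpy as np
from scipy.stats import norm

# Lognormal stress-strength model (the ARES-style assumption described
# above); all parameter values here are illustrative only.
mu_strength, sd_strength = np.log(100.0), 0.4   # ln-space strength params
sd_stress = 0.3                                 # random stress spread (ln)

def p_effect(field):
    """P(stress > strength) when both are lognormal and the median
    stress tracks the incident field strength."""
    mu_stress = np.log(field)
    # ln(stress) - ln(strength) is normal, so the S-curve is a normal CDF.
    z = (mu_stress - mu_strength) / np.hypot(sd_stress, sd_strength)
    return norm.cdf(z)

fields = np.array([25.0, 100.0, 400.0])
probs = p_effect(fields)
```

The curve passes through 0.5 where the median stress equals the median strength and rises monotonically with field strength, which is the characteristic S shape.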
Robust portfolio selection based on asymmetric measures of variability of stock returns
NASA Astrophysics Data System (ADS)
Chen, Wei; Tan, Shaohua
2009-10-01
This paper introduces a new uncertainty set for robust optimization: the interval random uncertainty set. Its form makes it suitable for capturing the downside and upside deviations of real-world data. These deviation measures capture distributional asymmetry and lead to better optimization results. We also apply our interval random chance-constrained programming to robust mean-variance portfolio selection under interval random uncertainty sets in the elements of the mean vector and covariance matrix. Numerical experiments with real market data indicate that our approach results in better portfolio performance.
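A minimal sketch of the worst-case robust mean-variance idea, assuming only interval bounds on the mean vector and a long-only portfolio; the numbers, the risk-aversion weight, and the random-search solver are illustrative stand-ins for the paper's chance-constrained formulation, not its actual model.

```python
import numpy as np

rng = np.random.default_rng(2)

# Interval bounds on expected returns (illustrative numbers): the true
# mean of each asset lies somewhere in [mu_lo, mu_hi].
mu_lo = np.array([0.02, 0.04, 0.01])
mu_hi = np.array([0.06, 0.12, 0.03])
cov = np.array([[0.04, 0.01, 0.00],
                [0.01, 0.09, 0.01],
                [0.00, 0.01, 0.02]])
lam = 2.0   # risk-aversion weight

def robust_objective(w):
    """Worst-case mean minus a variance penalty: for a long-only portfolio
    the adversary pushes every mean to its lower interval endpoint."""
    return w @ mu_lo - lam * (w @ cov @ w)

# Crude random search over the simplex (a real implementation would use
# a convex or chance-constrained solver).
best_w, best_val = None, -np.inf
for _ in range(20000):
    w = rng.dirichlet(np.ones(3))
    v = robust_objective(w)
    if v > best_val:
        best_w, best_val = w, v
```

The point of the interval set is visible in the objective: optimizing against `mu_lo` rather than a point estimate hedges the portfolio against downside deviation of the means.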
Effects of ignition location models on the burn patterns of simulated wildfires
Bar-Massada, A.; Syphard, A.D.; Hawbaker, T.J.; Stewart, S.I.; Radeloff, V.C.
2011-01-01
Fire simulation studies that use models such as FARSITE often assume that ignition locations are distributed randomly, because spatially explicit information about actual ignition locations is difficult to obtain. However, many studies show that the spatial distribution of ignition locations, whether human-caused or natural, is non-random. Thus, predictions from fire simulations based on random ignitions may be unrealistic. However, the extent to which the assumption of ignition location affects the predictions of fire simulation models has never been systematically explored. Our goal was to assess the difference in fire simulations that are based on random versus non-random ignition location patterns. We conducted four sets of 6000 FARSITE simulations for the Santa Monica Mountains in California to quantify the influence of random and non-random ignition locations and normal and extreme weather conditions on fire size distributions and spatial patterns of burn probability. Under extreme weather conditions, fires were significantly larger for non-random ignitions compared to random ignitions (mean area of 344.5 ha and 230.1 ha, respectively), but burn probability maps were highly correlated (r = 0.83). Under normal weather, random ignitions produced significantly larger fires than non-random ignitions (17.5 ha and 13.3 ha, respectively), and the spatial correlations between burn probability maps were not high (r = 0.54), though the difference in the average burn probability was small. The results of the study suggest that the location of ignitions used in fire simulation models may substantially influence the spatial predictions of fire spread patterns. However, the spatial bias introduced by using a random ignition location model may be minimized if the fire simulations are conducted under extreme weather conditions when fire spread is greatest. © 2010 Elsevier Ltd.
Power of tests of normality for detecting contaminated normal samples
DOE Office of Scientific and Technical Information (OSTI.GOV)
Thode, H.C. Jr.; Smith, L.A.; Finch, S.J.
1981-01-01
Seventeen tests of normality or goodness of fit were evaluated for power at detecting a contaminated normal sample. This study used 1000 replications each of samples of size 12, 17, 25, 33, 50, and 100 from six different contaminated normal distributions. The kurtosis test was the most powerful over all sample sizes and contaminations. The Hogg and weighted Kolmogorov-Smirnov tests were second. The Kolmogorov-Smirnov, chi-squared, Anderson-Darling, and Cramer-von-Mises tests had very low power at detecting contaminated normal random variables. Tables of the power of the tests and the power curves of certain tests are given.
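The power comparison can be reproduced in miniature by Monte Carlo, assuming a simple contamination model (10% of observations drawn with inflated variance); the sample size, replication count, and the standardization step for the KS test are assumptions, not the study's exact design.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)

def contaminated_sample(n, frac=0.1, scale=5.0):
    """Standard normal sample with a fraction of wide-variance outliers."""
    x = rng.normal(size=n)
    k = int(frac * n)
    x[:k] = rng.normal(scale=scale, size=k)
    return x

def power(pvalue_fn, reps=400, n=50, alpha=0.05):
    """Estimated rejection rate of a test over repeated contaminated samples."""
    hits = sum(pvalue_fn(contaminated_sample(n)) < alpha for _ in range(reps))
    return hits / reps

kurtosis_power = power(lambda x: stats.kurtosistest(x).pvalue)
# Standardizing by the sample mean/sd mimics testing with estimated
# parameters, which makes the plain KS p-value conservative.
ks_power = power(lambda x: stats.kstest((x - x.mean()) / x.std(), "norm").pvalue)
```

Consistent with the abstract's finding, the kurtosis test rejects far more often than the Kolmogorov-Smirnov test against this heavy-tailed contamination.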
Adaptive consensus of scale-free multi-agent system by randomly selecting links
NASA Astrophysics Data System (ADS)
Mou, Jinping; Ge, Huafeng
2016-06-01
This paper investigates an adaptive consensus problem for distributed scale-free multi-agent systems (SFMASs) with randomly selected links, where the degree of each node follows a power-law distribution. The random link selection is based on the assumption that every agent decides to select links among its neighbours according to the received data, with a certain probability. Accordingly, a novel consensus protocol based on the range of the received data is developed, and each node updates its state according to the protocol. Using the iterative method and the Cauchy inequality, the theoretical analysis shows that all errors among agents converge to zero, and several consensus criteria are obtained. A numerical example shows the reliability of the proposed methods.
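A minimal simulation of consensus under randomly selected links, assuming a preferential-attachment graph and a simple neighbour-averaging protocol; both are stand-ins for the paper's SFMAS model and range-based protocol.

```python
import numpy as np

rng = np.random.default_rng(4)

# Small preferential-attachment graph as a stand-in for a scale-free network.
n = 30
edges = [(0, 1), (0, 2), (1, 2)]
deg = np.zeros(n)
deg[:3] = 2
for v in range(3, n):
    probs = deg[:v] / deg[:v].sum()
    for t in rng.choice(v, size=2, replace=False, p=probs):
        edges.append((v, t))
        deg[v] += 1
        deg[t] += 1

adj = np.zeros((n, n), dtype=bool)
for a, b in edges:
    adj[a, b] = adj[b, a] = True

# Consensus with randomly selected links: each step, every agent averages
# toward the mean of a random subset of neighbours (each link active
# with probability 0.5).
x = rng.normal(size=n)
spread0 = x.max() - x.min()
eps = 0.1
for _ in range(400):
    active = adj & (rng.random((n, n)) < 0.5)
    for i in range(n):
        nbrs = np.where(active[i])[0]
        if nbrs.size:
            x[i] += eps * (x[nbrs].mean() - x[i])

spread = x.max() - x.min()
```

Because each update is a convex combination of current states, the spread of agent values is non-increasing, and on a connected graph it contracts toward zero, which is the consensus behaviour the paper proves for its protocol.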
Wright, Marvin N; Dankowski, Theresa; Ziegler, Andreas
2017-04-15
The most popular approach for analyzing survival data is the Cox regression model. The Cox model may, however, be misspecified, and its proportionality assumption may not always be fulfilled. An alternative approach for survival prediction is random forests for survival outcomes. The standard split criterion for random survival forests is the log-rank test statistic, which favors splitting variables with many possible split points. Conditional inference forests avoid this split variable selection bias. However, linear rank statistics are utilized by default in conditional inference forests to select the optimal splitting variable, which cannot detect non-linear effects in the independent variables. An alternative is to use maximally selected rank statistics for the split point selection. As in conditional inference forests, splitting variables are compared on the p-value scale. However, instead of the conditional Monte-Carlo approach used in conditional inference forests, p-value approximations are employed. We describe several p-value approximations and the implementation of the proposed random forest approach. A simulation study demonstrates that unbiased split variable selection is possible. However, there is a trade-off between unbiased split variable selection and runtime. In benchmark studies of prediction performance on simulated and real datasets, the new method performs better than random survival forests if informative dichotomous variables are combined with uninformative variables with more categories, and better than conditional inference forests if non-linear covariate effects are included. In a runtime comparison, the method proves to be computationally faster than both alternatives if a simple p-value approximation is used. Copyright © 2017 John Wiley & Sons, Ltd.
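A maximally selected rank statistic can be sketched for a single numeric covariate. The Wilcoxon-type statistic and the crude Bonferroni p-value bound below are simplifications: the paper uses sharper p-value approximations, and survival data would use a log-rank form rather than plain ranks.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)

# Toy data with a true changepoint at x = 0 (illustrative, not survival data).
n = 200
x = rng.normal(size=n)
y = rng.normal(size=n) + (x > 0) * 1.0

def max_selected_rank(x, y, min_frac=0.1):
    """Maximally selected standardized rank statistic over candidate
    cutpoints, with a crude Bonferroni p-value bound."""
    n = len(x)
    ranks = stats.rankdata(y)
    order = np.argsort(x)
    lo, hi = int(min_frac * n), int((1 - min_frac) * n)
    best_z, best_cut = 0.0, None
    for k in range(lo, hi):
        left = ranks[order[:k]]
        mean = k * (n + 1) / 2             # null mean of the rank sum
        var = k * (n - k) * (n + 1) / 12   # null variance of the rank sum
        z = abs(left.sum() - mean) / np.sqrt(var)
        if z > best_z:
            best_z, best_cut = z, x[order[k - 1]]
    n_cands = hi - lo
    p_bound = min(1.0, n_cands * 2 * stats.norm.sf(best_z))
    return best_z, best_cut, p_bound

z, cut, p = max_selected_rank(x, y)
```

Comparing candidate split variables on this p-value scale, rather than on the raw maximal statistic, is what removes the bias toward variables with many possible split points.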
Effect of Expanding Medicaid for Parents on Children’s Health Insurance Coverage
DeVoe, Jennifer E.; Marino, Miguel; Angier, Heather; O’Malley, Jean P.; Crawford, Courtney; Nelson, Christine; Tillotson, Carrie J.; Bailey, Steffani R.; Gallia, Charles; Gold, Rachel
2016-01-01
IMPORTANCE In the United States, health insurance is not universal. Observational studies show an association between uninsured parents and children. This association persisted even after expansions in child-only public health insurance. Oregon’s randomized Medicaid expansion for adults, known as the Oregon Experiment, created a rare opportunity to assess causality between parent and child coverage. OBJECTIVE To estimate the effect on a child’s health insurance coverage status when (1) a parent randomly gains access to health insurance and (2) a parent obtains coverage. DESIGN, SETTING, AND PARTICIPANTS Oregon Experiment randomized natural experiment assessing the results of Oregon’s 2008 Medicaid expansion. We used generalized estimating equation models to examine the longitudinal effect of a parent randomly selected to apply for Medicaid on their child’s Medicaid or Children’s Health Insurance Program (CHIP) coverage (intent-to-treat analyses). We used per-protocol analyses to understand the impact on children’s coverage when a parent was randomly selected to apply for and obtained Medicaid. Participants included 14 409 children aged 2 to 18 years whose parents participated in the Oregon Experiment. EXPOSURES For intent-to-treat analyses, the date a parent was selected to apply for Medicaid was considered the date the child was exposed to the intervention. In per-protocol analyses, exposure was defined as whether a selected parent obtained Medicaid. MAIN OUTCOMES AND MEASURES Children’s Medicaid or CHIP coverage, assessed monthly and in 6-month intervals relative to their parent’s selection date. RESULTS In the immediate period after selection, the number of covered children whose parents were selected to apply increased significantly from 3830 (61.4%) to 4152 (66.6%), compared with a nonsignificant change from 5049 (61.8%) to 5044 (61.7%) for children whose parents were not selected to apply.
Children whose parents were randomly selected to apply for Medicaid had 18% higher odds of being covered in the first 6 months after parent’s selection compared with children whose parents were not selected (adjusted odds ratio [AOR] = 1.18; 95% CI, 1.10–1.27). The effect remained significant during months 7 to 12 (AOR = 1.11; 95% CI, 1.03–1.19); months 13 to 18 showed a positive but not significant effect (AOR = 1.07; 95% CI, 0.99–1.14). Children whose parents were selected and obtained coverage had more than double the odds of having coverage compared with children whose parents were not selected and did not gain coverage (AOR = 2.37; 95% CI, 2.14–2.64). CONCLUSIONS AND RELEVANCE Children’s odds of having Medicaid or CHIP coverage increased when their parents were randomly selected to apply for Medicaid. Children whose parents were selected and subsequently obtained coverage benefited most. This study demonstrates a causal link between parents’ access to Medicaid coverage and their children’s coverage. PMID:25561041
NASA Astrophysics Data System (ADS)
Wang, Yu; Fan, Jie; Xu, Ye; Sun, Wei; Chen, Dong
2017-06-01
Effective application of carbon capture, utilization and storage (CCUS) systems could help to alleviate the influence of climate change by reducing carbon dioxide (CO2) emissions. The research objective of this study is to develop an equilibrium chance-constrained programming model with bi-random variables (ECCP model) for supporting the CCUS management system under random circumstances. The major advantage of the ECCP model is that it tackles random variables as bi-random variables with a normal distribution, where the mean values themselves follow a normal distribution. This avoids irrational assumptions and oversimplifications in the process of parameter design and enriches the theory of stochastic optimization. The ECCP model is solved by an equilibrium chance-constrained programming algorithm, which makes it convenient for decision makers to rank the solution set using the natural order of real numbers. The ECCP model is applied to a CCUS management problem, and the solutions could be useful in helping managers to design and generate rational CO2-allocation patterns under complexities and uncertainties.
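Sampling a bi-random variable (a normal whose mean is itself normally distributed) and checking a chance constraint by Monte Carlo can be sketched as follows; the parameters, allocation, and cap are illustrative, not the paper's CCUS data. Note that such a bi-random variable is marginally normal with the two variances added.

```python
import numpy as np

rng = np.random.default_rng(6)

# Bi-random coefficient: normal with a normally distributed mean
# (illustrative parameters, not the paper's CCUS data).
mu_of_mean, sd_of_mean = 1.0, 0.1   # distribution of the mean
sd_inner = 0.2                      # spread around each realized mean

def sample_birandom(size):
    means = rng.normal(mu_of_mean, sd_of_mean, size)
    return rng.normal(means, sd_inner)

# Chance-constraint check by Monte Carlo: estimate P(total <= cap) for a
# candidate allocation and compare it with a 0.9 reliability level.
allocation = np.array([10.0, 20.0, 15.0])
cap = 50.0
draws = sample_birandom((100_000, 3))
totals = draws @ allocation
satisfaction = (totals <= cap).mean()
feasible = satisfaction >= 0.9
```

With these illustrative numbers the constraint holds with probability around 0.8, so the candidate allocation fails the 0.9 reliability level and would be rejected or tightened by the optimization.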
Vitamin-mineral intake and intelligence: a macrolevel analysis of randomized controlled trials.
Schoenthaler, S J; Bier, I D
1999-04-01
Two independent groups suspected that poor diets in school children might impair intelligence. Because dietary changes produce psychological effects, both groups conducted randomized trials in which children were challenged with placebo or vitamin-mineral tablets. Both reported significantly greater gains in intelligence among the actives. The findings were important because of the apparent inadequacy of diet they revealed, and the magnitude of the potential for increased intelligence. However, 5 of 11 replications were not significant, leaving the issue in doubt. To determine if school children who receive low-dose vitamin-mineral tablets produce significantly higher IQ scores than children who receive placebo, a macrolevel analysis of the 13 known randomized, double-blind trials was undertaken. A total of 15 public schools in Arizona, California, Missouri, Oklahoma, Belgium, England, Scotland, and Wales participated, with 1477 school children, aged 6 to 17 years, and 276 young adult males, aged 18 to 25 years, in 2 American correctional facilities. All studies used 1 of 3 standardized tests of nonverbal intelligence: the Wechsler Intelligence Scale for Children-Revised, the Wechsler Adult Intelligence Scale, or the Calvert Non-verbal test. The actives in each study performed better, on average, than placebo in nonverbal IQ, regardless of formula, location, age, race, gender, or research team composition. The probability of 13 randomly selected experimental groups always performing better than 13 randomly selected independent control groups is one-half to the 13th power (p = 0.000122). The mean difference across all studies is 3.2 IQ points. Furthermore, the standard deviation of the variable "IQ change" was also consistently larger in each active group when compared to its controls. This confirms that a few children in each study, presumably the poorly nourished minority, were producing large differences, rather than a 3.2 point gain in all active children.
There are important health risks when school children's dietary habits depart substantially from government guidelines; poor dietary habits may lead to impaired intelligence. Low-dose vitamin-mineral supplementation may restore the cognitive abilities of these children by raising low blood nutrient concentrations. However, there is also evidence that supplementation has no measurable effect on the intelligence of well-nourished children with normal blood nutrient concentrations.
A Systematic Review and Meta-Analysis of Baseline Ohip-Edent Scores.
Duale, J M J; Patel, Y A; Wu, J; Hyde, T P
2018-03-01
OHIP-EDENT is widely used in the literature to assess Oral-Health-Related Quality of Life (OHRQoL) for edentulous patients. However, the normal variance and mean of baseline OHIP scores have not been reported. Knowledge of the normal variation and mean of baseline OHIP-EDENT scores would facilitate critical appraisal of studies. An established figure for baseline OHIP-EDENT, obtained from a meta-analysis, would simplify comparisons of studies and quantify variations in the initial OHRQoL of trial participants. The aim of this study is to quantify a normal baseline value for pre-operative OHIP-EDENT scores by a systematic review and meta-analysis of the available literature. A systematic literature review was carried out. 83 papers were identified that included OHIP-EDENT values. After screening and eligibility assessment, 7 papers were selected and included in the meta-analysis. A meta-analysis of the 7 papers by a random-effect model yielded a mean baseline OHIP-EDENT score of 28.63 with a 95% confidence interval from 21.93 to 35.34. A pre-operative baseline OHIP-EDENT has been established by meta-analysis of published papers. This will facilitate the comparison of the initial OHRQoL of one study population to that found elsewhere in the published literature. Copyright © 2018 Dennis Barber Ltd.
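The random-effects pooling step can be sketched with the DerSimonian-Laird estimator, a standard choice for this kind of meta-analysis; the study means and within-study variances below are invented for illustration, not the seven papers' actual OHIP-EDENT data.

```python
import numpy as np

# DerSimonian-Laird random-effects pooling sketch. Study means and
# within-study variances are hypothetical placeholders.
means = np.array([24.0, 31.5, 27.2, 35.0, 22.8, 30.1, 29.4])
variances = np.array([4.0, 6.5, 3.2, 8.0, 5.1, 4.4, 6.0])

w_fixed = 1.0 / variances
mu_fixed = (w_fixed * means).sum() / w_fixed.sum()

# Between-study heterogeneity tau^2 via the DL moment estimator.
q = (w_fixed * (means - mu_fixed) ** 2).sum()
df = len(means) - 1
c = w_fixed.sum() - (w_fixed ** 2).sum() / w_fixed.sum()
tau2 = max(0.0, (q - df) / c)

# Random-effects weights and pooled mean with a 95% CI.
w_rand = 1.0 / (variances + tau2)
mu_rand = (w_rand * means).sum() / w_rand.sum()
se = np.sqrt(1.0 / w_rand.sum())
ci = (mu_rand - 1.96 * se, mu_rand + 1.96 * se)
```

The tau^2 term widens each study's effective variance, which is why a random-effects CI (like the 21.93 to 35.34 interval reported above) is broader than a fixed-effect one when studies disagree.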
Zawbaa, Hossam M; Szlȩk, Jakub; Grosan, Crina; Jachowicz, Renata; Mendyk, Aleksander
2016-01-01
Poly-lactide-co-glycolide (PLGA) is a copolymer of lactic and glycolic acid. Drug release from PLGA microspheres depends not only on polymer properties but also on drug type, particle size, morphology of microspheres, release conditions, etc. Selecting a subset of relevant properties for PLGA is a challenging machine learning task as there are over three hundred features to consider. In this work, we formulate the selection of critical attributes for PLGA as a multiobjective optimization problem with the aim of minimizing the error of predicting the dissolution profile while reducing the number of attributes selected. Four bio-inspired optimization algorithms: antlion optimization, binary version of antlion optimization, grey wolf optimization, and social spider optimization are used to select the optimal feature set for predicting the dissolution profile of PLGA. Besides these, LASSO algorithm is also used for comparisons. Selection of crucial variables is performed under the assumption that both predictability and model simplicity are of equal importance to the final result. During the feature selection process, a set of input variables is employed to find minimum generalization error across different predictive models and their settings/architectures. The methodology is evaluated using predictive modeling for which various tools are chosen, such as Cubist, random forests, artificial neural networks (monotonic MLP, deep learning MLP), multivariate adaptive regression splines, classification and regression tree, and hybrid systems of fuzzy logic and evolutionary computations (fugeR). The experimental results are compared with the results reported by Szlȩk. We obtain a normalized root mean square error (NRMSE) of 15.97% versus 15.4%, and the number of selected input features is smaller, nine versus eleven.
Hadders-Algra, Mijna
2001-01-01
The Neuronal Group Selection Theory (NGST) could offer new insights into the mechanisms underlying motor disorders such as cerebral palsy and developmental coordination disorder. According to NGST, normal motor development is characterized by two phases of variability. Variation is not random but determined by criteria set by genetic information. Development starts with the phase of primary variability, during which variation in motor behavior is not geared to external conditions. At function-specific ages secondary variability starts, during which motor performance can be adapted to specific situations. In both forms of variability, selection on the basis of afferent information plays a significant role. From the NGST point of view, children with pre- or perinatally acquired brain damage, such as children with cerebral palsy and some of the children with developmental coordination disorder, suffer from stereotyped motor behavior, produced by a limited repertoire of primary (sub)cortical neuronal networks. These children also have problems in selecting the most efficient neuronal activity, due to deficits in the processing of sensory information. Therefore, NGST suggests that intervention in these children at an early age should aim at an enlargement of the primary neuronal networks. With increasing age, the emphasis of intervention could shift to the provision of ample opportunities for active practice, which might form a compensation for the impaired selection. PMID:11530887
Humphreys, Keith; Blodgett, Janet C.; Wagner, Todd H.
2014-01-01
Background Observational studies of Alcoholics Anonymous' (AA) effectiveness are vulnerable to self-selection bias because individuals choose whether or not to attend AA. The present study therefore employed an innovative statistical technique to derive a selection bias-free estimate of AA's impact. Methods Six datasets from 5 National Institutes of Health-funded randomized trials (one with two independent parallel arms) of AA facilitation interventions were analyzed using instrumental variables models. Alcohol dependent individuals in one of the datasets (n = 774) were analyzed separately from the rest of the sample (n = 1582 individuals pooled from 5 datasets) because of heterogeneity in sample parameters. Randomization itself was used as the instrumental variable. Results Randomization was a good instrument in both samples, effectively predicting increased AA attendance that could not be attributed to self-selection. In five of the six data sets, which were pooled for analysis, increased AA attendance that was attributable to randomization (i.e., free of self-selection bias) was effective at increasing days of abstinence at 3-month (B = .38, p = .001) and 15-month (B = 0.42, p = .04) follow-up. However, in the remaining dataset, in which pre-existing AA attendance was much higher, further increases in AA involvement caused by the randomly assigned facilitation intervention did not affect drinking outcome. Conclusions For most individuals seeking help for alcohol problems, increasing AA attendance leads to short- and long-term decreases in alcohol consumption that cannot be attributed to self-selection. However, for populations with high pre-existing AA involvement, further increases in AA attendance may have little impact. PMID:25421504
Humphreys, Keith; Blodgett, Janet C; Wagner, Todd H
2014-11-01
Observational studies of Alcoholics Anonymous' (AA) effectiveness are vulnerable to self-selection bias because individuals choose whether or not to attend AA. The present study, therefore, employed an innovative statistical technique to derive a selection bias-free estimate of AA's impact. Six data sets from 5 National Institutes of Health-funded randomized trials (1 with 2 independent parallel arms) of AA facilitation interventions were analyzed using instrumental variables models. Alcohol-dependent individuals in one of the data sets (n = 774) were analyzed separately from the rest of the sample (n = 1,582 individuals pooled from 5 data sets) because of heterogeneity in sample parameters. Randomization itself was used as the instrumental variable. Randomization was a good instrument in both samples, effectively predicting increased AA attendance that could not be attributed to self-selection. In 5 of the 6 data sets, which were pooled for analysis, increased AA attendance that was attributable to randomization (i.e., free of self-selection bias) was effective at increasing days of abstinence at 3-month (B = 0.38, p = 0.001) and 15-month (B = 0.42, p = 0.04) follow-up. However, in the remaining data set, in which preexisting AA attendance was much higher, further increases in AA involvement caused by the randomly assigned facilitation intervention did not affect drinking outcome. For most individuals seeking help for alcohol problems, increasing AA attendance leads to short- and long-term decreases in alcohol consumption that cannot be attributed to self-selection. However, for populations with high preexisting AA involvement, further increases in AA attendance may have little impact. Copyright © 2014 by the Research Society on Alcoholism.
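Randomization-as-instrument can be illustrated with simulated data (all numbers below are invented for illustration, not the trials' data). With a binary instrument, the instrumental-variables estimate reduces to the Wald ratio of the two intent-to-treat effects, which recovers the causal effect of attendance even though a naive regression is confounded by unobserved motivation:

```python
import random

random.seed(1)
n = 20000
beta = 0.4                                   # true causal effect of attendance
records = []
for _ in range(n):
    z = 1 if random.random() < 0.5 else 0    # randomized facilitation arm
    motivation = random.gauss(0, 1)          # unobserved confounder
    attend = 5 + 3 * z + 2 * motivation + random.gauss(0, 1)
    days = 10 + beta * attend + 4 * motivation + random.gauss(0, 1)
    records.append((z, attend, days))

def mean(v):
    return sum(v) / len(v)

# Naive OLS slope of abstinence days on attendance: confounded by motivation.
a = [r[1] for r in records]
d = [r[2] for r in records]
ma, md = mean(a), mean(d)
ols = (sum((x - ma) * (y - md) for x, y in zip(a, d))
       / sum((x - ma) ** 2 for x in a))

# Wald/IV estimator: ratio of intent-to-treat effects of the instrument.
d1 = mean([r[2] for r in records if r[0] == 1])
d0 = mean([r[2] for r in records if r[0] == 0])
a1 = mean([r[1] for r in records if r[0] == 1])
a0 = mean([r[1] for r in records if r[0] == 0])
iv = (d1 - d0) / (a1 - a0)
print(f"naive OLS {ols:.2f}, IV {iv:.2f}")
```

The naive slope is inflated well above the true effect, while the IV estimate lands near 0.4, mirroring why the study leaned on randomization to strip out self-selection.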
Associated factors with attention deficit hyperactivity disorder (ADHD): a case-control study.
Malek, Ayyoub; Amiri, Shahrokh; Sadegfard, Majid; Abdi, Salman; Amini, Saeedeh
2012-09-01
The current study investigated factors associated with attention deficit hyperactivity disorder (ADHD) in children without co-morbidities. In this case-control study, 164 children with ADHD who attended the Child and Adolescent Psychiatric Clinics of Tabriz University of Medical Sciences, Iran were compared with 166 normal children selected by a random-cluster method from primary and secondary schools. Clinical interviews based on DSM-IV-TR using the K-SADS were used to diagnose ADHD cases and to select the control group. Participants were matched for age. We used chi-square tests and binary logistic regression for data analysis. Factors associated with ADHD included gender and maternal employment: boys (OR 0.54; 95% confidence interval: 0.34 - 0.86) and children with working mothers (OR 0.16; 95% confidence interval: 0.06 - 0.86) suffered more from ADHD. Birth season, family size, birth order, and parental kinship were not among the risk factors for ADHD. The results of the study show that maternal employment and male gender are among the risk factors associated with ADHD.
Thong, Kwai-Lin; Tang, Swee-Seong; Tan, Wen-Siang; Devi, Shamala
2007-01-01
Polyclonal sera from typhoid patients and a monoclonal antibody, mAb ATVi, which recognizes the capsular polysaccharide Vi antigen (ViCPS), were used to select for peptides that mimic the ViCPS by using a phage-displayed random 12-mer peptide library. Two major common mimotopes selected from the library carried the amino acid sequences TSHHDSHGLHRV and ENHSPVNIAHKL. Enzyme-linked immunosorbent assays (ELISAs) showed that these peptides carry mimotopes to ViCPS. Phage clones that contained the 12-mer peptides were also tested against pooled/individual typhoid patients' sera and found to have 3 to 5 times higher binding compared to normal sera. By using Phage-ELISA assays, the derived synthetic peptides, TSHHDSHGLHRV and ENHSPVNIAHKL, were tested against a monoclonal antibody mAb ATVi and over 2-fold difference in binding was found between these peptides and a control unrelated peptide, CTLTTKLYC. Inhibition of the mAb's binding to ViCPS indicated that the synthetic peptides successfully competed with the capsular polysaccharide for antibody binding.
Wang, Yan-ling; Ge, Peng-fei; Ma, Qi-yi; Cao, Yong-qin; Li, Hong-bo; Zheng, Jing; Shi, Wen-quan; Sun, Wei
2012-02-01
To investigate the relationship between iodine nutrition and growth/development in infants at the key period of brain development. All women from pregnancy to the end of lactation, and weaning infants up to 3 years of age, in the Linxia Hui Autonomous Prefecture (Linxia Prefecture) were given iodized oil in 2006 - 2010. In 2006 and 2010, one town was randomly selected from each of the five directions (east, south, west, north, central) of each county in Linxia Prefecture. One village was chosen from every town, and 20 infants, 20 pregnant women and 20 lactating women were randomly selected in each town. Urinary iodine (UI) of the infants, pregnant women and lactating women was determined. The DQ value, height and weight of a subset of infants were measured. According to the above sampling plan, the UI of pregnant women, lactating women and infants was monitored every year after the intervention. Infants aged 0 - 3 years were chosen as controls before the intervention. UI was investigated in 1056 infants before and 2989 infants after the iodized oil intervention. After the intervention, the median UI of infants increased from 107.3 µg/L to 139.6 - 190.7 µg/L, and the percentage of UI levels lower than 50 µg/L decreased from 23.9% to 6.7% - 12.9%. The DQ value increased from 92.8 to 104.3, the percentage of normal height and above increased from 65.0% to 82.1%, and the percentage of normal weight and above increased from 59.3% to 81.4%. The DQ value, height and weight showed statistically significant differences compared to the pre-intervention outcomes (P < 0.05). The median UI of pregnant and lactating women increased from 89.3 µg/L to 118.2 - 187.8 µg/L and from 84.9 µg/L to 135.2 - 187.5 µg/L, respectively. Infants' growth and development were retarded when iodine deficiency existed at the key period of brain development. Intake of oral iodized oil at the key period of brain development could provide adequate nutrition and thus improve growth and development in infants.
Gastroenteritis Therapies in Developed Countries: Systematic Review and Meta-Analysis
Freedman, Stephen B.; Pasichnyk, Dion; Black, Karen J. L.; Fitzpatrick, Eleanor; Gouin, Serge; Milne, Andrea; Hartling, Lisa
2015-01-01
Context Gastroenteritis remains a leading cause of childhood morbidity. Objective Because prior reviews have focused on isolated symptoms and studies conducted in developing countries, this study focused on interventions commonly considered for use in developed countries. Intervention-specific, patient-centered outcomes were selected. Data Sources MEDLINE, EMBASE, the Cochrane Database of Systematic Reviews, trial registries, grey literature, and scientific meetings. Study Selection Randomized controlled trials, conducted in developed countries, of children aged <18 years with gastroenteritis, performed in emergency department or outpatient settings, which evaluated oral rehydration therapy (ORT), antiemetics, probiotics, or intravenous fluid administration rate. Data Extraction The study was conducted in accordance with the Cochrane Handbook for Systematic Reviews of Interventions and the PRISMA guidelines. Data were independently extracted by multiple investigators. Analyses employed random effects models. Results 31 trials (4,444 patients) were included. ORT: Compared with intravenous rehydration, hospitalization (RR 0.80, 95%CI 0.24, 2.71) and emergency department return visits (RR 0.86, 95%CI 0.39, 1.89) were similar. Antiemetics: Fewer children administered an antiemetic required intravenous rehydration (RR 0.40, 95%CI 0.26, 0.60). While the data could not be meta-analyzed, three studies reported that ondansetron administration does increase the frequency of diarrhea. Probiotics: No studies reported on the primary outcome; three studies evaluated hospitalization within 7 days (RR 0.87, 95%CI 0.25, 2.98). Rehydration: No difference in length of stay was identified for rapid vs. standard intravenous or nasogastric rehydration. A single study found that 5% dextrose in normal saline reduced hospitalizations compared with normal saline alone (RR 0.70, 95% CI 0.53, 0.92).
Conclusions There is a paucity of patient-centered outcome evidence to support many interventions. Since ORT is a low-cost, non-invasive intervention, it should continue to be used. Routine probiotic use cannot be endorsed at this time in outpatient children with gastroenteritis. Despite some evidence that ondansetron administration increases diarrhea frequency, emergency department use leads to reductions in intravenous rehydration and hospitalization. No benefits were associated with ondansetron use following emergency department discharge. PMID:26075617
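The random-effects pooling used in such meta-analyses can be sketched with the DerSimonian-Laird estimator, which inflates each study's variance by an estimated between-study variance before inverse-variance weighting. The per-study effect sizes below are made up for illustration and are not the review's data:

```python
import math

# Illustrative per-study log risk ratios and within-study variances
# (invented, not the review's data); negative values favor the intervention.
yi = [-0.11, -1.2, 0.1, -1.6]
vi = [0.08, 0.10, 0.12, 0.15]

w = [1 / v for v in vi]                                  # fixed-effect weights
ybar = sum(wi * y for wi, y in zip(w, yi)) / sum(w)
Q = sum(wi * (y - ybar) ** 2 for wi, y in zip(w, yi))    # heterogeneity statistic
df = len(yi) - 1
C = sum(w) - sum(wi * wi for wi in w) / sum(w)
tau2 = max(0.0, (Q - df) / C)       # DerSimonian-Laird between-study variance

wr = [1 / (v + tau2) for v in vi]   # random-effects weights
pooled = sum(wi * y for wi, y in zip(wr, yi)) / sum(wr)
se = math.sqrt(1 / sum(wr))
lo, hi = pooled - 1.96 * se, pooled + 1.96 * se
print(f"RR {math.exp(pooled):.2f} (95% CI {math.exp(lo):.2f}, {math.exp(hi):.2f})")
```

When Q exceeds its degrees of freedom, tau2 is positive and the weights flatten toward equality, widening the confidence interval relative to a fixed-effect analysis.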
Spielman, Stephanie J; Wilke, Claus O
2016-11-01
The mutation-selection model of coding sequence evolution has received renewed attention for its use in estimating site-specific amino acid propensities and selection coefficient distributions. Two computationally tractable mutation-selection inference frameworks have been introduced: One framework employs a fixed-effects, highly parameterized maximum likelihood approach, whereas the other employs a random-effects Bayesian Dirichlet Process approach. While both implementations follow the same model, they appear to make distinct predictions about the distribution of selection coefficients. The fixed-effects framework estimates a large proportion of highly deleterious substitutions, whereas the random-effects framework estimates that all substitutions are either nearly neutral or weakly deleterious. It remains unknown, however, how accurately each method infers evolutionary constraints at individual sites. Indeed, selection coefficient distributions pool all site-specific inferences, thereby obscuring a precise assessment of site-specific estimates. Therefore, in this study, we use a simulation-based strategy to determine how accurately each approach recapitulates the selective constraint at individual sites. We find that the fixed-effects approach, despite its extensive parameterization, consistently and accurately estimates site-specific evolutionary constraint. By contrast, the random-effects Bayesian approach systematically underestimates the strength of natural selection, particularly for slowly evolving sites. We also find that, despite the strong differences between their inferred selection coefficient distributions, the fixed- and random-effects approaches yield surprisingly similar inferences of site-specific selective constraint. We conclude that the fixed-effects mutation-selection framework provides the more reliable software platform for model application and future development. © The Author 2016. 
Dornase alpha compared to hypertonic saline for lung atelectasis in critically ill patients.
Youness, Houssein A; Mathews, Kathryn; Elya, Marwan K; Kinasewitz, Gary T; Keddissi, Jean I
2012-12-01
Despite the lack of randomized trials, nebulized Dornase alpha and hypertonic saline are used empirically to treat atelectasis in mechanically ventilated patients. Our objective was to determine the clinical and radiological efficacy of these medications as an adjunct to standard therapy in critically ill patients. Mechanically ventilated patients with new onset (<48 h) lobar or multilobar atelectasis were randomized into three groups: nebulized Dornase alpha, hypertonic (7%) saline, or normal saline every 12 h. All patients received standard therapy, including chest percussion therapy, kinetic therapy, and bronchodilators. The primary endpoint was the change in the daily chest X-ray atelectasis score. A total of 33 patients met the inclusion criteria and were randomized equally into the three groups. Patients in the Dornase alpha group showed a reduction of 2.18±1.33 points in the CXR score from baseline to day 7, whereas patients in the normal saline group had a reduction of 1.00±1.79 points, and patients in the hypertonic saline group showed a score reduction of 1.09±1.51 points. Pairwise comparison of the mean change of the CXR score showed no statistical difference between hypertonic saline, normal saline, and Dornase alpha. Airway pressures, as well as oxygenation (expressed as PaO2/FIO2) and time to extubation, were also similar among the groups. During the study period the rate of extubation was 54% (6/11), 45% (5/11), and 63% (7/11) in the normal saline, hypertonic saline, and Dornase alpha groups, respectively (p=0.09). No treatment related complications were observed. There was no significant improvement in the chest X-ray atelectasis score in mechanically ventilated patients with new onset atelectasis who were nebulized with Dornase alpha twice a day. Hypertonic saline was no more effective than normal saline in this population. Larger randomized control trials are needed to confirm our results.
Spectral statistics of the acoustic stadium
NASA Astrophysics Data System (ADS)
Méndez-Sánchez, R. A.; Báez, G.; Leyvraz, F.; Seligman, T. H.
2014-01-01
We calculate the normal-mode frequencies and wave amplitudes of the two-dimensional acoustical stadium. We also obtain the statistical properties of the acoustical spectrum and show that they agree with the results given by random matrix theory. Some normal-mode wave amplitudes showing scarring are presented.
Added sugars and risk factors for obesity, diabetes and heart disease.
Rippe, J M; Angelopoulos, T J
2016-03-01
The effects of added sugars on various chronic conditions are highly controversial. Some investigators have argued that added sugars increase the risk of obesity, diabetes and cardiovascular disease. However, few randomized controlled trials are available to support these assertions. The literature is further complicated by animal studies, as well as studies which compare pure fructose to pure glucose (neither of which is consumed to any appreciable degree in the human diet) and studies where large doses of added sugars beyond normal levels of human consumption have been administered. Various scientific and public health organizations have offered disparate recommendations for upper limits of added sugar. In this article, we will review recent randomized controlled trials and prospective cohort studies. We conclude that the added sugars normally found in the human diet (for example, sucrose, high-fructose corn syrup and isoglucose), when consumed within the normal range of human consumption or substituted isoenergetically for other carbohydrates, do not appear to cause a unique risk of obesity, diabetes or cardiovascular disease.
NASA Astrophysics Data System (ADS)
Monthus, Cécile
2018-06-01
For random interacting Majorana models where the only symmetries are the parity P and the time-reversal symmetry T, various approaches are compared for constructing exact even and odd normalized zero modes Γ in finite size, i.e. Hermitian operators that commute with the Hamiltonian, square to the identity, and commute (even) or anticommute (odd) with the parity P. Even normalized zero modes are well known under the name of ‘pseudo-spins’ in the field of many-body localization, or more precisely ‘local integrals of motion’ (LIOMs) in the many-body-localized phase, where the pseudo-spins happen to be spatially localized. Odd normalized zero modes are popular under the name of ‘Majorana zero modes’ or ‘strong zero modes’. Explicit examples for small systems are described in detail. Applications to real-space renormalization procedures based on blocks containing an odd number of Majorana fermions are also discussed.
Evolving artificial metalloenzymes via random mutagenesis
NASA Astrophysics Data System (ADS)
Yang, Hao; Swartz, Alan M.; Park, Hyun June; Srivastava, Poonam; Ellis-Guardiola, Ken; Upp, David M.; Lee, Gihoon; Belsare, Ketaki; Gu, Yifan; Zhang, Chen; Moellering, Raymond E.; Lewis, Jared C.
2018-03-01
Random mutagenesis has the potential to optimize the efficiency and selectivity of protein catalysts without requiring detailed knowledge of protein structure; however, introducing synthetic metal cofactors complicates the expression and screening of enzyme libraries, and activity arising from free cofactor must be eliminated. Here we report an efficient platform to create and screen libraries of artificial metalloenzymes (ArMs) via random mutagenesis, which we use to evolve highly selective dirhodium cyclopropanases. Error-prone PCR and combinatorial codon mutagenesis enabled multiplexed analysis of random mutations, including at sites distal to the putative ArM active site that are difficult to identify using targeted mutagenesis approaches. Variants that exhibited significantly improved selectivity for each of the cyclopropane product enantiomers were identified, and higher activity than previously reported ArM cyclopropanases obtained via targeted mutagenesis was also observed. This improved selectivity carried over to other dirhodium-catalysed transformations, including N-H, S-H and Si-H insertion, demonstrating that ArMs evolved for one reaction can serve as starting points to evolve catalysts for others.
Elloumi, Fathi; Hu, Zhiyuan; Li, Yan; Parker, Joel S; Gulley, Margaret L; Amos, Keith D; Troester, Melissa A
2011-06-30
Genomic tests are available to predict breast cancer recurrence and to guide clinical decision making. These predictors provide recurrence risk scores along with a measure of uncertainty, usually a confidence interval. The confidence interval conveys random error and not systematic bias. Standard tumor sampling methods make this problematic, as it is common to have a substantial proportion (typically 30-50%) of a tumor sample comprised of histologically benign tissue. This "normal" tissue could represent a source of non-random error or systematic bias in genomic classification. To assess the sensitivity of genomic classification to systematic error from normal contamination, we collected 55 tumor samples and paired tumor-adjacent normal tissue. Using genomic signatures from the tumor and paired normal, we evaluated how increasing normal contamination altered recurrence risk scores for various genomic predictors. Simulations of normal tissue contamination caused misclassification of tumors in all predictors evaluated, but different breast cancer predictors showed different types of vulnerability to normal tissue bias. While two predictors had an unpredictable direction of bias (either higher or lower risk of relapse resulted from normal contamination), one signature showed a predictable direction of normal tissue effects. Due to this predictable direction of effect, this signature (the PAM50) was adjusted for normal tissue contamination and these corrections improved sensitivity and negative predictive value. For all three assays, quality control standards and/or appropriate bias adjustment strategies can be used to improve assay reliability. Normal tissue sampled concurrently with tumor is an important source of bias in breast genomic predictors. All genomic predictors show some sensitivity to normal tissue contamination, and ideal strategies for mitigating this bias vary depending upon the particular genes and computational methods used in the predictor.
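Why a linear risk score shifts in a predictable direction under contamination can be seen in a toy simulation: mixing a fraction `frac` of normal tissue into a tumor profile moves a weighted-sum score by exactly `frac` times a fixed tumor-normal difference. All profiles and weights below are invented and are not any real signature's genes or coefficients:

```python
import random

random.seed(4)
G = 50   # genes in a hypothetical recurrence signature (all values invented)

tumor = [random.gauss(1.0, 0.5) for _ in range(G)]    # tumor expression profile
normal = [random.gauss(0.0, 0.5) for _ in range(G)]   # adjacent-normal profile
weights = [random.gauss(0.0, 1.0) for _ in range(G)]  # toy risk-score weights

def risk_score(profile):
    # A linear signature score: weighted sum over the gene profile.
    return sum(w * x for w, x in zip(weights, profile))

pure = risk_score(tumor)
shifts = {}
for frac in (0.0, 0.3, 0.5):                          # normal-tissue fraction
    mixed = [(1 - frac) * t + frac * nm for t, nm in zip(tumor, normal)]
    shifts[frac] = risk_score(mixed) - pure
print(shifts)   # shift grows linearly with the contamination fraction
```

Because the shift is linear in the contamination fraction, a predictor of this form can in principle be corrected once the fraction is estimated, which is the kind of adjustment the abstract describes for the signature with a predictable bias direction.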
CDC6600 subroutine for normal random variables. [RVNORM (RMU, SIG)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Amos, D.E.
1977-04-01
A value y for a uniform variable on (0,1) is generated, and a table of 96 percentage points for the (0,1) normal distribution is interpolated for a value of the normal variable x(0,1) on 0.02 ≤ y ≤ 0.98. For the tails, the inverse normal is computed by a rational Chebyshev approximation in an appropriate variable. Then X = xσ + μ gives the X(μ, σ) variable.
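The routine's structure (uniform deviate, inverse normal CDF, then scale and shift) can be sketched in Python. The rational approximation below is the classic Abramowitz & Stegun 26.2.23 formula, used here as a stand-in for the routine's table-interpolation-plus-Chebyshev scheme; it is accurate to about 4.5e-4 in absolute error:

```python
import math
import random

def inv_norm_cdf(p):
    """Approximate standard normal quantile via the rational approximation
    of Abramowitz & Stegun 26.2.23 (absolute error below about 4.5e-4)."""
    if not 0.0 < p < 1.0:
        raise ValueError("p must lie in (0, 1)")
    if p < 0.5:
        return -inv_norm_cdf(1.0 - p)          # symmetry of the normal law
    t = math.sqrt(-2.0 * math.log(1.0 - p))    # tail transformation
    num = 2.515517 + 0.802853 * t + 0.010328 * t * t
    den = 1.0 + 1.432788 * t + 0.189269 * t * t + 0.001308 * t ** 3
    return t - num / den

def rand_normal(mu, sigma):
    x = inv_norm_cdf(random.random())          # standard normal deviate x(0,1)
    return x * sigma + mu                      # X = x*sigma + mu

random.seed(42)
sample = [rand_normal(5.0, 2.0) for _ in range(10000)]
print(sum(sample) / len(sample))               # sample mean close to mu = 5
```

As in the routine, accuracy is limited by the uniform generator and the quantile approximation; swapping in a higher-order inverse-CDF approximation tightens the tails without changing the structure.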
Detection of flow limitation in obstructive sleep apnea with an artificial neural network.
Norman, Robert G; Rapoport, David M; Ayappa, Indu
2007-09-01
During sleep, the development of a plateau on the inspiratory airflow/time contour provides a non-invasive indicator of airway collapsibility. Humans easily recognize this abnormal contour, and this study replicated that recognition with an artificial neural network (ANN) using normalized breath shapes. Five 10 min segments were selected from each of 18 sleep records (respiratory airflow measured with a nasal cannula) with varying degrees of sleep disordered breathing. Each breath was visually scored for shape, and the breaths were split randomly into a training and a test set. Equally spaced, peak-amplitude-normalized flow values (representing breath shape) formed the only input to a back-propagation ANN. Following training, breath-by-breath agreement of the ANN with the manual classification was tabulated for the training and test sets separately. Agreement of the ANN was 89% in the training set and 70.6% in the test set. When the categories of 'probably normal' and 'normal', and 'probably flow limited' and 'flow limited', were combined, the agreement increased to 92.7% and 89.4%, respectively, similar to the intra- and inter-rater agreements obtained by visual classification of these breaths. On a naive dataset, the agreement of the ANN with visual classification was 57.7% overall and 82.4% when the categories were collapsed. A neural network based only on the shape of inspiratory airflow succeeded in classifying breaths as to the presence/absence of flow limitation. This approach could provide a standardized, reproducible and automated means of detecting elevated upper airway resistance.
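A minimal stand-in for this classifier: a single logistic unit (rather than the study's back-propagation network) trained on synthetic peak-normalized contours, where flow limitation is mimicked by clipping a half-sine to a plateau. All shapes, noise levels and hyperparameters are invented for illustration:

```python
import math
import random

random.seed(2)
K = 16   # equally spaced samples of each peak-normalized inspiratory contour

def breath(flow_limited):
    # Synthetic contours: a rounded half-sine versus one clipped to a plateau.
    shape = []
    for i in range(K):
        v = math.sin(math.pi * (i + 0.5) / K)
        if flow_limited:
            v = min(v, 0.7)                 # plateau mimicking flow limitation
        shape.append(v + random.gauss(0, 0.03))
    peak = max(shape)
    return [v / peak for v in shape]        # peak-amplitude normalization

data = [(breath(f), 1.0 if f else 0.0) for f in [True, False] * 200]
random.shuffle(data)
train, test = data[:300], data[300:]

def predict(w, b, x):
    return 1.0 / (1.0 + math.exp(-(sum(wi * xi for wi, xi in zip(w, x)) + b)))

w, b, lr = [0.0] * K, 0.0, 0.5
for _ in range(200):                        # batch gradient descent, log loss
    gw, gb = [0.0] * K, 0.0
    for x, t in train:
        e = predict(w, b, x) - t
        for i in range(K):
            gw[i] += e * x[i]
        gb += e
    w = [wi - lr * gi / len(train) for wi, gi in zip(w, gw)]
    b -= lr * gb / len(train)

accuracy = sum((predict(w, b, x) > 0.5) == (t > 0.5) for x, t in test) / len(test)
print(accuracy)
```

Even this one-node model separates the two synthetic shape classes almost perfectly, which illustrates why normalized shape alone can carry the flow-limitation signal; real breaths are of course far noisier than this toy data.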
Fernandez-Mendoza, Julio; Calhoun, Susan L.; Bixler, Edward O.; Karataraki, Maria; Liao, Duanping; Vela-Bueno, Antonio; Ramos-Platon, María Jose; Sauder, Katherine A.; Basta, Maria; Vgontzas, Alexandros N.
2011-01-01
Objective Sleep misperception is considered by some investigators a common characteristic of chronic insomnia, whereas others propose it as a separate diagnosis. The frequency and the determinants of sleep misperception in general population samples are unknown. In this study we examined the role of objective sleep duration, a novel marker in phenotyping insomnia, and psychological profiles on sleep misperception in a large, general population sample. Methods 142 insomniacs and 724 controls selected from a general random sample of 1,741 individuals (age ≥ 20 years) underwent a polysomnographic evaluation, completed the Minnesota Multiphasic Personality Inventory-2, and were split into two groups based on their objective sleep duration: “normal sleep duration” (≥ 6 hours) and “short sleep duration” (< 6 hours). Results The discrepancy between subjective and objective sleep duration was determined by two independent factors. Short sleepers reported more sleep than they objectively had, and insomniacs reported less sleep than controls with similar objective sleep duration. The additive effect of these two factors resulted in underestimation only in insomniacs with normal sleep duration. Insomniacs with normal sleep duration showed an MMPI-2 profile of high depression and anxiety, and low ego strength, whereas insomniacs with short sleep duration showed a profile of a medical disorder. Conclusions Underestimation of sleep duration is prevalent among insomniacs with objective normal sleep duration. Anxious-ruminative traits and poor resources for coping with stress appear to mediate the underestimation of sleep duration. These data further support the validity and clinical utility of objective sleep measures in phenotyping insomnia. PMID:20978224
Gavaravarapu, SubbaRao M; Rao, K Mallikarjuna; Nagalla, Balakrishna; Avula, Laxmaiah
2015-01-01
To assess the differences in risk perceptions of overweight/obese and normal-weight adolescents about obesity and associated risk factors. Qualitative study using focus group discussions (FGDs). Five randomly selected schools from the South Indian city of Hyderabad. Seventy-nine adolescents (ages 11-14 years) participated in 10 FGDs (5 each with overweight/obese and normal-weight groups). The main outcome measure was whether obesity-related risk perceptions differ with actual weight status or not. FGDs were recorded, transcribed, and manually coded for thematic analysis. Results were presented according to 6 themes. At each stage of coding and analysis, reports were read independently by 2-3 researchers, and the inter-coder reliability was high (the ratio of the number of agreements to the sum of agreements plus disagreements was over 90%). Adolescents across the groups had limited understanding of nutrition during adolescence as well as of the causes and consequences of obesity. The optimistic bias that they were less vulnerable compared to others to the risks of obesity was evident from perceptions of overweight groups. While overweight adolescents argued that obesity was hereditary, the normal-weight participants perceived "faulty food habits" and laziness as the reasons. Adolescents across the groups considered fruits and vegetables as healthy foods. There were clear differences in perceptions of adolescents of different weight status. Employing the risk perception analysis framework, this study identified the following adolescent traits: responsive, avoidance, and indifference, which may be useful for developing nutrition communication programs. Copyright © 2015 Society for Nutrition Education and Behavior. Published by Elsevier Inc. All rights reserved.
The neurological safety of epidural parecoxib in rats.
Kim, Yang Hyun; Lee, Pyung Bok; Park, Jeongmi; Lim, Young Jin; Kim, Yong Chul; Lee, Sang Chul; Ahn, Wonsik
2011-12-01
Epidural injection of cyclooxygenase-2 inhibitors has been suggested as a useful therapeutic modality in pain management in animal studies and clinical settings. Direct epidural administration of parecoxib, a highly selective cyclooxygenase-2 inhibitor, may have advantages over its parenteral administration regarding required dose, side effects, and efficacy. However, no animal studies have been performed to investigate the possible neurotoxicity of epidurally injected parecoxib. Therefore, the present study was performed to assess the neurotoxicity of epidurally injected parecoxib in rats. Rats (n=45) were randomly divided into three groups: normal saline group (group N, n=15), ethanol group (group E, n=15), and parecoxib group (group P, n=15). Parecoxib (6 mg in 0.3 mL), or the same volume of ethanol or normal saline, was injected into the epidural space. Neurologic assessment was performed 3, 7 and 21 days after the injection by pinch-toe testing. Histologic changes were evaluated for vacuolation of the dorsal funiculus, chromatolytic changes of the motor neurons, neuritis, and meningeal inflammation. All rats in groups N and P showed normal response to pinch-toe testing and had a normal gait at each observation point. Histological examination showed no evidence suggestive of neuronal body or axonal lesions, gliosis, or myelin sheet damage in group N or P at any time. However, all rats in group E showed sensory-motor dysfunction, behavioral change, or histopathological abnormalities. No neurotoxicity to the spinal cord or abnormalities in sensorimotor function or behavior was noted in rats that received epidural parecoxib. Copyright © 2011 Elsevier Inc. All rights reserved.
Malo, Aurelio F; Martinez-Pastor, Felipe; Alaks, Glen; Dubach, Jean; Lacy, Robert C
2010-10-01
Mice (Peromyscus leucopus noveboracensis) from a captive-breeding program were used to test the effects of three genetic breeding protocols (minimizing mean kinship [MK], random breeding, and selection for docility [DOC]) and inbreeding levels on sperm traits and fertility. Earlier, in generation 8, one DOC replicate went extinct because of poor reproductive success. By generation 10, spermatozoa from DOC mice had more acrosome and midpiece abnormalities, which were shown to be strong determinants of fertility, as well as lower sperm production and resistance to osmotic stress. In addition, determinants of fertility, including male and female components, were assessed in a comprehensive manner. Results showed that the probability (P) of siring litters is determined by sperm number, sperm viability, and midpiece and acrosome abnormalities; that the P of siring one versus two litters is determined by tail abnormalities; and that the total number of offspring is influenced by female size and proportion of normal sperm, showing the relative importance of different sperm traits on fertility. On average, males with 20% normal sperm sired one pup per litter, and males with 70% normal sperm sired eight pups per litter. Interestingly, the proportion of normal sperm was affected by docility but not by relatively low inbreeding. However, inbreeding depression in sperm motility was detected. In the MK group, inbreeding depression not only affected sperm motility but also fertility: An increase in the coefficient of inbreeding (f) of 0.03 reduced sperm motility by 30% and translated into an offspring reduction of three pups in second litters. A genetic load of 48 fecundity equivalents was calculated.
Reference Conditions for Streams in the Grand Prairie Natural Division of Illinois
NASA Astrophysics Data System (ADS)
Sangunett, B.; Dewalt, R.
2005-05-01
As part of the Critical Trends Assessment Program (CTAP) of the Illinois Department of Natural Resources (IDNR), 12 potential reference quality stream sites in the Grand Prairie Natural Division were evaluated in May 2004. This agriculturally dominated region, located in east central Illinois, is the most highly modified in the state. The quality of these sites was assessed using a modified Hilsenhoff Biotic Index, species richness of the Ephemeroptera, Plecoptera, and Trichoptera (EPT) insect orders, and a 12-parameter Habitat Quality Index (HQI). Illinois EPA high quality fish stations, Illinois Natural History Survey insect collection data, and best professional knowledge were used to choose which streams to evaluate. For analysis, reference quality streams were compared to 37 randomly selected meandering streams and 26 randomly selected channelized streams which were assessed by CTAP between 1997 and 2001. The results showed that the reference streams exceeded both the taxa richness and the habitat quality of randomly selected streams in the region. Both random meandering sites and reference quality sites increased in taxa richness and HQI as stream width increased. Randomly selected channelized streams had about the same taxa richness and HQI regardless of width.
Key Aspects of Nucleic Acid Library Design for in Vitro Selection
Vorobyeva, Maria A.; Davydova, Anna S.; Vorobjev, Pavel E.; Pyshnyi, Dmitrii V.; Venyaminova, Alya G.
2018-01-01
Nucleic acid aptamers capable of selectively recognizing their target molecules have nowadays been established as powerful and tunable tools for biospecific applications, be it therapeutics, drug delivery systems or biosensors. It is now generally acknowledged that in vitro selection enables one to generate aptamers to almost any target of interest. However, the success of selection and the affinity of the resulting aptamers depend to a large extent on the nature and design of an initial random nucleic acid library. In this review, we summarize and discuss the most important features of the design of nucleic acid libraries for in vitro selection such as the nature of the library (DNA, RNA or modified nucleotides), the length of a randomized region and the presence of fixed sequences. We also compare and contrast different randomization strategies and consider computer methods of library design and some other aspects. PMID:29401748
Methods and analysis of realizing randomized grouping.
Hu, Liang-Ping; Bao, Xiao-Lei; Wang, Qi
2011-07-01
Randomization is one of the four basic principles of research design. The meaning of randomization includes two aspects: one is to randomly select samples from the population, which is known as random sampling; the other is to randomly group all the samples, which is called randomized grouping. Randomized grouping can be subdivided into three categories: complete, stratified, and dynamic randomization. This article mainly introduces the steps of complete randomization, the definition of dynamic randomization, and the realization of random sampling and grouping with SAS software.
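The article implements these procedures in SAS; as a language-neutral illustration, the steps of complete randomization (shuffle the full subject list once, then deal subjects into groups in order) can be sketched in Python. The function name and the equal-size dealing convention are illustrative, not taken from the article.

```python
import random

def complete_randomization(subject_ids, n_groups, seed=None):
    """Complete randomization: assign all subjects to n_groups of
    (near-)equal size by shuffling once and dealing in round-robin order."""
    rng = random.Random(seed)
    shuffled = list(subject_ids)
    rng.shuffle(shuffled)
    groups = [[] for _ in range(n_groups)]
    for i, subject in enumerate(shuffled):
        groups[i % n_groups].append(subject)
    return groups

# Example: assign 12 subjects to 3 groups of 4
groups = complete_randomization(range(1, 13), 3, seed=42)
```

Stratified randomization would apply the same dealing step separately within each stratum.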
Tsai, Richard Tzong-Han; Sung, Cheng-Lung; Dai, Hong-Jie; Hung, Hsieh-Chuan; Sung, Ting-Yi; Hsu, Wen-Lian
2006-12-18
Biomedical named entity recognition (Bio-NER) is a challenging problem because, in general, biomedical named entities of the same category (e.g., proteins and genes) do not follow one standard nomenclature. They have many irregularities and sometimes appear in ambiguous contexts. In recent years, machine-learning (ML) approaches have become increasingly common and now represent the cutting edge of Bio-NER technology. This paper addresses three problems faced by ML-based Bio-NER systems. First, most ML approaches usually employ singleton features that comprise one linguistic property (e.g., the current word is capitalized) and at least one class tag (e.g., B-protein, the beginning of a protein name). However, such features may be insufficient in cases where multiple properties must be considered. Adding conjunction features that contain multiple properties can be beneficial, but it would be infeasible to include all conjunction features in an NER model since memory resources are limited and some features are ineffective. To resolve the problem, we use a sequential forward search algorithm to select an effective set of features. Second, variations in the numerical parts of biomedical terms (e.g., "2" in the biomedical term IL2) cause data sparseness and generate many redundant features. In this case, we apply numerical normalization, which solves the problem by replacing all numerals in a term with one representative numeral to help classify named entities. Third, the assignment of NE tags does not depend solely on the target word's closest neighbors, but may depend on words outside the context window (e.g., a context window of five consists of the current word plus two preceding and two subsequent words). We use global patterns generated by the Smith-Waterman local alignment algorithm to identify such structures and modify the results of our ML-based tagger. This is called pattern-based post-processing. 
To develop our ML-based Bio-NER system, we employ conditional random fields, which have performed effectively in several well-known tasks, as our underlying ML model. Adding selected conjunction features, applying numerical normalization, and employing pattern-based post-processing improve the F-scores by 1.67%, 1.04%, and 0.57%, respectively. The combined increase of 3.28% yields a total score of 72.98%, which is better than the baseline system that only uses singleton features. We demonstrate the benefits of using the sequential forward search algorithm to select effective conjunction feature groups. In addition, we show that numerical normalization can effectively reduce the number of redundant and unseen features. Furthermore, the Smith-Waterman local alignment algorithm can help ML-based Bio-NER deal with difficult cases that need longer context windows.
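The numerical-normalization step described above is simple to sketch: every run of digits in a term is replaced by one representative numeral, so that surface variants collapse into one feature. The choice of "1" as the representative numeral here is an assumption for illustration; the abstract does not state which numeral the authors use.

```python
import re

def normalize_numerals(token, representative="1"):
    """Replace every run of digits in a token with a single representative
    numeral, so variants such as IL2 and IL21 map to the same feature.
    The representative numeral "1" is an illustrative assumption."""
    return re.sub(r"\d+", representative, token)
```

For example, `normalize_numerals("IL2")` and `normalize_numerals("IL21")` both yield `"IL1"`, collapsing the sparse numeric variants into one feature string.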
Meta-analysis of two studies in the presence of heterogeneity with applications in rare diseases.
Friede, Tim; Röver, Christian; Wandel, Simon; Neuenschwander, Beat
2017-07-01
Random-effects meta-analyses are used to combine evidence of treatment effects from multiple studies. Since treatment effects may vary across trials due to differences in study characteristics, heterogeneity in treatment effects between studies must be accounted for to achieve valid inference. The standard model for random-effects meta-analysis assumes approximately normal effect estimates and a normal random-effects model. However, standard methods based on this model ignore the uncertainty in estimating the between-trial heterogeneity. In the special setting of only two studies and in the presence of heterogeneity, we investigate here alternatives such as the Hartung-Knapp-Sidik-Jonkman method (HKSJ), the modified Knapp-Hartung method (mKH, a variation of the HKSJ method) and Bayesian random-effects meta-analyses with priors covering plausible heterogeneity values; R code to reproduce the examples is presented in an appendix. The properties of these methods are assessed by applying them to five examples from various rare diseases and by a simulation study. Whereas the standard method based on normal quantiles has poor coverage, the HKSJ and mKH generally lead to very long, and therefore inconclusive, confidence intervals. The Bayesian intervals on the whole show satisfying properties and offer a reasonable compromise between these two extremes. © 2016 The Authors. Biometrical Journal published by WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
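For the two-study setting, the HKSJ interval can be sketched as follows. This is a minimal illustration, not the paper's R code: it uses a DerSimonian-Laird estimate of the between-trial heterogeneity τ² (one common choice) and hardcodes the 97.5% quantile of the t distribution with k-1 = 1 degree of freedom for a 95% interval.

```python
import math

def hksj_two_study(y, v, t_quantile=12.706):
    """Hartung-Knapp-Sidik-Jonkman random-effects interval for k=2 studies.

    y: effect estimates; v: within-study variances (length 2 each).
    t_quantile = 12.706 is the 97.5% point of t with k-1 = 1 df.
    """
    k = len(y)
    w0 = [1.0 / vi for vi in v]
    ybar = sum(wi * yi for wi, yi in zip(w0, y)) / sum(w0)
    # DerSimonian-Laird between-study variance estimate
    q_stat = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w0, y))
    c = sum(w0) - sum(wi ** 2 for wi in w0) / sum(w0)
    tau2 = max(0.0, (q_stat - (k - 1)) / c)
    # Random-effects pooling with the HKSJ variance correction
    w = [1.0 / (vi + tau2) for vi in v]
    mu = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    q = sum(wi * (yi - mu) ** 2 for wi, yi in zip(w, y)) / (k - 1)
    se = math.sqrt(q / sum(w))
    return mu, (mu - t_quantile * se, mu + t_quantile * se)
```

With only two studies, the t quantile with one degree of freedom makes the interval very wide, which is exactly the "very long, and therefore inconclusive" behavior the abstract reports.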
Gilles de la Tourette's syndrome in special education schools: a United Kingdom study.
Eapen, V; Robertson, M M; Zeitlin, H; Kurlan, R
1997-06-01
In order to determine the prevalence of tic disorders in children with severe school problems requiring a residential facility and comparison groups of children in regular day schools, we performed direct clinical examinations for the presence of tics and Gilles de la Tourette's syndrome (GTS) in 20 children from a residential school for emotional and behavioral difficulties (EBD); 25 children from a residential school for learning disabilities; 17 "problem" children (PC) (identified by teachers as having academic or behaviour problems) and 19 normal children (NC) selected at random (using random numbers) from a regular school. Of the EBD students, 65% were judged to have definite tics as compared with 24% of students with learning difficulties (P < 0.05), 6% of PC (P < 0.003) and none of the NC (P < 0.0006) group. Most of the affected students met diagnostic criteria for GTS. Our findings suggest that GTS is commonly associated with the need for special education and that this association is particularly robust for children with severe school problems. In these children, the presence of tics may be an indicator of an underlying dysfunction of neurological development.
NASA Astrophysics Data System (ADS)
Taylor, Natalie M.; van Saarloos, Paul P.; Eikelboom, Robert H.
2000-06-01
This study aimed to gauge the effect of the patient's eye movement during Photo Refractive Keratectomy (PRK) on post-operative vision. A computer simulation of both the PRK procedure and the visual outcome has been performed. The PRK simulation incorporated the pattern of movement of the laser beam to perform a given correction, the beam characteristics, an initial corneal profile, and an eye movement scenario, and generated the corrected corneal profile. The regrowth of the epithelium was simulated by selecting the smoothing filter which, when applied to a corrected cornea with no patient eye movement, produced similar ray tracing results to the original corneal model. Ray tracing of several objects, such as letters of various contrasts and sizes, was performed to assess the quality of the post-operative vision. Eye movement scenarios included no eye movement, constant decentration, and normally distributed random eye movement of varying magnitudes. Random eye movement of even small amounts, such as 50 microns, reduces the contrast sensitivity of the image. Constant decentration decenters the projected image on the retina, and in extreme cases can lead to astigmatism. Eye movements of the magnitude expected during laser refractive surgery have minimal effect on the final visual outcome.
NASA Astrophysics Data System (ADS)
Saputro, D. R. S.; Amalia, F.; Widyaningsih, P.; Affan, R. C.
2018-05-01
The Bayesian method can be used to estimate the parameters of a multivariate multiple regression model. It involves two distributions, the prior and the posterior; the posterior distribution depends on the choice of prior. Jeffreys' prior is a kind of non-informative prior distribution, used when no information about the parameters is available. The non-informative Jeffreys' prior is combined with the sample information to yield the posterior distribution, which is then used to estimate the parameters. The purpose of this research is to estimate the parameters of a multivariate regression model using the Bayesian method with the non-informative Jeffreys' prior. Based on the results and discussion, the estimates of β and Σ are obtained as the expected values of the corresponding marginal posterior distributions, which are multivariate normal and inverse Wishart, respectively. However, calculating these expected values involves integrals whose values are difficult to determine. Therefore, an approach is needed that generates random samples according to the posterior distribution of each parameter, using the Markov chain Monte Carlo (MCMC) Gibbs sampling algorithm.
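As a toy illustration of the Gibbs sampling idea (not the paper's full multivariate-regression sampler with normal and inverse-Wishart marginals), one can alternately draw from the full conditionals of a standard bivariate normal with correlation rho; each conditional is univariate normal, and the chain's samples recover the joint distribution. All parameter choices below are illustrative.

```python
import random

def gibbs_bivariate_normal(rho, n_samples, burn_in=500, seed=0):
    """Toy Gibbs sampler targeting a standard bivariate normal with
    correlation rho. The full conditionals are univariate normal:
    x | y ~ N(rho*y, 1 - rho^2), and symmetrically for y | x."""
    rng = random.Random(seed)
    sd = (1 - rho ** 2) ** 0.5
    x, y = 0.0, 0.0
    samples = []
    for i in range(n_samples + burn_in):
        x = rng.gauss(rho * y, sd)   # draw x from its full conditional
        y = rng.gauss(rho * x, sd)   # draw y from its full conditional
        if i >= burn_in:
            samples.append((x, y))
    return samples
```

The same alternating-conditional scheme, with the conditionals replaced by the multivariate normal and inverse Wishart distributions, underlies the estimation procedure the abstract describes.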
Crampin, A C; Mwinuka, V; Malema, S S; Glynn, J R; Fine, P E
2001-01-01
Selection bias, particularly of controls, is common in case-control studies and may materially affect the results. Methods of control selection should be tailored both for the risk factors and disease under investigation and for the population being studied. We present here a control selection method devised for a case-control study of tuberculosis in rural Africa (Karonga, northern Malawi) that selects an age/sex frequency-matched random sample of the population, with a geographical distribution in proportion to the population density. We also present an audit of the selection process, and discuss the potential of this method in other settings.
Random forest (RF) modeling has emerged as an important statistical learning method in ecology due to its exceptional predictive performance. However, for large and complex ecological datasets there is limited guidance on variable selection methods for RF modeling. Typically, e...
Guo, Xinxing; Kong, Xiangbin; Huang, Rui; Jin, Ling; Ding, Xiaohu; He, Mingguang; Liu, Xing; Patel, Mehul Chimanlal; Congdon, Nathan G
2014-01-07
We evaluated the effect of ginkgo biloba extract on visual field defect and contrast sensitivity in a Chinese cohort with normal tension glaucoma. In this prospective, randomized, placebo-controlled crossover study, patients newly diagnosed with normal tension glaucoma, either in a tertiary glaucoma clinic (n = 5) or in a cohort undergoing routine general physical examinations in a primary care clinic (n = 30), underwent two 4-week phases of treatment, separated by a washout period of 8 weeks. Randomization determined whether ginkgo biloba extract (40 mg, 3 times per day) or placebo (identical-appearing tablets) was received first. Primary outcomes were change in contrast sensitivity and mean deviation on 24-2 SITA standard visual field testing, while secondary outcomes included IOP and self-reported adverse events. A total of 35 patients with mean age 63.7 (6.5) years were randomized to the ginkgo biloba extract-placebo (n = 18) or the placebo-ginkgo biloba extract (n = 17) sequence. A total of 28 patients (80.0%, 14 in each group) who completed testing did not differ at baseline in age, sex, visual field mean deviation, contrast sensitivity, IOP, or blood pressure. Changes in visual field and contrast sensitivity did not differ by treatment received or sequence (P > 0.2 for all). Power to have detected a difference in mean defect as large as previously reported was 80%. In contrast to some previous reports, ginkgo biloba extract treatment had no effect on mean defect or contrast sensitivity in this group of normal tension glaucoma patients. (http://www.chictr.org number, ChiCTR-TRC-08000724).
Odegård, J; Klemetsdal, G; Heringstad, B
2005-04-01
Several selection criteria for reducing incidence of mastitis were developed from a random regression sire model for test-day somatic cell score (SCS). For comparison, sire transmitting abilities were also predicted based on a cross-sectional model for lactation mean SCS. Only first-crop daughters were used in genetic evaluation of SCS, and the different selection criteria were compared based on their correlation with incidence of clinical mastitis in second-crop daughters (measured as mean daughter deviations). Selection criteria were predicted based on both complete and reduced first-crop daughter groups (261 or 65 daughters per sire, respectively). For complete daughter groups, predicted transmitting abilities at around 30 d in milk showed the best predictive ability for incidence of clinical mastitis, closely followed by average predicted transmitting abilities over the entire lactation. Both of these criteria were derived from the random regression model. These selection criteria improved accuracy of selection by approximately 2% relative to a cross-sectional model. However, for reduced daughter groups, the cross-sectional model yielded increased predictive ability compared with the selection criteria based on the random regression model. This result may be explained by the cross-sectional model being more robust, i.e., less sensitive to precision of (co)variance components estimates and effects of data structure.
ERIC Educational Resources Information Center
Kariuki, Patrick N. K.; Bush, Elizabeth Danielle
2008-01-01
The purpose of this study was to examine the effects of Total Physical Response by Storytelling and the traditional teaching method on a foreign language in a selected high school. The sample consisted of 30 students who were randomly selected and randomly assigned to experimental and control group. The experimental group was taught using Total…
Spectroscopic Diagnosis of Arsenic Contamination in Agricultural Soils
Shi, Tiezhu; Liu, Huizeng; Chen, Yiyun; Fei, Teng; Wang, Junjie; Wu, Guofeng
2017-01-01
This study investigated the abilities of pre-processing, feature selection and machine-learning methods for the spectroscopic diagnosis of soil arsenic contamination. The spectral data were pre-processed by using Savitzky-Golay smoothing, first and second derivatives, multiplicative scatter correction, standard normal variate, and mean centering. Principal component analysis (PCA) and the RELIEF algorithm were used to extract spectral features. Machine-learning methods, including random forests (RF), artificial neural network (ANN), and radial basis function- and linear function-based support vector machines (RBF- and LF-SVM), were employed for establishing diagnosis models. The model accuracies were evaluated and compared by using overall accuracies (OAs). The statistical significance of the difference between models was evaluated by using McNemar’s test (Z value). The results showed that the OAs varied with the different combinations of pre-processing, feature selection, and classification methods. Feature selection methods could improve the modeling efficiencies and diagnosis accuracies, and RELIEF often outperformed PCA. The optimal models established by RF (OA = 86%), ANN (OA = 89%), RBF- (OA = 89%) and LF-SVM (OA = 87%) had no statistical difference in diagnosis accuracies (Z < 1.96, p < 0.05). These results indicated that it was feasible to diagnose soil arsenic contamination using reflectance spectroscopy. The appropriate combination of multivariate methods was important to improve diagnosis accuracies. PMID:28471412
Effect of kshara basti and nirgundi ghana vati on amavata (rheumatoid arthritis).
Thanki, Krishna; Bhatt, Nilesh; Shukla, V D
2012-01-01
Ayurveda has taken the foremost place in the management of crippling diseases, one of which is Amavata, which can be compared with rheumatoid arthritis on the basis of its clinical appearance. Because of the wide spectrum of the disease, its high prevalence in society, and the lack of an effective medicament, it was chosen for this study. The line of treatment described for the disease in Chakradatta can be summarized under the following captions: to bring Agni to its normal state, to digest Ama, and to eliminate vitiated Vata and Ama. Thus, Kshara Basti was selected for the present study as the Samshodhana process, which addresses all of the above and is mentioned in the Chikitsa Sutra described by Chakradatta. Nirgundi has the Amavatahara property stated by Bhavaprakasha, in consideration of which Nirgundi Patra Ghanavati was selected as the Shamana drug. A total of 50 randomly selected patients with Amavata were registered, of whom 45 completed the treatment. Kshara Basti in the format of Kala Basti was given to these patients, and Nirgundi Ghana Vati was given for one month. Statistically significant improvement was found in ESR and RA factor (quantitative), and highly significant results were found in the symptoms of Amavata. Moderate improvement was seen in 40% of patients, 35.56% of patients showed marked improvement, and mild improvement was found in 24.44% of patients.
Elective bladder-sparing treatment for muscle invasive bladder cancer.
Lendínez-Cano, G; Rico-López, J; Moreno, S; Fernández Parra, E; González-Almeida, C; Camacho Martínez, E
2014-01-01
Radical cystectomy is the standard treatment for localised muscle invasive bladder cancer (MIBC). We offer a bladder-sparing treatment with TURB +/- Chemotherapy+Radiotherapy to selected patients as an alternative. We retrospectively analyzed 30 patients diagnosed with MIBC from March 1991 to October 2010. The mean age was 62.7 years (51-74). All patients were candidates for a curative treatment and met strict selection criteria: T2 stage, primary tumor, solitary lesion smaller than 5 cm with a macroscopic disease-free status after TURB, a negative random biopsy, and no hydronephrosis. Staging CT evaluation was normal. Restaging TURB or tumor bed biopsy showed a disease-free status or microscopic muscle invasion. Fourteen patients underwent TURB alone, 13 TURB+Chemotherapy, and 3 TURB+Chemotherapy+Radiotherapy. The mean follow-up was 88.7 months (19-220). Fourteen patients remained disease free (46.6%) and 10 had recurrent non-muscle invasive bladder cancer (33%). A complete clinical response was achieved in 81.3%, and 71% of patients preserved their bladder at 5 years. The overall 5-year survival rate was 79%, and the cancer-specific survival rate was 85%. Although radical cystectomy is the standard treatment for localised MIBC, in strictly selected cases bladder-sparing treatment offers an alternative with good long-term results. Copyright © 2013 AEU. Published by Elsevier Espana. All rights reserved.
Joyce, Priya; Kuwahata, Melissa; Turner, Nicole; Lakshmanan, Prakash
2010-02-01
A reproducible method for transformation of sugarcane using various strains of Agrobacterium tumefaciens (A. tumefaciens) (AGL0, AGL1, EHA105 and LBA4404) has been developed. The selection system and co-cultivation medium were the most important factors determining the success of transformation and transgenic plant regeneration. Plant regeneration at a frequency of 0.8-4.8% occurred only when callus was transformed with A. tumefaciens carrying a newly constructed superbinary plasmid containing neomycin phosphotransferase (nptII) and beta-glucuronidase (gusA) genes, both driven by the maize ubiquitin (ubi-1) promoter. Regeneration was successful in plants carrying the nptII gene but not the hygromycin phosphotransferase (hph) gene. NptII gene selection was imposed at a concentration of 150 mg/l paromomycin sulphate and applied either immediately or 4 days after the co-cultivation period. Co-cultivation on Murashige and Skoog (MS)-based medium for a period of 4 days produced the highest number of transgenic plants. Over 200 independent transgenic lines were created using this protocol. Regenerated plants appeared phenotypically normal and contained both gusA and nptII genes. Southern blot analysis revealed 1-3 transgene insertion events that were randomly integrated in the majority of the plants produced.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, J.A.
This study was designed to investigate potential adverse reproductive outcome in veterinary personnel who are exposed to waste anesthetic gas and vapor at levels near the NIOSH recommended standards. Subjects for this case-control study of births with congenital abnormalities and spontaneous abortion, selected from the American Veterinary Medical Association roster, were contacted by mail and asked to complete a screening questionnaire regarding reproductive history. Crude prevalence rates for spontaneous abortion, births with congenital abnormalities and stillbirths, determined on the basis of the responses to the screening questionnaire, showed no excess rates when compared with national statistics. All pregnancies resulting in spontaneous abortion, stillbirth, or birth with congenital abnormality were selected as cases. Controls were selected from the reported normal births on a stratified random basis to match maternal age and pregnancy number for cases. Occupational exposure to waste anesthetic gas and vapor in general was not found to be significantly associated with adverse reproductive outcome when adjustment was made for radiation exposure. For nitrous oxide exposure, however, an odds ratio significantly greater than one was found for spontaneous abortion among female veterinary assistants and wives of exposed male veterinarians. Use of diagnostic x-rays in veterinary practice was associated with spontaneous abortion in exposed females with a statistically significant dose response effect observed in female veterinarians.
Zhang, Ju; Lou, Xiaomin; Zellmer, Lucas; Liu, Siqi; Xu, Ningzhi; Liao, D. Joshua
2014-01-01
Sporadic carcinogenesis starts from immortalization of a differentiated somatic cell or an organ-specific stem cell. The immortalized cell incepts a new or quasinew organism that lives like a parasite in the patient and usually proceeds to progressive simplification, constantly engendering intermediate organisms that are simpler than normal cells. Like organismal evolution in Mother Nature, this cellular simplification is a process of Darwinian selection of those mutations with growth- or survival-advantages, from numerous ones that occur randomly and stochastically. Therefore, functional gain of growth- or survival-sustaining oncogenes and functional loss of differentiation-sustaining tumor suppressor genes, which are hallmarks of cancer cells and contribute to phenotypes of greater malignancy, are not drivers of carcinogenesis but are results from natural selection of advantageous mutations. Besides this mutation-load dependent survival mechanism that is evolutionarily low and of an asexual nature, cancer cells may also use cell fusion for survival, which is an evolutionarily-higher mechanism and is of a sexual nature. Assigning oncogenes or tumor suppressor genes or their mutants as drivers to induce cancer in animals may somewhat coerce them to create man-made oncogenic pathways that may not really be a course of sporadic cancer formations in the human. PMID:25594068
Wei, Ling; Li, Yingjie; Yang, Xiaoli; Xue, Qing; Wang, Yuping
2015-10-01
The present study evaluated the topological properties of whole-brain networks using graph theoretical concepts and investigated the time-evolution characteristics of brain networks in mild cognitive impairment (MCI) patients during a selective attention task. Electroencephalography (EEG) activity was recorded in 10 MCI patients and 17 healthy subjects while they performed a color match task. We calculated the phase synchrony index between each possible pair of EEG channels in the alpha and beta frequency bands and analyzed the local interconnectedness, overall connectedness, and small-world characteristics of the brain networks of the two groups at different degrees. Relative to healthy normal controls, the cortical networks of MCI patients showed a shift toward randomization. The lower σ of MCI patients suggested a further loss of the small-world attribute during both active and resting states. Our results provide evidence for the functional disconnection of brain regions in MCI. Furthermore, we found that the properties of cortical networks could reflect the processing of conflict information in the selective attention task. The human brain tends toward a more regular and efficient neural architecture in the late stage of information processing. In addition, the processing of conflict information requires stronger information integration and transfer between cortical areas. Copyright © 2015 Elsevier B.V. All rights reserved.
Hindersin, Laura; Traulsen, Arne
2015-11-01
We analyze evolutionary dynamics on graphs, where the nodes represent individuals of a population. The links of a node describe which other individuals can be displaced by the offspring of the individual on that node. Amplifiers of selection are graphs for which the fixation probability is increased for advantageous mutants and decreased for disadvantageous mutants. A few examples of such amplifiers have been developed, but so far it is unclear how many such structures exist and how to construct them. Here, we show that almost any undirected random graph is an amplifier of selection for Birth-death updating, where an individual is selected to reproduce with probability proportional to its fitness and one of its neighbors is replaced by that offspring at random. If we instead focus on death-Birth updating, in which a random individual is removed and its neighbors compete for the empty spot, then the same ensemble of graphs consists of almost only suppressors of selection for which the fixation probability is decreased for advantageous mutants and increased for disadvantageous mutants. Thus, the impact of population structure on evolutionary dynamics is a subtle issue that will depend on seemingly minor details of the underlying evolutionary process.
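The Birth-death updating rule described above can be sketched as a minimal simulation for estimating fixation probabilities on a given graph; the adjacency-list representation, function name, and trial count are illustrative. On the complete graph this should reproduce the well-mixed Moran result (1 - 1/r)/(1 - 1/r^N).

```python
import random

def bd_fixation_probability(adj, r, trials, seed=0):
    """Estimate the fixation probability of a single random mutant with
    fitness r under Birth-death updating on a graph.

    adj: adjacency list {node: [neighbors]}. Each step, one individual is
    chosen to reproduce with probability proportional to fitness, and a
    random neighbor is replaced by its offspring."""
    rng = random.Random(seed)
    nodes = list(adj)
    fixed = 0
    for _ in range(trials):
        mutant = {rng.choice(nodes)}          # one random initial mutant
        while 0 < len(mutant) < len(nodes):
            weights = [r if n in mutant else 1.0 for n in nodes]
            parent = rng.choices(nodes, weights=weights)[0]
            child = rng.choice(adj[parent])   # offspring displaces a neighbor
            if parent in mutant:
                mutant.add(child)
            else:
                mutant.discard(child)
        fixed += len(mutant) == len(nodes)
    return fixed / trials
```

Swapping the order of the two random choices (first remove a random individual, then let its neighbors compete by fitness) gives the death-Birth variant, whose contrasting behavior is the abstract's main point.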
Wampler, Peter J; Rediske, Richard R; Molla, Azizur R
2013-01-18
A remote sensing technique was developed which combines a Geographic Information System (GIS); Google Earth, and Microsoft Excel to identify home locations for a random sample of households in rural Haiti. The method was used to select homes for ethnographic and water quality research in a region of rural Haiti located within 9 km of a local hospital and source of health education in Deschapelles, Haiti. The technique does not require access to governmental records or ground based surveys to collect household location data and can be performed in a rapid, cost-effective manner. The random selection of households and the location of these households during field surveys were accomplished using GIS, Google Earth, Microsoft Excel, and handheld Garmin GPSmap 76CSx GPS units. Homes were identified and mapped in Google Earth, exported to ArcMap 10.0, and a random list of homes was generated using Microsoft Excel which was then loaded onto handheld GPS units for field location. The development and use of a remote sensing method was essential to the selection and location of random households. A total of 537 homes initially were mapped and a randomized subset of 96 was identified as potential survey locations. Over 96% of the homes mapped using Google Earth imagery were correctly identified as occupied dwellings. Only 3.6% of the occupants of mapped homes visited declined to be interviewed. 16.4% of the homes visited were not occupied at the time of the visit due to work away from the home or market days. A total of 55 households were located using this method during the 10 days of fieldwork in May and June of 2012. The method used to generate and field locate random homes for surveys and water sampling was an effective means of selecting random households in a rural environment lacking geolocation infrastructure. The success rate for locating households using a handheld GPS was excellent and only rarely was local knowledge required to identify and locate households. 
This method provides an important technique that can be applied to other developing countries where a randomized study design is needed but infrastructure is lacking to implement more traditional participant selection methods.
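The two-stage procedure above (digitize every visible rooftop, then draw a randomized subset for field visits) reduces to straightforward sampling once the homes are mapped. A minimal Python sketch follows; the coordinates, seed, and waypoint naming are hypothetical stand-ins for the Google Earth/Excel workflow, with only the counts (537 mapped, 96 sampled) taken from the study.

```python
import random

# Hypothetical stand-ins for the 537 rooftops digitized in Google Earth:
# each record is (home_id, latitude, longitude).
rng = random.Random(42)               # fixed seed -> reproducible field list
homes = [(i, 18.0 + rng.random() * 0.1, -72.5 + rng.random() * 0.1)
         for i in range(537)]

# Draw the randomized subset of 96 candidate survey locations,
# mirroring the Excel-based selection step in the study.
survey_sites = rng.sample(homes, 96)

# Waypoints in this form could then be loaded onto a handheld GPS.
for home_id, lat, lon in survey_sites[:3]:
    print(f"WPT-{home_id:03d}: {lat:.5f}, {lon:.5f}")
```

Seeding the generator keeps the field list reproducible, which matters when the same subset must be loaded onto several GPS units.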
Levine, M W
1991-01-01
Simulated neural impulse trains were generated by a digital realization of the integrate-and-fire model. The variability in these impulse trains had as its origin a random noise of specified distribution. Three different distributions were used: the normal (Gaussian) distribution (no skew, normokurtic), a first-order gamma distribution (positive skew, leptokurtic), and a uniform distribution (no skew, platykurtic). Despite these differences in the distribution of the variability, the distributions of the intervals between impulses were nearly indistinguishable. These inter-impulse distributions were better fit with a hyperbolic gamma distribution than a hyperbolic normal distribution, although one might expect a better approximation for normally distributed inverse intervals. Consideration of why the inter-impulse distribution is independent of the distribution of the causative noise suggests two putative interval distributions that do not depend on the assumed noise distribution: the log normal distribution, which is predicated on the assumption that long intervals occur with the joint probability of small input values, and the random walk equation, which is the diffusion equation applied to a random walk model of the impulse generating process. Either of these equations provides a more satisfactory fit to the simulated impulse trains than the hyperbolic normal or hyperbolic gamma distributions. These equations also provide better fits to impulse trains derived from the maintained discharges of ganglion cells in the retinae of cats or goldfish. It is noted that both equations are free from the constraint that the coefficient of variation (CV) have a maximum of unity.
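The integrate-and-fire scheme underlying these simulations (accumulate noisy input each time step until a threshold is crossed, fire, reset, record the interval) can be sketched as below. The threshold, rectification, step distributions, and matched means are illustrative assumptions, not the study's actual parameters.

```python
import random

def isi_samples(noise, threshold=10.0, n_spikes=2000, seed=1):
    """Integrate-and-fire: accumulate (rectified) noisy input per time step
    until the threshold is crossed, fire, reset, and record the interval."""
    rng = random.Random(seed)
    intervals, v, t = [], 0.0, 0
    while len(intervals) < n_spikes:
        v += max(noise(rng), 0.0)        # input increment for this step
        t += 1
        if v >= threshold:
            intervals.append(t)
            v, t = 0.0, 0
    return intervals

# Three input distributions with matched means, mirroring the study's design:
distributions = {
    "normal  (no skew, normokurtic)":   lambda r: r.gauss(1.0, 0.5),
    "gamma   (pos. skew, leptokurtic)": lambda r: r.gammavariate(1.0, 1.0),
    "uniform (no skew, platykurtic)":   lambda r: r.uniform(0.0, 2.0),
}
for name, noise in distributions.items():
    isi = isi_samples(noise)
    mean = sum(isi) / len(isi)
    print(f"{name}: mean interval = {mean:.2f} steps")
```

Comparing the interval statistics across the three noise distributions illustrates the abstract's central observation that the inter-impulse distribution is largely insensitive to the shape of the causative noise.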
Normal-mode selectivity in ultrafast Raman excitations in C60
NASA Astrophysics Data System (ADS)
Zhang, G. P.; George, Thomas F.
2006-01-01
Ultrafast Raman spectra are a powerful tool to probe vibrational excitations, but inherently they are not normal-mode specific. For a system as complicated as C60 , there is no general rule to target a specific mode. A detailed study presented here aims to investigate normal-mode selectivity in C60 by an ultrafast laser. To accurately measure mode excitation, we formally introduce the kinetic-energy-based normal-mode analysis which overcomes the difficulty with the strong lattice anharmonicity and relaxation. We first investigate the resonant excitation and find that mode selectivity is normally difficult to achieve. However, for off-resonant excitations, it is possible to selectively excite a few modes in C60 by properly choosing an optimal laser pulse duration, which agrees with previous experimental and theoretical findings. Going beyond the phenomenological explanation, our study shines new light on the origin of the optimal duration: The phase matching between the laser field and mode vibration determines which mode is strongly excited or suppressed. This finding is very robust and should be a useful guide for future experimental and theoretical studies in more complicated systems.
The role of color and attention-to-color in mirror-symmetry perception.
Gheorghiu, Elena; Kingdom, Frederick A A; Remkes, Aaron; Li, Hyung-Chul O; Rainville, Stéphane
2016-07-11
The role of color in the visual perception of mirror-symmetry is controversial. Some reports support the existence of color-selective mirror-symmetry channels, others that mirror-symmetry perception is merely sensitive to color-correlations across the symmetry axis. Here we test between the two ideas. Stimuli consisted of colored Gaussian-blobs arranged either mirror-symmetrically or quasi-randomly. We used four arrangements: (1) 'segregated' - symmetric blobs were of one color, random blobs of the other color(s); (2) 'random-segregated' - as above but with the symmetric color randomly selected on each trial; (3) 'non-segregated' - symmetric blobs were of all colors in equal proportions, as were the random blobs; (4) 'anti-symmetric' - symmetric blobs were of opposite-color across the symmetry axis. We found: (a) near-chance levels for the anti-symmetric condition, suggesting that symmetry perception is sensitive to color-correlations across the symmetry axis; (b) similar performance for random-segregated and non-segregated conditions, giving no support to the idea that mirror-symmetry is color selective; (c) highest performance for the color-segregated condition, but only when the observer knew beforehand the symmetry color, suggesting that symmetry detection benefits from color-based attention. We conclude that mirror-symmetry detection mechanisms, while sensitive to color-correlations across the symmetry axis and subject to the benefits of attention-to-color, are not color selective.
Pregnancy outcomes among women with beta-thalassemia trait.
Charoenboon, Chitrakan; Jatavan, Phudit; Traisrisilp, Kuntharee; Tongsong, Theera
2016-04-01
To compare the obstetric outcomes between pregnant women affected by beta-thalassemia trait and normal controls, a retrospective cohort study was conducted on singleton pregnant women complicated by beta-thalassemia trait and normal controls, randomly selected with a controls-to-cases ratio of 2:1. All were low-risk pregnancies without underlying medical diseases or fetal anomalies; pregnancies undergoing invasive prenatal diagnosis were excluded. A total of 597 pregnant women with beta-thalassemia trait and 1194 controls were recruited. Baseline characteristics and maternal outcomes in the two groups were similar, except that hemoglobin levels were slightly lower in the study group. The prevalence of small for gestational age and preterm birth tended to be higher in the study group but did not reach statistical significance; however, the rate of low birth weight was significantly higher in the study group (relative risk 1.25; 95% CI 1.00-1.57). Additionally, the abortion rate was also significantly higher in the study group (relative risk 3.25; 95% CI 1.35-7.80). Beta-thalassemia trait minimally but significantly increased the risk of low birth weight, while not increasing rates of maternal adverse outcomes.
The effectiveness of science domain-based science learning integrated with local potency
NASA Astrophysics Data System (ADS)
Kurniawati, Arifah Putri; Prasetyo, Zuhdan Kun; Wilujeng, Insih; Suryadarma, I. Gusti Putu
2017-08-01
This research aimed to determine the significant effect of science domain-based science learning integrated with local potency on science process skills. The research method used was a quasi-experimental design with a nonequivalent control group. The population of this research was all students of class VII SMP Negeri 1 Muntilan. The sample was selected through cluster random sampling, namely class VII B as the experiment class (24 students) and class VII C as the control class (24 students). This research used a test instrument adapted from Agus Dwianto's research. The aspects of science process skills covered were observation, classification, interpretation, and communication. The data were analyzed with one-factor ANOVA at the 0.05 significance level and with the normalized gain score. The p-value for science process skills from the one-factor ANOVA was 0.000, which is below alpha (0.05). This means there was a significant effect of science domain-based science learning integrated with local potency on science process skills. The normalized gain scores were 0.29 (low category) in the control class and 0.67 (medium category) in the experiment class.
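The normalized gain reported above is conventionally Hake's gain, the fraction of the possible improvement actually achieved, with the usual cutoffs low (< 0.3), medium (0.3-0.7), and high (> 0.7). A small sketch follows; the pre/post class averages are hypothetical values chosen only to reproduce the reported gains of 0.29 and 0.67.

```python
def normalized_gain(pre, post, max_score=100.0):
    """Hake's normalized gain: fraction of the possible improvement achieved."""
    return (post - pre) / (max_score - pre)

def gain_category(g):
    """Conventional interpretation bands for the normalized gain."""
    if g < 0.3:
        return "low"
    return "medium" if g <= 0.7 else "high"

# Hypothetical class-average scores illustrating the reported gains:
control_g    = normalized_gain(pre=40.0, post=57.4)   # (57.4-40)/60 = 0.29
experiment_g = normalized_gain(pre=40.0, post=80.2)   # (80.2-40)/60 = 0.67

for label, g in [("control", control_g), ("experiment", experiment_g)]:
    print(f"{label}: g = {g:.2f} ({gain_category(g)})")
```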
The relationship between consanguineous marriage and death in fetus and infants
Mohammadi, Majid Mehr; Hooman, Heidar Ali; Afrooz, Gholam Ali; Daramadi, Parviz Sharifi
2012-01-01
Background: Given the high prevalence of consanguineous marriages in rural and urban areas of Iran, the aim of this study was to identify its role in increasing fetal and infant deaths. Materials and Methods: This was a cross-sectional study in which 494 mothers with more than one exceptional child (mentally retarded and physically-dynamically disabled) or with normal children were selected based on a multi-stage random sampling method. Data were gathered using the features-of-parents-with-more-than-one-exceptional-child questionnaire, whose validity and reliability were acceptable. The hierarchical log-linear method was used for statistical analysis. Results: Consanguineous marriage significantly increased the number of births of exceptional children. Moreover, there was a significant relation between the history of fetal/infant death and group membership. There was a significant relation between consanguineous marriage and the history of fetal/infant death, which means consanguineous marriage increased the prevalence of fetal/infant death more in parents with exceptional children than in parents with normal children. Conclusions: The rate of fetal/infant death in exceptional births of consanguineous marriages was higher than that of non-consanguineous marriages. PMID:23626609
Ensemble Feature Learning of Genomic Data Using Support Vector Machine
Anaissi, Ali; Goyal, Madhu; Catchpoole, Daniel R.; Braytee, Ali; Kennedy, Paul J.
2016-01-01
The identification of a subset of genes having the ability to capture the necessary information to distinguish classes of patients is crucial in bioinformatics applications. Ensemble and bagging methods have been shown to work effectively in the process of gene selection and classification. Testament to that is random forest which combines random decision trees with bagging to improve overall feature selection and classification accuracy. Surprisingly, the adoption of these methods in support vector machines has only recently received attention but mostly on classification not gene selection. This paper introduces an ensemble SVM-Recursive Feature Elimination (ESVM-RFE) for gene selection that follows the concepts of ensemble and bagging used in random forest but adopts the backward elimination strategy which is the rationale of RFE algorithm. The rationale behind this is, building ensemble SVM models using randomly drawn bootstrap samples from the training set, will produce different feature rankings which will be subsequently aggregated as one feature ranking. As a result, the decision for elimination of features is based upon the ranking of multiple SVM models instead of choosing one particular model. Moreover, this approach will address the problem of imbalanced datasets by constructing a nearly balanced bootstrap sample. Our experiments show that ESVM-RFE for gene selection substantially increased the classification performance on five microarray datasets compared to state-of-the-art methods. Experiments on the childhood leukaemia dataset show that an average 9% better accuracy is achieved by ESVM-RFE over SVM-RFE, and 5% over random forest based approach. The selected genes by the ESVM-RFE algorithm were further explored with Singular Value Decomposition (SVD) which reveals significant clusters with the selected data. PMID:27304923
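The core loop of ESVM-RFE (fit linear models on bootstrap samples, aggregate the per-feature weight magnitudes into one ranking, eliminate the weakest feature, repeat) can be sketched without the SVM machinery. In the sketch below a tiny perceptron stands in for the linear SVM, and the synthetic "expression" data are purely illustrative; this is the ensemble-RFE idea under those assumptions, not the authors' implementation.

```python
import random

def train_linear(rows, labels, epochs=20, lr=0.1):
    """Perceptron stand-in for the linear SVM: returns one weight per
    feature, whose magnitude serves as feature importance."""
    n_feat = len(rows[0])
    w = [0.0] * n_feat
    for _ in range(epochs):
        for x, y in zip(rows, labels):
            margin = sum(wi * xi for wi, xi in zip(w, x)) * y
            if margin <= 0:                         # misclassified: update
                for j in range(n_feat):
                    w[j] += lr * y * x[j]
    return w

def ensemble_rfe(rows, labels, n_keep, n_models=15, seed=0):
    """Ensemble RFE in the spirit of ESVM-RFE: each round, fit models on
    bootstrap samples, sum |weight| per feature, drop the weakest."""
    rng = random.Random(seed)
    active = list(range(len(rows[0])))
    while len(active) > n_keep:
        scores = [0.0] * len(active)
        for _ in range(n_models):
            idx = [rng.randrange(len(rows)) for _ in range(len(rows))]
            bx = [[rows[i][j] for j in active] for i in idx]   # bootstrap X
            by = [labels[i] for i in idx]                      # bootstrap y
            w = train_linear(bx, by)
            for k in range(len(active)):
                scores[k] += abs(w[k])
        active.pop(scores.index(min(scores)))       # eliminate weakest feature
    return active

# Synthetic data: only features 0 and 1 separate the two classes.
rng = random.Random(1)
rows, labels = [], []
for _ in range(120):
    y = rng.choice([-1, 1])
    x = [rng.gauss(0, 1) for _ in range(10)]
    x[0] += 2.0 * y
    x[1] += 1.5 * y
    rows.append(x)
    labels.append(y)
selected = ensemble_rfe(rows, labels, n_keep=2)
print("selected features:", selected)
```

Because each elimination is based on rankings summed over many bootstrap models, no single unlucky fit can evict an informative feature — the stability property the abstract attributes to the ensemble design.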
2014-01-01
...normal and three different obstructed airway geometries, consisting of symmetric, asymmetric, and random obstructions. Fig. 2 shows the geometric ...normal and obstructed airways. Airway resistance is a measure of the opposition to the airflow caused by geometric properties, such as airway obstruction ...pressure drops. Resistance values were dependent on the degree and geometric distribution of the obstruction sites. In the symmetric obstruction model...
Pressman, E K; Blakemore, K J
1996-10-01
Our purpose was to compare the effects of intrapartum amnioinfusion with normal saline solution versus lactated Ringer's solution plus physiologic glucose on neonatal electrolytes and acid-base balance. Patients undergoing amnioinfusion for obstetric indications were randomized to receive normal saline solution or lactated Ringer's solution plus physiologic glucose at standardized amnioinfusion rates. Data were collected prospectively on maternal demographics, course of labor, and maternal and neonatal outcome. Arterial cord blood was obtained for analysis of electrolytes, glucose, osmolality, lactic acid, and blood gases. Control subjects with normal fetal heart rate patterns and clear amniotic fluid who did not receive amnioinfusion were studied concurrently. Data were collected on 59 patients (21 normal saline solution, 18 lactated Ringer's solution plus physiologic glucose, and 20 controls). Maternal demographics, course of labor, and neonatal outcome were similar in all three groups. Cesarean sections were performed more often in the amnioinfusion groups (33.3% for normal saline solution, 38.9% for lactated Ringer's solution plus physiologic glucose) than in the control group (5.0%), p < 0.05. Cord arterial electrolytes, glucose, osmolality, lactic acid, and blood gases were not altered by amnioinfusion with either solution. Intrapartum amnioinfusion with normal saline solution or lactated Ringer's solution plus physiologic glucose has no effect on neonatal electrolytes or acid-base balance.
On Some Multiple Decision Problems
1976-08-01
parameter space. Some recent results in the area of subset selection formulation are Gnanadesikan and Gupta [28], Gupta and Studden [43], Gupta and ...York, pp. 363-376. [27] Gnanadesikan, M. (1966). Some Selection and Ranking Procedures for Multivariate Normal Populations. Ph.D. Thesis. Dept. of ...Statist., Purdue Univ., West Lafayette, Indiana 47907. [28] Gnanadesikan, M. and Gupta, S. S. (1970). Selection procedures for multivariate normal
Randomization Methods in Emergency Setting Trials: A Descriptive Review
ERIC Educational Resources Information Center
Corbett, Mark Stephen; Moe-Byrne, Thirimon; Oddie, Sam; McGuire, William
2016-01-01
Background: Quasi-randomization might expedite recruitment into trials in emergency care settings but may also introduce selection bias. Methods: We searched the Cochrane Library and other databases for systematic reviews of interventions in emergency medicine or urgent care settings. We assessed selection bias (baseline imbalances) in prognostic…
Middle Level Practices in European International and Department of Defense Schools.
ERIC Educational Resources Information Center
Waggoner, V. Christine; McEwin, C. Kenneth
1993-01-01
Discusses results of a 1989-90 survey of 70 randomly selected international schools and 70 randomly selected Department of Defense Schools in Europe. Programs and practices surveyed included enrollments, grade organization, curriculum and instructional plans, core subjects, grouping patterns, exploratory courses, advisory programs, and scheduling.…
Random forest (RF) is popular in ecological and environmental modeling, in part, because of its insensitivity to correlated predictors and resistance to overfitting. Although variable selection has been proposed to improve both performance and interpretation of RF models, it is u...
Zhang, Z B; Xue, Z X; Wu, X J; Wang, T M; Li, Y H; Song, X L; Chao, X F; Wang, G; Nazibam, Nurmamat; Ayxamgul, Bawudun; Gulbahar, Elyas; Zhou, Z Y; Sun, B S; Wang, Y Z; Wang, M
2017-06-10
Objective: To understand the prevalence of dyslipidemia and normal blood lipid levels in Uygur diabetes patients in Kashgar prefecture in the southern area of Xinjiang. Methods: A total of 5 078 local residents aged ≥18 years (42.56% men) selected through cluster random sampling in Kashgar were surveyed by means of questionnaire, physical examination and laboratory tests, and 521 diabetes patients were screened. Results: The overall prevalence of dyslipidemia in diabetes patients was 59.50% (310/521), with an adjusted rate of 49.39%. Age ≥65 years, overweight, obesity and abdominal obesity increased the risk for dyslipidemia by 0.771 times (95% CI: 1.015-3.088), 1.132 times (95% CI: 1.290-3.523), 1.688 times (95% CI: 1.573-4.592) and 0.801 times (95% CI: 1.028-3.155), respectively. Compared with males, being female was a protective factor for dyslipidemia (OR=0.507, 95% CI: 0.334-0.769). The overall normal rate of blood lipid levels, including total cholesterol (TC), triglycerides (TG), high-density lipoprotein cholesterol (HDL-C) and low-density lipoprotein cholesterol (LDL-C), for type 2 diabetes patients was 11.13%. Female sex, higher BMI and abdominal obesity were the factors influencing the overall normal blood lipid level. The normal rate of LDL-C level decreased with increases in age, BMI and waist circumference (trend test χ²=18.049, P<0.001; trend test χ²=10.582, P=0.001; χ²=19.081, P<0.001), but increased with educational level (trend test χ²=9.764, P=0.002). Conclusion: The prevalence of dyslipidemia in Uygur diabetes patients in Kashgar was high, while the overall normal rate of blood lipid levels was relatively low. Obesity was the most important risk factor for dyslipidemia in this area. More attention should be paid to dyslipidemia prevention in women.
Fillenbaum, G G; Wilkinson, W E; Welsh, K A; Mohs, R C
1994-09-01
To identify minimal sets of Mini-Mental State Examination (MMSE) items that can distinguish normal control subjects from patients with mild Alzheimer's disease (AD), patients with mild from those with moderate AD, and those with moderate from those with severe AD. Two randomly selected equivalent half samples. Results of logistic regression analysis from data from the first half of the sample were confirmed by receiver operating characteristic curves on the second half. Memory disorders clinics at major medical centers in the United States affiliated with the Consortium to Establish a Registry for Alzheimer's Disease (CERAD). White, normal control subjects (n = 412) and patients with AD (n = 621) who met CERAD criteria; nonwhite subjects (n = 165) and persons with missing data (n = 27) were excluded. Three four-item sets of MMSE items that discriminate, respectively, (1) normal controls from patients with mild AD, (2) patients with mild from those with moderate AD, and (3) patients with moderate from those with severe AD. The MMSE items discriminating normal controls from patients with mild AD were day, date, recall of apple, and recall of penny; those discriminating patients with mild from those with moderate AD were month, city, spelling 'world' backward, and county; and those discriminating patients with moderate from those with severe AD were floor of building, repeating the word table, naming watch, and folding paper in half. Performance on the first two four-item sets was comparable with that of the full MMSE; the third set distinguished patients with moderate from those with severe AD better than chance. A minimum set of MMSE items can effectively discriminate normal controls from patients with mild AD and between successive levels of severity of AD. Data apply only to white patients with AD. Performance in minorities, more heterogeneous groups, or normal subjects with questionable cognitive status has not been assessed.
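The confirmation step described (receiver operating characteristic analysis on the held-out half sample) hinges on the ROC area, which can be computed directly from its Mann-Whitney interpretation without tracing the full curve. A sketch follows; the four-item error counts and group sizes are hypothetical, not CERAD data.

```python
import random

def roc_auc(scores, labels):
    """ROC area via its Mann-Whitney interpretation: the probability that a
    randomly chosen case outscores a randomly chosen control (ties = 1/2)."""
    cases = [s for s, y in zip(scores, labels) if y == 1]
    controls = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((c > d) + 0.5 * (c == d) for c in cases for d in controls)
    return wins / (len(cases) * len(controls))

# Hypothetical data: an error count on a 4-item set (0-4), higher in patients.
rng = random.Random(0)
labels = [0] * 100 + [1] * 100                    # 100 controls, 100 cases
scores = [rng.choice([0, 0, 0, 1]) for _ in range(100)] \
       + [rng.choice([1, 2, 3, 4]) for _ in range(100)]
auc = roc_auc(scores, labels)
print(f"AUC of the 4-item score: {auc:.2f}")
```

An AUC near 1 means the short item set separates the groups almost as well as a perfect discriminator; an AUC near 0.5 means it does no better than chance.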
Hebbian Learning in a Random Network Captures Selectivity Properties of the Prefrontal Cortex.
Lindsay, Grace W; Rigotti, Mattia; Warden, Melissa R; Miller, Earl K; Fusi, Stefano
2017-11-08
Complex cognitive behaviors, such as context-switching and rule-following, are thought to be supported by the prefrontal cortex (PFC). Neural activity in the PFC must thus be specialized to specific tasks while retaining flexibility. Nonlinear "mixed" selectivity is an important neurophysiological trait for enabling complex and context-dependent behaviors. Here we investigate (1) the extent to which the PFC exhibits computationally relevant properties, such as mixed selectivity, and (2) how such properties could arise via circuit mechanisms. We show that PFC cells recorded from male and female rhesus macaques during a complex task show a moderate level of specialization and structure that is not replicated by a model wherein cells receive random feedforward inputs. While random connectivity can be effective at generating mixed selectivity, the data show significantly more mixed selectivity than predicted by a model with otherwise matched parameters. A simple Hebbian learning rule applied to the random connectivity, however, increases mixed selectivity and enables the model to match the data more accurately. To explain how learning achieves this, we provide analysis along with a clear geometric interpretation of the impact of learning on selectivity. After learning, the model also matches the data on measures of noise, response density, clustering, and the distribution of selectivities. Of two styles of Hebbian learning tested, the simpler and more biologically plausible option better matches the data. These modeling results provide clues about how neural properties important for cognition can arise in a circuit and make clear experimental predictions regarding how various measures of selectivity would evolve during animal training. SIGNIFICANCE STATEMENT The prefrontal cortex is a brain region believed to support the ability of animals to engage in complex behavior. 
How neurons in this area respond to stimuli, and in particular to combinations of stimuli ("mixed selectivity"), is a topic of interest. Even though models with random feedforward connectivity are capable of creating computationally relevant mixed selectivity, such a model does not match the levels of mixed selectivity seen in the data analyzed in this study. Adding simple Hebbian learning to the model increases mixed selectivity to the correct level and makes the model match the data on several other relevant measures. This study thus offers predictions on how mixed selectivity and other properties evolve with training.
A prediction of templates in the auditory cortex system
NASA Astrophysics Data System (ADS)
Ghanbeigi, Kimia
In this study, the variation of human auditory evoked mismatch field amplitudes in response to complex tones, as a function of the removal of single partials in the onset period, was investigated. It was determined that: (1) elimination of a single frequency in a sound stimulus plays a significant role in human brain sound recognition; (2) comparing the mismatches of the brain response due to a single frequency elimination in the "starting transient" and "sustained part" of the sound stimulus shows that the brain is more sensitive to frequency elimination in the starting transient. This study involved 4 healthy subjects with normal hearing. Neural activity was recorded with whole-head MEG, and the spatial location of the responses in the auditory cortex was verified by comparison with MRI images. In the first set of stimuli, repetitive ('standard') tones with five selected onset frequencies were randomly embedded in a string of rare ('deviant') tones with randomly varying inter-stimulus intervals; in the deviant tones, one of the frequency components was omitted relative to the standard tones during the onset period. In the second set of stimuli, time-structured as above, repetitive ('standard') tones with five selected sustained frequency components were embedded in a string of rare ('deviant') tones in which one of these selected frequencies was omitted in the sustained tone. In both measurements, the test partial of the complex tone was carefully selected to preclude its reinsertion through the generation of harmonics or combination tones due to the nonlinearity of the ear, the electronic equipment, or brain processing.
Results: By comparing the MMN of the two data sets, the relative contribution to sound recognition of the omitted partial frequency components in the onset and sustained regions was determined. Conclusion: The presence of significant mismatch negativity, due to neural activity of the auditory cortex, shows that the brain recognizes the elimination of a single frequency among carefully chosen anharmonic frequencies. This mismatch is more significant when the single frequency elimination occurs in the onset period.
Teeth eruption in children with normal occlusion and malocclusion.
Legović, Mario; Legović, Asja; Slaj, Martina; Mestrović, Senka; Lapter-Varga, Marina; Slaj, Mladen
2008-06-01
The aim of this study was to assess the differences in eruption of permanent teeth (C, P1, P2 and M2) between children with and without malocclusion. A sample of 1758 children (921 boys and 837 girls) aged 8-13 was randomly selected. The subjects were grouped by chronological age (11 groups) and by presence of malocclusion. Statistically significant differences were found for both upper and lower canines in the age group 11 (p<0.01). Statistically significant differences were also found for upper first premolars in the age group 8.5 (p<0.05), for upper second premolars in the age group 10 (p<0.01), and for lower second premolars in the age group 11 (p<0.05). Premature loss of deciduous teeth caused early eruption of succedaneous permanent teeth, possibly leading to the development of a malocclusion.
Sibling Conflict Resolution Skills: Assessment and Training
ERIC Educational Resources Information Center
Thomas, Brett W.; Roberts, Mark W.
2009-01-01
Sibling conflict can rise to the level of a clinical problem. In Phase 1 a lengthy behavioral role-play analog sampling child reactions to normal sibling conflicts was successfully shortened. In Phase 2 normal children who lacked sibling conflict resolution skills were randomly assigned to a Training or Measurement Only condition. Training…
Multivariate stochastic simulation with subjective multivariate normal distributions
P. J. Ince; J. Buongiorno
1991-01-01
In many applications of Monte Carlo simulation in forestry or forest products, it may be known that some variables are correlated. However, for simplicity, in most simulations it has been assumed that random variables are independently distributed. This report describes an alternative Monte Carlo simulation technique for subjectively assessed multivariate normal...
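The standard construction for such correlated draws builds each pair from independent standard normals; for the bivariate case no matrix factorization is needed, since x2 can mix x1's deviate with a fresh one in proportions set by the correlation. A dependency-free sketch, with illustrative subjective parameters (the means, standard deviations, and correlation below are invented for the example):

```python
import math
import random

def bivariate_normal(mu1, mu2, sd1, sd2, rho, rng):
    """Draw one correlated pair: x2 mixes x1's standard-normal deviate with
    an independent one, yielding correlation rho by construction."""
    z1 = rng.gauss(0.0, 1.0)
    z2 = rng.gauss(0.0, 1.0)
    x1 = mu1 + sd1 * z1
    x2 = mu2 + sd2 * (rho * z1 + math.sqrt(1.0 - rho ** 2) * z2)
    return x1, x2

# Illustrative subjective parameters (e.g. elicited price and demand):
rng = random.Random(0)
pairs = [bivariate_normal(10.0, 5.0, 2.0, 1.0, 0.6, rng) for _ in range(50_000)]

# Check the realized correlation against the target of 0.6.
n = len(pairs)
m1 = sum(a for a, _ in pairs) / n
m2 = sum(b for _, b in pairs) / n
cov = sum((a - m1) * (b - m2) for a, b in pairs) / n
v1 = sum((a - m1) ** 2 for a, _ in pairs) / n
v2 = sum((b - m2) ** 2 for _, b in pairs) / n
print(f"sample correlation = {cov / math.sqrt(v1 * v2):.2f}")
```

For more than two variables the same idea generalizes via a Cholesky factor of the covariance matrix.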
Atorvastatin in clinically-significant macular edema in diabetics with a normal lipid profile.
Narang, S; Sood, S; Kaur, B; Singh, R; Mallik, A; Kaur, J
2012-01-01
Lipid-lowering drugs preserve vision and reduce the risk of hard exudates in clinically-significant macular edema (CSME) in diabetics with an abnormal lipid profile, but their role in reducing CSME in diabetics with a normal lipid profile is not yet known. To evaluate the role of atorvastatin in CSME in diabetics with a normal lipid profile, a prospective, randomized clinical trial was carried out. Thirty CSME patients with a normal lipid profile were randomly divided into Groups A and B. Atorvastatin was started in Group A four weeks prior to laser treatment. The main outcome measures were any improvement or deterioration in visual acuity, macular edema, and hard exudates at six months of follow-up. The two groups were compared using the unpaired t test for quantitative parameters and the chi-square test for qualitative parameters; a p value of less than 0.05 was taken as significant. Visual acuity, macular edema, and hard exudate resolution were not significantly different between the two groups (p = 0.14, 0.62, and 0.39, respectively). Atorvastatin does not affect treatment outcome in CSME with a normal lipid profile over short-term follow-up.
Refractive error and visual impairment in private school children in Ghana.
Kumah, Ben D; Ebri, Anne; Abdul-Kabir, Mohammed; Ahmed, Abdul-Sadik; Koomson, Nana Ya; Aikins, Samual; Aikins, Amos; Amedo, Angela; Lartey, Seth; Naidoo, Kovin
2013-12-01
To assess the prevalence of refractive error and visual impairment in private school children in Ghana. A random selection of geographically defined classes in clusters was used to identify a sample of school children aged 12 to 15 years in the Ashanti Region. Children in 60 clusters were enumerated and examined in classrooms. The examination included visual acuity, retinoscopy, autorefraction under cycloplegia, and examination of anterior segment, media, and fundus. For quality assurance, a random sample of children with reduced and normal vision were selected and re-examined independently. A total of 2454 children attending 53 private schools were enumerated, and of these, 2435 (99.2%) were examined. Prevalence of uncorrected, presenting, and best visual acuity of 20/40 or worse in the better eye was 3.7, 3.5, and 0.4%, respectively. Refractive error was the cause of reduced vision in 71.7% of 152 eyes, amblyopia in 9.9%, retinal disorders in 5.9%, and corneal opacity in 4.6%. Exterior and anterior segment abnormalities occurred in 43 (1.8%) children. Myopia (at least -0.50 D) in one or both eyes was present in 3.2% of children when measured with retinoscopy and in 3.4% measured with autorefraction. Myopia was not significantly associated with gender (P = 0.82). Hyperopia (+2.00 D or more) in at least one eye was present in 0.3% of children with retinoscopy and autorefraction. The prevalence of reduced vision in Ghanaian private school children due to uncorrected refractive error was low. However, the prevalence of amblyopia, retinal disorders, and corneal opacities indicate the need for early interventions.
Prevalence and occupational associations of neck pain in the British population.
Palmer, K T; Walker-Bone, K; Griffin, M J; Syddall, H; Pannett, B; Coggon, D; Cooper, C
2001-02-01
This study determined the prevalence of neck pain and its relation to occupation and occupational activities in the general population. A questionnaire was mailed to 21 201 subjects aged 16-64 years, randomly selected from the patient registers of general practices in England, Scotland, and Wales, and to 993 subjects randomly selected from pay records of the armed services. Information was collected on occupation, workplace physical activities, neck pain in the past week and year, headaches, and feelings of tiredness or stress. Associations were explored by logistic regression, the resultant odds ratios being converted to prevalence ratios (PR). Among 12 907 respondents, 4348 and 2528 reported neck pain in the past year (1421 with pain interfering with normal activities) and the past week, respectively. Symptoms were most prevalent among male construction workers [past week and year 24% and 38% (pain interfering with activities 11%), respectively], followed by nurses, armed services members, and the unemployed. Generally, the age-standardized prevalence of neck pain varied little by occupation. Work with the arms above shoulder height for >1 hour/day was associated with a significant excess of symptoms [PR 1.3-1.7 (women) and 1.2-1.4 (men)], but no associations existed for typing, lifting, vibratory tool use, or professional driving. Stronger neck-pain associations were found with frequent headaches (PR 2.3-2.8) and frequent tiredness or stress (PR 2.2-2.5) than with occupational activities. The data provide evidence against a strong association between neck pain and the examined occupational physical activities. They suggest that psychosocial factors may be more important.
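The conversion of logistic-regression odds ratios to prevalence ratios mentioned above is commonly done with the Zhang and Yu (1998) formula; the sketch below assumes that method (the abstract does not say which conversion the authors used) and an invented 20% baseline prevalence.

```python
def odds_ratio_to_prevalence_ratio(odds_ratio, baseline_prevalence):
    """Zhang-Yu conversion: OR -> PR, given the outcome prevalence in the
    unexposed (reference) group. For rare outcomes, PR approaches the OR."""
    return odds_ratio / ((1.0 - baseline_prevalence) + baseline_prevalence * odds_ratio)

# A hypothetical OR of 1.5 with 20% neck-pain prevalence among the unexposed
pr = odds_ratio_to_prevalence_ratio(1.5, 0.20)
print(round(pr, 2))  # -> 1.36
```

The PR sits closer to 1 than the OR whenever the outcome is common, which is why such a conversion matters for a symptom as prevalent as neck pain.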
Duesberg, Peter; McCormack, Amanda
2013-01-01
Immortality is a common characteristic of cancers, but its origin and purpose are still unclear. Here we advance a karyotypic theory of immortality grounded in the premise that carcinogenesis is a form of speciation. Accordingly, cancers are generated from normal cells by random karyotypic rearrangements and selection for cancer-specific reproductive autonomy. Since such rearrangements unbalance long-established mitosis genes, cancer karyotypes vary spontaneously but are stabilized perpetually by clonal selections for autonomy. To test this theory we analyzed neoplastic clones, presumably immortalized by transfection with overexpressed telomerase or with SV40 tumor virus, for the predicted clonal yet flexible karyotypes. The following results were obtained: (1) all immortal tumorigenic lines from cells transfected with overexpressed telomerase had clonal and flexible karyotypes; (2) searching for the origin of such karyotypes, we found spontaneously increasing, random aneuploidy in human fibroblasts early after transfection with overexpressed telomerase; (3) late after transfection, new immortal tumorigenic clones with new clonal and flexible karyotypes were found; (4) testing immortality of one clone during 848 unselected generations showed that the chromosome number was stable, but the copy numbers of 36% of chromosomes drifted ± 1; (5) independent immortal tumorigenic clones with individual, flexible karyotypes arose after individual latencies; (6) immortal tumorigenic clones with new flexible karyotypes also arose late from cells of a telomerase-deficient mouse rendered aneuploid by SV40 virus. Because immortality and tumorigenicity (1) correlated exactly with individual clonal but flexible karyotypes, (2) originated simultaneously with such karyotypes, and (3) arose in the absence of telomerase, we conclude that clonal and flexible karyotypes generate the immortality of cancers. PMID:23388461
Sun, Jin; Rutkoski, Jessica E; Poland, Jesse A; Crossa, José; Jannink, Jean-Luc; Sorrells, Mark E
2017-07-01
High-throughput phenotyping (HTP) platforms can be used to measure traits that are genetically correlated with wheat grain yield across time. Incorporating such secondary traits in multivariate pedigree and genomic prediction models would be desirable to improve indirect selection for grain yield. In this study, we evaluated three statistical models, simple repeatability (SR), multitrait (MT), and random regression (RR), for the longitudinal data of secondary traits and compared the impact of the proposed models for secondary traits on their predictive abilities for grain yield. Grain yield and secondary traits, canopy temperature (CT) and normalized difference vegetation index (NDVI), were collected in five diverse environments for 557 wheat lines with available pedigree and genomic information. A two-stage analysis was applied for pedigree and genomic selection (GS). First, secondary traits were fitted by SR, MT, or RR models, separately, within each environment. Then, best linear unbiased predictions (BLUPs) of secondary traits from the above models were used in the multivariate prediction models to compare predictive abilities for grain yield. Predictive ability was substantially improved, by 70% on average, in multivariate pedigree and genomic models when secondary traits were included in both training and test populations. Additionally, (i) predictive abilities varied only slightly among the MT, RR, and SR models in this data set, (ii) results indicated that including BLUPs of secondary traits from the MT model was best under severe drought, and (iii) the RR model was slightly better than the SR and MT models under the drought environment. Copyright © 2017 Crop Science Society of America.
Teriakidis, Adrianna; Willshaw, David J; Ribchester, Richard R
2012-10-01
During development, neurons form supernumerary synapses, most of which are selectively pruned leading to stereotyped patterns of innervation. During the development of skeletal muscle innervation, or its regeneration after nerve injury, each muscle fiber is transiently innervated by multiple motor axon branches but eventually by a single branch. The selective elimination of all but one branch is the result of competition between the converging arbors. It is thought that motor neurons initially innervate muscle fibers randomly, but that axon branches from the same neuron (sibling branches) do not converge to innervate the same muscle fiber. However, random innervation would result in many neonatal endplates that are co-innervated by sibling branches. To investigate whether this occurs we examined neonatal levator auris longus (LAL) and 4th deep lumbrical (4DL) muscles, as well as adult reinnervated deep lumbrical muscles (1-4) in transgenic mice expressing yellow fluorescent protein (YFP) as a reporter. We provide direct evidence of convergence of sibling neurites within single fluorescent motor units, both during development and during regeneration after nerve crush. The incidence of sibling neurite convergence was 40% lower in regeneration and at least 75% lower during development than expected by chance. Therefore, there must be a mechanism that decreases the probability of its occurrence. As sibling neurite convergence is not seen in normal adults, or at later timepoints in regeneration, synapse elimination must also remove convergent synaptic inputs derived from the same motor neuron. Mechanistic theories of synaptic competition should now accommodate this form of isoaxonal plasticity. Copyright © 2012 Wiley Periodicals, Inc.
Hamel, Sandra; Yoccoz, Nigel G; Gaillard, Jean-Michel
2017-05-01
Mixed models are now well-established methods in ecology and evolution because they allow accounting for and quantifying within- and between-individual variation. However, the required normal distribution of the random effects can often be violated by the presence of clusters among subjects, which leads to multi-modal distributions. In such cases, using what is known as mixture regression models might offer a more appropriate approach. These models are widely used in psychology, sociology, and medicine to describe the diversity of trajectories occurring within a population over time (e.g. psychological development, growth). In ecology and evolution, however, these models are seldom used even though understanding changes in individual trajectories is an active area of research in life-history studies. Our aim is to demonstrate the value of using mixture models to describe variation in individual life-history tactics within a population, and hence to promote the use of these models by ecologists and evolutionary ecologists. We first ran a set of simulations to determine whether and when a mixture model allows teasing apart latent clustering, and to contrast the precision and accuracy of estimates obtained from mixture models versus mixed models under a wide range of ecological contexts. We then used empirical data from long-term studies of large mammals to illustrate the potential of using mixture models for assessing within-population variation in life-history tactics. Mixture models performed well in most cases, except for variables following a Bernoulli distribution and when sample size was small. The four selection criteria we evaluated [Akaike information criterion (AIC), Bayesian information criterion (BIC), and two bootstrap methods] performed similarly well, selecting the right number of clusters in most ecological situations. 
We then showed that the normality of random effects implicitly assumed by evolutionary ecologists when using mixed models was often violated in life-history data. Mixed models were quite robust to this violation in the sense that fixed effects were unbiased at the population level. However, fixed effects at the cluster level and random effects were better estimated using mixture models. Our empirical analyses demonstrated that using mixture models facilitates the identification of the diversity of growth and reproductive tactics occurring within a population. Therefore, using this modelling framework allows testing for the presence of clusters and, when clusters occur, provides reliable estimates of fixed and random effects for each cluster of the population. In the presence or expectation of clusters, using mixture models offers a suitable extension of mixed models, particularly when evolutionary ecologists aim at identifying how ecological and evolutionary processes change within a population. Mixture regression models therefore provide a valuable addition to the statistical toolbox of evolutionary ecologists. As these models are complex and have their own limitations, we provide recommendations to guide future users. © 2016 Cambridge Philosophical Society.
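To make the mixed-model versus mixture-model contrast above concrete, here is a minimal one-dimensional Gaussian mixture fitted by EM on simulated data with two latent "tactics"; the component means, spreads, and sample sizes are invented for illustration, and a real analysis would use a dedicated mixture-regression package.

```python
import numpy as np

def fit_gaussian_mixture_1d(x, k=2, n_iter=200):
    """Minimal EM for a 1-D Gaussian mixture; returns (weights, means, sds)."""
    mu = np.quantile(x, np.linspace(0.25, 0.75, k))  # spread-out initial means
    sd = np.full(k, x.std())
    w = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: responsibility of each component for each observation
        dens = w * np.exp(-0.5 * ((x[:, None] - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M-step: update weights, means, and standard deviations
        nk = resp.sum(axis=0)
        w = nk / len(x)
        mu = (resp * x[:, None]).sum(axis=0) / nk
        sd = np.sqrt((resp * (x[:, None] - mu) ** 2).sum(axis=0) / nk)
    return w, mu, sd

# Two latent "tactics": slow growers around 2.0 and fast growers around 6.0
rng = np.random.default_rng(42)
x = np.concatenate([rng.normal(2.0, 0.5, 300), rng.normal(6.0, 0.5, 200)])
w, mu, sd = fit_gaussian_mixture_1d(x)
print(np.sort(mu))  # component means recover the two clusters
```

A single mixed model would pool these two groups into one inflated-variance normal; the mixture recovers both means and their mixing weights.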
Adaptive Electronic Camouflage Using Texture Synthesis
2012-04-01
The algorithm begins by computing the GLCMs, G_IN and G_OUT, of the input image (e.g., an image of the local environment) and the output image (randomly generated)..., respectively. The algorithm randomly selects a pixel from the output image and cycles its gray level through all values. For each value, G_OUT is updated... The value of the selected pixel is permanently changed to the gray-level value that minimizes the error between G_IN and G_OUT. Without selecting a
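The pixel-cycling loop described above can be sketched from scratch as follows; the texture, image size, gray-level count, and step count are illustrative assumptions, not the report's actual parameters.

```python
import numpy as np

def glcm(img, levels):
    """Normalized gray-level co-occurrence matrix for horizontal neighbor pairs."""
    g = np.zeros((levels, levels))
    np.add.at(g, (img[:, :-1].ravel(), img[:, 1:].ravel()), 1)
    return g / g.sum()

def synthesize(texture, shape, levels=4, n_steps=500, seed=0):
    """Greedy synthesis: pick a random output pixel and keep the gray level
    that minimizes the squared error between input and output GLCMs."""
    rng = np.random.default_rng(seed)
    g_in = glcm(texture, levels)
    out = rng.integers(0, levels, size=shape)
    for _ in range(n_steps):
        r, c = rng.integers(shape[0]), rng.integers(shape[1])
        errs = []
        for v in range(levels):          # cycle the pixel through all gray levels
            out[r, c] = v
            errs.append(((glcm(out, levels) - g_in) ** 2).sum())
        out[r, c] = int(np.argmin(errs)) # keep the best value permanently
    return out

# A 4-level striped texture; the output should reproduce its horizontal
# co-occurrence statistics, not its exact pixels
texture = np.tile(np.arange(4), (8, 4))
out = synthesize(texture, (16, 16))
err = ((glcm(out, 4) - glcm(texture, 4)) ** 2).sum()
print(err)
```

Because the current gray level is always among the candidates, the GLCM error is non-increasing, so the output statistics steadily approach those of the input texture.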
Normal and tumoral melanocytes exhibit q-Gaussian random search patterns.
da Silva, Priscila C A; Rosembach, Tiago V; Santos, Anésia A; Rocha, Márcio S; Martins, Marcelo L
2014-01-01
In multicellular organisms, cell motility is central to all morphogenetic processes, tissue maintenance, wound healing, and immune surveillance. Hence, failures in its regulation potentiate numerous diseases. Here, cell migration assays on plastic 2D surfaces were performed using normal (Melan A) and tumoral (B16F10) murine melanocytes under random motility conditions. The trajectories of the centroids of the cell perimeters were tracked through time-lapse microscopy. The statistics of these trajectories were analyzed by building velocity and turn-angle distributions, as well as velocity autocorrelations and the scaling of mean-squared displacements. We find that these cells exhibit a crossover from normal to super-diffusive motion without angular persistence at long time scales. Moreover, these melanocytes move with non-Gaussian velocity distributions. This major finding indicates that amongst those animal cells supposedly migrating through Lévy walks, some can instead perform q-Gaussian walks. Furthermore, our results reveal that B16F10 cells infected by mycoplasmas exhibit essentially the same diffusivity as their healthy counterparts. Finally, a q-Gaussian random walk model was proposed to account for these melanocytic migratory traits. Simulations based on this model correctly describe the crossover to super-diffusivity in the cell migration tracks.
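For readers who want to experiment with this model class, q-Gaussian step deviates can be drawn with the generalized Box-Muller method of Thistleton et al.; the q values below are chosen for illustration and are not the paper's fitted parameters.

```python
import numpy as np

def q_log(x, q):
    """Tsallis q-logarithm; reduces to the natural log as q -> 1."""
    return np.log(x) if q == 1 else (x ** (1 - q) - 1) / (1 - q)

def q_gaussian(n, q, rng):
    """Generalized Box-Muller sampler for q-Gaussian deviates (1 <= q < 3):
    use q' = (1 + q) / (3 - q) inside the q-log (Thistleton et al.)."""
    qp = (1 + q) / (3 - q)
    u1, u2 = rng.random(n), rng.random(n)
    return np.sqrt(-2 * q_log(u1, qp)) * np.cos(2 * np.pi * u2)

def excess_kurtosis(x):
    """Sample excess kurtosis; ~0 for a normal distribution."""
    z = x - x.mean()
    return (z ** 4).mean() / (z ** 2).mean() ** 2 - 3.0

rng = np.random.default_rng(7)
gauss_steps = q_gaussian(50_000, 1.0, rng)   # q = 1 recovers an ordinary Gaussian
heavy_steps = q_gaussian(50_000, 1.25, rng)  # q > 1 gives heavier tails
print(round(excess_kurtosis(gauss_steps), 2), round(excess_kurtosis(heavy_steps), 2))
```

Summing such steps gives a q-Gaussian walk whose heavy-tailed displacements mimic the super-diffusive crossover reported for the melanocytes; at q = 1 the walk collapses back to ordinary diffusion.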
DOE Office of Scientific and Technical Information (OSTI.GOV)
Parker, S
2015-06-15
Purpose: To evaluate the ability of statistical process control methods to detect systematic errors when using a two-dimensional (2D) detector array for routine electron beam energy verification. Methods: Electron beam energy constancy was measured using an aluminum wedge and a 2D diode array on four linear accelerators. Process control limits were established. Measurements were recorded in control charts and compared with both calculated process control limits and TG-142 recommended specification limits. The data was tested for normality, process capability and process acceptability. Additional measurements were recorded while systematic errors were intentionally introduced. Systematic errors included shifts in the alignment of the wedge, incorrect orientation of the wedge, and incorrect array calibration. Results: Control limits calculated for each beam were smaller than the recommended specification limits. Process capability and process acceptability ratios were greater than one in all cases. All data was normally distributed. Shifts in the alignment of the wedge were most apparent for low energies. The smallest shift (0.5 mm) was detectable using process control limits in some cases, while the largest shift (2 mm) was detectable using specification limits in only one case. The wedge orientation tested did not affect the measurements as this did not affect the thickness of aluminum over the detectors of interest. Array calibration dependence varied with energy and selected array calibration. 6 MeV was the least sensitive to array calibration selection while 16 MeV was the most sensitive. Conclusion: Statistical process control methods demonstrated that the data distribution was normally distributed, the process was capable of meeting specifications, and that the process was centered within the specification limits. 
Though not all systematic errors were distinguishable from random errors, process control limits increased the ability to detect systematic errors using routine measurement of electron beam energy constancy.
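As a concrete sketch of the approach above: individuals-chart control limits computed from the average moving range, plus a Cp-style capability ratio against assumed specification limits of ±2% (the data here are simulated, not the study's measurements).

```python
import numpy as np

def individuals_control_limits(x):
    """Individuals (I-MR) chart limits: mean +/- 2.66 * average moving range."""
    mr_bar = np.abs(np.diff(x)).mean()
    center = x.mean()
    return center - 2.66 * mr_bar, center, center + 2.66 * mr_bar

def capability_ratio(x, lsl, usl):
    """Cp-style process capability: specification width over six process sigmas."""
    return (usl - lsl) / (6 * x.std(ddof=1))

# Simulated daily energy-constancy readings (% deviation from baseline),
# checked against assumed TG-142-style specification limits of +/-2%
rng = np.random.default_rng(1)
x = rng.normal(0.0, 0.3, 30)
lcl, center, ucl = individuals_control_limits(x)
print(lcl < center < ucl)  # -> True
print(capability_ratio(x, -2.0, 2.0) > 1.0)
```

A capability ratio above one mirrors the paper's finding that control limits sit inside the specification limits: the chart flags drifts well before a tolerance is violated.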
Fritz, Stacy L; Peters, Denise M; Merlo, Angela M; Donley, Jonathan
2013-01-01
Treatments that provide feedback, increase practice with multiple repetitions, and motivate patients are essential to rehabilitation post stroke. To determine whether playing active video games results in improved balance and mobility post stroke. Thirty participants with chronic (time since stroke = 3.0 [2.9] years) hemiparesis were randomly assigned to a gaming group or a normal-activity control group. Gaming systems provided participants with an interactive interface showing real-time movement of either themselves or an avatar on the screen. Participants played games 50-60 minutes/day, 4 days/week, for 5 weeks. The intervention was strictly game-play, in a standing position, without physical therapy. The control group received no special intervention and continued with normal activity. Both groups were tested prior to the intervention, at the end of the 5 weeks (post test), and 3 months after the completion of the study. Outcome measures included the Fugl-Meyer Assessment, Berg Balance Scale, Dynamic Gait Index, Timed Up & Go, 6-minute walk test, 3-meter walk (self-selected and fast), and perception of recovery. No statistically significant differences between or within groups were found through analysis of covariance (covaried for side of hemiparesis) at post test or follow-up. Although the within-group effect sizes were primarily indexed as "small" (< .36), the gaming group exhibited higher pre-to-post within-group effect sizes than did the control group on all 7 dependent variables analyzed. Even though the only intervention was game-play, there were small positive effects. Therapist assistance in making more optimal movement choices may be needed before significant improvements are seen with commercially available, general-purpose games.
Latent stereopsis for motion in depth in strabismic amblyopia.
Hess, Robert F; Mansouri, Behzad; Thompson, Benjamin; Gheorghiu, Elena
2009-10-01
To investigate the residual stereo function of a group of 15 patients with strabismic amblyopia, using motion-in-depth stimuli that allow discrimination of contributions from local disparity as opposed to local velocity mechanisms as a function of the rate of depth change. Stereo performance (percentage correct) was measured as a function of the rate of depth change for dynamic random-dot stimuli that were either temporally correlated or uncorrelated. Residual stereoscopic function was demonstrated for motion in depth based on local disparity information in 2 of the 15 observers with strabismic amblyopia. The use of a neutral-density (ND) filter in front of the fixing eye enhanced motion-in-depth performance in four subjects randomly selected from the group that originally displayed only chance performance. This finding held across temporal rate and for correlated and uncorrelated stimuli, suggesting that it was disparity based. The opposite occurred in a group of normal subjects. A separate experiment tested the hypothesis that the beneficial effect of the ND filter is due to its contrast- and/or mean-luminance-reducing effects rather than to any interocular time delay it may introduce, and that the effect is specific to motion-in-depth performance, as similar improvements were not found for static stereopsis. A small proportion of observers with strabismic amblyopia exhibit residual performance for motion in depth, and it is disparity based. Furthermore, some observers with strabismic amblyopia who do not display any significant stereo performance for motion in depth under normal binocular viewing may display above-chance stereo performance if the degree of interocular suppression is reduced. The authors term this phenomenon latent stereopsis.
Hebbian Learning in a Random Network Captures Selectivity Properties of the Prefrontal Cortex
Lindsay, Grace W.
2017-01-01
Complex cognitive behaviors, such as context-switching and rule-following, are thought to be supported by the prefrontal cortex (PFC). Neural activity in the PFC must thus be specialized to specific tasks while retaining flexibility. Nonlinear “mixed” selectivity is an important neurophysiological trait for enabling complex and context-dependent behaviors. Here we investigate (1) the extent to which the PFC exhibits computationally relevant properties, such as mixed selectivity, and (2) how such properties could arise via circuit mechanisms. We show that PFC cells recorded from male and female rhesus macaques during a complex task show a moderate level of specialization and structure that is not replicated by a model wherein cells receive random feedforward inputs. While random connectivity can be effective at generating mixed selectivity, the data show significantly more mixed selectivity than predicted by a model with otherwise matched parameters. A simple Hebbian learning rule applied to the random connectivity, however, increases mixed selectivity and enables the model to match the data more accurately. To explain how learning achieves this, we provide analysis along with a clear geometric interpretation of the impact of learning on selectivity. After learning, the model also matches the data on measures of noise, response density, clustering, and the distribution of selectivities. Of two styles of Hebbian learning tested, the simpler and more biologically plausible option better matches the data. These modeling results provide clues about how neural properties important for cognition can arise in a circuit and make clear experimental predictions regarding how various measures of selectivity would evolve during animal training. SIGNIFICANCE STATEMENT The prefrontal cortex is a brain region believed to support the ability of animals to engage in complex behavior. 
How neurons in this area respond to stimuli—and in particular, to combinations of stimuli (“mixed selectivity”)—is a topic of interest. Even though models with random feedforward connectivity are capable of creating computationally relevant mixed selectivity, such a model does not match the levels of mixed selectivity seen in the data analyzed in this study. Adding simple Hebbian learning to the model increases mixed selectivity to the correct level and makes the model match the data on several other relevant measures. This study thus offers predictions on how mixed selectivity and other properties evolve with training. PMID:28986463
Williams, Leanne M; Korgaonkar, Mayuresh S; Song, Yun C; Paton, Rebecca; Eagles, Sarah; Goldstein-Piekarski, Andrea; Grieve, Stuart M; Harris, Anthony W F; Usherwood, Tim; Etkin, Amit
2015-01-01
Although the cost of poor treatment outcomes of depression is staggering, we do not yet have clinically useful methods for selecting the most effective antidepressant for each depressed person. Emotional brain activation is altered in major depressive disorder (MDD) and implicated in treatment response. Identifying which aspects of emotional brain activation are predictive of general and specific responses to antidepressants may help clinicians and patients when making treatment decisions. We examined whether amygdala activation probed by emotion stimuli is a general or differential predictor of response to three commonly prescribed antidepressants, using functional magnetic resonance imaging (fMRI). A test–retest design was used to assess patients with MDD in an academic setting as part of the International Study to Predict Optimized Treatment in Depression. A total of 80 MDD outpatients were scanned prior to treatment and 8 weeks after randomization to the selective serotonin reuptake inhibitors escitalopram and sertraline and the serotonin–norepinephrine reuptake inhibitor, venlafaxine-extended release (XR). A total of 34 matched controls were scanned at the same timepoints. We quantified the blood oxygen level-dependent signal of the amygdala during subliminal and supraliminal viewing of facial expressions of emotion. Response to treatment was defined by ⩾50% symptom improvement on the 17-item Hamilton Depression Rating Scale. Pre-treatment amygdala hypo-reactivity to subliminal happy and threat was a general predictor of treatment response, regardless of medication type (Cohen's d effect size 0.63 to 0.77; classification accuracy, 75%). Responders showed hypo-reactivity compared to controls at baseline, and an increase toward ‘normalization' post-treatment. Pre-treatment amygdala reactivity to subliminal sadness was a differential moderator of non-response to venlafaxine-XR (Cohen's d effect size 1.5; classification accuracy, 81%). 
Non-responders to venlafaxine-XR showed pre-treatment hyper-reactivity, which progressed to hypo-reactivity rather than normalization post-treatment, and hypo-reactivity post-treatment was abnormal compared to controls. Impaired amygdala activation has not previously been highlighted in the general vs differential prediction of antidepressant outcomes. Amygdala hypo-reactivity to emotions signaling reward and threat predicts the general capacity to respond to antidepressants. Amygdala hyper-reactivity to sad emotion is involved in a specific non-response to a serotonin–norepinephrine reuptake inhibitor. The findings suggest amygdala probes may help inform the personal selection of antidepressant treatments. PMID:25824424
Assessment of the hygienic performances of hamburger patty production processes.
Gill, C O; Rahn, K; Sloan, K; McMullen, L M
1997-05-20
The hygienic condition of hamburger patties collected from three patty-manufacturing plants and six retail outlets was examined. At each manufacturing plant, a sample from newly formed, chilled patties and one from frozen patties were collected from each of 25 batches of patties selected at random. At the retail outlets, 25 samples were collected at random: frozen patties at three outlets, chilled patties at two, and both frozen and chilled patties at one. Each sample consisted of 30 g of meat obtained from five or six patties. Total aerobic, coliform, and Escherichia coli counts per gram were enumerated for each sample. The mean (x) and standard deviation (s) of the log10 values were calculated for each set of 25 counts, on the assumption that the distribution of counts approximated the log normal. A value for the log10 of the arithmetic mean (log A) was calculated for each set from the values of x and s. A chi2 statistic was calculated for each set as a test of the assumption of the log normal distribution. The chi2 statistic was calculable for 32 of the 39 sets. Four of the sets gave chi2 values indicative of gross deviation from log normality. On inspection of those sets, distributions obviously differing from the log normal were apparent in two. Log A values for total, coliform, and E. coli counts for chilled patties from manufacturing plants ranged from 4.4 to 5.1, 1.7 to 2.3, and 0.9 to 1.5, respectively. Log A values for frozen patties from manufacturing plants were between < 0.1 and 0.5 log10 units less than the equivalent values for chilled patties. Log A values for total, coliform, and E. coli counts for frozen patties on retail sale ranged from 3.8 to 8.5, < 0.5 to 3.6, and < 0 to 1.9, respectively. The equivalent ranges for chilled patties on retail sale were 4.8 to 8.5, 1.8 to 3.7, and 1.4 to 2.7, respectively. 
The findings indicate that the general hygienic condition of hamburger patties could be improved by manufacturing them only from beef of superior hygienic quality, and by better management of chilled patties at retail outlets.
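The log A quantity above follows directly from the log-normal assumption: if the log10 counts are distributed N(x, s²), the log10 of the arithmetic mean is x + s²·ln(10)/2 ≈ x + 1.151·s². A one-line sketch with invented values:

```python
import math

def log_arithmetic_mean(x_bar, s):
    """log10 of the arithmetic mean count of a log-normal set, given the
    mean (x_bar) and standard deviation (s) of the log10 counts."""
    return x_bar + (math.log(10) / 2) * s ** 2

# e.g. a hypothetical set with mean log10 count 4.0 and sd 0.6
print(round(log_arithmetic_mean(4.0, 0.6), 2))  # -> 4.41
```

Note that log A always exceeds the mean of the log10 counts whenever s > 0, which is why the arithmetic-mean count is reported alongside, not instead of, x and s.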
Postprandial glucose response to selected tropical fruits in normal glucose-tolerant Nigerians.
Edo, A; Eregie, A; Adediran, O; Ohwovoriole, A; Ebengho, S
2011-01-01
The glycemic response to commonly eaten fruits in Nigeria has not been reported. Therefore, this study assessed the plasma glucose response to selected fruits in Nigeria. Ten normal glucose-tolerant subjects randomly consumed 50 g carbohydrate portions of three fruits: banana (Musa paradisiaca), pineapple (Ananas comosus), and pawpaw (Carica papaya), and a 50-g glucose load at 1-week intervals. Blood samples were collected in the fasting state and half-hourly over a 2-h period post-ingestion of the fruits or glucose. The samples were analyzed for plasma glucose concentrations. Plasma glucose responses were assessed by the peak plasma glucose concentration, maximum increase in plasma glucose, 2-h postprandial plasma glucose level, incremental area under the glucose curve, and glycemic index (GI). The results showed that the blood glucose response to these three fruits was similar in terms of their incremental areas under the glucose curve, maximum increases in plasma glucose, and GIs. The 2-h postprandial plasma glucose level of banana was significantly higher than that of pineapple, P < 0.025. The mean ± SEM GI values were as follows: pawpaw, 86 ± 26.8%; banana, 75.1 ± 21.8%; pineapple, 64.5 ± 11.3%. The GI of glucose is taken as 100. The GI of pineapple was significantly lower than that of glucose (P < 0.05). Banana, pawpaw, and pineapple produced a similar postprandial glucose response. Measured portions of these fruits may be used as fruit exchanges, with pineapple having the most favorable glycemic response.
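GI values like those above are conventionally derived from incremental areas under the 2-h glucose curve; the sketch below uses hypothetical readings (not the study's data) and the common convention that excursions below the fasting baseline are ignored.

```python
import numpy as np

def incremental_auc(times, glucose, baseline=None):
    """Incremental area under the glucose curve above the fasting baseline
    (trapezoidal rule; dips below baseline contribute zero)."""
    t = np.asarray(times, float)
    g = np.asarray(glucose, float)
    baseline = g[0] if baseline is None else baseline
    excess = np.clip(g - baseline, 0.0, None)
    return float(np.sum((excess[1:] + excess[:-1]) / 2 * np.diff(t)))

# Hypothetical half-hourly readings (mg/dL); GI = 100 * fruit iAUC / glucose iAUC
times = [0, 30, 60, 90, 120]
glucose_ref = [90, 150, 140, 120, 100]    # after the 50-g glucose load
glucose_fruit = [90, 130, 125, 110, 100]  # after a 50-g carbohydrate fruit portion
gi = 100 * incremental_auc(times, glucose_fruit) / incremental_auc(times, glucose_ref)
print(round(gi))  # -> 69
```

Averaging such per-subject ratios over the ten subjects gives the mean ± SEM GI values quoted in the abstract.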
Application of k-means clustering algorithm in grouping the DNA sequences of hepatitis B virus (HBV)
NASA Astrophysics Data System (ADS)
Bustamam, A.; Tasman, H.; Yuniarti, N.; Frisca, Mursidah, I.
2017-07-01
Based on WHO data, an estimated 15 million people worldwide who are infected with hepatitis B (HBsAg+), caused by the hepatitis B virus (HBV), are also infected with hepatitis D, caused by the hepatitis D virus (HDV). Hepatitis D infection can occur simultaneously with hepatitis B (coinfection) or after a person has been exposed to chronic hepatitis B (superinfection). Since HDV cannot replicate without HBV, HDV infection is closely tied to HBV infection, so any effort to prevent hepatitis B can indirectly prevent hepatitis D. This paper presents a clustering of HBV DNA sequences using the k-means clustering algorithm and R programming. The clustering process starts by collecting HBV DNA sequences from GenBank, then extracting features from the sequences as n-mer frequencies; the extraction results are assembled into a matrix and normalized with min-max normalization to the interval [0, 1], which is later used as the input data. The number of clusters is two, and the initial centroids are chosen randomly. In each iteration, the distance from every object to each centroid is calculated using the Euclidean distance, and the minimum distance determines cluster membership, until two convergent clusters are obtained. As a result, the HBV viruses in the first cluster are more virulent than those in the second cluster, so the viruses in the first cluster can potentially evolve with HDV viruses that cause hepatitis D.
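The pipeline described above (n-mer frequency extraction, min-max normalization, then k-means with Euclidean distance and k = 2) can be sketched end to end in a few lines; the four toy sequences below are invented stand-ins for GenBank HBV genomes.

```python
import numpy as np
from itertools import product

def nmer_frequencies(seq, n=2):
    """Frequency vector over all 4**n DNA n-mers (A, C, G, T), counted with overlap."""
    kmers = [''.join(p) for p in product('ACGT', repeat=n)]
    counts = np.array([sum(seq[i:i + n] == k for i in range(len(seq) - n + 1))
                       for k in kmers], float)
    return counts / counts.sum()

def min_max(X):
    """Column-wise min-max normalization to [0, 1]."""
    lo, hi = X.min(0), X.max(0)
    return (X - lo) / np.where(hi > lo, hi - lo, 1.0)

def kmeans(X, k=2, n_iter=50, seed=0):
    """Plain k-means: random initial centroids, Euclidean assignments."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), k, replace=False)]
    for _ in range(n_iter):
        labels = np.argmin(((X[:, None] - centroids) ** 2).sum(-1), axis=1)
        centroids = np.array([X[labels == j].mean(0) if np.any(labels == j)
                              else centroids[j] for j in range(k)])
    return labels

seqs = ['ATATATATATAT', 'ATATATATTTAT', 'GCGCGCGCGCGC', 'GCGCGGGCGCGC']
X = min_max(np.array([nmer_frequencies(s) for s in seqs]))
labels = kmeans(X, k=2)
print(labels)  # the two AT-rich and the two GC-rich sequences form separate clusters
```

Real HBV genomes would use longer n-mers and thousands of positions, but the matrix-building, normalization, and iteration steps are exactly as the abstract describes.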
Ryeznik, Yevgen; Sverdlov, Oleksandr; Wong, Weng Kee
2015-08-01
Response-adaptive randomization designs are becoming increasingly popular in clinical trial practice. In this paper, we present RARtool, user-interface software developed in MATLAB for designing response-adaptive randomized comparative clinical trials with censored time-to-event outcomes. The RARtool software can compute different types of optimal treatment allocation designs, and it can simulate response-adaptive randomization procedures targeting selected optimal allocations. Through simulations, an investigator can assess design characteristics under a variety of experimental scenarios and select the best procedure for practical implementation. We illustrate the utility of our RARtool software by redesigning a survival trial from the literature.
The Effect of CAI on Reading Achievement.
ERIC Educational Resources Information Center
Hardman, Regina
A study determined whether computer assisted instruction (CAI) had an effect on students' reading achievement. Subjects were 21 randomly selected fourth-grade students at D. S. Wentworth Elementary School, in a low-income neighborhood on the south side of Chicago, who received a year's exposure to a CAI program, and 21 randomly selected students at…
78 FR 57033 - United States Standards for Condition of Food Containers
Federal Register 2010, 2011, 2012, 2013, 2014
2013-09-17
... containers during production. Stationary lot sampling is the process of randomly selecting sample units from.... * * * * * Stationary lot sampling. The process of randomly selecting sample units from a lot whose production has been... less than 1/16-inch Stringy seal (excessive plastic threads showing at edge of seal 222 area...
Access to Higher Education by the Luck of the Draw
ERIC Educational Resources Information Center
Stone, Peter
2013-01-01
Random selection is a fair way to break ties between applicants of equal merit seeking admission to institutions of higher education (with "merit" defined here in terms of the intrinsic contribution higher education would make to the applicant's life). Opponents of random selection commonly argue that differences in strength between…
ERIC Educational Resources Information Center
Beretvas, S. Natasha; Murphy, Daniel L.
2013-01-01
The authors assessed correct model identification rates of Akaike's information criterion (AIC), the corrected AIC (AICC), the consistent AIC (CAIC), Hannan and Quinn's information criterion (HQIC), and the Bayesian information criterion (BIC) for selecting among cross-classified random effects models. Performance of default values for the 5…
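The criteria compared in the study above all have simple closed forms in terms of the maximized log-likelihood. A minimal reference sketch of the standard formulas (not the authors' code; `loglik`, `k`, and `n` are generic names):

```python
import math

# Information criteria for model selection. loglik is the maximized
# log-likelihood, k the number of estimated parameters, n the sample size.
def aic(loglik, k):
    return 2 * k - 2 * loglik

def aicc(loglik, k, n):
    # Small-sample correction to AIC (requires n > k + 1).
    return aic(loglik, k) + 2 * k * (k + 1) / (n - k - 1)

def bic(loglik, k, n):
    return k * math.log(n) - 2 * loglik

def caic(loglik, k, n):
    # Consistent AIC: BIC with an extra +k in the penalty.
    return k * (math.log(n) + 1) - 2 * loglik

def hqic(loglik, k, n):
    return 2 * k * math.log(math.log(n)) - 2 * loglik
```

For a fixed data set, the model minimizing the chosen criterion is selected; the criteria differ only in how strongly they penalize parameters.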
1977 Survey of the American Professoriate. Technical Report.
ERIC Educational Resources Information Center
Ladd, Everett Carll, Jr.; And Others
The development and data validation of the 1977 Ladd-Lipset national survey of the American professoriate are described. The respondents were selected from a random sample of colleges and universities and from a random sample of individual faculty members from the universities. The 158 institutions in the 1977 survey were selected from 2,406…
Site Selection in Experiments: A Follow-Up Evaluation of Site Recruitment in Two Scale-Up Studies
ERIC Educational Resources Information Center
Tipton, Elizabeth; Fellers, Lauren; Caverly, Sarah; Vaden-Kiernan, Michael; Borman, Geoffrey; Sullivan, Kate; Ruiz de Castillo, Veronica
2015-01-01
Randomized experiments are commonly used to evaluate if particular interventions improve student achievement. While these experiments can establish that a treatment actually "causes" changes, typically the participants are not randomly selected from a well-defined population and therefore the results do not readily generalize. Three…
Selection dynamic of Escherichia coli host in M13 combinatorial peptide phage display libraries.
Zanconato, Stefano; Minervini, Giovanni; Poli, Irene; De Lucrezia, Davide
2011-01-01
Phage display relies on an iterative cycle of selection and amplification of random combinatorial libraries to enrich the initial population for those peptides that satisfy a priori chosen criteria. The effectiveness of any phage display protocol depends directly on library amino acid sequence diversity and the strength of the selection procedure. In this study we monitored the dynamics of the selective pressure exerted by the host organism on a random peptide library in the absence of any additional selection pressure. The results indicate that sequence censorship exerted by Escherichia coli dramatically reduces library diversity and can significantly impair phage display effectiveness.
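Library diversity of the kind this study tracks is often summarized as the Shannon entropy of the observed sequence distribution: censorship that collapses the library onto a few host-tolerated clones drives the entropy down. An illustrative sketch on toy data (not the authors' protocol; peptide strings are made up):

```python
import math
from collections import Counter

def shannon_diversity(peptides):
    """Shannon entropy (bits) of a peptide library's sequence distribution.

    A maximally diverse library of m equiprobable sequences has
    entropy log2(m); a library collapsed onto one clone has entropy 0.
    """
    counts = Counter(peptides)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

# Toy "sequenced clones": repeated entries mimic host censorship.
library = ["ACDE", "ACDE", "WGHK", "PQRS", "NMLV", "ACDE", "WGHK", "PQRS"]
h = shannon_diversity(library)
```

Comparing this entropy before and after amplification rounds gives a single number for how much diversity the host has censored.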
ERIC Educational Resources Information Center
Hesselmark, Eva; Plenty, Stephanie; Bejerot, Susanne
2014-01-01
Although adults with autism spectrum disorder are an increasingly identified patient population, few treatment options are available. This "preliminary" randomized controlled open trial with a parallel design developed two group interventions for adults with autism spectrum disorders and intelligence within the normal range: cognitive…
Risk analytics for hedge funds
NASA Astrophysics Data System (ADS)
Pareek, Ankur
2005-05-01
The rapid growth of the hedge fund industry presents a significant business opportunity for institutional investors, particularly in the form of portfolio diversification. To facilitate this, there is a need to develop a new set of risk analytics for investments consisting of hedge funds, with the ultimate aim of creating transparency in risk measurement without compromising the proprietary investment strategies of hedge funds. As well documented in the literature, the use of dynamic, options-like strategies by most hedge funds makes their returns highly non-normal with fat tails and high kurtosis, rendering Value at Risk (VaR) and other mean-variance analysis methods unsuitable for hedge fund risk quantification. This paper looks at some unique concerns for hedge fund risk management and concentrates on two approaches from the physical world to model the non-linearities and dynamic correlations in hedge fund portfolio returns: Self-Organized Criticality (SOC) and Random Matrix Theory (RMT). Random Matrix Theory analyzes the correlation matrix between different hedge fund styles and filters random noise from genuine correlations arising from interactions within the system. As seen in the results of the portfolio risk analysis, it leads to better portfolio risk forecastability and thus to optimum allocation of resources to different hedge fund styles. The results also prove the efficacy of self-organized criticality and implied portfolio correlation as tools for risk management and style selection for portfolios of hedge funds, these being particularly effective during non-linear market crashes.
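The RMT filtering step described above is commonly implemented by comparing the eigenvalues of the empirical correlation matrix with the upper edge of the Marchenko-Pastur spectrum; eigenvalues below the edge are treated as noise. A hedged sketch of that idea (not the paper's code; flattening noise eigenvalues to their mean is one common trace-preserving convention):

```python
import numpy as np

def filter_correlations(returns):
    """Filter a correlation matrix with the Marchenko-Pastur bound (RMT).

    returns: (T, N) array of T observations for N return series.
    Eigenvalues below the theoretical noise edge are treated as random
    and flattened to their average so the trace is preserved.
    """
    T, N = returns.shape
    corr = np.corrcoef(returns, rowvar=False)
    q = T / N
    lam_max = (1 + 1 / np.sqrt(q)) ** 2        # upper edge of the MP spectrum
    vals, vecs = np.linalg.eigh(corr)
    noise = vals < lam_max
    vals_f = vals.copy()
    if noise.any():
        vals_f[noise] = vals[noise].mean()     # flatten noise modes
    filtered = (vecs * vals_f) @ vecs.T
    # Re-normalize the diagonal to 1, as for a correlation matrix.
    d = np.sqrt(np.diag(filtered))
    return filtered / np.outer(d, d)

rng = np.random.default_rng(0)
c = filter_correlations(rng.standard_normal((500, 20)))
```

The filtered matrix keeps eigenmodes above the noise edge (the "genuine" style correlations) and can then feed a portfolio optimizer.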
Silvis, Alexander; Ford, W. Mark; Britzke, Eric R.
2015-01-01
Bat day-roost selection often is described through comparisons of day-roosts with randomly selected, and assumed unused, trees. Relatively few studies, however, look at patterns of multi-year selection or compare day-roosts used across years. We explored day-roost selection using 2 years of roost selection data for female northern long-eared bats (Myotis septentrionalis) on the Fort Knox Military Reservation, Kentucky, USA. We compared characteristics of randomly selected non-roost trees and day-roosts using a multinomial logistic model and day-roost species selection using chi-squared tests. We found that factors differentiating day-roosts from non-roosts and day-roosts between years varied. Day-roosts differed from non-roosts in the first year of data in all measured factors, but only in size and decay stage in the second year. Between years, day-roosts differed in size and canopy position, but not decay stage. Day-roost species selection was non-random and did not differ between years. Although bats used multiple trees, our results suggest that there were additional unused trees that were suitable as roosts at any time. Day-roost selection pattern descriptions will be inadequate if based only on a single year of data, and inferences of roost selection based only on comparisons of roost to non-roosts should be limited.
Valenzuela, Carlos Y
2013-01-01
The Neutral Theory of Evolution (NTE) proposes mutation and random genetic drift as the most important evolutionary factors. The most conspicuous feature of evolution is the genomic stability during paleontological eras and lack of variation among taxa; 98% or more of nucleotide sites are monomorphic within a species. NTE explains this homology by random fixation of neutral bases and negative selection (purifying selection) that does not contribute either to evolution or polymorphisms. Purifying selection is insufficient to account for this evolutionary feature and the Nearly-Neutral Theory of Evolution (N-NTE) included negative selection with coefficients as low as mutation rate. These NTE and N-NTE propositions are thermodynamically (tendency to random distributions, second law), biotically (recurrent mutation), logically and mathematically (resilient equilibria instead of fixation by drift) untenable. Recurrent forward and backward mutation and random fluctuations of base frequencies alone in a site make life organization and fixations impossible. Drift is not a directional evolutionary factor, but a directional tendency of matter-energy processes (second law) which threatens the biotic organization. Drift cannot drive evolution. In a site, the mutation rates among bases and selection coefficients determine the resilient equilibrium frequency of bases that genetic drift cannot change. The expected neutral random interaction among nucleotides is zero; however, huge interactions and periodicities were found between bases of dinucleotides separated by 1, 2... and more than 1,000 sites. Every base is co-adapted with the whole genome. Neutralists found that neutral evolution is independent of population size (N); thus neutral evolution should be independent of drift, because drift effect is dependent upon N. Also, chromosome size and shape as well as protein size are far from random.
Scott, J.C.
1990-01-01
Computer software was written to randomly select sites for a ground-water-quality sampling network. The software uses digital cartographic techniques and subroutines from a proprietary geographic information system. The report presents the approaches, computer software, and sample applications. It is often desirable to collect ground-water-quality samples from various areas in a study region that have different values of a spatial characteristic, such as land-use or hydrogeologic setting. A stratified network can be used for testing hypotheses about relations between spatial characteristics and water quality, or for calculating statistical descriptions of water-quality data that account for variations that correspond to the spatial characteristic. In the software described, a study region is subdivided into areal subsets that have a common spatial characteristic to stratify the population into several categories from which sampling sites are selected. Different numbers of sites may be selected from each category of areal subsets. A population of potential sampling sites may be defined by either specifying a fixed population of existing sites, or by preparing an equally spaced population of potential sites. In either case, each site is identified with a single category, depending on the value of the spatial characteristic of the areal subset in which the site is located. Sites are selected from one category at a time. One of two approaches may be used to select sites. Sites may be selected randomly, or the areal subsets in the category can be grouped into cells and sites selected randomly from each cell.
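The category-by-category selection the report describes (stratify potential sites by a spatial characteristic, then draw a chosen number from each category without replacement) can be sketched in a few lines. This is an illustrative toy, not the report's FORTRAN/GIS software; all names and the example data are made up:

```python
import random
from collections import defaultdict

def stratified_site_selection(sites, n_per_category, seed=None):
    """Randomly select sampling sites from each category of areal subsets.

    sites: iterable of (site_id, category) pairs.
    n_per_category: dict mapping category -> number of sites to draw.
    """
    rng = random.Random(seed)
    by_cat = defaultdict(list)
    for site_id, cat in sites:
        by_cat[cat].append(site_id)
    selected = {}
    for cat, n in n_per_category.items():
        pool = by_cat.get(cat, [])
        # Draw without replacement; cap at the pool size.
        selected[cat] = rng.sample(pool, min(n, len(pool)))
    return selected

# Toy population: site ids 0..29 stratified into two land-use categories.
sites = [(i, "urban" if i % 3 == 0 else "rural") for i in range(30)]
picks = stratified_site_selection(sites, {"urban": 3, "rural": 5}, seed=42)
```

The same skeleton covers both cases in the report: a fixed population of existing sites, or an equally spaced grid of potential sites tagged with the category of the areal subset containing each point.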
Testing the potential paradoxes in "retrocausal" phenomena
NASA Astrophysics Data System (ADS)
Jolij, Jacob; Bierman, Dick J.
2017-05-01
Discussions with regard to potential paradoxes arising from "retrocausal" phenomena have been purely theoretical because so far no empirical effects had been established that allowed for empirical exploration of these potential paradoxes. In this article we describe three human experiments that showed clear "retrocausal" effects. In these neuropsychological, so-called face-detection experiments, consisting of hundreds of trials per participant, we use brain signals to predict an upcoming random stimulus. The binary random decision, corresponding to showing a noisy cartoon face or showing only noise on a display with equal probability, is taken after the brain signals have been measured. The prediction accuracy ranges from 50.5% to 56.5% for the 3 experiments, where chance performance would be 50%. The prediction algorithm is based on a template constructed from all the pre-stimulus brain signals obtained in the other trials of that particular participant; this approach thus controls for individual differences in brain functioning. Subsequently we describe an experiment based upon these findings in which the predictive information is used, in part of the trials, to determine the stimulus rather than selecting that stimulus randomly. In those trials we analyze what the brain signals predict the upcoming stimulus will be and then reverse the stimulus actually presented on the display. This is a `bilking' condition. We analyze the consequence of introducing this bilking condition on the accuracy of the remaining (normal) trials and, following a suggestion inferred from Thorne et al, we also check what the effect is on the random decision to either bilk or not bilk the specific trial. The bilking experiment is in progress; the results so far do not allow for conclusions and are presented only as an illustration.
Li, Xianbin; Tang, Yilang; Wang, Chuanyue
2013-01-01
Objective To compare the safety and efficacy of adjunctive aripiprazole versus placebo for antipsychotic-induced hyperprolactinemia. Methods Population: adult patients presenting with antipsychotic-induced hyperprolactinemia diagnosed by prolactin level, with or without prolactin-related symptoms. Interventions: adjunctive aripiprazole vs. adjunctive placebo. Outcome measures: adverse events and efficacy of treatment. Studies: randomized controlled trials. Results Five randomized controlled trials with a total of 639 patients (326 adjunctive aripiprazole, 313 adjunctive placebo) met the inclusion criteria. Adjunctive aripiprazole was associated with a 79.11% (125/158) prolactin level normalization rate. Meta-analysis of insomnia, headache, sedation, psychiatric disorder, extrapyramidal symptoms, dry mouth, and fatigue showed no significant differences in the adjunctive aripiprazole treatment group compared with the placebo group (risk difference (Mantel-Haenszel, random or fixed) −0.05 to 0.04 (95% confidence interval −0.13 to 0.16); I2 = 0% to 68%, P = 0.20 to 0.70). However, sedation, insomnia, and headache were more frequent when the adjunctive aripiprazole dose was higher than 15 mg/day. Meta-analysis of prolactin level normalization indicated adjunctive aripiprazole was superior to placebo (risk difference (Mantel-Haenszel, random) 0.76 (95% confidence interval 0.67 to 0.85); I2 = 43%, P < 0.00001). The subgroup analysis confirmed that subjects who received adjunctive aripiprazole 5 mg/day showed a degree of prolactin normalization similar to that of all participants. There were no significant differences between groups in discontinuation rates or improvement of psychiatric symptoms. Conclusion Adjunctive aripiprazole is both safe and effective as a treatment for patients with antipsychotic-induced hyperprolactinemia. The appropriate dose of adjunctive aripiprazole may be 5 mg/day. PMID:23936389
Cooperation and charity in spatial public goods game under different strategy update rules
NASA Astrophysics Data System (ADS)
Li, Yixiao; Jin, Xiaogang; Su, Xianchuang; Kong, Fansheng; Peng, Chengbin
2010-03-01
Human cooperation can be influenced by other human behaviors, and recent years have witnessed a flourishing of studies on the coevolution of cooperation and punishment, yet the common behavior of charity is seldom considered in game-theoretical models. In this article, we investigate the coevolution of altruistic cooperation and egalitarian charity in the spatial public goods game, considering charity as the behavior of reducing inter-individual payoff differences. In our model, in each generation of the evolution, individuals first play games and accumulate payoff benefits, and then each egalitarian makes a charity donation by payoff transfer in its neighborhood. To study the individual-level evolutionary dynamics, we adopt different strategy update rules and investigate their effects on charity and cooperation. These rules can be classified into two global rules: the random selection rule, in which individuals randomly update strategies, and the threshold selection rule, where only those with payoffs below a threshold update strategies. Simulation results show that random selection enhances the cooperation level, while threshold selection lowers the threshold of the multiplication factor needed to maintain cooperation. When charity is considered, it fails to promote cooperation under random selection, whereas it promotes cooperation under threshold selection. Interestingly, the evolution of charity strongly depends on the dispersion of payoff acquisitions in the population, which agrees with previous results. Our work may shed light on understanding human egalitarianism.
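The two global update rules, and charity as payoff transfer, can be sketched as follows. This is a deliberately simplified, well-mixed toy (the paper's model is spatial, with neighborhood-level transfers); all function names and parameters are illustrative:

```python
import random

def updaters(payoffs, rule, threshold=0.0, p=0.5, rng=None):
    """Pick which individuals revise their strategy this generation.

    'random'    -- every individual revises with probability p;
    'threshold' -- only individuals whose payoff falls below `threshold`.
    """
    rng = rng or random.Random(0)
    if rule == "random":
        return [i for i in range(len(payoffs)) if rng.random() < p]
    if rule == "threshold":
        return [i for i, pay in enumerate(payoffs) if pay < threshold]
    raise ValueError("unknown rule: %r" % rule)

def charity_round(payoffs, egalitarians, fraction=0.5):
    """Egalitarian charity as payoff transfer: each egalitarian donates a
    fraction of its payoff surplus over the population mean, and the pot
    is redistributed equally (a well-mixed stand-in for neighborhood
    transfer)."""
    mean = sum(payoffs) / len(payoffs)
    out = list(payoffs)
    pot = 0.0
    for i in egalitarians:
        give = fraction * max(out[i] - mean, 0.0)
        out[i] -= give
        pot += give
    share = pot / len(out)
    return [v + share for v in out]
```

Note that `charity_round` conserves total payoff; it only reduces inter-individual differences, matching the abstract's definition of charity.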
Le, Trang T; Simmons, W Kyle; Misaki, Masaya; Bodurka, Jerzy; White, Bill C; Savitz, Jonathan; McKinney, Brett A
2017-09-15
Classification of individuals into disease or clinical categories from high-dimensional biological data with low prediction error is an important challenge of statistical learning in bioinformatics. Feature selection can improve classification accuracy but must be incorporated carefully into cross-validation to avoid overfitting. Recently, feature selection methods based on differential privacy, such as differentially private random forests and reusable holdout sets, have been proposed. However, for domains such as bioinformatics, where the number of features is much larger than the number of observations (p ≫ n), these differential privacy methods are susceptible to overfitting. We introduce private Evaporative Cooling, a stochastic privacy-preserving machine learning algorithm that uses Relief-F for feature selection and random forest for privacy-preserving classification and that also prevents overfitting. We relate the privacy-preserving threshold mechanism to a thermodynamic Maxwell-Boltzmann distribution, where the temperature represents the privacy threshold. We use the thermal statistical physics concept of evaporative cooling of atomic gases to perform backward stepwise privacy-preserving feature selection. On simulated data with main effects and statistical interactions, we compare accuracies on holdout and validation sets for three privacy-preserving methods: the reusable holdout, reusable holdout with random forest, and private Evaporative Cooling, which uses Relief-F feature selection and random forest classification. In simulations where interactions exist between attributes, private Evaporative Cooling provides higher classification accuracy without overfitting based on an independent validation set. In simulations without interactions, thresholdout with random forest and private Evaporative Cooling give comparable accuracies. We also apply these privacy methods to human brain resting-state fMRI data from a study of major depressive disorder.
Code available at http://insilico.utulsa.edu/software/privateEC . brett-mckinney@utulsa.edu. Supplementary data are available at Bioinformatics online. © The Author (2017). Published by Oxford University Press.
Edmands, William M B; Ferrari, Pietro; Scalbert, Augustin
2014-11-04
Extraction of meaningful biological information from urinary metabolomic profiles obtained by liquid chromatography coupled to mass spectrometry (MS) necessitates the control of unwanted sources of variability associated with large differences in urine sample concentrations. Different methods of normalization, applied either before analysis (preacquisition normalization, through dilution of urine samples to the lowest specific gravity measured by refractometry) or after analysis (postacquisition normalization, to urine volume, specific gravity, or median fold change), are compared for their capacity to recover lead metabolites for potential future use as dietary biomarkers. Twenty-four urine samples from 19 subjects of the European Prospective Investigation into Cancer and Nutrition (EPIC) cohort were selected based on their high and low/non-consumption of six polyphenol-rich foods as assessed with a 24 h dietary recall. MS features selected on the basis of minimum discriminant selection criteria were related to each dietary item by means of orthogonal partial least-squares discriminant analysis models. Normalization methods ranked in the following decreasing order when comparing the number of total discriminant MS features recovered to that obtained in the absence of normalization: preacquisition normalization to specific gravity (4.2-fold), postacquisition normalization to specific gravity (2.3-fold), postacquisition median fold change normalization (1.8-fold), postacquisition normalization to urinary volume (0.79-fold). A preventative preacquisition normalization based on urine specific gravity was found to be superior to all curative postacquisition normalization methods tested for the discovery of MS features discriminant of dietary intake in these urinary metabolomic datasets.
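Of the post-acquisition methods compared, median fold change normalization is easy to state concretely: each sample is divided by the median of its feature-wise fold changes against a reference profile (typically the median spectrum). An illustrative sketch, not the authors' pipeline, assuming a positive intensity matrix with no zeros:

```python
import numpy as np

def median_fold_change_normalize(X):
    """Post-acquisition median fold change normalization.

    X: (n_samples, n_features) positive intensity matrix.
    Returns the normalized matrix and the per-sample dilution factors.
    """
    X = np.asarray(X, dtype=float)
    reference = np.median(X, axis=0)       # reference (median) spectrum
    folds = X / reference                  # per-feature fold changes
    factors = np.median(folds, axis=1)     # one dilution factor per sample
    return X / factors[:, None], factors

# Toy example: three samples that are pure dilutions of one profile.
X = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],
              [4.0, 8.0, 12.0]])
normed, factors = median_fold_change_normalize(X)
```

For samples that differ only by dilution, as in the toy matrix, the method recovers the dilution factors exactly and maps every sample onto the same profile.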
Levels of Conformity to Leader in Normal and Critical Situations
ERIC Educational Resources Information Center
Gündüz, Yüksel
2017-01-01
The aim of this study is to determine primary school, middle school, high school and university students' levels of conformity to leader in normal and critical situations. Experimental model was used in the research. Study group is comprised of 80 students chosen randomly from Karadeniz Bakir Primary School, Gazi Middle School, Kazim Karabekir…
ERIC Educational Resources Information Center
Bulcock, J. W.; And Others
Advantages of normalization regression estimation over ridge regression estimation are demonstrated by reference to Bloom's model of school learning. Theoretical concern centered on the structure of scholastic achievement at grade 10 in Canadian high schools. Data on 886 students were randomly sampled from the Carnegie Human Resources Data Bank.…
Warren, Michelle P; Brooks-Gunn, Jeanne; Fox, Richard P; Holderness, Claire C; Hyle, Emily P; Hamilton, William G; Hamilton, Linda
2003-08-01
To investigate the role of estrogen deprivation and replacement in amenorrheic and nonamenorrheic dancers on hormone therapy and calcium. Clinical, placebo-controlled, randomized trial. Healthy volunteers in an academic research environment. Fifty-five dancers (mean age: 22.0 +/- 4.6 years; age at menarche: 14.7 +/- 2.3 years), including 24 amenorrheics. Amenorrheics were randomized in a controlled trial to receive placebo or Premarin, 0.625 mg for 25 days monthly, with Provera, 10 mg, for 10 of these 25 days (hormone therapy) for 2 years. These women were compared to normally menstruating controls. The study participants also received 1250 mg of calcium per day. Bone mineral density (BMD) was measured at the foot, wrist, and lumbar spine. Our overall results showed no difference in BMD between the treated and placebo groups, indicating that hormone therapy did not change or normalize BMD when compared to normally menstruating controls. Five patients (all on placebo) who resumed menses during the study showed an increase in BMD without normalization. These findings suggest that mechanisms other than hypoestrogenism may be involved in the osteopenia associated with exercise-induced amenorrhea.
Laplacian normalization and random walk on heterogeneous networks for disease-gene prioritization.
Zhao, Zhi-Qin; Han, Guo-Sheng; Yu, Zu-Guo; Li, Jinyan
2015-08-01
Random walk on heterogeneous networks is a recently emerging approach to effective disease-gene prioritization. Laplacian normalization is a technique capable of normalizing the weight of edges in a network. We use this technique to normalize the gene matrix and the phenotype matrix before the construction of the heterogeneous network, and also use this idea to define the transition matrices of the heterogeneous network. Our method has remarkably better performance than the existing methods for recovering known gene-phenotype relationships. The Shannon information entropy of the distribution of the transition probabilities in our networks is found to be smaller than in the networks constructed by the existing methods, implying that a higher number of top-ranked genes can be verified as disease genes. In fact, the most probable gene-phenotype relationships ranked within the top 3 or top 5 in our gene lists can be confirmed by the OMIM database in many cases. Our algorithms have shown remarkably superior performance over the state-of-the-art algorithms for recovering gene-phenotype relationships. All MATLAB code is available upon email request. Copyright © 2015 Elsevier Ltd. All rights reserved.
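The two ingredients named in the abstract, Laplacian normalization and a random walk with restart, can be sketched on a single network (the paper works on a heterogeneous gene-phenotype network in MATLAB; this generic single-matrix toy only illustrates the operations):

```python
import numpy as np

def laplacian_normalize(W):
    """Symmetric Laplacian normalization: W_norm = D^{-1/2} W D^{-1/2}."""
    d = W.sum(axis=1)
    d_inv_sqrt = np.where(d > 0, 1.0 / np.sqrt(d), 0.0)
    return W * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def random_walk_with_restart(W, seeds, restart=0.7, tol=1e-10):
    """Steady-state visiting probabilities of a walk that restarts at
    the seed nodes with probability `restart` at every step."""
    n = W.shape[0]
    p0 = np.zeros(n)
    p0[seeds] = 1.0 / len(seeds)
    # Column-normalize to obtain a stochastic transition matrix.
    col = W.sum(axis=0)
    M = np.divide(W, col, out=np.zeros_like(W), where=col > 0)
    p = p0.copy()
    while True:
        p_next = (1 - restart) * M @ p + restart * p0
        if np.abs(p_next - p).sum() < tol:
            return p_next
        p = p_next

# Toy path graph 0-1-2-3, seeded at node 0.
W = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
p = random_walk_with_restart(laplacian_normalize(W), seeds=[0])
```

Nodes are then ranked by their steady-state probability; in disease-gene prioritization the seeds are the known disease genes or phenotypes.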
Sleep deprivation impairs object-selective attention: a view from the ventral visual cortex.
Lim, Julian; Tan, Jiat Chow; Parimal, Sarayu; Dinges, David F; Chee, Michael W L
2010-02-05
Most prior studies on selective attention in the setting of total sleep deprivation (SD) have focused on behavior or activation within fronto-parietal cognitive control areas. Here, we evaluated the effects of SD on the top-down biasing of activation of ventral visual cortex and on functional connectivity between cognitive control and other brain regions. Twenty-three healthy young adult volunteers underwent fMRI after a normal night of sleep (RW) and after sleep deprivation in a counterbalanced manner while performing a selective attention task. During this task, pictures of houses or faces were randomly interleaved among scrambled images. Across different blocks, volunteers responded to house but not face pictures, face but not house pictures, or passively viewed pictures without responding. The appearance of task-relevant pictures was unpredictable in this paradigm. SD resulted in less accurate detection of target pictures without affecting the mean false alarm rate or response time. In addition to a reduction of fronto-parietal activation, attending to houses strongly modulated parahippocampal place area (PPA) activation during RW, but this attention-driven biasing of PPA activation was abolished following SD. Additionally, SD resulted in a significant decrement in functional connectivity between the PPA and two cognitive control areas, the left intraparietal sulcus and the left inferior frontal lobe. SD impairs selective attention as evidenced by reduced selectivity in PPA activation. Further, reduction in fronto-parietal and ventral visual task-related activation suggests that it also affects sustained attention. Reductions in functional connectivity may be an important additional imaging parameter to consider in characterizing the effects of sleep deprivation on cognition.
Effect of using different cover image quality to obtain robust selective embedding in steganography
NASA Astrophysics Data System (ADS)
Abdullah, Karwan Asaad; Al-Jawad, Naseer; Abdulla, Alan Anwer
2014-05-01
One of the common types of steganography is to conceal an image as a secret message in another image, which is normally called a cover image; the resulting image is called a stego image. The aim of this paper is to investigate the effect of using different cover image qualities, and also to analyse the use of different bit-planes in terms of robustness against well-known active attacks such as gamma, statistical filters, and linear spatial filters. The secret messages are embedded in a higher bit-plane, i.e., in a bit-plane other than the Least Significant Bit (LSB), in order to resist active attacks. The embedding process is performed in three major steps: first, the embedding algorithm selectively identifies useful areas (blocks) for embedding based on their lighting condition; second, it nominates the most useful blocks for embedding based on their entropy and average; third, it selects the right bit-plane for embedding. This kind of block selection makes the embedding process scatter the secret message(s) randomly around the cover image. Different tests have been performed for selecting a proper block size, which is related to the nature of the cover image used. Our proposed method suggests a suitable embedding bit-plane as well as the right blocks for embedding. Experimental results demonstrate that the image quality used for the cover images has an effect when the stego image is attacked by different active attacks. Although the secret messages are embedded in a higher bit-plane, they cannot be recognised visually within the stego image.
Li, Xiao-Zhou; Li, Song-Sui; Zhuang, Jun-Ping; Chan, Sze-Chun
2015-09-01
A semiconductor laser with distributed feedback from a fiber Bragg grating (FBG) is investigated for random bit generation (RBG). The feedback perturbs the laser to emit chaotically with the intensity being sampled periodically. The samples are then converted into random bits by a simple postprocessing of self-differencing and selecting bits. Unlike a conventional mirror that provides localized feedback, the FBG provides distributed feedback which effectively suppresses the information of the round-trip feedback delay time. Randomness is ensured even when the sampling period is commensurate with the feedback delay between the laser and the grating. Consequently, in RBG, the FBG feedback enables continuous tuning of the output bit rate, reduces the minimum sampling period, and increases the number of bits selected per sample. RBG is experimentally investigated at a sampling period continuously tunable from over 16 ns down to 50 ps, while the feedback delay is fixed at 7.7 ns. By selecting 5 least-significant bits per sample, output bit rates from 0.3 to 100 Gbps are achieved with randomness examined by the National Institute of Standards and Technology test suite.
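The postprocessing chain described above (periodic sampling, self-differencing, then keeping a few least-significant bits per sample) can be sketched on already-quantized samples. This is illustrative only; the experiment operates on analog chaotic intensities digitized by a real ADC, and the modular difference here is one common convention:

```python
import numpy as np

def bits_from_samples(samples, n_lsb=5, adc_bits=8):
    """Turn periodically sampled intensity codes into random bits.

    Each sample is quantized to adc_bits, self-differenced against the
    previous sample (modulo 2**adc_bits) to whiten the sequence, and the
    n_lsb least-significant bits of each difference are kept, MSB first.
    """
    mask = (1 << adc_bits) - 1
    q = np.asarray(samples) & mask                 # quantized ADC codes
    diff = (q[1:] - q[:-1]) & mask                 # modular self-difference
    out = []
    for d in diff:
        for k in range(n_lsb - 1, -1, -1):
            out.append((int(d) >> k) & 1)
    return out

bits = bits_from_samples(list(range(10)))
```

With 5 bits kept per sample, the output bit rate is five times the sampling rate, which is how sampling periods of 50 ps yield rates up to 100 Gbps.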
Babaheydari, Samad Bahrami; Keyvanshokooh, Saeed; Dorafshan, Salar; Johari, Seyed Ali
2016-03-01
The aim of the present study was to explore proteome changes in rainbow trout (Oncorhynchus mykiss) fertilized eggs as an effect of triploidization heat-shock treatment. Eggs and milt were taken from eight females and six males. The gametes were pooled to minimize individual differences. After insemination, the eggs were incubated at 10°C for 10 min. Half of the fertilized eggs were then subjected to heat shock for 10 min, submerged in a 28°C water bath, to induce triploidy. The remainder were incubated normally and used as diploid controls. Three batches of eggs were randomly selected from each group and were incubated at 10-11°C under the same environmental conditions in hatchery troughs until the fry stage. Triplicate samples of 30 eggs (10 eggs per trough) from each group were randomly selected 1.5 h post-fertilization for proteome extraction. Egg proteins were analyzed using two-dimensional electrophoresis (2-DE) and MALDI-TOF/TOF mass spectrometry. Based on the results of the statistical analyses, 15 protein spots were found to decrease significantly in abundance in the heat-shock treated group and were selected for identification. Of the 15 protein spots showing altered abundance, 14 were successfully identified. All of the egg proteins identified in our study were related to vitellogenin (vtg). The decreased abundance of vitellogenin in heat-shock treated eggs may be explained by (i) higher utilization of vtg as an effect of increased cell size in triploids, (ii) changed metabolism in response to heat-shock stress, or (iii) diffusion of vtg through the chorion due to egg shell damage. Decreased abundance of vitellogenin in heat-shock treated eggs was associated with reduced early survival rates and lowered growth performance of triploid fish. Copyright © 2016 Elsevier B.V. All rights reserved.
Optimization Of Mean-Semivariance-Skewness Portfolio Selection Model In Fuzzy Random Environment
NASA Astrophysics Data System (ADS)
Chatterjee, Amitava; Bhattacharyya, Rupak; Mukherjee, Supratim; Kar, Samarjit
2010-10-01
The purpose of this paper is to construct a mean-semivariance-skewness portfolio selection model in a fuzzy random environment. The objective is to maximize skewness subject to a predefined maximum risk tolerance and minimum expected return. The security returns in the objectives and constraints are assumed to be fuzzy random variables, and their vagueness is transformed into fuzzy variables similar to trapezoidal numbers. The resulting fuzzy model is then converted into a deterministic optimization model. The feasibility and effectiveness of the proposed method are verified by a numerical example extracted from the Bombay Stock Exchange (BSE). The exact parameters of the fuzzy membership function and probability density function are obtained through fuzzy random simulation of past data.
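The three portfolio moments in the model's name are straightforward to compute for crisp (non-fuzzy) return data; a minimal sketch of mean, below-mean semivariance, and skewness, not the paper's fuzzy random formulation:

```python
import numpy as np

def portfolio_moments(returns, weights):
    """Mean, below-mean semivariance, and skewness of portfolio returns.

    returns: (T, N) array of T scenario returns for N assets.
    weights: (N,) portfolio weights.
    """
    r = np.asarray(returns) @ np.asarray(weights)  # portfolio return series
    mu = r.mean()
    downside = np.minimum(r - mu, 0.0)             # only below-mean deviations
    semivar = np.mean(downside ** 2)
    sigma = r.std()
    skew = np.mean((r - mu) ** 3) / sigma ** 3 if sigma > 0 else 0.0
    return mu, semivar, skew

# Toy single-asset example with a symmetric return series.
mu, semivar, skew = portfolio_moments([[1.0], [-1.0], [1.0], [-1.0]], [1.0])
```

The model then maximizes `skew` subject to bounds on `semivar` (risk tolerance) and `mu` (minimum expected return); the fuzzy random machinery replaces these crisp moments with their fuzzy counterparts.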
Vrijheid, Martine; Deltour, Isabelle; Krewski, Daniel; Sanchez, Marie; Cardis, Elisabeth
2006-07-01
This paper examines the effects of systematic and random errors in recall and of selection bias in case-control studies of mobile phone use and cancer. These sensitivity analyses are based on Monte-Carlo computer simulations and were carried out within the INTERPHONE Study, an international collaborative case-control study in 13 countries. Recall error scenarios simulated plausible values of random and systematic, non-differential and differential recall errors in amount of mobile phone use reported by study subjects. Plausible values for the recall error were obtained from validation studies. Selection bias scenarios assumed varying selection probabilities for cases and controls, mobile phone users, and non-users. Where possible these selection probabilities were based on existing information from non-respondents in INTERPHONE. Simulations used exposure distributions based on existing INTERPHONE data and assumed varying levels of the true risk of brain cancer related to mobile phone use. Results suggest that random recall errors of plausible levels can lead to a large underestimation in the risk of brain cancer associated with mobile phone use. Random errors were found to have larger impact than plausible systematic errors. Differential errors in recall had very little additional impact in the presence of large random errors. Selection bias resulting from underselection of unexposed controls led to J-shaped exposure-response patterns, with risk apparently decreasing at low to moderate exposure levels. The present results, in conjunction with those of the validation studies conducted within the INTERPHONE study, will play an important role in the interpretation of existing and future case-control studies of mobile phone use and cancer risk, including the INTERPHONE study.
NASA Astrophysics Data System (ADS)
Holland, Katharina; van Gils, Carla H.; Wanders, Johanna OP; Mann, Ritse M.; Karssemeijer, Nico
2016-03-01
The sensitivity of mammograms is low for women with dense breasts, since cancers may be masked by dense tissue. In this study, we investigated methods to identify women with density patterns associated with a high masking risk. Risk measures are derived from volumetric breast density maps. We used the last negative screening mammograms of 93 women who subsequently presented with an interval cancer (IC), and, as controls, 930 randomly selected normal screening exams from women without cancer. Volumetric breast density maps were computed from the mammograms, which provide the dense tissue thickness at each location. These were used to compute absolute and percentage glandular tissue volume. We modeled the masking risk for each pixel location using the absolute and percentage dense tissue thickness and we investigated the effect of taking the cancer location probability distribution (CLPD) into account. For each method, we selected cases with the highest masking measure (by thresholding) and computed the fraction of ICs as a function of the fraction of controls selected. The latter can be interpreted as the negative supplemental screening rate (NSSR). Between the models, when incorporating CLPD, no significant differences were found. In general, the methods performed better when CLPD was included. At higher NSSRs some of the investigated masking measures had a significantly higher performance than volumetric breast density. These measures may therefore serve as an alternative to identify women with a high risk for a masked cancer.
Tehran Air Pollutants Prediction Based on Random Forest Feature Selection Method
NASA Astrophysics Data System (ADS)
Shamsoddini, A.; Aboodi, M. R.; Karami, J.
2017-09-01
Air pollution, as one of the most serious forms of environmental pollution, poses a huge threat to human life. Air pollution leads to environmental instability and has harmful and undesirable effects on the environment. Modern methods for predicting pollutant concentrations can improve decision making and provide appropriate solutions. This study examines the performance of Random Forest feature selection in combination with multiple linear regression and multilayer perceptron artificial neural network methods, in order to obtain an efficient model for estimating carbon monoxide, nitrogen dioxide, sulfur dioxide, and PM2.5 content in the air. The results indicated that artificial neural networks fed with the attributes selected by the Random Forest feature selection method performed more accurately than the other models for all pollutants. The estimation accuracy for sulfur dioxide emissions was lower than for the other air contaminants, whereas nitrogen dioxide was predicted more accurately than the other pollutants.
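The two-stage pipeline described above, Random Forest feature ranking feeding a neural network, can be sketched on synthetic data (the study's meteorological and traffic attributes and Tehran's monitoring records are not available here, so `make_regression` stands in for them):

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

# Synthetic stand-in: 20 candidate attributes, 5 of which drive the target.
X, y = make_regression(n_samples=600, n_features=20, n_informative=5,
                       noise=5.0, random_state=0)
y = (y - y.mean()) / y.std()          # standardize target for the MLP
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Stage 1: rank attributes by Random Forest importance, keep the top 5.
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_tr, y_tr)
top = np.argsort(rf.feature_importances_)[::-1][:5]

# Stage 2: fit a multilayer perceptron on the selected attributes only.
mlp = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000,
                   random_state=0).fit(X_tr[:, top], y_tr)
r2 = mlp.score(X_te[:, top], y_te)    # held-out R^2 of the reduced model
```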
Good, Andrew C; Hermsmeier, Mark A
2007-01-01
Research into the advancement of computer-aided molecular design (CAMD) has a tendency to focus on the discipline of algorithm development. Such efforts are often wrought to the detriment of the data set selection and analysis used in said algorithm validation. Here we highlight the potential problems this can cause in the context of druglikeness classification. More rigorous efforts are applied to the selection of decoy (nondruglike) molecules from the ACD. Comparisons are made between model performance using the standard technique of random test set creation with test sets derived from explicit ontological separation by drug class. The dangers of viewing druglike space as sufficiently coherent to permit simple classification are highlighted. In addition the issues inherent in applying unfiltered data and random test set selection to (Q)SAR models utilizing large and supposedly heterogeneous databases are discussed.
Kargar, Roxana; Aghazadeh-Nainie, Afsaneh; Khoddami-Vishteh, Hamid Reza
2016-01-01
Objective: To compare the efficacy of EMLA cream and lidocaine injection in reducing pain during episiotomy repair. Materials and methods: A total of 46 primiparous women with normal pregnancies who were referred for normal vaginal delivery and needed episiotomy repair were selected and randomly divided into two groups. In the EMLA group, one hour before the estimated time of delivery, 5 g of EMLA cream was applied to the perineal mediolateral incision site, and after the delivery of the fetus and placenta, a further 5 g of EMLA cream was applied to the healthy skin around the episiotomy for repair. In the other group, lidocaine 2% was used before episiotomy and for its repair. Results: Only 8 women (19%) needed further analgesia. The mean ± SD pain during episiotomy repair on the VAS scale in all cases was 4.2 ± 2.3 cm. Most women (97%) were satisfied with their episiotomy repair. Comparing the EMLA and lidocaine groups, there was no difference between them in terms of the duration of episiotomy repair, need for further analgesia, pain on the VAS scale, or satisfaction with the repair method. Conclusion: The findings of this study showed that the use of EMLA cream at the site of the episiotomy incision in primiparous women can induce a level of analgesia equal to that of lidocaine, and cause a similar level of satisfaction. PMID:27385970
Classification of optical coherence tomography images for diagnosing different ocular diseases
NASA Astrophysics Data System (ADS)
Gholami, Peyman; Sheikh Hassani, Mohsen; Kuppuswamy Parthasarathy, Mohana; Zelek, John S.; Lakshminarayanan, Vasudevan
2018-03-01
Optical coherence tomography (OCT) images provide several indicators, e.g., the shape and thickness of different retinal layers, which can be used for various clinical and non-clinical purposes. We propose an automated classification method to identify different ocular diseases, based on local binary pattern features. The database consists of normal and diseased human eye SD-OCT images. We use a multiphase approach for building our classifier, including preprocessing, meta learning, and active learning. Preprocessing is applied to the data to handle missing features from images and replace them with the mean or median of the corresponding feature. All the features are run through a correlation-based feature subset selection algorithm to detect the most informative features and omit the less informative ones. A meta learning approach is applied to the data, in which an SVM and a random forest are combined to obtain a more robust classifier. Active learning is also applied to strengthen the classifier around the decision boundary. The preliminary experimental results indicate that our method is able to differentiate between the normal and non-normal retina with an area under the ROC curve (AUC) of 98.6%, and to diagnose the three common retina-related diseases, i.e., age-related macular degeneration, diabetic retinopathy, and macular hole, with AUCs of 100%, 95%, and 83.8%, respectively. These results indicate a better performance of the proposed method compared with most previous works in the literature.
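The meta learning step alone can be sketched as a soft-voting combination of an SVM and a random forest (synthetic features stand in for the LBP descriptors; the preprocessing and active-learning stages are omitted):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic two-class problem standing in for normal vs. diseased images.
X, y = make_classification(n_samples=400, n_features=20, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

# Soft voting averages the two models' class probabilities, so the combined
# decision is more robust than either base learner alone.
meta = VotingClassifier(
    estimators=[("svm", SVC(probability=True, random_state=1)),
                ("rf", RandomForestClassifier(random_state=1))],
    voting="soft")
meta.fit(X_tr, y_tr)
acc = meta.score(X_te, y_te)
```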
The ethnophysiology of digestion and diarrhoea in a Bangladeshi hospital population.
Zeitlyn, S; Rowshan, R; Mahalanabis, D; Faruque, A
1993-12-01
The results presented in this paper are drawn from a study of the acceptability of the weaning food ARGC. The study aimed to investigate attitudes and practices surrounding weaning and the dietary management of diarrhoea. One hundred and twenty mothers of children aged between six months and 24 months, suffering from mild diarrhoea and admitted to the ICDDR,B treatment centre, were randomly selected. Mothers attributed diarrhoea to a number of causes, the most common being contaminated food and breastmilk. Breastmilk was understood to have been spoiled either by the mother's diet or by mystical forces termed batash. Batash was also suspected of directly making children sick in some instances. Thirty-six per cent of mothers attempted to manage diarrhoea at home by withholding normal foods from their children's diets, and others modified their own diets. Less than a quarter of the children were normally fed vegetables, dal (lentils) or small fish. Fish was rarely given to young children; it was regarded with some ambivalence and considered potentially attractive as a vehicle for malign forces that might attack young children and their mothers and cause illness. People were unwilling to feed their children fish and other items of the normal family diet because of notions about the digestive system, in particular the concept of "digestive power" and the idea that young children did not have the digestive power to digest certain foods. It was suggested that early weaning might lead to poor and abnormal growth and development.
Automatic classification of endoscopic images for premalignant conditions of the esophagus
NASA Astrophysics Data System (ADS)
Boschetto, Davide; Gambaretto, Gloria; Grisan, Enrico
2016-03-01
Barrett's esophagus (BE) is a precancerous complication of gastroesophageal reflux disease in which the normal stratified squamous epithelium lining the esophagus is replaced by intestinal metaplastic columnar epithelium. Repeated endoscopies and multiple biopsies are often necessary to establish the presence of intestinal metaplasia. Narrow Band Imaging (NBI) is an imaging technique commonly used with endoscopies that enhances the contrast of the vascular pattern on the mucosa. We present a computer-based method for the automatic normal/metaplastic classification of endoscopic NBI images. Superpixel segmentation is used to identify and cluster pixels belonging to uniform regions. From each uniform clustered region of pixels, eight features maximizing the differences between normal and metaplastic epithelium are extracted for the classification step. For each superpixel, the mean intensities of the three color channels are first selected as features. Three further features are the mean intensities of each superpixel after separately applying three different morphological filters (top-hat filtering, entropy filtering, and range filtering) to the red-channel image. The last two features require the computation of the grey-level co-occurrence matrix (GLCM) and reflect the contrast and homogeneity of each superpixel. The classification step is performed using an ensemble of 50 classification trees, with a 10-fold cross-validation scheme, training the classifier at each step on a random 70% of the images and testing on the remaining 30% of the dataset. Sensitivity and specificity are 79.2% and 87.3%, respectively, with an overall accuracy of 83.9%.
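The last two features, GLCM contrast and homogeneity, can be sketched in NumPy (a minimal horizontal-neighbor GLCM over a whole patch; the paper's superpixel pipeline is not reproduced):

```python
import numpy as np

def glcm_features(img, levels=8):
    # Quantize intensities to a small number of grey levels.
    q = (img * levels / (img.max() + 1e-9)).astype(int).clip(0, levels - 1)
    # Count co-occurrences of horizontally adjacent pixel pairs.
    glcm = np.zeros((levels, levels))
    for a, b in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[a, b] += 1
    glcm /= glcm.sum()                       # normalize to a joint distribution
    i, j = np.indices(glcm.shape)
    contrast = (glcm * (i - j) ** 2).sum()   # weights distant grey-level pairs
    homogeneity = (glcm / (1.0 + (i - j) ** 2)).sum()  # weights near-diagonal mass
    return contrast, homogeneity

flat = np.full((16, 16), 5.0)                # uniform patch: no transitions
noisy = np.random.default_rng(0).random((16, 16))
c_flat, h_flat = glcm_features(flat)         # contrast 0, homogeneity 1
c_noisy, h_noisy = glcm_features(noisy)      # higher contrast, lower homogeneity
```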
Azimi, Maryam; Jouybari, Leila; Moghadam, Shahram; Ghaemi, Ezatolah; Behnampoor, Naser; Sanagoo, Akram; Hesam, Moslem
2016-01-01
Background: The functions and use of mouthwashes vary depending on their type. Oral care in patients with endotracheal tubes is important to prevent side effects such as pneumonia. The aim of this study was to determine the antimicrobial effects of chlorhexidine, drops of Matrica mouthwash (chamomile extract), and normal saline in hospitalized patients with an endotracheal tube in an intensive care unit (ICU). Materials and Methods: In this clinical trial, 39 patients admitted to the ICU were selected by convenience sampling, matched based on age and sex, and randomly assigned to three groups (chlorhexidine, Matrica, saline). Mouth washing was performed every 8 hours for 48 hours. Samples were taken at time zero (before the intervention) and 48 hours after the intervention for bacterial culture. The antibacterial activity of each mouthwash was measured based on the growth of Staphylococcus aureus, Pneumococcus, Enterococcus, Pseudomonas, and Escherichia coli. The data were analyzed using Chi-square and Fisher's exact tests with the Statistical Package for the Social Sciences version 18. Results: Chlorhexidine mouthwash was more effective in preventing colonization of bacteria in the mouth (point probability = 0.06) in comparison with the chamomile and saline mouthwashes. Nevertheless, none of the tested mouthwashes were able to remove pathogens, including Staphylococcus aureus, Pseudomonas, Klebsiella, and Acinetobacter. Conclusions: 0.2% chlorhexidine mouthwash has a significant effect on the bacterial colonization rate in comparison with Matrica and normal saline mouthwashes in ICU-hospitalized patients with an endotracheal tube. PMID:27904627
Ostovaneh, M R; Saeidi, B; Hajifathalian, K; Farrokhi-Khajeh-Pasha, Y; Fotouhi, A; Mirbagheri, S S; Emami, H; Barzin, G; Mirbagheri, S A
2014-05-01
Patients with heartburn but without esophageal erosion respond less well to proton pump inhibitors (PPIs). There is a growing body of evidence implicating the role of psychological comorbidities in producing reflux symptoms. Pain modulators improve symptoms in patients with other functional gastrointestinal disorders. We aimed to compare the efficacy of fluoxetine with omeprazole and placebo to achieve symptomatic relief in patients with heartburn and normal endoscopy who failed once daily PPIs. Endoscopy-negative patients with heartburn who failed once daily PPIs were randomly allocated to receive 6 weeks treatment of fluoxetine, omeprazole, or placebo. Random allocation was stratified according to ambulatory pH monitoring study. Percentage of heartburn-free days and symptom severity was assessed. Sixty patients with abnormal and 84 patients with normal pH test were randomized. Subjects receiving fluoxetine experienced more improvement in percentage of heartburn-free days (median 35.7, IQR 21.4-57.1) than those on omeprazole (median 7.14, IQR 0-50, p < 0.001) or placebo (median 7.14, IQR 0-33.6, p < 0.001). In normal pH subgroup, fluoxetine was superior to both omeprazole and placebo regarding percentage of heartburn-free days (median improvement, 57.1, IQR 35.7-57.1 vs 13.9, IQR, 0-45.6 and 7.14, 0-23.8, respectively, p < 0.001), but no significant difference was observed between medications in abnormal pH subgroup. Fluoxetine was superior to omeprazole for improving the symptoms of patients with heartburn and normal endoscopy who failed once daily PPIs. The superiority of fluoxetine was mostly attributed to those with normal esophageal pH rather than those with abnormal pH (ClinicalTrials.gov, number NCT01269788). © 2014 John Wiley & Sons Ltd.
Somatic hypermutation and antigen-driven selection of B cells are altered in autoimmune diseases.
Zuckerman, Neta S; Hazanov, Helena; Barak, Michal; Edelman, Hanna; Hess, Shira; Shcolnik, Hadas; Dunn-Walters, Deborah; Mehr, Ramit
2010-12-01
B cells have been found to play a critical role in the pathogenesis of several autoimmune (AI) diseases. A common feature amongst many AI diseases is the formation of ectopic germinal centers (GC) within the afflicted tissue or organ, in which activated B cells expand and undergo somatic hypermutation (SHM) and antigen-driven selection on their immunoglobulin variable region (IgV) genes. However, it is not yet clear whether these processes occurring in ectopic GCs are identical to those in normal GCs. The analysis of IgV mutations has aided in revealing many aspects concerning B cell expansion, mutation and selection in GC reactions. We have applied several mutation analysis methods, based on lineage tree construction, to a large set of data, containing IgV productive and non-productive heavy and light chain sequences from several different tissues, to examine three of the most profoundly studied AI diseases - Rheumatoid Arthritis (RA), Multiple Sclerosis (MS) and Sjögren's Syndrome (SS). We have found that RA and MS sequences exhibited normal mutation spectra and targeting motifs, but a stricter selection compared to normal controls, which was more apparent in RA. SS sequence analysis results deviated from normal controls in both mutation spectra and indications of selection, also showing differences between light and heavy chain IgV and between different tissues. The differences revealed between AI diseases and normal control mutation patterns may result from the different microenvironmental influences to which ectopic GCs are exposed, relative to those in normal secondary lymphoid tissues. Copyright © 2010 Elsevier Ltd. All rights reserved.
The Effects of Social Capital Levels in Elementary Schools on Organizational Information Sharing
ERIC Educational Resources Information Center
Ekinci, Abdurrahman
2012-01-01
This study aims to assess the effects of social capital levels at elementary schools on organizational information sharing as reported by teachers. Participants were 267 teachers selected randomly from 16 elementary schools; the schools were themselves selected randomly from among the 42 elementary schools located in the city center of Batman. The data were analyzed by…
ERIC Educational Resources Information Center
Rafferty, Karen; Watson, Patrice; Lappe, Joan M.
2011-01-01
Objective: To assess the impact of calcium-fortified food and dairy food on selected nutrient intakes in the diets of adolescent girls. Design: Randomized controlled trial, secondary analysis. Setting and Participants: Adolescent girls (n = 149) from a midwestern metropolitan area participated in randomized controlled trials of bone physiology…
ERIC Educational Resources Information Center
Thomas, Henry B.; Kaplan, E. Joseph
A national survey was conducted of randomly selected chief student personnel officers as listed in the 1979 "Education Directory of Colleges and Universities." The survey addressed specific institutional demographics, policy-making authority, reporting structure, and areas of responsibility of the administrators. Over 93 percent of the respondents…
Nonmanufacturing Businesses. U.S. Metric Study Interim Report.
ERIC Educational Resources Information Center
Cornog, June R.; Bunten, Elaine D.
This fifth interim report on the feasibility of a United States changeover to a metric system stems from the U.S. Metric Study. A primary stratified sample of 2,828 nonmanufacturing firms was randomly selected from 28,184 businesses taken from Social Security files, and a secondary sample of 2,258 firms was randomly selected for replacement…
ERIC Educational Resources Information Center
Juhasz, Stephen; And Others
The table of contents (TOC) practices of some 120 primary journals were analyzed. The journals were randomly selected; the method of randomization is described. The samples were selected from a university library with a holding of approximately 12,000 titles published worldwide. A questionnaire was designed, its purpose being to find uniformity and…
Molecular selection in a unified evolutionary sequence
NASA Technical Reports Server (NTRS)
Fox, S. W.
1986-01-01
With guidance from experiments and observations that indicate internally limited phenomena, an outline of a unified evolutionary sequence is inferred. Such unification is not visible in a context of random matrix and random mutation. The sequence proceeds from the Big Bang through prebiotic matter and protocells, through the evolving cell via molecular and natural selection, to mind, behavior, and society.
Selection of Variables in Cluster Analysis: An Empirical Comparison of Eight Procedures
ERIC Educational Resources Information Center
Steinley, Douglas; Brusco, Michael J.
2008-01-01
Eight different variable selection techniques for model-based and non-model-based clustering are evaluated across a wide range of cluster structures. It is shown that several methods have difficulties when non-informative variables (i.e., random noise) are included in the model. Furthermore, the distribution of the random noise greatly impacts the…
ERIC Educational Resources Information Center
Bibiso, Abyot; Olango, Menna; Bibiso, Mesfin
2017-01-01
The purpose of this study was to investigate the relationship between teachers' commitment and female students' academic achievement in selected secondary schools of Wolaita zone, Southern Ethiopia. The research method employed was a survey study, and the sampling techniques were purposive, simple random, and stratified random sampling. Questionnaire…
ERIC Educational Resources Information Center
Martinez, John; Fraker, Thomas; Manno, Michelle; Baird, Peter; Mamun, Arif; O'Day, Bonnie; Rangarajan, Anu; Wittenburg, David
2010-01-01
This report focuses on the seven original Youth Transition Demonstration (YTD) projects selected for funding in 2003. Three of the original seven projects were selected for a national random assignment evaluation in 2005; however, this report only focuses on program operations prior to joining the random assignment evaluation for the three…
Collinson, Paul O; Heung, Yen Ming; Gaze, David; Boa, Frances; Senior, Roxy; Christenson, Robert; Apple, Fred S
2012-01-01
We sought to determine the effect of patient selection on the 99th reference percentile of 2 sensitive and 1 high-sensitivity (hs) cardiac troponin assays in a well-defined reference population. Individuals >45 years old were randomly selected from 7 representative local community practices. Detailed information regarding the participants was collected via questionnaires. The healthy reference population was defined as individuals who had no history of vascular disease, hypertension, or heavy alcohol intake; were not receiving cardiac medication; and had blood pressure <140/90 mmHg, fasting blood glucose <110 mg/dL (approximately 6 mmol/L), estimated creatinine clearance >60 mL·min⁻¹·(1.73 m²)⁻¹, and normal cardiac function according to results of echocardiography. Samples were stored at -70 °C until analysis for cardiac troponin I (cTnI) and cardiac troponin T (cTnT) and N-terminal pro-B-type natriuretic peptide. Application of progressively more stringent population selection strategies to the initial baseline population of 545 participants, until the only individuals who remained were completely healthy according to the study criteria, reduced the number of outliers seen and led to a progressive decrease in the 99th-percentile value obtained for the Roche hs-cTnT assay and the sensitive Beckman cTnI assay, but not for the sensitive Siemens Ultra cTnI assay. Furthermore, a sex difference found in the baseline population for the hs-cTnT (P=0.0018) and Beckman cTnI assays (P<0.0001) progressively decreased with more stringent population selection criteria. The reference population selection strategy significantly influenced the 99th percentile reference values determined for troponin assays and the observed sex differences in troponin concentrations.
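The effect of progressively stricter selection on a 99th percentile can be illustrated with synthetic values (hypothetical lognormal concentrations, not the study's assay data): excluding a small subpopulation with elevated values pulls the upper tail, and hence the reference percentile, down.

```python
import numpy as np

rng = np.random.default_rng(7)
# Hypothetical troponin-like concentrations (arbitrary units):
healthy = rng.lognormal(mean=1.0, sigma=0.5, size=520)     # truly healthy
subclinical = rng.lognormal(mean=2.5, sigma=0.5, size=25)  # undetected disease
baseline = np.concatenate([healthy, subclinical])          # unscreened population

p99_baseline = np.percentile(baseline, 99)  # inflated by the subclinical tail
p99_strict = np.percentile(healthy, 99)     # after stringent exclusion criteria
```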
Dai, Huanping; Micheyl, Christophe
2010-01-01
A major concern when designing a psychophysical experiment is that participants may use another stimulus feature (“cue”) than that intended by the experimenter. One way to avoid this involves applying random variations to the corresponding feature across stimulus presentations, to make the “unwanted” cue unreliable. An important question facing experimenters who use this randomization (“roving”) technique is: How large should the randomization range be to ensure that participants cannot achieve a certain proportion correct (PC) by using the unwanted cue, while at the same time avoiding unnecessary interference of the randomization with task performance? Previous publications have provided formulas for the selection of adequate randomization ranges in yes-no and multiple-alternative, forced-choice tasks. In this article, we provide figures and tables, which can be used to select randomization ranges that are better suited to experiments involving a same-different, dual-pair, or oddity task. PMID:20139466
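The question those tables answer can be explored by Monte-Carlo simulation (a simplified observer model, not the paper's formulas): in a same-different task, an observer relying only on the unwanted cue compares its value across the two intervals, and roving the cue independently over a range R makes that strategy unreliable.

```python
import numpy as np

rng = np.random.default_rng(0)

def pc_from_cue(delta, rove_range, n_trials=200_000):
    # Cue value in each interval = independent rove over [0, R];
    # on "different" trials the second interval also carries the cue change delta.
    rove = rng.uniform(0, rove_range, size=(n_trials, 2))
    different = rng.random(n_trials) < 0.5
    cue_diff = np.abs(rove[:, 1] - rove[:, 0] + np.where(different, delta, 0.0))
    crit = np.median(cue_diff)            # unbiased "different" criterion
    resp_diff = cue_diff > crit
    return np.mean(resp_diff == different)

pc_no_rove = pc_from_cue(delta=1.0, rove_range=0.01)  # cue fully reliable
pc_roved = pc_from_cue(delta=1.0, rove_range=10.0)    # cue nearly useless
```

Sweeping `rove_range` in such a simulation traces out the trade-off the article tabulates: a range large enough to cap cue-based performance near chance, but no larger.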
Yan, Xing-Ke; Dong, Li-Li; Liu, An-Guo; Wang, Jun-Yan; Ma, Chong-Bing; Zhu, Tian-Tian
2013-08-01
To explore the electrophysiological mechanism of acupuncture for the treatment and prevention of the visual deprivation effect. Eighteen healthy 15-day-old Evans rats were randomly divided into a normal group, a model group, and an acupuncture group, with 6 rats in each. A deprivation amblyopia model was established by monocular eyelid suture in the model and acupuncture groups. Acupuncture was applied at "Jingming" (BL 1), "Chengqi" (ST 1), "Qiuhou" (EX-HN 7), and "Cuanzhu" (BL 2) in the acupuncture group. The bilateral acupoints were selected alternately, one side per day, for a total of 14 days. The effect of acupuncture on visual evoked potentials at different spatial frequencies was observed. At three different spatial frequencies (2×2, 4×4 and 8×8), compared with the normal group, there was an obvious visual deprivation effect in the model group: the P1 peak latency was delayed (P<0.01) and the N1-P1 amplitude was decreased (P<0.01). Compared with the model group, the P1 peak latency was obviously earlier (P<0.01) and the N1-P1 amplitude was increased (P<0.01) in the acupuncture group, with no statistically significant difference compared with the normal group (P>0.05). At a spatial frequency of 4×4, the N1-P1 amplitude was maximal in the normal and acupuncture groups. At this spatial frequency the rat eye had its best resolving ability, indicating that it could be the best spatial frequency for the rat visual system. The visual system has obvious electrophysiological plasticity during the sensitive period. Acupuncture treatment could adjust the visual deprivation-induced suppression and slowing of visual responses, thereby antagonizing the deprivation effect.
Alturki, Hmidan A; Brookes, Denise Sk; Davies, Peter Sw
2018-04-06
To provide an in-depth analysis of the relationship between obesity and fast-food consumption by comparing urban obese and normal-weight Saudi Arabian children. A multicentre cross-sectional study was conducted from December 2015 to March 2016. Participants were divided into two groups (normal weight and obese) and further stratified by sex. Groups were randomly selected using a multistage stratified cluster-sampling technique. A self-paced questionnaire was used to collect data relating to food consumption. Weight, height and waist circumference were measured and bioelectrical impedance analysis was performed in all children. Capital of Saudi Arabia, Riyadh. Children aged 9·00-11·99 years (n 1023). Compared with the normal-weight groups, the intake frequency of fast food per week was higher among the obese groups (P<0·001), irrespective of fast-food consumption outside (P<0·001) or inside (P<0·001) the home, and larger portion sizes were preferred in the obese groups (P<0·001). Families eating fast-food meals together was a protective factor against obesity (OR; 95 % CI: 2·67; 1·44, 4·96, P<0·001), with similar results for families ordering from a 'healthy meals menu' for their children (1·90; 1·24, 2·90, P=0·002). Taste of fast foods (P=0·021), a child-friendly menu (P=0·020) and meal cost (P<0·001) were identified as the main reasons why parents took their children to fast-food restaurants; these data were replicated for parents of obese boys, but not girls. Development of effective interventions to reduce fast-food consumption in Saudi Arabian schoolchildren requires greater research-based evidence of the fast-food consumption habits and practices associated with increased childhood obesity.
Devanarayana, Niranga Manjuri; Rajindrajith, Shaman
2011-05-01
Bowel habits vary depending on food consumption and genetic factors. The knowledge regarding this physiological phenomenon is limited. Thorough understanding of normal bowel habits is essential for correct diagnosis of defecation disorders. This study evaluated the normal bowel habits of Sri Lankan children. Children ages 10 to 16 years were randomly selected from 5 schools in 4 districts. Those without defecation disorders were recruited. Details regarding their bowel habits during previous 2 months were collected using a validated, self-administered questionnaire. A total of 2273 children were enrolled (mean age 13.2 years, SD 1.7 years, boys 49.7%). Of them, 1748 (76.9%) opened bowels once daily, whereas 149 (6.6%) and 11 (0.5%) had <3/week and >3/day defecations, respectively. Stool consistency was normal in 1997 (87.9%), hard in 86 (3.8%), and changing consistency in 163 (7.1%). Straining was present in 639 (28.1%), painful defecation in 241 (10.6%), and bleeding in 49 (2.2%). One hundred six (4.7%) children reported stool withholding. Bulky stool was present in 158 (7.0%). Straining, bulky stools, and withholding posture were more common in boys, whereas painful defecation and bleeding were reported more often in girls (P<0.05). Defecation frequency was lower in those from a poor socioeconomic background and war-affected areas (P < 0.05). Bowel frequency < 3/week, bulky stools, painful defecation, straining, and withholding posture were more common in those exposed to stressful life events (P < 0.05). The present study provides data on normal bowel habits of Sri Lankan schoolchildren and provides a firm platform to evaluate defecation disorders in them.
Qin, Feng-Zhen; Li, Sheng-Li; Wen, Hua-Xuan; Ouyang, Yu-Rong; Zheng, Qiong; Bi, Jing-Ru
2014-06-01
To establish the normal reference ranges of transabdominal ultrasound measurements of the posterior fossa structure in fetuses at 11 to 13⁺⁶ gestational weeks and explore their clinical value in screening open spina bifida (OSB). Between January, 2013 and September, 541 randomly selected normal fetuses underwent nuchal translucency at the gestational age 11 to 13⁺⁶ weeks. The parameters of the posterior fossa were measured in mid-sagittal view of the fetal face and the axial view of the transverse cerebellum insonated through the anterior fontanel by transabdominal ultrasound to establish the normal reference ranges. The measurements were obtained from 3 fetuses with OSB for comparison with the reference ranges. In normal fetuses, the parameters of the posterior fossa measured in the two views showed no significant differences (P>0.05). Two high echogenic lines were observed in normal fetuses, as compared with one in fetuses with OSB representing the posterior border of the brain stem and the anterior border of the fourth ventricle. The line between the posterior border of the fourth ventricle and the anterior border of the cisterna magna was not displayed in fetuses with OSB. The anteroposterior diameters of the brain stem, the fourth ventricle, and cisterna magna all increased in positive correlation with the crown-lump length in normal fetuses. In the 3 OSB fetuses, the anteroposterior diameter of the brain stem exceeded the 95th percentile and the anteroposterior diameter of fourth ventrical-cisterner magena was below the 5th percentile of the reference range for CRL; the brain stem to fourth ventrical-cisterner magena anteroposterior diameter ratio was increased to above 1. The established normal reference ranges of the parameters of fetal posterior fossa may provide assistance in early OSB detection. 
The absence of the line between the posterior border of the fourth ventricle and the anterior border of the cisterna magna, together with a brain stem to fourth ventricle-cisterna magna anteroposterior diameter ratio greater than 1, can be indicative of OSB at 11 to 13⁺⁶ gestational weeks.
Reduced auditory efferent activity in childhood selective mutism.
Bar-Haim, Yair; Henkin, Yael; Ari-Even-Roth, Daphne; Tetin-Schneider, Simona; Hildesheimer, Minka; Muchnik, Chava
2004-06-01
Selective mutism (SM) is a psychiatric disorder of childhood characterized by a consistent inability to speak in specific situations despite the ability to speak normally in others. The objective of this study was to test whether auditory efferent activity, which may have a direct bearing on speaking behavior, is compromised in selectively mute children. Participants were 16 children with SM and 16 normally developing control children matched for age and gender. All children were tested for pure-tone audiometry, speech reception thresholds, speech discrimination, middle-ear acoustic reflex thresholds and decay function, transient evoked otoacoustic emissions, suppression of transient evoked otoacoustic emissions, and auditory brainstem response. Compared with control children, selectively mute children displayed specific deficiencies in auditory efferent activity. These aberrations in efferent activity appeared alongside normal pure-tone and speech audiometry and normal brainstem transmission, as indicated by auditory brainstem response latencies. The diminished auditory efferent activity detected in some children with SM may result in desensitization of their auditory pathways by self-vocalization and in reduced control of masking and distortion of incoming speech sounds. These children may gradually learn to restrict vocalization to the minimal amount possible in contexts that require complex auditory processing.
Vallée, Julie; Souris, Marc; Fournet, Florence; Bochaton, Audrey; Mobillion, Virginie; Peyronnie, Karine; Salem, Gérard
2007-01-01
Background Geographical objectives and probabilistic methods are difficult to reconcile in a unique health survey. Probabilistic methods focus on individuals to provide estimates of a variable's prevalence with a certain precision, while geographical approaches emphasise the selection of specific areas to study interactions between spatial characteristics and health outcomes. A sample selected from a small number of specific areas creates statistical challenges: the observations are not independent at the local level, and this results in poor statistical validity at the global level. Therefore, it is difficult to construct a sample that is appropriate for both geographical and probability methods. Methods We used a two-stage selection procedure with a first non-random stage of selection of clusters. Instead of randomly selecting clusters, we deliberately chose a group of clusters, which as a whole would contain all the variation in health measures in the population. As there was no health information available before the survey, we selected a priori determinants that can influence the spatial homogeneity of the health characteristics. This method yields a distribution of variables in the sample that closely resembles that in the overall population, something that cannot be guaranteed with randomly-selected clusters, especially if the number of selected clusters is small. In this way, we were able to survey specific areas while minimising design effects and maximising statistical precision. Application We applied this strategy in a health survey carried out in Vientiane, Lao People's Democratic Republic. We selected well-known health determinants with unequal spatial distribution within the city: nationality and literacy. We deliberately selected a combination of clusters whose distribution of nationality and literacy is similar to the distribution in the general population. 
Conclusion This paper describes the conceptual reasoning behind the construction of the survey sample and shows that it can be advantageous to choose clusters using reasoned hypotheses, based on both probability and geographical approaches, in contrast to a conventional, random cluster selection strategy. PMID:17543100
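The non-random first-stage cluster selection described above can be sketched in code. This is an illustrative greedy approach (the paper does not spell out its selection algorithmically, so the procedure, names, and data below are all hypothetical): choose clusters one at a time so that the pooled distribution of an a priori determinant, e.g. literacy, stays as close as possible to the population distribution.

```python
# Illustrative greedy sketch (not the authors' procedure): pick k clusters
# whose pooled distribution of an a priori determinant (here, literacy)
# best matches the population distribution. All data are hypothetical.

def distribution(clusters):
    """Pooled category counts over a set of clusters, as proportions."""
    totals = {}
    for c in clusters:
        for cat, n in c["counts"].items():
            totals[cat] = totals.get(cat, 0) + n
    s = sum(totals.values())
    return {cat: n / s for cat, n in totals.items()}

def mismatch(sample, population):
    """Total absolute difference between two categorical distributions."""
    cats = set(sample) | set(population)
    return sum(abs(sample.get(c, 0.0) - population.get(c, 0.0)) for c in cats)

def greedy_select(clusters, population, k):
    """Deliberately (non-randomly) choose k clusters approximating the population."""
    chosen, remaining = [], list(clusters)
    for _ in range(k):
        best = min(remaining,
                   key=lambda c: mismatch(distribution(chosen + [c]), population))
        chosen.append(best)
        remaining.remove(best)
    return chosen

population = {"literate": 0.7, "illiterate": 0.3}
clusters = [
    {"id": 1, "counts": {"literate": 90, "illiterate": 10}},
    {"id": 2, "counts": {"literate": 40, "illiterate": 60}},
    {"id": 3, "counts": {"literate": 70, "illiterate": 30}},
    {"id": 4, "counts": {"literate": 30, "illiterate": 70}},
]
picked = greedy_select(clusters, population, k=2)
```

The greedy criterion mirrors the stated goal: the sample as a whole, not each cluster individually, should resemble the population on the chosen determinants.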
Moore, Simon C; Alam, M Fasihul; Heikkinen, Marjukka; Hood, Kerenza; Huang, Chao; Moore, Laurence; Murphy, Simon; Playle, Rebecca; Shepherd, Jonathan; Shovelton, Claire; Sivarajasingam, Vaseekaran; Williams, Anne
2017-11-01
Premises licensed for the sale and consumption of alcohol can contribute to levels of assault-related injury through poor operational practices that, if addressed, could reduce violence. We tested the real-world effectiveness of an intervention designed to change premises operation, whether any intervention effect changed over time, and the effect of intervention dose. A parallel randomized controlled trial with the unit of allocation and outcomes measured at the level of individual premises. All premises (public houses, nightclubs or hotels with a public bar) in Wales, UK. A randomly selected subsample (n = 600) of eligible premises (that had one or more violent incidents recorded in police-recorded crime data; n = 837) were randomized into control and intervention groups. Intervention premises were audited by Environmental Health Practitioners who identified risks for violence and provided feedback by varying dose (informal, through written advice, follow-up visits) on how risks could be addressed. Control premises received usual practice. Police data were used to derive a binary variable describing whether, on each day premises were open, one or more violent incidents were evident over a 455-day period following randomization. Due to premises being unavailable at the time of intervention delivery, 208 premises received the intervention and 245 were subject to usual practice in an intention-to-treat analysis. The intervention was associated with an increase in police-recorded violence compared to normal practice (hazard ratio = 1.34, 95% confidence interval = 1.20-1.51). Exploratory analyses suggested that reduced violence was associated with greater intervention dose (follow-up visits). An Environmental Health Practitioner-led intervention in premises licensed for the sale and on-site consumption of alcohol resulted in an increase in police-recorded violence. © 2017 The Authors. Addiction published by John Wiley & Sons Ltd on behalf of Society for the Study of Addiction.
Multidimensional Normalization to Minimize Plate Effects of Suspension Bead Array Data.
Hong, Mun-Gwan; Lee, Woojoo; Nilsson, Peter; Pawitan, Yudi; Schwenk, Jochen M
2016-10-07
Enhanced by the growing number of biobanks, biomarker studies can now be performed with reasonable statistical power by using large sets of samples. Antibody-based proteomics by means of suspension bead arrays offers one attractive approach to analyze serum, plasma, or CSF samples for such studies in microtiter plates. To expand measurements beyond single batches, with either 96 or 384 samples per plate, suitable normalization methods are required to minimize the variation between plates. Here we propose two normalization approaches utilizing MA coordinates. The multidimensional MA (multi-MA) and MA-loess both consider all samples of a microtiter plate per suspension bead array assay and thus do not require any external reference samples. We demonstrate the performance of the two MA normalization methods with data obtained from the analysis of 384 samples including both serum and plasma. Samples were randomized across 96-well sample plates, processed, and analyzed in assay plates, respectively. Using principal component analysis (PCA), we could show that plate-wise clusters found in the first two components were eliminated by multi-MA normalization as compared with other normalization methods. Furthermore, we studied the correlation profiles between random pairs of antibodies and found that both MA normalization methods substantially reduced the inflated correlation introduced by plate effects. Normalization approaches using multi-MA and MA-loess minimized batch effects arising from the analysis of several assay plates with antibody suspension bead arrays. In a simulated biomarker study, multi-MA restored associations lost due to plate effects. Our normalization approaches, which are available as R package MDimNormn, could also be useful in studies using other types of high-throughput assay data.
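A minimal sketch may make the MA idea concrete. This is an assumption-laden simplification, not the MDimNormn implementation: M is each sample's log-ratio to a reference profile, and a constant per-plate M shift stands in for the paper's loess fit of M on A (average log-intensity). The data are simulated.

```python
import numpy as np

# Simplified sketch of MA-style plate normalization (an assumption, not the
# MDimNormn implementation): M is each sample's log-ratio to a reference
# profile, and the plate-level M offset is removed. The paper's MA-loess
# instead fits a loess curve of M on A (average log-intensity).

def ma_normalize(intensities, plates):
    """intensities: (n_samples, n_analytes) positive values; plates: labels."""
    logx = np.log2(intensities)
    ref = logx.mean(axis=0)              # reference log-intensity profile
    m = logx - ref                       # M coordinate: log-ratio per analyte
    out = logx.copy()
    for p in np.unique(plates):
        idx = plates == p
        out[idx] -= np.median(m[idx])    # constant per-plate M correction
    return 2.0 ** out

# Simulated study: plate 1 suffers a two-fold intensity shift (plate effect).
rng = np.random.default_rng(0)
signal = rng.lognormal(mean=5.0, sigma=0.3, size=(20, 50))
plates = np.array([0] * 10 + [1] * 10)
observed = signal.copy()
observed[plates == 1] *= 2.0
normalized = ma_normalize(observed, plates)
```

Because the correction is computed from the plate's own samples relative to the pooled reference, no external reference samples are required, which is the property the authors emphasize.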
Moran, C; Lee, C
2014-05-01
Examine women's perceptions of what is 'normal' and 'desirable' in female genital appearance. Experiment with random allocation across three conditions. Community. A total of 97 women aged 18-30 years. Women were randomly assigned to view a series of images of (1) surgically modified vulvas or (2) nonmodified vulvas, or (3) no images. They then viewed and rated ten target images of surgically modified vulvas and ten of unmodified vulvas. Women used a four-point Likert scale ('strongly agree' to 'strongly disagree') to rate each target image for 'looks normal' and 'represents society's ideal'. For each woman, we created two summary scores that represented the extent to which she rated the unmodified vulvas as more 'normal' and more 'society's ideal' than the modified vulvas. For ratings of 'normality', there was a significant effect of condition (F(2,94) = 2.75, P = 0.007, adjusted R² = 0.082): women who had first viewed the modified images rated the modified target vulvas as more normal than the nonmodified vulvas, significantly different from the control group, who rated them as less normal. For ratings of 'society's ideal', there was again a significant effect of condition (F(2,92) = 7.72, P < 0.001, adjusted R² = 0.125); all three groups rated modified target vulvas as more like society's ideal than the nonmodified target vulvas, with the effect significantly strongest for the women who had viewed the modified images. Exposure to images of modified vulvas may change women's perceptions of what is normal and desirable. This may explain why some healthy women seek labiaplasty. © 2013 Royal College of Obstetricians and Gynaecologists.
Estimation of Renyi exponents in random cascades
Troutman, Brent M.; Vecchia, Aldo V.
1999-01-01
We consider statistical estimation of the Rényi exponent τ(h), which characterizes the scaling behaviour of a singular measure μ defined on a subset of R^d. The Rényi exponent is defined to be lim_{δ→0} [log M_δ(h)/(−log δ)], assuming that this limit exists, where M_δ(h) = Σ_i μ^h(Δ_i) and, for δ > 0, {Δ_i} are the cubes of a δ-coordinate mesh that intersect the support of μ. In particular, we demonstrate asymptotic normality of the least-squares estimator of τ(h) when the measure μ is generated by a particular class of multiplicative random cascades, a result which allows construction of interval estimates and application of hypothesis tests for this scaling exponent. Simulation results illustrating this asymptotic normality are presented. © 1999 ISI/BS.
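The box-counting least-squares estimation can be sketched as follows. This is an illustrative reading of the definition, not the authors' code: compute M_δ(h) over several mesh sizes δ and regress log M_δ(h) on −log δ; the slope estimates the exponent. The sanity check uses a uniform (non-singular) measure on [0, 1], for which the exponent at h = 2 is −1 under that definition.

```python
import numpy as np

# Illustrative box-counting / least-squares estimator (not the authors' code):
# M_delta(h) = sum of box masses raised to the power h over a delta-mesh of
# [0, 1); regressing log M_delta(h) on -log(delta) estimates the exponent.

def partition_function(points, weights, delta, h):
    """M_delta(h) for a measure given by weighted sample points in [0, 1)."""
    bins = np.floor(points / delta).astype(int)
    mass = np.bincount(bins, weights=weights)
    mass = mass[mass > 0]                 # boxes intersecting the support
    return np.sum(mass ** h)

def renyi_exponent(points, weights, h, deltas):
    """Least-squares slope of log M_delta(h) against -log(delta)."""
    logm = np.log([partition_function(points, weights, d, h) for d in deltas])
    slope, _ = np.polyfit(-np.log(deltas), logm, 1)
    return slope

# Sanity check on a uniform measure, where the exponent at h = 2 is -1.
rng = np.random.default_rng(1)
n = 100_000
points = rng.random(n)
weights = np.full(n, 1.0 / n)
deltas = np.array([1 / 8, 1 / 16, 1 / 32, 1 / 64])
tau2 = renyi_exponent(points, weights, h=2, deltas=deltas)
```

The paper's contribution is the asymptotic normality of exactly this kind of least-squares slope when the underlying measure is a multiplicative cascade, which justifies interval estimates around it.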
ERIC Educational Resources Information Center
Randell, Jordan; Searle, Rob; Reed, Phil
2012-01-01
Schedules of reinforcement typically produce reliable patterns of behaviour, and one factor that can cause deviations from these normally reliable patterns is schizotypy. Low scorers on the unusual experiences subscale of the Oxford-Liverpool Inventory of Feelings and Experiences performed as expected on a yoked random-ratio (RR), random-interval…
ERIC Educational Resources Information Center
van Ginkel, Joost R.; van der Ark, L. Andries; Sijtsma, Klaas
2007-01-01
The performance of five simple multiple imputation methods for dealing with missing data was compared. In addition, random imputation and multivariate normal imputation were used as lower and upper benchmarks, respectively. Test data were simulated and item scores were deleted such that they were either missing completely at random, missing at…
How to determine an optimal threshold to classify real-time crash-prone traffic conditions?
Yang, Kui; Yu, Rongjie; Wang, Xuesong; Quddus, Mohammed; Xue, Lifang
2018-08-01
One of the proactive approaches to reducing traffic crashes is to identify hazardous traffic conditions that may lead to a crash, known as real-time crash prediction. Threshold selection is one of the essential steps of real-time crash prediction: once a crash risk evaluation model has produced the probability of a crash occurring given a specific traffic condition, the threshold provides the cut-off point for that posterior probability, separating potential crash warnings from normal traffic conditions. There is, however, a dearth of research on how to effectively determine an optimal threshold. The few studies that address the issue chose thresholds subjectively, and only when discussing the predictive performance of their models. Subjective methods cannot automatically identify the optimal thresholds under different traffic and weather conditions in real applications, so a theoretical method for selecting the threshold value is needed to avoid subjective judgments. The purpose of this study is to provide a theoretical method for automatically identifying the optimal threshold. Considering the random effects of variable factors across all roadway segments, a mixed logit model was utilized to develop the crash risk evaluation model and further evaluate the crash risk. Cross-entropy, between-class variance, and other theories were employed and investigated to empirically identify the optimal threshold, and K-fold cross-validation was used to validate the performance of the proposed threshold selection methods against several evaluation criteria. The results indicate that (i) the mixed logit model can obtain a good performance; and (ii) the classification performance of the threshold selected by the minimum cross-entropy method outperforms the other methods according to the criteria.
This method is well suited to automatically identifying thresholds in crash prediction: it minimizes the cross-entropy between the original dataset, with its continuous probability of a crash occurring, and the binarized dataset obtained after using the threshold to separate potential crash warnings from normal traffic conditions. Copyright © 2018 Elsevier Ltd. All rights reserved.
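One way to make minimum cross-entropy threshold selection concrete is the classic Li-Lee thresholding criterion; whether the paper uses exactly this formulation is an assumption here. Each candidate threshold splits the predicted crash probabilities into two classes, each summarized by its mean, and the threshold minimizing the cross-entropy between the original values and this two-level representation is kept. The data are simulated.

```python
import numpy as np

# Illustrative minimum cross-entropy threshold selection, following the
# Li-Lee thresholding criterion (an assumed formulation; the paper's exact
# criterion may differ). Each candidate threshold t splits the predicted
# crash probabilities into "normal" and "warning" classes; the binarized
# representation assigns each class its mean, and we keep the t minimizing
# the cross-entropy between original and binarized data.

def cross_entropy(p, t):
    lo, hi = p[p < t], p[p >= t]
    if len(lo) == 0 or len(hi) == 0:
        return np.inf                      # degenerate split
    d = 0.0
    for grp in (lo, hi):
        d += np.sum(grp * np.log(grp / grp.mean()))
    return d

def optimal_threshold(p, candidates):
    return min(candidates, key=lambda t: cross_entropy(p, t))

# Simulated posterior crash probabilities: many low-risk and some high-risk.
rng = np.random.default_rng(2)
p = np.concatenate([rng.beta(2, 20, 900), rng.beta(10, 3, 100)])
t_star = optimal_threshold(p, np.linspace(0.05, 0.95, 91))
```

Because the criterion is a function of the data alone, it can be re-evaluated automatically as traffic and weather conditions change, which is the property the abstract argues subjective methods lack.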
Tong, Fang; Fu, Tong
2013-01-01
Objective To evaluate the differences in fluid intelligence tests between normal children and children with learning difficulties in China. Method PubMed, MD Consult, and Chinese journal databases were searched from their establishment to November 2012. After identifying comparative studies of Raven measurements in normal children and children with learning difficulties, Full Intelligence Quotient (FIQ) values and the original values of the sub-measurements were extracted. The corresponding effect model was selected based on the results of heterogeneity testing, and parallel sub-group analysis was performed. Results Twelve documents were included in the meta-analysis, all of which were performed in mainland China. Among these, two studies were performed at child health clinics; the other ten sites were schools, where control children were schoolmates or classmates. FIQ was evaluated using a random-effects model. The WMD was −13.18 (95% CI: −16.50 to −9.85). Children with learning difficulties showed significantly lower FIQ scores than controls (P<0.00001). Type of learning difficulty and gender differences were evaluated using a fixed-effects model (I² = 0%). The sites and purposes of the studies were taken into account, but the sources of heterogeneity could not be eliminated; the sum IQ of all the subgroups showed considerable heterogeneity (I² = 76.5%). The sub-measurement score of set A showed moderate heterogeneity across documents, and sets AB, B, and E showed considerable heterogeneity, so a random-effects model was used. Heterogeneity was also present among individuals with learning difficulties. There was a moderate delay in the first three items (−0.5 to −0.9) and a much more pronounced delay in the latter three items (−1.4 to −1.6). Conclusion In the Chinese mainland, the level of fluid intelligence of children with learning difficulties was lower than that of normal children. Delayed development in sub-items C, D, and E was more obvious.
PMID:24236016
Landsat Based Woody Vegetation Loss Detection in Queensland, Australia Using the Google Earth Engine
NASA Astrophysics Data System (ADS)
Johansen, K.; Phinn, S. R.; Taylor, M.
2014-12-01
Land clearing detection and woody Foliage Projective Cover (FPC) monitoring at the state and national level in Australia has mainly been undertaken by state governments and the Terrestrial Ecosystem Research Network (TERN) because of the considerable expense, expertise, sustained duration of activities and staffing levels needed. Only recently have services become available, providing low budget, generalized access to change detection tools suited to this task. The objective of this research was to examine if a globally available service, Google Earth Engine Beta, could be used to predict woody vegetation loss with accuracies approaching those of the methods used by TERN and the government of the state of Queensland, Australia. Two change detection approaches were investigated using Landsat Thematic Mapper time series and the Google Earth Engine Application Programming Interface: (1) CART and Random Forest classifiers; and (2) a normalized time series of FPC and NDVI combined with a spectral index. The CART and Random Forest classifiers produced high user's and producer's mapping accuracies of clearing (77-92% and 54-77%, respectively) when detecting change within epochs for which training data were available, but extrapolation to epochs without training data reduced the mapping accuracies. The use of FPC and NDVI time series provided a more robust approach for calculation of a clearing probability, as it did not rely on training data but instead on the difference of the normalized FPC / NDVI mean and standard deviation of a single year at the change point in relation to the remaining time series. However, the FPC and NDVI time series approach represented a trade-off between user's and producer's accuracies. Both change detection approaches explored in this research were sensitive to ephemeral greening and drying of the landscape.
However, the developed normalized FPC and NDVI time series approach can be tuned to provide automated alerts for large woody vegetation clearing events by selecting suitable thresholds to identify very likely clearing. This research provides a comprehensive foundation to build further capacity to use globally accessible, free, online image datasets and processing tools to accurately detect woody vegetation clearing in an automated and rapid manner.
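The normalized time-series test described above can be sketched as a z-score check: how far, in standard-deviation units, the candidate year's FPC (or NDVI) falls from the mean of the remaining years. The threshold and the series below are illustrative, not the study's calibrated values.

```python
import numpy as np

# Illustrative z-score version of the normalized time-series test (the
# threshold and data are made up, not the study's calibrated values): a
# candidate year whose FPC/NDVI falls far below the rest of the series,
# in standard-deviation units, is flagged as likely clearing.

def clearing_score(series, year_idx):
    """Signed z-score of one year's value against the remaining years."""
    rest = np.delete(series, year_idx)
    return (series[year_idx] - rest.mean()) / rest.std(ddof=1)

def flag_clearing(series, year_idx, z_thresh=-3.0):
    return clearing_score(series, year_idx) < z_thresh

# Annual FPC (%) for one pixel: stable canopy, then clearing in the last year.
fpc = np.array([62.0, 60.0, 63.0, 61.0, 64.0, 62.0, 63.0, 61.0, 64.0, 12.0])
```

Tuning z_thresh corresponds to the trade-off the authors describe: a stricter (more negative) threshold raises the user's accuracy of clearing alerts while lowering the producer's accuracy, and it also governs sensitivity to ephemeral drying.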
Liu, Wei; Schild, Steven E.; Chang, Joe Y.; Liao, Zhongxing; Chang, Yu-Hui; Wen, Zhifei; Shen, Jiajian; Stoker, Joshua B.; Ding, Xiaoning; Hu, Yanle; Sahoo, Narayan; Herman, Michael G.; Vargas, Carlos; Keole, Sameer; Wong, William; Bues, Martin
2015-01-01
Background To compare the impact of uncertainties and interplay effect on 3D and 4D robustly optimized intensity-modulated proton therapy (IMPT) plans for lung cancer in an exploratory methodology study. Methods IMPT plans were created for 11 non-randomly selected non-small-cell lung cancer (NSCLC) cases: 3D robustly optimized plans on average CTs with internal gross tumor volume density overridden to irradiate internal target volume, and 4D robustly optimized plans on 4D CTs to irradiate clinical target volume (CTV). Regular fractionation (66 Gy[RBE] in 33 fractions) was considered. In 4D optimization, the CTV of individual phases received non-uniform doses to achieve a uniform cumulative dose. The root-mean-square-dose volume histograms (RVH) measured the sensitivity of the dose to uncertainties, and the areas under the RVH curve (AUCs) were used to evaluate plan robustness. Dose evaluation software modeled time-dependent spot delivery to incorporate interplay effect with randomized starting phases of each field per fraction. Dose-volume histogram indices comparing CTV coverage, homogeneity, and normal tissue sparing were evaluated using the Wilcoxon signed-rank test. Results 4D robust optimization plans led to a smaller AUC for CTV (14.26 vs. 18.61, p=0.001), better CTV coverage (Gy[RBE]) [D95% CTV: 60.6 vs 55.2 (p=0.001)], and better CTV homogeneity [D5%–D95% CTV: 10.3 vs 17.7 (p=0.002)] in the face of uncertainties. With interplay effect considered, 4D robust optimization produced plans with better target coverage [D95% CTV: 64.5 vs 63.8 (p=0.0068)], comparable target homogeneity, and comparable normal tissue protection. The benefits from 4D robust optimization were most obvious for the 2 typical stage III lung cancer patients.
Conclusions Our exploratory methodology study showed that, compared to 3D robust optimization, 4D robust optimization produced significantly more robust and interplay-effect-resistant plans for targets with comparable dose distributions for normal tissues. A further study with a larger and more realistic patient population is warranted to generalize the conclusions. PMID:26725727
Hsu, Chi-Sen; Wang, Chung-Jing; Chang, Chien-Hsing; Tsai, Po-Chao; Chen, Hung-Wen; Su, Yi-Chun
2017-01-01
A randomized trial was conducted prospectively to evaluate the efficacy, related complications, and convalescence of emergency percutaneous nephrolithotomy compared to percutaneous nephrostomy for decompression of the collecting system in cases of sepsis associated with large uretero-pelvic junction stone impaction. The inclusion criteria included a WBC count of 10,000/mm³ or more and/or a temperature of 38°C or higher. Besides, all enrolled patients should maintain stable hemodynamic status and proper organ perfusion. A total of 113 patients with large, obstructive uretero-pelvic junction stones and clinical signs of sepsis completed the study protocol. Of those, 56 patients were placed in the emergency percutaneous nephrostomy group, while the other 57 patients were part of the percutaneous nephrolithotomy group. The primary end point was the time until normalization of white blood cells (WBC) at a count of 10,000/mm³ or less, and a temperature of 37.4°C or lower. The secondary end points included the comparison of analgesic consumption, length of stay, and related complications. Statistical analysis was performed using SPSS® version 14.0.1. The Mann-Whitney U test, chi-square test, and Fisher's exact test were used as appropriate. The length of hospital stay (in days) was 10.09±3.43 for the emergency percutaneous nephrostomy group and 8.18±2.72 for the percutaneous nephrolithotomy group, indicating a significant difference between the groups. There was no difference between groups in regard to white blood cell count (in mm³), time to normalization of white blood cell count (in days), body temperature (in ºC), time to normalization of body temperature (in days), C-reactive proteins (in mg/dL), time taken for C-reactive proteins to decrease over 25% (in days), procalcitonin (in ng/mL), or complication rates.
This study confirms that emergency percutaneous nephrolithotomy may be as safe as percutaneous nephrostomy in selected low-risk patients with sepsis associated with a large, obstructive stone. Copyright® by the International Brazilian Journal of Urology.
Temple, Larissa K.F.; Litwin, Demetrius E.; McLeod, Robin S.
Objective To determine if any significant differences exist between laparoscopic appendectomy (LA) and open appendectomy (OA). Design A meta-analysis of randomized controlled trials (RCTs) comparing LA to OA. Data sources An extensive literature search was conducted for appropriate articles published between January 1990 and March 1997. Articles were initially retrieved through MEDLINE with MeSH terms “appendicitis” or “appendectomy” and “laparoscopy.” Additional methods included cross-referencing bibliographies of retrieved articles, hand searching abstracts from relevant meetings and consultation with a content expert. Study selection Only RCTs published in English in which patients had a preoperative diagnosis of acute appendicitis were included. Data extraction The outcomes of interest included operating time, hospital stay, readmission rates, return to normal activity and complications. The Cochrane Collaboration Review Manager 3.0 was used to calculate odds ratios (OR), weighted mean differences (WMD) and 95% confidence intervals (CI). The random-effects model was used for statistical analysis. Data synthesis Twelve trials met the inclusion criteria. Because there were insufficient data in some trials, operating time, hospitalization and return to work were assessed in only 8 trials. Mean operating time was significantly longer with LA (WMD 18.10 minutes, 95% CI 12.87 to 23.15 minutes). There were fewer wound infections in LA (OR 0.40, 95% CI 0.24 to 0.69), but no significant differences in intra-abdominal abscess rates (OR 1.94, 95% CI 0.68 to 5.58). There was no significant difference in the mean length of hospital stay (WMD −0.16 days, 95% CI −0.44 to 0.15 days) or readmission rates (OR 1.16, 95% CI 0.54 to 2.48). However, the return to normal activity was significantly earlier with LA (WMD −5.79 days, 95% CI −7.38 to −4.21 days). Sensitivity analyses did not affect the results. 
Conclusion This meta-analysis suggests that operating room time is significantly longer, hospital stay is unchanged but return to normal activities is significantly earlier with LA. PMID:10526524
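The random-effects pooling reported above is conventionally the DerSimonian-Laird method (the standard random-effects model in Review Manager); a minimal sketch follows, with illustrative trial numbers rather than the review's data.

```python
import numpy as np

# Sketch of DerSimonian-Laird random-effects pooling (the conventional
# method behind Review Manager's random-effects model; trial numbers here
# are illustrative, not the review's data).

def dersimonian_laird(effects, ses):
    """Pool effect sizes with inverse-variance random-effects weights."""
    effects, ses = np.asarray(effects, float), np.asarray(ses, float)
    w = 1.0 / ses ** 2                             # fixed-effect weights
    fixed = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - fixed) ** 2)         # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)  # between-trial variance
    w_star = 1.0 / (ses ** 2 + tau2)               # random-effects weights
    pooled = np.sum(w_star * effects) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Hypothetical operating-time differences in minutes (LA minus OA):
effects = [20.0, 15.0, 25.0, 12.0, 18.0]
ses = [4.0, 5.0, 6.0, 3.0, 4.5]
wmd, ci = dersimonian_laird(effects, ses)
```

When Cochran's Q is no larger than its degrees of freedom, tau2 collapses to zero and the pooled estimate reduces to the fixed-effect result, which is why the random-effects model is the more conservative default when trials disagree.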
Sanda, Birgitte; Vistad, Ingvild; Sagedal, Linda Reme; Haakstad, Lene Annette Hagen; Lohne-Seiler, Hilde; Torstveit, Monica Klungland
2017-01-01
Background Despite documented health benefits for mother and baby, physical activity (PA)-level tends to decline in pregnancy. Overweight/obese and physically inactive women are two selected groups at increased risk of pregnancy complications. Thus, efficient strategies to maintain or increase PA-level in pregnancy and the postpartum period, especially among these women, are warranted. This secondary analysis examined the effect of a prenatal lifestyle-intervention on PA-level in late pregnancy and the first year postpartum, with subanalysis on initially physically active versus inactive and normal-weight versus overweight/obese women. Method The Norwegian Fit for Delivery (NFFD) randomized controlled trial included healthy primiparous women with singleton pregnancies and body mass index (BMI) ≥19 kg/m2 assigned to an intervention group, n = 303 (twice-weekly group exercises and dietary counseling), or a control group, n = 303 (standard prenatal care). The International Physical Activity Questionnaire short form was used to assess PA-levels at inclusion (mean gestational week (GW) 16), GW 36, and six and 12 months postpartum. Results At GW 36, a positive intervention-effect with a significant between-group difference in total PA-level compared to time of inclusion was found for the total group (530 MET-min/week, p = 0.001) and the subgroups of normal-weight (533 MET-min/week, p = 0.003) and initially active women (717 MET-min/week, p < 0.001). Intervention-effect was dependent on exercise-adherence among overweight/obese and inactive women. Compared to time of inclusion, the intervention groups maintained total PA-level at GW 36, while total PA-level decreased in the control groups. PA-levels increased postpartum, but with no significant differences between the randomization groups.
Conclusion The NFFD prenatal combined lifestyle intervention had a significant effect on total PA-level in late pregnancy among women entering pregnancy normal-weight or physically active, thereby preventing the downward trend typically seen during pregnancy. Intervention-effect among overweight/obese and physically inactive women was, however, dependent on exercise-adherence. Long-term intervention-effect was not observed in the postpartum period. PMID:29176762
Zhang, Hongyi; Ge, Lijuan; Chen, Hui; Jing, Cong; Shi, Zhihong
2009-07-01
The principle of the normalization of migration time and its application to traditional Chinese medicine (TCM) analysis by capillary electrophoresis (CE) are presented. The core of the normalization of migration time is that the fluctuation of the apparent migration velocity of each component between runs is attributed to the difference in electroosmotic flow velocity. To transform migration time (t) to normalized migration time, one peak or two peaks in the original electropherogram are selected as internal peaks. The normalization of migration time is therefore classified into two types based on the number of selected internal peaks: the one-peak and two-peak approaches. For the one-peak approach, the normalized migration time is given by: (t'(i))(j) = 1/[1/(t(i))(j) - (1/(t(istd))(j) - 1/(t(istd))(1))], where (t(i))(j) is the migration time of component i in run j, (t(istd)) denotes the internal peak, and run 1 serves as the reference.
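As a numerical illustration, the one-peak correction can be sketched in a few lines of Python; the function and variable names below are illustrative, not from the paper:

```python
def normalize_migration_time(t_ij, t_istd_j, t_istd_1):
    """One-peak normalization of migration time (illustrative sketch).

    t_ij     -- migration time of component i in run j
    t_istd_j -- migration time of the internal peak in run j
    t_istd_1 -- migration time of the internal peak in the reference run
    """
    # The run-to-run fluctuation of the electroosmotic flow is removed by
    # shifting the reciprocal migration time by the internal peak's offset.
    return 1.0 / (1.0 / t_ij - (1.0 / t_istd_j - 1.0 / t_istd_1))

# In the reference run the correction vanishes and times are unchanged:
print(normalize_migration_time(12.0, 8.0, 8.0))  # ~ 12.0
```

By construction, the internal peak itself always maps back to its reference-run migration time, which is a quick sanity check on any implementation.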
Vertical or horizontal orientation of foot radiographs does not affect image interpretation
Ferran, Nicholas Antonio; Ball, Luke; Maffulli, Nicola
2012-01-01
Summary This study determined whether the orientation of dorsoplantar and oblique foot radiographs has an effect on radiograph interpretation. A test set of 50 consecutive foot radiographs was selected (25 with fractures, and 25 normal) and duplicated in the horizontal orientation. The images were randomly arranged, numbered 1 through 100, and analysed by six image interpreters. Vertical and horizontal area under the ROC curve, accuracy, sensitivity and specificity were calculated for each image interpreter. There was no significant difference in the area under the ROC curve, accuracy, sensitivity or specificity of image interpretation between images viewed in the vertical or horizontal orientation. While conventions for the display of radiographs may help the development of an efficient visual search strategy in trainees, and allow for standardisation of publication of radiographic images, variation from the convention in clinical practice does not appear to affect the sensitivity or specificity of image interpretation. PMID:23738310
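The per-interpreter metrics reported above derive directly from confusion-matrix counts; a minimal Python sketch (the counts are hypothetical, not the study's data):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Accuracy, sensitivity and specificity from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)            # fractures correctly flagged
    specificity = tn / (tn + fp)            # normal feet correctly cleared
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return accuracy, sensitivity, specificity

# Hypothetical interpreter: 23/25 fractures and 22/25 normals correct.
acc, sens, spec = diagnostic_metrics(tp=23, fp=3, tn=22, fn=2)
print(acc, sens, spec)  # -> 0.9 0.92 0.88
```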
Alkuraishy, Hayder M.; Al-Gareeb, Ali I.; Albuhadilly, Ali K.
2014-01-01
Blood and plasma viscosity are the major factors affecting blood flow and normal circulation. Whole blood viscosity is mainly affected by plasma viscosity, red blood cell deformability/aggregation and hematocrit, and other physiological factors. Thirty patients (twenty males + ten females) with age range 50–65 years, normotensive with history of cerebrovascular disorders, were selected according to the American Heart Stroke Association. Blood viscosity and other rheological parameters were measured after two-day abstinence from any medications. Dual effects of vinpocetine and pyritinol exhibit significant effects on all hemorheological parameters (P < 0.05), especially on low shear whole blood viscosity (P < 0.01), but they produced insignificant effects on total serum protein and high shear whole blood viscosity (P > 0.05). Therefore, joint effects of vinpocetine and pyritinol improve blood and plasma viscosity in patients with cerebrovascular disorders. PMID:25548768
Dynamic design of ecological monitoring networks for non-Gaussian spatio-temporal data
Wikle, C.K.; Royle, J. Andrew
2005-01-01
Many ecological processes exhibit spatial structure that changes over time in a coherent, dynamical fashion. This dynamical component is often ignored in the design of spatial monitoring networks. Furthermore, ecological variables related to processes such as habitat are often non-Gaussian (e.g. Poisson or log-normal). We demonstrate that a simulation-based design approach can be used in settings where the data distribution is from a spatio-temporal exponential family. The key random component in the conditional mean function from this distribution is then a spatio-temporal dynamic process. Given the computational burden of estimating the expected utility of various designs in this setting, we utilize an extended Kalman filter approximation to facilitate implementation. The approach is motivated by, and demonstrated on, the problem of selecting sampling locations to estimate July brood counts in the prairie pothole region of the U.S.
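A simulation-based design evaluation of the kind described can be sketched as follows. This toy version scores candidate monitoring designs by the Monte Carlo mean squared error of a simple expansion estimator of the regional total under Poisson counts; the utility function, estimator, and site means are illustrative assumptions, not the authors' extended Kalman filter machinery:

```python
import math
import random

def sample_poisson(rng, lam):
    # Knuth's algorithm; adequate for the small means used here.
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= rng.random()
        if p <= L:
            return k
        k += 1

def expected_utility(design, site_means, n_sims=2000, seed=0):
    """Monte Carlo expected utility of a monitoring design (toy sketch).

    design     -- indices of the sites to be sampled
    site_means -- Poisson mean at every site (the latent spatial process)
    Utility is the negative mean squared error of an expansion estimate
    of the regional total, averaged over simulated surveys.
    """
    rng = random.Random(seed)
    total = sum(site_means)
    sq_errs = []
    for _ in range(n_sims):
        counts = [sample_poisson(rng, site_means[i]) for i in design]
        est = sum(counts) * len(site_means) / len(design)
        sq_errs.append((est - total) ** 2)
    return -sum(sq_errs) / n_sims

# Sampling every site beats sampling only two sites, as expected:
means = [2.0, 4.0, 6.0, 8.0, 3.0, 5.0, 7.0, 9.0]
u_full = expected_utility(list(range(8)), means)
u_pair = expected_utility([0, 3], means)
print(u_full > u_pair)  # -> True
```

In a real application the simulated process would be spatio-temporal and the filtering approximation would replace the brute-force simulation, but the design-ranking logic is the same.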
Richardson, David S; Westerdahl, Helena
2003-12-01
The great reed warbler (GRW) and the Seychelles warbler (SW) are congeners with markedly different demographic histories. The GRW is a normal outbred bird species, while the SW population remains isolated and inbred after undergoing a severe population bottleneck. We examined variation at Major Histocompatibility Complex (MHC) class I exon 3 using restriction fragment length polymorphism, denaturing gradient gel electrophoresis, and DNA sequencing. Although genetic variation was higher in the GRW, considerable variation has been maintained in the SW. The ten exon 3 sequences found in the SW were as diverged from each other as were a random sub-sample of the 67 sequences from the GRW. There was evidence for balancing selection in both species, and the phylogenetic analysis, showing that the exon 3 sequences did not separate according to species, was consistent with transspecies evolution of the MHC.
Lockitch, G; Pendray, M R; Godolphin, W J; Quigley, G
1985-07-01
One hundred and five infants of birth weight 2000 g or less who received peripherally administered parenteral nutrition for periods of three or more weeks, were randomly assigned to groups receiving different amounts of zinc and copper supplement. The blood concentrations of zinc, copper, retinol-binding protein, prealbumin, alkaline phosphatase and aspartate transaminase were followed weekly. Mean serum zinc, retinol-binding protein and prealbumin declined significantly over time while alkaline phosphatase rose. Only the group receiving the highest zinc supplement maintained a mean serum zinc concentration within the normal range at seven weeks. No difference in the protein or enzyme concentrations was found between the different zinc supplement groups. No difference was seen in serum copper or ceruloplasmin between copper dose groups although one intravenous supplement was double that of the other.
Wu, Hao; Ying, Minfeng; Hu, Xun
2016-06-28
While the transformation of normal cells to cancer cells is accompanied by a switch from oxidative phosphorylation (OXPHOS) to aerobic glycolysis, it is interesting to ask whether cancer cells can revert from the Warburg effect to OXPHOS. Our previous work suggested that cancer cells reverted to OXPHOS when they were exposed to lactic acidosis, a common factor in the tumor environment. However, the conclusion cannot be drawn unless ATP output from glycolysis and OXPHOS is quantitatively determined. Here we quantitatively measured ATP generation from glycolysis and OXPHOS in 9 randomly selected cancer cell lines. Without lactic acidosis, glycolysis and OXPHOS generated 23.7%-52.2% and 47.8%-76.3% of total ATP, respectively; with lactic acidosis (20 mM lactate at pH 6.7), glycolysis and OXPHOS provided 5.7%-13.4% and 86.6%-94.3% of total ATP. We concluded that cancer cells under lactic acidosis reverted from the Warburg effect to an OXPHOS phenotype.
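The percentage contributions quoted above are simple shares of the two pathways' ATP outputs; a sketch with illustrative rates (arbitrary units, not the measured values):

```python
def atp_shares(glycolysis_atp, oxphos_atp):
    """Percent of total ATP supplied by glycolysis and by OXPHOS."""
    total = glycolysis_atp + oxphos_atp
    return 100.0 * glycolysis_atp / total, 100.0 * oxphos_atp / total

# Illustrative profiles: Warburg-leaning vs. OXPHOS-dominant under acidosis.
baseline = atp_shares(40.0, 60.0)
acidosis = atp_shares(8.0, 92.0)
print(baseline, acidosis)  # -> (40.0, 60.0) (8.0, 92.0)
```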
Crowding with detection and coarse discrimination of simple visual features.
Põder, Endel
2008-04-24
Some recent studies have suggested that there are actually no crowding effects with detection and coarse discrimination of simple visual features. The present study tests the generality of this idea. A target Gabor patch, surrounded by either 2 or 6 flanker Gabors, was presented briefly at 4 deg eccentricity of the visual field. Each Gabor patch was oriented either vertically or horizontally (selected randomly). Observers' task was either to detect the presence of the target (presented with probability 0.5) or to identify the orientation of the target. The target-flanker distance was varied. Results were similar for the two tasks but different for 2 and 6 flankers. The idea that feature detection and coarse discrimination are immune to crowding may be valid for the two-flanker condition only. With six flankers, a normal crowding effect was observed. It is suggested that the complexity of the full pattern (target plus flankers) could explain the difference.
Custer, Christine M.; Custer, Thomas W.; Dummer, Paul; Goldberg, Diana R.; Franson, J. Christian
2018-01-01
Tree swallow (Tachycineta bicolor) eggs and nestlings were collected from 16 sites across the Great Lakes to quantify normal annual variation in total polychlorinated biphenyl (PCB) exposure and to validate the sample size choice in earlier work. A sample size of five eggs or five nestlings per site was adequate to quantify exposure to PCBs in tree swallows given the current exposure levels and variation. There was no difference in PCB exposure in two randomly selected sets of five eggs collected in the same year, but analyzed in different years. Additionally, there was only modest annual variation in exposure, with between 69% (nestlings) and 73% (eggs) of sites having no differences between years. There was a tendency, both statistically and qualitatively, for there to be less exposure in the second year compared to the first year.
Yang, Xuezhou; Quan, Xiaozhen; Lan, Yanli; Ye, Jinhai; Wei, Qipeng; Yin, Xiaofang; Fan, Fangfang; Xing, Hui
2017-10-01
To investigate the association between chemerin level in the first trimester of pregnancy and the risk of gestational diabetes mellitus (GDM), blood samples of 212 women at 8-12 weeks of gestation were collected. After screening for GDM, 19 women with GDM and 20 women randomly selected from 144 women with normal glucose tolerance (NGT) were included in the study. Triglycerides, glucose, total cholesterol, HDL cholesterol, LDL cholesterol, insulin, and chemerin were measured. Gestational weight gain and body mass index were assessed. Serum levels of chemerin were significantly elevated during late gestation, and the risk of GDM was positively associated with maternal serum chemerin in the first trimester. Serum chemerin level during the first trimester of pregnancy has the potential to predict the risk of GDM.
Enzyme Technology of Peroxidases: Immobilization, Chemical and Genetic Modification
NASA Astrophysics Data System (ADS)
Longoria, Adriana; Tinoco, Raunel; Torres, Eduardo
An overview of enzyme technology applied to peroxidases is presented. Immobilization on organic, inorganic, and hybrid supports; chemical modification of amino acids and the heme group; and genetic modification by site-directed and random mutagenesis are included. Different strategies that were carried out to improve peroxidase performance in terms of stability, selectivity, and catalytic activity are analyzed. Immobilization of peroxidases on inorganic and organic materials enhances the tolerance of peroxidases toward the conditions normally found in many industrial processes, such as the presence of an organic solvent and high temperature. In addition, it is shown that immobilization helps to increase the Total Turnover Number at levels high enough to justify the use of a peroxidase-based biocatalyst in a synthesis process. Chemical modification of peroxidases produces modified enzymes with higher thermostability and wider substrate variability. Finally, through mutagenesis approaches, it is possible to produce modified peroxidases capable of oxidizing nonnatural substrates with high catalytic activity and affinity.
Auditory function at 14 years of age of very-low-birthweight children.
Davis, N M; Doyle, L W; Ford, G W; Keir, E; Michael, J; Rickards, A L; Kelly, E A; Callanan, C
2001-03-01
The aim of the study was to determine audiological function at 14 years of age of very-low-birthweight (VLBW < or = 1500 g) children compared with a cohort of normal birthweight (NBW > 2499 g) children. Participants were consecutive surviving preterm children of birthweight < 1000 g born between 1977 and 1982 (n=86) and of birthweight 1000 to 1500 g born between 1980 and 1982 (n=124) and randomly selected NBW children born between 1981 and 1982 (n=60). Audiometric tests included pure tone audiometry, tympanometry, stapedius muscle reflexes, and measures of central auditory processing. Psychometric tests included measures of IQ, academic achievement, and behaviour. There were no significant differences in rates of hearing impairment, abnormal tympanograms, figure-ground problems, or digit recall between VLBW children and NBW control children. VLBW children had higher rates of some central auditory processing problems, which in turn were associated with poorer intellectual, academic, and behavioural progress.
NASA Astrophysics Data System (ADS)
Tsao, Shih-Ming; Lai, Ji-Ching; Horng, Horng-Er; Liu, Tu-Chen; Hong, Chin-Yih
2017-04-01
Aptamers are oligonucleotides that can bind to specific target molecules. Most aptamers are generated using random libraries in the standard systematic evolution of ligands by exponential enrichment (SELEX). Each random library contains oligonucleotides with a randomized central region and two fixed primer regions at both ends. The fixed primer regions are necessary for amplifying target-bound sequences by PCR. However, these extra sequences may cause non-specific binding, which can interfere with good binding by the random sequences. Magnetic-Assisted Rapid Aptamer Selection (MARAS) is a newly developed protocol for generating single-strand DNA aptamers. No repeated selection cycles are required in the protocol. This study proposes and demonstrates a method to isolate aptamers for C-reactive protein (CRP) from a randomized ssDNA library containing no fixed sequences at the 5′ and 3′ termini using the MARAS platform. Furthermore, the isolated primer-free aptamer was sequenced, and its binding affinity for CRP was analyzed. The specificity of the obtained aptamer was validated using blind serum samples. The results were consistent with monoclonal antibody-based nephelometry analysis, which indicated that the primer-free aptamer has high specificity toward its target. MARAS is a feasible platform for efficiently generating primer-free aptamers for clinical diagnoses.
DeLay, Dawn; Ha, Thao; Van Ryzin, Mark; Winter, Charlotte; Dishion, Thomas J.
2015-01-01
Adolescent friendships that promote problem behavior are often chosen in middle school. The current study examines the unintended impact of a randomized school-based intervention on the selection of friends in middle school, as well as on observations of deviant talk with friends five years later. Participants included 998 middle school students (526 boys and 472 girls) recruited at the onset of middle school (age 11-12 years) from three public middle schools participating in the Family Check-up model intervention. The current study focuses only on the effects of the SHAPe curriculum—one level of the Family Check-up model—on friendship choices. Participants nominated friends and completed measures of deviant peer affiliation. Approximately half of the sample (n=500) was randomly assigned to the intervention, and the other half (n=498) comprised the control group within each school. The results indicate that the SHAPe curriculum affected friend selection within School 1, but not within Schools 2 or 3. The effects on friend selection in School 1 translated into reductions in observed deviancy training five years later (age 16-17 years). By coupling longitudinal social network analysis with a randomized intervention study, the current findings provide initial evidence that a randomized public middle school intervention can disrupt the formation of deviant peer groups and diminish levels of adolescent deviance five years later. PMID:26377235
NASA Astrophysics Data System (ADS)
de Oliveira Silveira, Eduarda Martiniano; de Menezes, Michele Duarte; Acerbi Júnior, Fausto Weimar; Castro Nunes Santos Terra, Marcela; de Mello, José Márcio
2017-07-01
Accurate mapping and monitoring of savanna and semiarid woodland biomes are needed to support the selection of areas of conservation, to provide sustainable land use, and to improve the understanding of vegetation. The potential of geostatistical features, derived from medium spatial resolution satellite imagery, to characterize contrasted landscape vegetation cover and improve object-based image classification is studied. The study site in Brazil includes cerrado sensu stricto, deciduous forest, and palm swamp vegetation cover. Sentinel 2 and Landsat 8 images were acquired and divided into objects, for each of which a semivariogram was calculated using near-infrared (NIR) and normalized difference vegetation index (NDVI) to extract the set of geostatistical features. The features selected by principal component analysis were used as input data to train a random forest algorithm. Tests were conducted, combining spectral and geostatistical features. Change detection evaluation was performed using a confusion matrix and its accuracies. The semivariogram curves were efficient to characterize spatial heterogeneity, with similar results using NIR and NDVI from Sentinel 2 and Landsat 8. Accuracy was significantly greater when combining geostatistical features with spectral data, suggesting that this method can improve image classification results.
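The semivariogram features described can be illustrated on a one-dimensional transect of NDVI values; a minimal pure-Python sketch of the empirical semivariogram (the study's object-based, image-scale computation is more involved):

```python
def empirical_semivariogram(values, max_lag):
    """Empirical semivariogram of a 1-D transect of pixel values (sketch).

    gamma(h) = sum of (z[i+h] - z[i])^2 over all pairs at lag h / (2 N(h))
    """
    gammas = []
    for h in range(1, max_lag + 1):
        diffs = [(values[i + h] - values[i]) ** 2
                 for i in range(len(values) - h)]
        gammas.append(sum(diffs) / (2 * len(diffs)))
    return gammas

# On a smooth NDVI ramp, semivariance grows with lag distance:
ndvi = [0.10, 0.20, 0.30, 0.40, 0.50, 0.60]
gamma_hat = empirical_semivariogram(ndvi, 3)
print(gamma_hat)
```

The shape of this curve (nugget, sill, range) is what distinguishes spatially smooth cover such as palm swamp from heterogeneous cerrado, which is why the fitted features are informative inputs to a classifier.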
Number of repetitions for evaluating technological traits in cotton genotypes.
Carvalho, L P; Farias, F J C; Morello, C L; Rodrigues, J I S; Teodoro, P E
2016-08-19
With the changes in spinning technology, technological cotton traits, such as fiber length, fiber uniformity, fiber strength, fineness, fiber maturity, percentage of fibers, and short fiber index, are of great importance for selecting cotton genotypes. However, for accurate discrimination of genotypes, it is important that these traits are evaluated with the best possible accuracy. The aim of this study was to determine the number of measurements (repetitions) needed to accurately assess technological traits of cotton genotypes. Seven experiments were conducted in four Brazilian states (Ceará, Rio Grande do Norte, Goiás, and Mato Grosso do Sul). We used nine brown and two white colored fiber lines in a randomized block design with four replications. After verifying the assumptions of residual normality and homogeneity of variances, analysis of variance was performed to estimate the repeatability coefficient and calculate the number of repetitions needed. Trials with four replications were found to be sufficient to identify superior cotton genotypes for all measured traits except short fiber index, with a selective accuracy >90% and at least 81% accuracy in predicting their actual value. These results allow more accurate and reliable evaluation of technological traits of cotton genotypes in future research.
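One standard way to relate a repeatability coefficient to the number of repetitions is the Spearman-Brown prophecy formula; the sketch below uses illustrative numbers chosen to echo the reported thresholds (four repetitions, at least 81% accuracy), not the study's actual estimates:

```python
import math

def reliability(m, r):
    """Reliability of the mean of m repetitions, given repeatability r
    (Spearman-Brown prophecy formula)."""
    return m * r / (1 + (m - 1) * r)

def reps_needed(r, target):
    """Smallest number of repetitions whose mean reaches `target` reliability."""
    return math.ceil(target * (1 - r) / (r * (1 - target)))

# With an illustrative single-measurement repeatability of 0.52, four
# repetitions are enough to exceed 0.81 reliability:
print(reliability(4, 0.52))     # ~ 0.8125
print(reps_needed(0.52, 0.81))  # -> 4
```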
A management-oriented classification of pinyon-juniper woodlands of the Great Basin
Neil E. West; Robin J. Tausch; Paul T. Tueller
1998-01-01
A hierarchical framework for the classification of Great Basin pinyon-juniper woodlands was based on a systematic sample of 426 stands from a random selection of 66 of the 110 mountain ranges in the region. That is, mountain ranges were randomly selected, but stands were systematically located on mountain ranges. The National Hierarchical Framework of Ecological Units...
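The two-stage scheme (random selection of mountain ranges, systematic placement of stands within each) can be sketched as follows; the sampling interval and transect layout are hypothetical, not from the study:

```python
import random

def two_stage_sample(n_ranges, ranges_to_pick, stands_per_range, step, seed=0):
    """Two-stage design sketch: mountain ranges drawn at random, stands
    placed systematically (every `step` distance units) within each range."""
    rng = random.Random(seed)                      # reproducible draw
    picked = rng.sample(range(n_ranges), ranges_to_pick)
    # Systematic placement: fixed interval along a transect in each range.
    return {r: [i * step for i in range(stands_per_range)]
            for r in sorted(picked)}

# Mirrors the study's scale: 66 of 110 ranges, several stands per range.
sample = two_stage_sample(n_ranges=110, ranges_to_pick=66,
                          stands_per_range=5, step=250)
print(len(sample))  # -> 66
```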