Demaerschalk, Bart M; Brown, Robert D; Roubin, Gary S; Howard, Virginia J; Cesko, Eldina; Barrett, Kevin M; Longbottom, Mary E; Voeks, Jenifer H; Chaturvedi, Seemant; Brott, Thomas G; Lal, Brajesh K; Meschia, James F; Howard, George
2017-09-01
Multicenter clinical trials attempt to select sites that can move rapidly to randomization and enroll sufficient numbers of patients. However, there are few assessments of the success of site selection. In the CREST-2 (Carotid Revascularization and Medical Management for Asymptomatic Carotid Stenosis Trials), we assess factors associated with the time between site selection and authorization to randomize, the time between authorization to randomize and the first randomization, and the average number of randomizations per site per month. Potential factors included characteristics of the site, specialty of the principal investigator, and site type. For 147 sites, the median time from site selection to authorization to randomize was 9.9 months (interquartile range, 7.7-12.4), and factors associated with early site activation were not identified. The median time between authorization to randomize and the first randomization was 4.6 months (interquartile range, 2.6-10.5). Sites with authorization to randomize in only the carotid endarterectomy study were slower to randomize, and other factors examined were not significantly associated with time-to-randomization. The recruitment rate was 0.26 (95% confidence interval, 0.23-0.28) patients per site per month. By univariate analysis, factors associated with faster recruitment were authorization to randomize in both trials, principal investigator specialties of interventional radiology and cardiology, pre-trial reported performance of >50 carotid angioplasty and stenting procedures per year, status in the top half of recruitment in the CREST trial, and classification as a private health facility. Participation in StrokeNet was associated with slower recruitment as compared with the non-StrokeNet sites. Overall, selection of sites with high enrollment rates will likely require customization to align the sites selected to the factor under study in the trial. URL: http://www.clinicaltrials.gov. Unique identifier: NCT02089217.
© 2017 American Heart Association, Inc.
On Measuring and Reducing Selection Bias with a Quasi-Doubly Randomized Preference Trial
ERIC Educational Resources Information Center
Joyce, Ted; Remler, Dahlia K.; Jaeger, David A.; Altindag, Onur; O'Connell, Stephen D.; Crockett, Sean
2017-01-01
Randomized experiments provide unbiased estimates of treatment effects, but are costly and time consuming. We demonstrate how a randomized experiment can be leveraged to measure selection bias by conducting a subsequent observational study that is identical in every way except that subjects choose their treatment--a quasi-doubly randomized…
Kronberg, J.W.
1993-04-20
An apparatus for selecting at random one item of N items on the average comprising counter and reset elements for counting repeatedly between zero and N, a number selected by the user, a circuit for activating and deactivating the counter, a comparator to determine if the counter stopped at a count of zero, an output to indicate an item has been selected when the count is zero or not selected if the count is not zero. Randomness is provided by having the counter cycle very often while varying the relatively longer duration between activation and deactivation of the count. The passive circuit components of the activating/deactivating circuit and those of the counter are selected for the sensitivity of their response to variations in temperature and other physical characteristics of the environment so that the response time of the circuitry varies. Additionally, the items themselves, which may be people, may vary in shape or the time they press a pushbutton, so that, for example, an ultrasonic beam broken by the item or person passing through it will add to the duration of the count and thus to the randomness of the selection.
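The selection mechanism described in this patent abstract can be modeled in software. The sketch below is a hypothetical Python analogue (the actual device is hardware, not code): the environment-dependent response-time jitter is replaced by a uniformly random gate duration, so a fast counter cycling modulo N stops at zero, and hence selects an item, with probability roughly 1/N.

```python
import random

def counter_select(n, cycle_hz=1_000_000):
    """Model of the patent's selector: a counter cycles 0..n-1 very fast;
    the activation-to-deactivation span varies with the environment,
    modeled here as a uniformly random gate duration, so the final
    count is approximately uniform over 0..n-1."""
    gate_time = random.uniform(0.5, 1.5)          # jittered gate duration (s)
    final_count = int(gate_time * cycle_hz) % n   # where the counter stops
    return final_count == 0                       # "selected" iff count is zero

# Each item passing the gate is selected with probability ~1/n,
# so on average one of n items is picked.
hits = sum(counter_select(10) for _ in range(20_000))
```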
Ryeznik, Yevgen; Sverdlov, Oleksandr; Wong, Weng Kee
2015-08-01
Response-adaptive randomization designs are becoming increasingly popular in clinical trial practice. In this paper, we present RARtool, user interface software developed in MATLAB for designing response-adaptive randomized comparative clinical trials with censored time-to-event outcomes. The RARtool software can compute different types of optimal treatment allocation designs, and it can simulate response-adaptive randomization procedures targeting selected optimal allocations. Through simulations, an investigator can assess design characteristics under a variety of experimental scenarios and select the best procedure for practical implementation. We illustrate the utility of our RARtool software by redesigning a survival trial from the literature.
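RARtool itself is MATLAB software for time-to-event designs; as a rough illustration of the kind of response-adaptive procedure such tools simulate, here is a minimal Python sketch of the classic randomized play-the-winner urn (this is not RARtool's actual algorithm, and the arm success probabilities are assumed known only so that patient responses can be simulated):

```python
import random

def play_the_winner(p_success_a, p_success_b, n_patients, seed=1):
    """Randomized play-the-winner urn: a success on an arm adds a ball for
    that arm (a failure adds a ball for the other arm), skewing future
    allocation toward the treatment that is performing better."""
    rng = random.Random(seed)
    urn = {"A": 1, "B": 1}
    assigned = {"A": 0, "B": 0}
    for _ in range(n_patients):
        total = urn["A"] + urn["B"]
        arm = "A" if rng.random() < urn["A"] / total else "B"
        assigned[arm] += 1
        p = p_success_a if arm == "A" else p_success_b
        if rng.random() < p:                      # success reinforces this arm
            urn[arm] += 1
        else:                                     # failure reinforces the other arm
            urn["A" if arm == "B" else "B"] += 1
    return assigned

# With a clearly superior arm A, allocation drifts strongly toward A.
alloc = play_the_winner(0.8, 0.2, 500)
```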
Inference from habitat-selection analysis depends on foraging strategies.
Bastille-Rousseau, Guillaume; Fortin, Daniel; Dussault, Christian
2010-11-01
1. Several methods have been developed to assess habitat selection, most of which are based on a comparison between habitat attributes in used vs. unused or random locations, such as the popular resource selection functions (RSFs). Spatial evaluation of residency time has been recently proposed as a promising avenue for studying habitat selection. Residency-time analyses assume a positive relationship between residency time within habitat patches and selection. We demonstrate that RSF and residency-time analyses provide different information about the process of habitat selection. Further, we show how the consideration of switching rate between habitat patches (interpatch movements) together with residency-time analysis can reveal habitat-selection strategies. 2. Spatially explicit, individual-based modelling was used to simulate foragers displaying one of six foraging strategies in a heterogeneous environment. The strategies combined one of three patch-departure rules (fixed-quitting-harvest-rate, fixed-time and fixed-amount strategy), together with one of two interpatch-movement rules (random or biased). Habitat selection of simulated foragers was then assessed using RSF, residency-time and interpatch-movement analyses. 3. Our simulations showed that RSFs and residency times are not always equivalent. When foragers move in a non-random manner and do not increase residency time in richer patches, residency-time analysis can provide misleading assessments of habitat selection. This is because the overall time spent in the various patch types not only depends on residency times, but also on interpatch-movement decisions. 4. We suggest that RSFs provide the outcome of the entire selection process, whereas residency-time and interpatch-movement analyses can be used in combination to reveal the mechanisms behind the selection process. 5. We showed that there is a risk in using residency-time analysis alone to infer habitat selection. 
Residency-time analyses, however, may enlighten the mechanisms of habitat selection by revealing central components of resource-use strategies. Given that management decisions are often based on resource-selection analyses, the evaluation of resource-use strategies can be key information for the development of efficient habitat-management strategies. Combining RSF, residency-time and interpatch-movement analyses is a simple and efficient way to gain a more comprehensive understanding of habitat selection. © 2010 The Authors. Journal compilation © 2010 British Ecological Society.
Vickers, Andrew J; Young-Afat, Danny A; Ehdaie, Behfar; Kim, Scott Yh
2018-02-01
Informed consent for randomized trials often causes significant and persistent anxiety, distress and confusion to patients. Where an experimental treatment is compared to a standard care control, much of this burden is potentially avoidable in the control group. We propose a "just-in-time" consent in which consent discussions take place in two stages: an initial consent to research from all participants and a later specific consent to randomized treatment only from those assigned to the experimental intervention. All patients are first approached and informed about research procedures, such as questionnaires or tests. They are also informed that they might be randomly selected to receive an experimental treatment and that, if selected, they can learn more about the treatment and decide whether or not to accept it at that time. After randomization, control patients undergo standard clinical consent whereas patients randomized to the experimental procedure undergo a second consent discussion. Analysis would be by intent-to-treat, which protects the trial from selection bias, although not from poor acceptance of experimental treatment. The advantages of just-in-time consent stem from the fact that only patients randomized to the experimental treatment are subject to a discussion of that intervention. We hypothesize that this will reduce much of the patient's burden associated with the consent process, such as decisional anxiety, confusion and information overload. We recommend well-controlled studies to compare just-in-time and traditional consent, with endpoints to include characteristics of participants, distress and anxiety and participants' understanding of research procedures.
Why the null matters: statistical tests, random walks and evolution.
Sheets, H D; Mitchell, C E
2001-01-01
A number of statistical tests have been developed to determine what type of dynamics underlie observed changes in morphology in evolutionary time series, based on the pattern of change within the time series. The theory of the 'scaled maximum', the 'log-rate-interval' (LRI) method, and the Hurst exponent all operate on the same principle of comparing the maximum change, or rate of change, in the observed dataset to the maximum change expected of a random walk. Less change in a dataset than expected of a random walk has been interpreted as indicating stabilizing selection, while more change implies directional selection. The 'runs test', in contrast, operates on the sequencing of steps, rather than on excursion. Applications of these tests to computer-generated, simulated time series of known dynamical form and various levels of additive noise indicate that there is a fundamental asymmetry in the rate of type II errors of the tests based on excursion: they are all highly sensitive to noise in models of directional selection that result in a linear trend within a time series, but are largely noise immune in the case of a simple model of stabilizing selection. Additionally, the LRI method has a lower sensitivity than originally claimed, due to the large range of LRI rates produced by random walks. Examination of the published results of these tests shows that they have seldom produced a conclusion that an observed evolutionary time series was due to directional selection, a result which needs closer examination in light of the asymmetric response of these tests.
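The step-sequencing principle behind the runs test can be illustrated with a toy sketch (illustrative only, not the published test statistics): counting runs of same-signed steps distinguishes a pure linear trend, which collapses to a single run, from a random walk, whose steps change sign roughly every other step.

```python
import random

def sign_runs(series):
    """Count runs of same-signed steps in a time series. Among n nonzero
    steps, a random walk yields about (n+1)/2 runs, while a monotone
    trend yields a single run."""
    steps = [b - a for a, b in zip(series, series[1:]) if b != a]
    runs = 1
    for s, t in zip(steps, steps[1:]):
        if (s > 0) != (t > 0):
            runs += 1
    return runs

rng = random.Random(0)
walk = [0.0]
for _ in range(200):
    walk.append(walk[-1] + rng.gauss(0, 1))       # unbiased random walk
trend = [0.05 * i for i in range(201)]            # pure directional change

runs_walk = sign_runs(walk)
runs_trend = sign_runs(trend)
```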
Random Time Identity Based Firewall In Mobile Ad hoc Networks
NASA Astrophysics Data System (ADS)
Suman, Patel, R. B.; Singh, Parvinder
2010-11-01
A mobile ad hoc network (MANET) is a self-organizing network of mobile routers and associated hosts connected by wireless links. MANETs are highly flexible and adaptable but at the same time are highly prone to security risks due to the open medium, dynamically changing network topology, cooperative algorithms, and lack of centralized control. A firewall is an effective means of protecting a local network from network-based security threats and forms a key component in MANET security architecture. This paper presents a review of firewall implementation techniques in MANETs and their relative merits and demerits. A new approach is proposed to select MANET nodes at random for firewall implementation. This approach randomly selects a new node as the firewall after a fixed time, based on critical values of certain parameters such as remaining power backup. It effectively balances power and resource utilization across the entire MANET because the responsibility of implementing the firewall is shared equally among all the nodes. At the same time, it ensures improved security for MANETs against outside attacks, as an intruder will not be able to find the entry point into the MANET due to the random selection of nodes for firewall implementation.
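The rotation rule can be sketched in a few lines. All names and the power threshold below are illustrative assumptions; the paper does not specify exact parameter values:

```python
import random

def pick_firewall_node(node_power, min_power=30.0, rng=random):
    """Randomly pick the next firewall node among candidates whose
    remaining power exceeds a critical threshold, so the firewall duty
    rotates and no single node is drained."""
    eligible = [n for n, power in node_power.items() if power >= min_power]
    if not eligible:                     # degenerate case: fall back to the
        return max(node_power, key=node_power.get)  # least-drained node
    return rng.choice(eligible)

# Toy network: only n1 and n3 have enough power backup to serve.
node_power = {"n1": 80.0, "n2": 10.0, "n3": 55.0, "n4": 25.0}
chosen = pick_firewall_node(node_power)
```

In a full implementation this choice would be re-run after each fixed interval, with the power readings refreshed from the live nodes.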
Menjot de Champfleur, Nicolas; Saver, Jeffrey L; Goyal, Mayank; Jahan, Reza; Diener, Hans-Christoph; Bonafe, Alain; Levy, Elad I; Pereira, Vitor M; Cognard, Christophe; Yavagal, Dileep R; Albers, Gregory W
2017-06-01
The majority of patients enrolled in the SWIFT PRIME trial (Solitaire FR With the Intention for Thrombectomy as Primary Endovascular Treatment for Acute Ischemic Stroke) had computed tomographic perfusion (CTP) imaging before randomization; 34 patients were randomized after magnetic resonance imaging (MRI). Patients with middle cerebral artery and distal carotid occlusions were randomized to treatment with tPA (tissue-type plasminogen activator) alone or tPA+stentriever thrombectomy. The primary outcome was the distribution of the modified Rankin Scale score at 90 days. Patients with the target mismatch profile for enrollment were identified on MRI and CTP. MRI selection was performed in 34 patients; CTP selection in 139 patients. Baseline National Institutes of Health Stroke Scale score was 17 in both groups. The target mismatch profile was present in 95% (MRI) versus 83% (CTP). A higher percentage of the MRI group was transferred from an outside hospital (P=0.02), and therefore, the time from stroke onset to randomization was longer in the MRI group (P=0.003). Time from emergency room arrival to randomization did not differ in CTP- versus MRI-selected patients. Baseline ischemic core volumes were similar in both groups. Reperfusion rates (>90%/TICI [Thrombolysis in Cerebral Infarction] score 3) did not differ in the stentriever-treated patients in the MRI versus CTP groups. The primary efficacy analysis (90-day mRS score) demonstrated a statistically significant benefit in both subgroups (MRI, P=0.02; CTP, P=0.01). Infarct growth was reduced in the stentriever-treated group in both the MRI and CTP groups. Time to randomization was significantly longer in MRI-selected patients; however, site arrival to randomization times were not prolonged, and the benefits of endovascular therapy were similar. URL: http://www.clinicaltrials.gov. Unique identifier: NCT01657461. © 2017 American Heart Association, Inc.
Evolutionary games on cycles with strong selection
NASA Astrophysics Data System (ADS)
Altrock, P. M.; Traulsen, A.; Nowak, M. A.
2017-02-01
Evolutionary games on graphs describe how strategic interactions and population structure determine evolutionary success, quantified by the probability that a single mutant takes over a population. Graph structures, compared to the well-mixed case, can act as amplifiers or suppressors of selection by increasing or decreasing the fixation probability of a beneficial mutant. Properties of the associated mean fixation times can be more intricate, especially when selection is strong. The intuition is that fixation of a beneficial mutant happens fast in a dominance game, that fixation takes very long in a coexistence game, and that strong selection eliminates demographic noise. Here we show that these intuitions can be misleading in structured populations. We analyze mean fixation times on the cycle graph under strong frequency-dependent selection for two different microscopic evolutionary update rules (death-birth and birth-death). We establish exact analytical results for fixation times under strong selection and show that there are coexistence games in which fixation occurs in time polynomial in population size. Depending on the underlying game, we observe that demographic noise persists even under strong selection if the process is driven by random death before selection for birth of an offspring (death-birth update). In contrast, if selection for an offspring occurs before random removal (birth-death update), then strong selection can remove demographic noise almost entirely.
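The death-birth update on a cycle can be sketched as a small Monte Carlo simulation. The payoffs below describe an assumed dominance game in which the mutant strategy earns twice the resident payoff against any neighbor; this is an illustrative model, not the paper's exact analysis:

```python
import random

def db_fixation_time(n=20, payoff=((1.0, 1.0), (2.0, 2.0)), seed=3, max_steps=10**6):
    """Death-birth update on a cycle of n nodes: a uniformly random node
    dies, and its two cycle neighbors compete to fill the vacancy with
    probability proportional to their payoff-derived fitness.
    Returns (mutant_fixed, elapsed_steps)."""
    rng = random.Random(seed)
    pop = [0] * n
    pop[0] = 1                                   # single mutant (strategy 1)
    def fitness(i):
        s = pop[i]
        left, right = pop[(i - 1) % n], pop[(i + 1) % n]
        return payoff[s][left] + payoff[s][right]
    for t in range(1, max_steps + 1):
        i = rng.randrange(n)                     # random death
        l, r = (i - 1) % n, (i + 1) % n
        fl, fr = fitness(l), fitness(r)
        pop[i] = pop[l] if rng.random() < fl / (fl + fr) else pop[r]
        k = sum(pop)
        if k == n:
            return True, t                       # mutant fixed
        if k == 0:
            return False, t                      # mutant extinct
    return None, max_steps

fixed, steps = db_fixation_time()
```

Averaging `steps` over many seeds, and swapping in coexistence-game payoffs, would reproduce the kind of mean-fixation-time comparison the abstract discusses.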
The coalescent of a sample from a binary branching process.
Lambert, Amaury
2018-04-25
At time 0, start a time-continuous binary branching process, where particles give birth to a single particle independently (at a possibly time-dependent rate) and die independently (at a possibly time-dependent and age-dependent rate). A particular case is the classical birth-death process. Stop this process at time T>0. It is known that the tree spanned by the N tips alive at time T of the tree thus obtained (called a reduced tree or coalescent tree) is a coalescent point process (CPP), which basically means that the depths of interior nodes are independent and identically distributed (iid). Now select each of the N tips independently with probability y (Bernoulli sample). It is known that the tree generated by the selected tips, which we will call the Bernoulli sampled CPP, is again a CPP. Now instead, select exactly k tips uniformly at random among the N tips (a k-sample). We show that the tree generated by the selected tips is a mixture of Bernoulli sampled CPPs with the same parent CPP, over some explicit distribution of the sampling probability y. An immediate consequence is that the genealogy of a k-sample can be obtained by the realization of k random variables, first the random sampling probability Y and then the k-1 node depths which are iid conditional on Y=y. Copyright © 2018. Published by Elsevier Inc.
Baker, John [Walnut Creek, CA; Archer, Daniel E [Knoxville, TN; Luke, Stanley John [Pleasanton, CA; Decman, Daniel J [Livermore, CA; White, Gregory K [Livermore, CA
2009-06-23
A tailpulse signal generating/simulating apparatus, system, and method designed to produce electronic pulses which simulate tailpulses produced by a gamma radiation detector, including the pileup effect caused by the characteristic exponential decay of the detector pulses, and the random Poisson-distributed pulse timing for radioactive materials. A digital signal processor (DSP) is programmed and configured to produce digital values corresponding to pseudo-randomly selected pulse amplitudes and pseudo-randomly selected Poisson timing intervals of the tailpulses. Pulse amplitude values are exponentially decayed while outputting the digital value to a digital-to-analog converter (DAC), and pulse amplitudes of new pulses are added to decaying pulses to simulate the pileup effect for enhanced realism in the simulation.
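The described synthesis, exponentially decaying pulses arriving at Poisson-distributed times so that overlapping tails pile up, can be sketched in a few lines. The amplitude range, event rate, and decay constant below are illustrative choices, not values from the patent:

```python
import math
import random

def simulate_tailpulses(rate_hz, tau, duration, dt=1e-6, seed=7):
    """Digitally synthesize detector tailpulses: each Poisson-timed event
    adds a random amplitude onto the running signal level, which decays
    exponentially between samples, so closely spaced pulses pile up."""
    rng = random.Random(seed)
    n = int(duration / dt)
    out = [0.0] * n
    decay = math.exp(-dt / tau)          # per-sample exponential decay factor
    level = 0.0
    next_arrival = rng.expovariate(rate_hz)
    for i in range(n):
        level *= decay
        while next_arrival <= i * dt:    # all events up to this sample time
            level += rng.uniform(0.2, 1.0)        # new pulse adds to the tail
            next_arrival += rng.expovariate(rate_hz)
        out[i] = level
    return out

# 1 ms trace at 1 MHz sampling, 50 kHz event rate, 5 us decay constant.
trace = simulate_tailpulses(rate_hz=50_000, tau=5e-6, duration=1e-3)
```

In the hardware version, `out[i]` would be streamed to the DAC sample by sample instead of stored.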
Balancing Participation across Students in Large College Classes via Randomized Participation Credit
ERIC Educational Resources Information Center
McCleary, Daniel F.; Aspiranti, Kathleen B.; Foster, Lisa N.; Blondin, Carolyn A.; Gaylon, Charles E.; Yaw, Jared S.; Forbes, Bethany N.; Williams, Robert L.
2011-01-01
The study examines the effects of randomized credit on the percentage of students participating at four predefined levels. Students recorded their comments on specially designed record cards, and days were randomly selected for participation credit. This arrangement balanced participation across students while cutting instructor time for recording…
Selecting materialized views using random algorithm
NASA Astrophysics Data System (ADS)
Zhou, Lijuan; Hao, Zhongxiao; Liu, Chi
2007-04-01
The data warehouse is a repository of information collected from multiple, possibly heterogeneous, autonomous distributed databases. The information stored at the data warehouse is in the form of views, referred to as materialized views. The selection of materialized views is one of the most important decisions in designing a data warehouse. Materialized views are stored in the data warehouse for the purpose of efficiently implementing on-line analytical processing queries. The first issue for the user to consider is query response time. In this paper, we therefore develop algorithms to select a set of views to materialize in a data warehouse in order to minimize the total view maintenance cost under the constraint of a given query response time; we call this the query-cost view-selection problem. First, the cost graph and cost model of the query-cost view-selection problem are presented. Second, methods for selecting materialized views using randomized algorithms are presented. The genetic algorithm is applied to the materialized view selection problem, but as the genetic process evolves, legal solutions become increasingly difficult to produce, so many solutions are eliminated and the time needed to generate solutions grows. Therefore, an improved algorithm, which combines simulated annealing with the genetic algorithm, is presented in this paper to solve the query-cost view-selection problem. Finally, simulation experiments were conducted to test the function and efficiency of our algorithms. The experiments show that the given methods can provide near-optimal solutions in limited time and work better in practical cases. Randomized algorithms will become invaluable tools for data warehouse evolution.
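The simulated-annealing side of such a hybrid can be sketched as follows. The cost model, per-view gains, and cooling schedule are illustrative assumptions, not the paper's actual formulation: materializing a view adds maintenance cost but reduces query response time, and the search minimizes maintenance cost subject to a response-time bound.

```python
import math
import random

def anneal_views(n_views, maint_cost, query_gain, max_query_time,
                 base_query_time, seed=5, steps=2000):
    """Simulated-annealing sketch of the query-cost view-selection problem:
    flip one view in/out per step, reject states that violate the
    response-time constraint, and accept worse maintenance costs with a
    temperature-dependent probability."""
    rng = random.Random(seed)
    def query_time(sel):
        return base_query_time - sum(g for s, g in zip(sel, query_gain) if s)
    def maintenance(sel):
        return sum(c for s, c in zip(sel, maint_cost) if s)
    sel = [1] * n_views                  # start with everything materialized
    best = list(sel)
    for step in range(steps):
        temp = (1 - step / steps) + 1e-3          # linear cooling schedule
        cand = list(sel)
        cand[rng.randrange(n_views)] ^= 1         # flip one view
        if query_time(cand) > max_query_time:
            continue                              # infeasible: too slow
        delta = maintenance(cand) - maintenance(sel)
        if delta <= 0 or rng.random() < math.exp(-delta / temp):
            sel = cand
            if maintenance(sel) < maintenance(best):
                best = list(sel)
    return best

maint = [4.0, 2.0, 5.0, 1.0, 3.0]        # illustrative maintenance costs
gain = [3.0, 1.0, 4.0, 0.5, 2.0]         # illustrative response-time savings
chosen = anneal_views(5, maint, gain, max_query_time=8.0, base_query_time=12.0)
```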
Weighted feature selection criteria for visual servoing of a telerobot
NASA Technical Reports Server (NTRS)
Feddema, John T.; Lee, C. S. G.; Mitchell, O. R.
1989-01-01
Because of the continually changing environment of a space station, visual feedback is a vital element of a telerobotic system. A real time visual servoing system would allow a telerobot to track and manipulate randomly moving objects. Methodologies for the automatic selection of image features to be used to visually control the relative position between an eye-in-hand telerobot and a known object are devised. A weighted criteria function with both image recognition and control components is used to select the combination of image features which provides the best control. Simulation and experimental results of a PUMA robot arm visually tracking a randomly moving carburetor gasket with a visual update time of 70 milliseconds are discussed.
Random Drift versus Selection in Academic Vocabulary: An Evolutionary Analysis of Published Keywords
Bentley, R. Alexander
2008-01-01
The evolution of vocabulary in academic publishing is characterized via keyword frequencies recorded in the ISI Web of Science citations database. In four distinct case-studies, evolutionary analysis of keyword frequency change through time is compared to a model of random copying used as the null hypothesis, such that selection may be identified against it. The case studies from the physical sciences indicate greater selection in keyword choice than in the social sciences. Similar evolutionary analyses can be applied to a wide range of phenomena; wherever the popularity of multiple items through time has been recorded, as with web searches, or sales of popular music and books, for example. PMID:18728786
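The random-copying null model against which selection is measured can be sketched as a neutral copying process. Parameter values here are illustrative, and the paper's model details may differ: each "generation" every author copies a keyword from a randomly chosen author of the previous generation, or innovates a new keyword with small probability, so frequency change is drift alone.

```python
import random

def random_copying(n_authors=1000, mu=0.01, generations=200, seed=11):
    """Neutral random-copying model: keyword frequencies change only by
    drift (copying) and innovation (rate mu). Deviations of real keyword
    dynamics from this null are what signal selection."""
    rng = random.Random(seed)
    pop = list(range(n_authors))                  # start with unique keywords
    next_new = n_authors
    for _ in range(generations):
        new_pop = []
        for _ in range(n_authors):
            if rng.random() < mu:
                new_pop.append(next_new)          # innovation: brand-new keyword
                next_new += 1
            else:
                new_pop.append(rng.choice(pop))   # copy an existing keyword
        pop = new_pop
    return pop

final = random_copying()
```

Under this null, drift collapses the initial diversity toward a mutation-drift balance; keyword turnover faster or slower than this baseline would indicate selection.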
A web-based appointment system to reduce waiting for outpatients: a retrospective study.
Cao, Wenjun; Wan, Yi; Tu, Haibo; Shang, Fujun; Liu, Danhong; Tan, Zhijun; Sun, Caihong; Ye, Qing; Xu, Yongyong
2011-11-22
Long waiting times for registration to see a doctor are problematic in China, especially in tertiary hospitals. To address this issue, a web-based appointment system was developed for the Xijing hospital. The aim of this study was to investigate the efficacy of the web-based appointment system in the registration service for outpatients. Data from the web-based appointment system in Xijing hospital from January to December 2010 were collected using a stratified random sampling method, from which participants were randomly selected for a telephone interview asking for detailed information on using the system. Patients who registered through registration windows were randomly selected as a comparison group, and completed a questionnaire on-site. A total of 5641 patients using the online booking service were available for data analysis. Of them, 500 were randomly selected, and 369 (73.8%) completed a telephone interview. Of the 500 patients using the usual queuing method who were randomly selected for inclusion in the study, responses were obtained from 463, a response rate of 92.6%. Between the two registration methods, there were significant differences in age, degree of satisfaction, and total waiting time (P<0.001). However, gender, urban residence, and valid waiting time showed no significant differences (P>0.05). Being ignorant of online registration, not trusting the internet, and a lack of ability to use a computer were the three main reasons given for not using the web-based appointment system. The overall proportion of non-attendance was 14.4% for those using the web-based appointment system, and the non-attendance rate was significantly different among hospital departments, days of the week, and times of the day (P<0.001). Compared to the usual queuing method, the web-based appointment system could significantly increase patients' satisfaction with registration and reduce total waiting time effectively.
However, further improvements are needed for broad use of the system.
Electrical Evaluation of RCA MWS5001D Random Access Memory, Volume 2, Appendix A
NASA Technical Reports Server (NTRS)
Klute, A.
1979-01-01
The electrical characterization and qualification test results are presented for the RCA MWS5001D random access memory. The tests included functional tests, AC and DC parametric tests, AC parametric worst-case pattern selection test, determination of worst-case transition for setup and hold times, and a series of schmoo plots. The address access time, address readout time, the data hold time, and the data setup time are some of the results surveyed.
Topology-selective jamming of fully-connected, code-division random-access networks
NASA Technical Reports Server (NTRS)
Polydoros, Andreas; Cheng, Unjeng
1990-01-01
The purpose is to introduce certain models of topology selective stochastic jamming and examine its impact on a class of fully-connected, spread-spectrum, slotted ALOHA-type random access networks. The theory covers dedicated as well as half-duplex units. The dominant role of the spatial duty factor is established, and connections with the dual concept of time selective jamming are discussed. The optimal choices of coding rate and link access parameters (from the users' side) and the jamming spatial fraction are numerically established for DS and FH spreading.
Johnson, S M; Christensen, A; Bellamy, G T
1976-01-01
Five children referred to a child-family intervention program wore a radio transmitter in the home during pre-intervention and termination assessments. The transmitter broadcast to a receiver-recording apparatus in the home (either activated by an interval timer at predetermined "random" times or by parents at predetermined "picked" times). "Picked" times were parent-selected situations during which problems typically occurred (e.g., bedtime). Parents activated the recorder regularly whether or not problems occurred. Child-deviant, parent-negative, and parent-commanding behaviors were significantly higher at the picked times during pretest than at random times. At posttest, behaviors in all three classes were substantially reduced at picked times, but not at random times. For individual subject data, reductions occurred in at least two of the three dependent variables for three of the five cases during random time assessments. In general, the behavioral outcome data corresponded to parent-attitude reports and parent-collected observation data.
Electrical Evaluation of RCA MWS5001D Random Access Memory, Volume 4, Appendix C
NASA Technical Reports Server (NTRS)
Klute, A.
1979-01-01
The electrical characterization and qualification test results are presented for the RCA MWS5001D random access memory. The tests included functional tests, AC and DC parametric tests, AC parametric worst-case pattern selection test, determination of worst-case transition for setup and hold times, and a series of schmoo plots. Statistical analysis data is supplied along with write pulse width, read cycle time, write cycle time, and chip enable time data.
SNP selection and classification of genome-wide SNP data using stratified sampling random forests.
Wu, Qingyao; Ye, Yunming; Liu, Yang; Ng, Michael K
2012-09-01
For high-dimensional genome-wide association (GWA) case-control data of complex disease, there is usually a large portion of single-nucleotide polymorphisms (SNPs) that are irrelevant to the disease. A simple random sampling method in random forests, using the default mtry parameter to choose the feature subspace, will select too many subspaces without informative SNPs. An exhaustive search for an optimal mtry is often required in order to include useful and relevant SNPs and discard the vast number of non-informative SNPs. However, this is too time-consuming and not practical in GWA studies of high-dimensional data. The main aim of this paper is to propose a stratified sampling method for feature subspace selection to generate decision trees in a random forest for GWA high-dimensional data. Our idea is to design an equal-width discretization scheme for informativeness to divide SNPs into multiple groups. In feature subspace selection, we randomly select the same number of SNPs from each group and combine them to form a subspace to generate a decision tree. This stratified sampling procedure ensures that each subspace contains enough useful SNPs, while avoiding the very high computational cost of an exhaustive search for an optimal mtry and maintaining the randomness of a random forest. We employ two genome-wide SNP data sets (Parkinson case-control data comprising 408 803 SNPs and Alzheimer case-control data comprising 380 157 SNPs) to demonstrate that the proposed stratified sampling method is effective, and that it can generate a better random forest with higher accuracy and a lower error bound than those produced by Breiman's random forest generation method. For the Parkinson data, we also show some interesting genes identified by the method, which may be associated with neurological disorders and warrant further biological investigation.
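The stratified subspace-selection idea can be sketched as follows. The equal-width grouping mirrors the discretization scheme described above, but the informativeness scores, group count, and per-group sample size are illustrative assumptions, not the paper's settings:

```python
import random

def stratified_subspace(informativeness, n_groups=3, per_group=2, rng=random):
    """Stratified feature-subspace selection: discretize SNP informativeness
    scores into equal-width groups, then draw the same number of SNPs from
    each group, so every subspace contains some informative SNPs."""
    lo, hi = min(informativeness.values()), max(informativeness.values())
    width = (hi - lo) / n_groups or 1.0           # guard against zero range
    groups = [[] for _ in range(n_groups)]
    for snp, score in informativeness.items():
        idx = min(int((score - lo) / width), n_groups - 1)
        groups[idx].append(snp)
    subspace = []
    for g in groups:
        subspace.extend(rng.sample(g, min(per_group, len(g))))
    return subspace

# Toy informativeness scores for 30 hypothetical SNPs.
scores = {f"snp{i}": i / 10 for i in range(30)}
space = stratified_subspace(scores)
```

Each decision tree in the forest would call this once to obtain its subspace, in place of the plain uniform mtry draw.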
In Darwinian evolution, feedback from natural selection leads to biased mutations.
Caporale, Lynn Helena; Doyle, John
2013-12-01
Natural selection provides feedback through which information about the environment and its recurring challenges is captured, inherited, and accumulated within genomes in the form of variations that contribute to survival. The variation upon which natural selection acts is generally described as "random." Yet evidence has been mounting for decades, from such phenomena as mutation hotspots, horizontal gene transfer, and highly mutable repetitive sequences, that variation is far from the simplifying idealization of random processes as white (uniform in space and time and independent of the environment or context). This paper focuses on what is known about the generation and control of mutational variation, emphasizing that it is not uniform across the genome or in time, not unstructured with respect to survival, and is neither memoryless nor independent of the (also far from white) environment. We suggest that, as opposed to frequentist methods, Bayesian analysis could capture the evolution of nonuniform probabilities of distinct classes of mutation, and argue not only that the locations, styles, and timing of real mutations are not correctly modeled as generated by a white noise random process, but that such a process would be inconsistent with evolutionary theory. © 2013 New York Academy of Sciences.
Random-access optical-resolution photoacoustic microscopy using a digital micromirror device
Liang, Jinyang; Zhou, Yong; Winkler, Amy W.; Wang, Lidai; Maslov, Konstantin I.; Li, Chiye; Wang, Lihong V.
2013-01-01
We developed random-access optical-resolution photoacoustic microscopy using a digital micromirror device. This system can rapidly scan arbitrarily shaped regions of interest within a 40 × 40 μm² imaging area with a lateral resolution of 3.6 μm. To identify a region of interest, a global structural image is first acquired, then the selected region is scanned. The random-access ability was demonstrated by imaging two static samples, a carbon fiber cross and a monolayer of red blood cells, with an acquisition rate up to 4 kHz. The system was then used to monitor blood flow in vivo in real time within user-selected capillaries in a mouse ear. By imaging only the capillary of interest, the frame rate was increased by up to 9.2 times. PMID:23903111
PASIS: A Distributed Framework for Perpetually Available and Secure Information Systems
2005-07-01
Contents excerpt: 6.5.3 Random over-requesting vs. intelligent server selection (p. 76); 6.5.4 Which server selection algorithm to use; which r value... of 16 KB blocks (p. 98); Figure 8-4. Mean response time vs. total...
A random rule model of surface growth
NASA Astrophysics Data System (ADS)
Mello, Bernardo A.
2015-02-01
Stochastic models of surface growth are usually based on randomly choosing a substrate site at which to perform iterative steps, as in the etching model, Mello et al. (2001) [5]. In this paper I modify the etching model to perform a sequential, rather than random, substrate scan. Randomness is introduced not in the site selection but in the choice of the rule to be followed at each site. The change positively affects the study of dynamic and asymptotic properties, by reducing the finite-size effect and the short-time anomaly and by increasing the saturation time. It also has computational benefits: better use of the cache memory and the possibility of parallel implementation.
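The sequential-scan idea might be sketched as below. This is a hedged illustration, not the paper's model: the two-rule choice (apply an etching-style rule, or leave the site unchanged) and all names are assumptions made for the example.

```python
import random

def sequential_scan_sweep(heights, p_etch=0.5, rng=None):
    """One sequential sweep of a 1-D substrate with periodic boundaries.

    Every site is visited in order; the randomness lies only in which
    rule is applied at each site. Here the random choice is between an
    etching-style rule (lower higher neighbours, then lower the site)
    and a no-op.
    """
    rng = rng or random.Random()
    n = len(heights)
    for i in range(n):
        if rng.random() < p_etch:
            left, right = (i - 1) % n, (i + 1) % n
            # etching-style rule: neighbours higher than site i
            # are lowered to h[i], then site i is lowered by one
            if heights[left] > heights[i]:
                heights[left] = heights[i]
            if heights[right] > heights[i]:
                heights[right] = heights[i]
            heights[i] -= 1
    return heights
```

Because the scan order is fixed, consecutive sites are accessed contiguously in memory, which is the cache-friendliness the abstract refers to.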
Woitas-Slubowska, Donata; Hurnik, Elzbieta; Skarpańska-Stejnborn, Anna
2010-12-01
To determine the association between smoking status and leisure-time physical activity (LTPA), alcohol consumption, and socioeconomic status (SES) among Polish adults, 466 randomly selected men and women (aged 18-66 years) responded to an anonymous questionnaire regarding smoking, alcohol consumption, LTPA, and SES. Multiple logistic regression was used to examine the association of smoking status with six socioeconomic measures, level of LTPA, and frequency and type of alcohol consumed. Smokers were defined as individuals smoking occasionally or daily. The odds of being a smoker were 9 times (men) and 27 times (women) higher among respondents who drink alcohol several times a week or every day, in comparison to non-drinkers (p < 0.0001 and p < 0.0001). Among men with elementary/vocational education, the frequency of smoking was four times higher compared to those with high educational attainment (p = 0.007). Among women, we observed that students were the most frequent smokers. Female students were almost three times more likely to smoke than non-professional women, and two times more likely than physical workers (p = 0.018). The findings of this study indicated that among randomly selected Polish men and women aged 18-66, smoking and alcohol consumption tended to cluster. These results imply that intervention strategies need to target multiple risk factors simultaneously. The highest risk of smoking was observed among men with low education, female students, and both men and women drinking alcohol several times a week or every day. Information on subgroups with a high risk of smoking will help in planning future preventive strategies.
Dai, Huanping; Micheyl, Christophe
2010-01-01
A major concern when designing a psychophysical experiment is that participants may use another stimulus feature (“cue”) than that intended by the experimenter. One way to avoid this involves applying random variations to the corresponding feature across stimulus presentations, to make the “unwanted” cue unreliable. An important question facing experimenters who use this randomization (“roving”) technique is: How large should the randomization range be to ensure that participants cannot achieve a certain proportion correct (PC) by using the unwanted cue, while at the same time avoiding unnecessary interference of the randomization with task performance? Previous publications have provided formulas for the selection of adequate randomization ranges in yes-no and multiple-alternative, forced-choice tasks. In this article, we provide figures and tables, which can be used to select randomization ranges that are better suited to experiments involving a same-different, dual-pair, or oddity task. PMID:20139466
NASA Astrophysics Data System (ADS)
Yonezawa, A.; Kuroda, R.; Teramoto, A.; Obara, T.; Sugawa, S.
2014-03-01
We statistically evaluated the effective time constants of random telegraph noise (RTN) under various operation timings of in-pixel source-follower transistors, and discuss the dependence of the RTN time constants on the duty ratio (on/off ratio) of the MOSFET, which is controlled by the gate-to-source voltage (VGS). Under a general readout operation of a CMOS image sensor (CIS), the row-selected pixel source followers (SFs) turn on, while the non-selected pixel SFs operate at different bias conditions depending on the select-switch position; when the select switch is located between the SF driver and the column output line, the SF drivers nearly turn off. The duty ratio and cyclic period of the selected time of the SF driver depend on the operation timing determined by the column readout sequence. By changing the duty ratio from 1 to 7.6 × 10⁻³, the RTN time-constant ratio, (time to capture ⟨τc⟩)/(time to emission ⟨τe⟩), increased for a fraction of the MOSFETs, while the RTN amplitudes remained almost the same regardless of the duty ratio. In these MOSFETs, as the duty ratio decreased, ⟨τc⟩ increased, while ⟨τe⟩ decreased for the majority and increased for a minority. The same tendencies in the behavior of ⟨τc⟩ and ⟨τe⟩ were obtained when VGS was decreased. This indicates that the effective ⟨τc⟩ and ⟨τe⟩ converge to their off-state values as the duty ratio decreases. These results are important for noise reduction and for the detection and analysis of in-pixel SFs with RTN.
Chen, Minghao; Wei, Shiyou; Hu, Junyan; Yuan, Jing; Liu, Fenghua
2017-01-01
The present study aimed to undertake a review of available evidence assessing whether time-lapse imaging (TLI) has favorable outcomes for embryo incubation and selection compared with conventional methods in clinical in vitro fertilization (IVF). We searched PubMed, EMBASE, the Cochrane Library and ClinicalTrials.gov up to February 2017 for randomized controlled trials (RCTs) comparing TLI versus conventional methods. Studies that randomized either women or oocytes were included. For studies that randomized women, the primary outcomes were live birth and ongoing pregnancy, and the secondary outcomes were clinical pregnancy and miscarriage; for studies that randomized oocytes, the primary outcome was blastocyst rate, and the secondary outcome was good-quality embryo on Day 2/3. Subgroup analysis was conducted based on differences in incubation and embryo selection between groups. Ten RCTs were included, four that randomized oocytes and six that randomized women. For the oocyte-based review, the pooled analysis observed no significant difference between the TLI group and control group for blastocyst rate [relative risk (RR) 1.08, 95% CI 0.94-1.25, I2 = 0%, two studies, including 1154 embryos]. The quality of evidence was moderate for all outcomes in the oocyte-based review. For the woman-based review, only one study provided live birth rate (RR 1.23, 95% CI 1.06-1.44, I2 N/A, one study, including 842 women), and the pooled result showed no significant difference in ongoing pregnancy rate (RR 1.04, 95% CI 0.80-1.36, I2 = 59%, four studies, including 1403 women) between the two groups. The quality of the evidence was low or very low for all outcomes in the woman-based review. Currently there is insufficient evidence to support that TLI is superior to conventional methods for human embryo incubation and selection. In consideration of the limitations and flaws of the included studies, more well-designed RCTs are still needed to comprehensively evaluate the effectiveness of clinical TLI use.
A Time-Series Water Level Forecasting Model Based on Imputation and Variable Selection Method.
Yang, Jun-He; Cheng, Ching-Hsue; Chan, Chia-Pan
2017-01-01
Reservoirs are important for households and impact the national economy. This paper proposes a time-series forecasting model that first estimates missing values and then performs variable selection to forecast the reservoir's water level. This study collected data from the Taiwan Shimen Reservoir as well as daily atmospheric data from 2008 to 2015. The two datasets were concatenated, ordered in time, into an integrated research dataset. The proposed time-series forecasting model has three foci. First, this study uses five imputation methods to estimate missing values rather than directly deleting them. Second, we identified the key variables via factor analysis and then deleted the unimportant variables sequentially via the variable selection method. Finally, the proposed model uses a Random Forest to build the forecasting model of the reservoir's water level, and compares it with the listed methods in terms of forecasting error. The experimental results indicate that the Random Forest forecasting model, when applied with variable selection over the full set of variables, has better forecasting performance than the listed models. In addition, the experiments show that the proposed variable selection can help the five forecast methods used here improve their forecasting capability.
Li, Xiao-Zhou; Li, Song-Sui; Zhuang, Jun-Ping; Chan, Sze-Chun
2015-09-01
A semiconductor laser with distributed feedback from a fiber Bragg grating (FBG) is investigated for random bit generation (RBG). The feedback perturbs the laser to emit chaotically with the intensity being sampled periodically. The samples are then converted into random bits by a simple postprocessing of self-differencing and selecting bits. Unlike a conventional mirror that provides localized feedback, the FBG provides distributed feedback which effectively suppresses the information of the round-trip feedback delay time. Randomness is ensured even when the sampling period is commensurate with the feedback delay between the laser and the grating. Consequently, in RBG, the FBG feedback enables continuous tuning of the output bit rate, reduces the minimum sampling period, and increases the number of bits selected per sample. RBG is experimentally investigated at a sampling period continuously tunable from over 16 ns down to 50 ps, while the feedback delay is fixed at 7.7 ns. By selecting 5 least-significant bits per sample, output bit rates from 0.3 to 100 Gbps are achieved with randomness examined by the National Institute of Standards and Technology test suite.
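The post-processing step named in the abstract, self-differencing of consecutive samples followed by keeping a few least-significant bits, can be sketched as below. This is an illustration only: the 8-bit sample width, the wrap-around difference, and the function name are assumptions, not details from the paper.

```python
def bits_from_samples(samples, n_lsb=5, mask=0xFF):
    """Turn periodically sampled chaotic intensities into random bits.

    samples: sequence of integer ADC readings (assumed 8-bit here).
    Each consecutive pair is self-differenced (with wrap-around), and
    the n_lsb least significant bits of the difference are kept.
    """
    bits = []
    for prev, cur in zip(samples, samples[1:]):
        diff = (cur - prev) & mask      # wrap-around self-difference
        for k in range(n_lsb):          # keep n_lsb least significant bits
            bits.append((diff >> k) & 1)
    return bits
```

With 5 bits kept per sample, the output bit rate is five times the sampling rate, which is how bit rates well above the sampling rate are reached.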
Hansen, Adam G.; Beauchamp, David A.
2014-01-01
Most predators eat only a subset of possible prey. However, studies evaluating diet selection rarely measure prey availability in a manner that accounts for temporal–spatial overlap with predators, the sensory mechanisms employed to detect prey, and constraints on prey capture. We evaluated the diet selection of cutthroat trout (Oncorhynchus clarkii) feeding on a diverse planktivore assemblage in Lake Washington to test the hypothesis that the diet selection of piscivores would reflect random (opportunistic) as opposed to non-random (targeted) feeding, after accounting for predator–prey overlap, visual detection and capture constraints. Diets of cutthroat trout were sampled in autumn 2005, when the abundance of transparent, age-0 longfin smelt (Spirinchus thaleichthys) was low, and 2006, when the abundance of smelt was nearly seven times higher. Diet selection was evaluated separately using depth-integrated and depth-specific (accounting for predator–prey overlap) prey abundance. The abundance of different prey was then adjusted for differences in detectability and vulnerability to predation to see whether these factors could explain diet selection. In 2005, cutthroat trout fed non-randomly by selecting against the smaller, transparent age-0 longfin smelt, but for the larger age-1 longfin smelt. After adjusting prey abundance for visual detection and capture, cutthroat trout fed randomly. In 2006, depth-integrated and depth-specific abundance explained the diets of cutthroat trout well, indicating random feeding. Feeding became non-random after adjusting for visual detection and capture. Cutthroat trout selected strongly for age-0 longfin smelt, but against similar-sized threespine stickleback (Gasterosteus aculeatus) and larger age-1 longfin smelt in 2006. Overlap with juvenile sockeye salmon (O. nerka) was minimal in both years, and sockeye salmon were rare in the diets of cutthroat trout. The direction of the shift between random and non-random selection depended on the presence of a weak versus a strong year class of age-0 longfin smelt. These fish were easy to catch, but hard to see. When their density was low, poor detection could explain their rarity in the diet. When their density was high, poor detection was compensated by higher encounter rates with cutthroat trout, sufficient to elicit a targeted feeding response. The nature of the feeding selectivity of a predator can be highly dependent on fluctuations in the abundance and suitability of key prey.
Ray tracing method for simulation of laser beam interaction with random packings of powders
NASA Astrophysics Data System (ADS)
Kovalev, O. B.; Kovaleva, I. O.; Belyaev, V. V.
2018-03-01
Selective laser sintering is a rapid manufacturing technology in which a free-form solid object is created by selectively fusing successive layers of powder with a laser. This study is motivated by the currently insufficient understanding of the processes and phenomena of selective laser melting of powders, whose time scales differ by orders of magnitude. To construct random packings from mono- and polydispersed solid spheres, a generation algorithm based on the discrete element method is used. A numerical ray-tracing method is proposed to simulate the interaction of laser radiation with a random bulk packing of spherical particles and to predict the optical properties of the granular layer, namely the extinction and absorption coefficients, as functions of the optical properties of the powder material.
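The geometric core of such a ray tracer is the intersection test between a ray and each packed sphere. The sketch below is a generic illustration of that test, not the authors' code; names and the normalized-direction assumption are ours.

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Smallest positive ray parameter t at which the ray hits the
    sphere, or None if it misses. `direction` is assumed normalized,
    so the quadratic's leading coefficient is 1.
    """
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0:
        return None                      # ray misses the sphere
    t = (-b - math.sqrt(disc)) / 2.0     # nearer intersection
    return t if t > 0 else None
```

A full simulation would trace many rays through the packing, splitting energy into reflected and refracted parts at each hit to accumulate absorption per particle.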
Altenburg, Teatske M; Chinapaw, Mai J M; Singh, Amika S
2016-10-01
Evidence suggests that physical activity is positively related to several aspects of cognitive functioning in children, among which is selective attention. To date, no information is available on the optimal frequency of physical activity on cognitive functioning in children. The current study examined the acute effects of one and two bouts of moderate-intensity physical activity on children's selective attention. Randomized controlled trial (ISRCTN97975679). Thirty boys and twenty-six girls, aged 10-13 years, were randomly assigned to three conditions: (A) sitting all morning working on simulated school tasks; (B) one 20-min physical activity bout after 90 min; and (C) two 20-min physical activity bouts, i.e. at the start and after 90 min. Selective attention was assessed at five time points during the morning (i.e. at baseline and after 20, 110, 130 and 220 min), using the 'Sky Search' subtest of the 'Test of Selective Attention in Children'. We used GEE analysis to examine differences in Sky Search scores between the three experimental conditions, adjusting for school, baseline scores, self-reported screen time and time spent in sports. Children who performed two 20-min bouts of moderate-intensity physical activity had significantly better Sky Search scores compared to children who performed one physical activity bout or remained seated the whole morning (B = -0.26; 95% CI = [-0.52; -0.00]). Our findings support the importance of repeated physical activity during the school day for beneficial effects on selective attention in children. Copyright © 2015 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
Ashcraft, Adam; Fernández-Val, Iván; Lang, Kevin
2012-01-01
Miscarriage, even if biologically random, is not socially random. Willingness to abort reduces miscarriage risk. Because abortions are favorably selected among pregnant teens, those miscarrying are less favorably selected than those giving birth or aborting but more favorably selected than those giving birth. Therefore, using miscarriage as an instrument is biased towards a benign view of teen motherhood while OLS on just those giving birth or miscarrying has the opposite bias. We derive a consistent estimator that reduces to a weighted average of OLS and IV when outcomes are independent of abortion timing. Estimated effects are generally adverse but modest. PMID:24443589
Pribenszky, Csaba; Nilselid, Anna-Maria; Montag, Markus
2017-11-01
Embryo evaluation and selection is fundamental in clinical IVF. Time-lapse follow-up of embryo development comprises undisturbed culture and the application of the visual information to support embryo evaluation. A meta-analysis of randomized controlled trials was carried out to study whether time-lapse monitoring with the prospective use of a morphokinetic algorithm for selection of embryos improves overall clinical outcome (pregnancy, early pregnancy loss, stillbirth and live birth rate) compared with embryo selection based on single time-point morphology in IVF cycles. The meta-analysis of five randomized controlled trials (n = 1637) showed that the application of time-lapse monitoring was associated with a significantly higher ongoing clinical pregnancy rate (51.0% versus 39.9%), with a pooled odds ratio of 1.542 (P < 0.001), significantly lower early pregnancy loss (15.3% versus 21.3%; OR 0.662; P = 0.019) and a significantly increased live birth rate (44.2% versus 31.3%; OR 1.668; P = 0.009). The difference in stillbirth was not significant between groups (4.7% versus 2.4%). Quality of the evidence was moderate to low owing to inconsistencies across the studies. Selective application and variability were also limitations. Although time-lapse is shown to significantly improve overall clinical outcome, further high-quality evidence is needed before universal conclusions can be drawn. Copyright © 2017 The Author(s). Published by Elsevier Ltd. All rights reserved.
Bayesian dynamic modeling of time series of dengue disease case counts.
Martínez-Bello, Daniel Adyro; López-Quílez, Antonio; Torres-Prieto, Alexander
2017-07-01
The aim of this study is to model the association between weekly time series of dengue case counts and meteorological variables in a high-incidence city of Colombia, applying Bayesian hierarchical dynamic generalized linear models over the period January 2008 to August 2015, and additionally to evaluate the models' short-term performance for predicting dengue cases. The methodology uses dynamic Poisson log-link models including constant or time-varying coefficients for the meteorological variables. Calendar effects were modeled using constant or first- or second-order random walk time-varying coefficients. The meteorological variables were modeled using constant coefficients and first-order random walk time-varying coefficients. We applied Markov chain Monte Carlo simulation for parameter estimation, and the deviance information criterion (DIC) for model selection. We assessed the short-term predictive performance of the selected final model at several time points within the study period using the mean absolute percentage error. The best model included first-order random walk time-varying coefficients for the calendar trend and for the meteorological variables. Beyond the computational challenges, interpreting the results requires a complete analysis of the dengue time series with respect to the parameter estimates of the meteorological effects. We found small mean absolute percentage errors for one- or two-week out-of-sample predictions at most prediction points, associated with low-volatility periods in the dengue counts. We discuss the advantages and limitations of dynamic Poisson models for studying the association between time series of dengue disease and meteorological variables. 
The key conclusion of the study is that dynamic Poisson models account for the dynamic nature of the variables involved in the modeling of time series of dengue disease, producing useful models for decision-making in public health.
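The predictive check used above, the mean absolute percentage error, is simple to state in code. The sketch below is illustrative only; the guard against zero actual counts is our assumption, not a detail taken from the study.

```python
def mape(actual, predicted):
    """Mean absolute percentage error, in percent.

    Pairs whose actual value is zero are skipped to avoid division
    by zero (an assumption of this sketch; zero weekly case counts
    would otherwise make the metric undefined).
    """
    pairs = [(a, p) for a, p in zip(actual, predicted) if a != 0]
    return 100.0 * sum(abs(a - p) / abs(a) for a, p in pairs) / len(pairs)
```

Low MAPE at a prediction point indicates the one- or two-week-ahead forecasts track the observed counts closely, as reported for the low-volatility periods.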
ERIC Educational Resources Information Center
Ferrari, Lea; Nota, Laura; Soresi, Salvatore
2012-01-01
A structured intervention of 10 didactic units was devised to foster adolescents' time perspective and career decidedness. The study was conducted with 50 adolescents selected from a group of 624; 25 of the participants were randomly assigned to the control group and 25 to the experimental group. They were selected according to…
A Survey of Factors Influencing High School Start Times
ERIC Educational Resources Information Center
Wolfson, Amy R.; Carskadon, Mary A.
2005-01-01
The present study surveyed high school personnel regarding high school start times, factors influencing school start times, and decision making around school schedules. Surveys were analyzed from 345 secondary schools selected at random from the National Center for Educational Statistics database. Factors affecting reported start times included…
Silvis, Alexander; Ford, W. Mark; Britzke, Eric R.
2015-01-01
Bat day-roost selection often is described through comparisons of day-roosts with randomly selected, and assumed unused, trees. Relatively few studies, however, look at patterns of multi-year selection or compare day-roosts used across years. We explored day-roost selection using 2 years of roost selection data for female northern long-eared bats (Myotis septentrionalis) on the Fort Knox Military Reservation, Kentucky, USA. We compared characteristics of randomly selected non-roost trees and day-roosts using a multinomial logistic model and day-roost species selection using chi-squared tests. We found that factors differentiating day-roosts from non-roosts and day-roosts between years varied. Day-roosts differed from non-roosts in the first year of data in all measured factors, but only in size and decay stage in the second year. Between years, day-roosts differed in size and canopy position, but not decay stage. Day-roost species selection was non-random and did not differ between years. Although bats used multiple trees, our results suggest that there were additional unused trees that were suitable as roosts at any time. Day-roost selection pattern descriptions will be inadequate if based only on a single year of data, and inferences of roost selection based only on comparisons of roost to non-roosts should be limited.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bromberger, Seth A.; Klymko, Christine F.; Henderson, Keith A.
Betweenness centrality is a graph statistic used to find vertices that participate in a large number of shortest paths in a graph. This centrality measure is commonly used in path and network interdiction problems, and its complete form requires the calculation of all-pairs shortest paths for each vertex. This leads to a time complexity of O(|V||E|), which is impractical for large graphs. Estimation of betweenness centrality has focused on performing shortest-path calculations from a subset of randomly selected vertices. This reduces the complexity of the centrality estimation to O(|S||E|), |S| < |V|, which can be scaled appropriately based on the computing resources available. An estimation strategy that uses random selection of vertices for seed selection is fast and simple to implement, but may not provide optimal estimation of betweenness centrality when the number of samples is constrained. Our experimentation has identified a number of alternate seed-selection strategies that provide lower error than random selection in common scale-free graphs. These strategies are discussed and experimental results are presented.
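The sampled estimation described above can be sketched with Brandes-style dependency accumulation from a subset of seed vertices rather than all of |V|. This is a generic unweighted-graph illustration, not the report's implementation; the random-seed helper shows the baseline strategy the text compares against.

```python
from collections import deque
import random

def approx_betweenness(adj, sources):
    """Estimate betweenness centrality from a subset of seed vertices.

    adj: dict vertex -> list of neighbours (unweighted, undirected).
    sources: iterable of seed vertices; using all vertices recovers
    the exact (unnormalized) betweenness.
    """
    bc = {v: 0.0 for v in adj}
    for s in sources:
        # single-source BFS with shortest-path counting
        sigma = {v: 0 for v in adj}; sigma[s] = 1
        dist = {v: -1 for v in adj}; dist[s] = 0
        preds = {v: [] for v in adj}
        order, q = [], deque([s])
        while q:
            v = q.popleft(); order.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    q.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    preds[w].append(v)
        # back-propagate dependencies (Brandes' accumulation)
        delta = {v: 0.0 for v in adj}
        for w in reversed(order):
            for v in preds[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return bc

def random_seeds(adj, k, rng=None):
    """Baseline seed selection: k vertices chosen uniformly at random."""
    rng = rng or random.Random()
    return rng.sample(list(adj), k)
```

Replacing `random_seeds` with a smarter strategy (e.g. degree-biased selection) changes only which sources are passed in, which is exactly the design space the report explores.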
Baird, Rachel; Maxwell, Scott E
2016-06-01
Time-varying predictors in multilevel models are a useful tool for longitudinal research, whether they are the research variable of interest or they are controlling for variance to allow greater power for other variables. However, standard recommendations to fix the effect of time-varying predictors may make an assumption that is unlikely to hold in reality and may influence results. A simulation study illustrates that treating the time-varying predictor as fixed may allow analyses to converge, but the analyses have poor coverage of the true fixed effect when the time-varying predictor has a random effect in reality. A second simulation study shows that treating the time-varying predictor as random may have poor convergence, except when allowing negative variance estimates. Although negative variance estimates are uninterpretable, results of the simulation show that estimates of the fixed effect of the time-varying predictor are as accurate for these cases as for cases with positive variance estimates, and that treating the time-varying predictor as random and allowing negative variance estimates performs well whether the time-varying predictor is fixed or random in reality. Because of the difficulty of interpreting negative variance estimates, 2 procedures are suggested for selection between fixed-effect and random-effect models: comparing between fixed-effect and constrained random-effect models with a likelihood ratio test or fitting a fixed-effect model when an unconstrained random-effect model produces negative variance estimates. The performance of these 2 procedures is compared. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
Electrical Evaluation of RCA MWS5001D Random Access Memory, Volume 5, Appendix D
NASA Technical Reports Server (NTRS)
Klute, A.
1979-01-01
The electrical characterization and qualification test results are presented for the RCA MWS 5001D random access memory. The tests included functional tests, AC and DC parametric tests, AC parametric worst-case pattern selection test, determination of worst-case transition for setup and hold times, and a series of schmoo plots. Average input high current, worst case input high current, output low current, and data setup time are some of the results presented.
Interacting particle systems on graphs
NASA Astrophysics Data System (ADS)
Sood, Vishal
In this dissertation, the dynamics of socially or biologically interacting populations are investigated. The individual members of the population are treated as particles that interact via links on a social or biological network represented as a graph. The effect of the structure of the graph on the properties of the interacting particle system is studied using statistical physics techniques. In the first chapter, the central concepts of graph theory and social and biological networks are presented. Next, interacting particle systems drawn from physics, mathematics and biology are discussed in the second chapter. In the third chapter, the random walk on a graph is studied. The mean time for a random walk to traverse between two arbitrary sites of a random graph is evaluated. Using an effective medium approximation it is found that the mean first-passage time between pairs of sites, as well as all moments of this first-passage time, are insensitive to the density of links in the graph. The inverse of the mean first-passage time varies non-monotonically with the density of links near the percolation transition of the random graph. Much of the behavior can be understood by simple heuristic arguments. Evolutionary dynamics, by which mutants overspread an otherwise uniform population on heterogeneous graphs, are studied in the fourth chapter. Such a process underlies epidemic propagation, the emergence of fads, social cooperation, and the invasion of an ecological niche by a new species. The first part of this chapter is devoted to neutral dynamics, in which the mutant genotype does not have a selective advantage over the resident genotype. The time to extinction of one of the two genotypes is derived. In the second part of this chapter, selective advantage or fitness is introduced such that the mutant genotype has a higher birth rate or a lower death rate. 
This selective advantage leads to a dynamical competition in which selection dominates for large populations, while for small populations the dynamics are similar to the neutral case. The likelihood for the fitter mutants to drive the resident genotype to extinction is calculated.
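The mean first-passage time discussed in the abstract above can also be estimated empirically by direct simulation. A minimal Monte Carlo sketch (the graph model here is a ring backbone plus Erdős–Rényi random links — the ring simply guarantees connectivity for the illustration; function names and parameters are hypothetical):

```python
import random

def random_graph(n, p, rng):
    """Ring backbone plus Erdos-Renyi random links; the ring
    guarantees connectivity for this illustration."""
    adj = {v: {(v - 1) % n, (v + 1) % n} for v in range(n)}
    for i in range(n):
        for j in range(i + 2, n):
            if rng.random() < p:
                adj[i].add(j)
                adj[j].add(i)
    return {v: sorted(nb) for v, nb in adj.items()}

def mean_first_passage(adj, source, target, trials, rng):
    """Monte Carlo estimate of the mean first-passage time of a
    simple random walk from source to target."""
    total = 0
    for _ in range(trials):
        v, steps = source, 0
        while v != target:
            v = rng.choice(adj[v])  # hop to a uniformly random neighbor
            steps += 1
        total += steps
    return total / trials

rng = random.Random(1)
g = random_graph(40, 0.1, rng)
print(mean_first_passage(g, 0, 20, 200, rng))
```

Averaging such estimates over many source-target pairs at several link densities would reproduce the kind of density (in)sensitivity analysis the dissertation describes analytically.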
Scott, J.C.
1990-01-01
Computer software was written to randomly select sites for a ground-water-quality sampling network. The software uses digital cartographic techniques and subroutines from a proprietary geographic information system. The report presents the approaches, computer software, and sample applications. It is often desirable to collect ground-water-quality samples from various areas in a study region that have different values of a spatial characteristic, such as land-use or hydrogeologic setting. A stratified network can be used for testing hypotheses about relations between spatial characteristics and water quality, or for calculating statistical descriptions of water-quality data that account for variations that correspond to the spatial characteristic. In the software described, a study region is subdivided into areal subsets that have a common spatial characteristic to stratify the population into several categories from which sampling sites are selected. Different numbers of sites may be selected from each category of areal subsets. A population of potential sampling sites may be defined by either specifying a fixed population of existing sites, or by preparing an equally spaced population of potential sites. In either case, each site is identified with a single category, depending on the value of the spatial characteristic of the areal subset in which the site is located. Sites are selected from one category at a time. One of two approaches may be used to select sites. Sites may be selected randomly, or the areal subsets in the category can be grouped into cells and sites selected randomly from each cell.
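The stratified selection procedure described above can be sketched in a few lines. A hypothetical illustration (the site list, category names, and per-category counts are made up, not taken from the report's software):

```python
import random

def stratified_sites(sites, n_per_category, seed=0):
    """Select random sampling sites from each stratum (category).

    sites: list of (site_id, category) pairs, where the category is
    the value of the spatial characteristic (e.g. land use) of the
    areal subset containing the site.
    n_per_category: dict mapping category -> number of sites to pick.
    """
    rng = random.Random(seed)
    by_cat = {}
    for site_id, cat in sites:
        by_cat.setdefault(cat, []).append(site_id)
    selected = {}
    for cat, k in n_per_category.items():
        pool = by_cat.get(cat, [])
        # sites are selected from one category at a time
        selected[cat] = rng.sample(pool, min(k, len(pool)))
    return selected

# e.g. wells tagged with a land-use category
sites = [(i, "urban" if i % 3 == 0 else "agricultural") for i in range(30)]
print(stratified_sites(sites, {"urban": 3, "agricultural": 5}))
```

The report's second approach (gridding each category into cells and drawing one site per cell) would replace the single `rng.sample` call with a per-cell draw.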
NASA Astrophysics Data System (ADS)
Rodriguez, Nicolas B.; McGuire, Kevin J.; Klaus, Julian
2017-04-01
Transit time distributions, residence time distributions and StorAge Selection functions are fundamental integrated descriptors of water storage, mixing, and release in catchments. In this contribution, we determined these time-variant functions in four neighboring forested catchments in the H.J. Andrews Experimental Forest, Oregon, USA, by employing a two-year time series of ¹⁸O in precipitation and discharge. Previous studies in these catchments assumed stationary, exponentially distributed transit times, and complete mixing/random sampling to explore the influence of various catchment properties on the mean transit time. Here we relaxed such assumptions to relate transit time dynamics and the variability of StorAge Selection functions to catchment characteristics, catchment storage, and meteorological forcing seasonality. Conceptual models of the catchments, consisting of two reservoirs combined in series-parallel, were calibrated to discharge and stable isotope tracer data. We assumed randomly sampled/fully mixed conditions for each reservoir, which resulted in an incompletely mixed system overall. Based on the results, we solved the Master Equation, which describes the dynamics of water ages in storage and in catchment outflows. Consistent across all catchments, we found that transit times were generally shorter during wet periods, indicating the contribution of shallow storage (soil, saprolite) to discharge. During extended dry periods, transit times increased significantly, indicating the contribution of deeper storage (bedrock) to discharge. Our work indicated that the strong seasonality of precipitation impacted transit times by leading to a dynamic selection of stored water ages, whereas catchment size was not a control on transit times. In general, this work showed the usefulness of using time-variant transit times with conceptual models and confirmed the existence of the catchment age mixing behaviors emerging from other similar studies.
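Under the fully mixed/randomly sampled assumption mentioned above, a single reservoir at steady state has an exponential transit time distribution whose mean equals storage divided by discharge. A small numerical check of that relationship (the storage and discharge values are illustrative, not from the study):

```python
import math

def exponential_ttd(t, tau):
    """Steady-state transit time density of a fully mixed (randomly
    sampled) reservoir with mean transit time tau = storage / discharge."""
    return math.exp(-t / tau) / tau

S, Q = 200.0, 2.0   # illustrative storage (mm) and discharge (mm/day)
tau = S / Q         # mean transit time: 100 days

# check the density integrates to ~1 and has mean ~tau
dt, T = 0.1, 2000.0
ts = [i * dt for i in range(int(T / dt))]
mass = sum(exponential_ttd(t, tau) * dt for t in ts)
mean = sum(t * exponential_ttd(t, tau) * dt for t in ts)
print(round(mass, 3), round(mean, 1))
```

Relaxing the stationarity assumption, as the study does, amounts to letting this distribution vary in time with storage and forcing rather than keeping tau fixed.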
Yuan, Jing; Liu, Fenghua
2017-01-01
Objective The present study aimed to review the available evidence on whether time-lapse imaging (TLI) has more favorable outcomes for embryo incubation and selection than conventional methods in clinical in vitro fertilization (IVF). Methods PubMed, EMBASE, the Cochrane Library and ClinicalTrials.gov were searched up to February 2017 for randomized controlled trials (RCTs) comparing TLI with conventional methods. Studies that randomized either women or oocytes were included. For studies that randomized women, the primary outcomes were live birth and ongoing pregnancy, and the secondary outcomes were clinical pregnancy and miscarriage; for studies that randomized oocytes, the primary outcome was blastocyst rate and the secondary outcome was good-quality embryo on Day 2/3. Subgroup analysis was conducted based on differences in incubation and embryo selection between groups. Results Ten RCTs were included, four that randomized oocytes and six that randomized women. For the oocyte-based review, the pooled analysis observed no significant difference between the TLI and control groups in blastocyst rate [relative risk (RR) 1.08, 95% CI 0.94–1.25, I2 = 0%, two studies, including 1154 embryos]. The quality of evidence was moderate for all outcomes in the oocyte-based review. For the woman-based review, only one study provided the live birth rate (RR 1.23, 95% CI 1.06–1.44, I2 N/A, one study, including 842 women), and the pooled result showed no significant difference in ongoing pregnancy rate (RR 1.04, 95% CI 0.80–1.36, I2 = 59%, four studies, including 1403 women) between the two groups. The quality of the evidence was low or very low for all outcomes in the woman-based review. Conclusions Currently there is insufficient evidence that TLI is superior to conventional methods for human embryo incubation and selection. In consideration of the limitations and flaws of the included studies, more well-designed RCTs are needed to comprehensively evaluate the effectiveness of clinical TLI use. PMID:28570713
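Pooled relative risks of the kind reported in this abstract are commonly obtained by fixed-effect inverse-variance weighting of log RRs, with the standard error of each log RR recovered from its confidence interval. A hedged sketch with made-up study data (these are not the review's actual numbers):

```python
import math

def pooled_rr(studies):
    """Fixed-effect inverse-variance pooling of relative risks.

    studies: list of (rr, ci_low, ci_high) with 95% CIs; the standard
    error of log RR is recovered from the CI width.
    """
    z = 1.96
    wsum = wlog = 0.0
    for rr, lo, hi in studies:
        se = (math.log(hi) - math.log(lo)) / (2 * z)
        w = 1.0 / se ** 2           # inverse-variance weight
        wsum += w
        wlog += w * math.log(rr)
    log_pooled = wlog / wsum
    se_pooled = math.sqrt(1.0 / wsum)
    return (math.exp(log_pooled),
            math.exp(log_pooled - z * se_pooled),
            math.exp(log_pooled + z * se_pooled))

# illustrative (made-up) RRs with 95% CIs from three studies
print(pooled_rr([(1.10, 0.90, 1.34), (0.95, 0.70, 1.29), (1.05, 0.85, 1.30)]))
```

A random-effects pooling, as would be preferable at the I2 = 59% heterogeneity the review reports, would add a between-study variance term to each weight.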
Fixation probability in a two-locus intersexual selection model.
Durand, Guillermo; Lessard, Sabin
2016-06-01
We study a two-locus model of intersexual selection in a finite haploid population reproducing according to a discrete-time Moran model with a trait locus expressed in males and a preference locus expressed in females. We show that the probability of ultimate fixation of a single mutant allele for a male ornament introduced at random at the trait locus given any initial frequency state at the preference locus is increased by weak intersexual selection and recombination, weak or strong. Moreover, this probability exceeds the initial frequency of the mutant allele even in the case of a costly male ornament if intersexual selection is not too weak. On the other hand, the probability of ultimate fixation of a single mutant allele for a female preference towards a male ornament introduced at random at the preference locus is increased by weak intersexual selection and weak recombination if the female preference is not costly, and is strong enough in the case of a costly male ornament. The analysis relies on an extension of the ancestral recombination-selection graph for samples of haplotypes to take into account events of intersexual selection, while the symbolic calculation of the fixation probabilities is made possible in a reasonable time by an optimizing algorithm. Copyright © 2016 Elsevier Inc. All rights reserved.
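For intuition, fixation probabilities in Moran models are straightforward to estimate by simulation. The sketch below is the standard one-locus haploid Moran process (not the paper's two-locus intersexual selection model); in the neutral case (s = 0) the fixation probability of a single mutant should be close to 1/n:

```python
import random

def moran_fixation(n, s, trials, seed=0):
    """Monte Carlo fixation probability of a single mutant with
    selective advantage s in a haploid Moran model of size n."""
    rng = random.Random(seed)
    fixed = 0
    for _ in range(trials):
        i = 1  # one mutant introduced at random
        while 0 < i < n:
            # one individual reproduces, chosen with fitness-weighted
            # probability; one individual dies, chosen uniformly
            p_rep_mut = (1 + s) * i / ((1 + s) * i + (n - i))
            rep_mut = rng.random() < p_rep_mut
            die_mut = rng.random() < i / n
            if rep_mut and not die_mut:
                i += 1
            elif (not rep_mut) and die_mut:
                i -= 1
        fixed += (i == n)
    return fixed / trials

# neutral case: fixation probability should be near 1/n = 0.05
print(moran_fixation(20, 0.0, 5000))
```

The paper's result — that weak intersexual selection can push the fixation probability of an ornament allele above its initial frequency — corresponds here to the s > 0 regime, but with the selective pressure generated by female preference rather than a fixed fitness parameter.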
Spectrum of walk matrix for Koch network and its application
NASA Astrophysics Data System (ADS)
Xie, Pinchen; Lin, Yuan; Zhang, Zhongzhi
2015-06-01
Various structural and dynamical properties of a network are encoded in the eigenvalues of the walk matrix describing random walks on the network. In this paper, we study the spectrum of the walk matrix of the Koch network, which displays the prominent scale-free and small-world features. Utilizing the particular architecture of the network, we obtain all the eigenvalues and their corresponding multiplicities. Based on the link between the eigenvalues of the walk matrix and the random target access time, defined as the expected time for a walker to go from an arbitrary node to another node selected randomly according to the steady-state distribution, we then derive an explicit solution for the random target access time for random walks on the Koch network. Finally, we corroborate our computation of the eigenvalues by enumerating spanning trees in the Koch network, using the connection between eigenvalues and spanning trees, where a spanning tree of a network is a subgraph of the network that is a tree containing all the nodes.
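The link between the walk-matrix spectrum and the random target access time is Kemeny's constant: the access time equals the sum of 1/(1 − λᵢ) over the non-unit eigenvalues of the walk matrix. A small numerical sketch on a 4-cycle (chosen for easy hand-checking, not the Koch network):

```python
import numpy as np

def random_target_access_time(adj):
    """Random target access time (Kemeny's constant) from the
    eigenvalues of the walk matrix W = D^{-1} A."""
    A = np.asarray(adj, dtype=float)
    d = A.sum(axis=1)
    # W = D^{-1} A is similar to the symmetric D^{-1/2} A D^{-1/2},
    # so its spectrum is real and can be taken from the symmetric form
    S = A / np.sqrt(np.outer(d, d))
    lam = np.sort(np.linalg.eigvalsh(S))[::-1]  # lam[0] == 1
    return sum(1.0 / (1.0 - l) for l in lam[1:])

# 4-cycle: walk-matrix eigenvalues are 1, 0, 0, -1,
# so the access time is 1/(1-0) + 1/(1-0) + 1/(1-(-1)) = 2.5
C4 = [[0, 1, 0, 1],
      [1, 0, 1, 0],
      [0, 1, 0, 1],
      [1, 0, 1, 0]]
print(random_target_access_time(C4))
```

For the Koch network the paper obtains the same quantity in closed form by deriving the full spectrum analytically instead of numerically.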
Selection of the simplest RNA that binds isoleucine
LOZUPONE, CATHERINE; CHANGAYIL, SHANKAR; MAJERFELD, IRENE; YARUS, MICHAEL
2003-01-01
We have identified the simplest RNA binding site for isoleucine using selection-amplification (SELEX), by shrinking the size of the randomized region until affinity selection is extinguished. Such a protocol can be useful because selection does not necessarily make the simplest active motif most prominent, as is often assumed. We find an isoleucine binding site that behaves exactly as predicted for the site that requires the fewest nucleotides. This UAUU motif (16 highly conserved positions; 27 total) is also the most abundant site in successful selections on short random tracts. The UAUU site, now isolated independently at least 63 times, is a small asymmetric internal loop. Conserved loop sequences include isoleucine codon and anticodon triplets, whose nucleotides are required for amino acid binding. This reproducible association between isoleucine and its coding sequences supports the idea that the genetic code is, at least in part, a stereochemical residue of the most easily isolated RNA–amino acid binding structures. PMID:14561881
Yang, Lanlin; Cai, Sufen; Zhang, Shuoping; Kong, Xiangyi; Gu, Yifan; Lu, Changfu; Dai, Jing; Gong, Fei; Lu, Guangxiu; Lin, Ge
2018-05-01
Does single cleavage-stage (Day 3) embryo transfer using a time-lapse (TL) hierarchical classification model achieve comparable ongoing pregnancy rates (OPR) to single blastocyst (Day 5) transfer by conventional morphological (CM) selection? Day 3 single embryo transfer (SET) with a hierarchical classification model had a significantly lower OPR compared with Day 5 SET with CM selection. Cleavage-stage SET is an alternative to blastocyst SET. Time-lapse imaging assists better embryo selection, based on studies of pregnancy outcomes when adding time-lapse imaging to CM selection at the cleavage or blastocyst stage. This single-centre, randomized, open-label, active-controlled, non-inferiority study included 600 women between October 2015 and April 2017. Eligible patients were Chinese females, aged ≤36 years, who were undergoing their first or second fresh IVF cycle using their own oocytes, and who had FSH levels ≤12 IU/mL on Day 3 of the cycle and 10 or more oocytes retrieved. Patients who had underlying uterine conditions, oocyte donation, recurrent pregnancy loss, abnormal oocytes or <6 normally fertilized embryos (2PN) were excluded from the study participation. Patients were randomized 1:1 to either the cleavage-stage SET with a time-lapse hierarchical classification model for selection (D3 + TL) or blastocyst SET with CM selection (D5 + CM). All normally fertilized zygotes were cultured in Primo Vision. The study was conducted at a tertiary IVF centre (CITIC-Xiangya) and OPR was the primary outcome. A total of 600 patients were randomized to the two groups, among which 585 (D3 + TL = 290, D5 + CM = 295) were included in the Modified-intention-to-treat (mITT) population and 517 (D3 + TL = 261, D5 + CM = 256) were included in the PP population. In the per protocol (PP) population, OPR was significantly lower in the D3 group (59.4%, 155/261) than in the D5 group (68.4%, 175/256) (difference: -9.0%, 95% CI: -17.1%, -0.7%, P = 0.03). 
Analysis in the mITT population showed a marginally significant difference in the OPR between the D3 + TL and D5 + CM groups (56.6 versus 64.1%, difference: -7.5%, 95% CI: -15.4%, 0.4%, P = 0.06). The D3 + TL group had a markedly lower implantation rate than the D5 + CM group (64.4 versus 77.0%; P = 0.002) in the PP analysis; however, the early miscarriage rate did not differ significantly between the two groups. The study lacked a direct comparison between time-lapse and CM selection at cleavage-stage SET and was statistically underpowered to detect non-inferiority. The eligibility criteria, which favoured women with a good prognosis for IVF, weaken the generalizability of the results. The OPR from Day 3 cleavage-stage SET using hierarchical classification time-lapse selection was significantly lower than that from Day 5 blastocyst SET using conventional morphology, yet it appeared to be clinically acceptable in women who underwent IVF. This study is supported by grants from Ferring Pharmaceuticals and the Program for New Century Excellent Talents in University, China. ChiCTR-ICR-15006600. 16 June 2015. 1 October 2015.
An Efficient Randomized Algorithm for Real-Time Process Scheduling in PicOS Operating System
NASA Astrophysics Data System (ADS)
Helmy, Tarek; Fatai, Anifowose; Sallam, El-Sayed
PicOS is an event-driven operating environment designed for use with embedded networked sensors. More specifically, it is designed to support the concurrency in intensive operations required by networked sensors with minimal hardware requirements. The existing process scheduling algorithms of PicOS, a commercial tiny, low-footprint, real-time operating system, have associated drawbacks. An efficient alternative algorithm based on a randomized selection policy has been proposed, demonstrated, shown to be efficient and fair on average, and recommended for implementation in PicOS. Simulations were carried out, and performance measures such as Average Waiting Time (AWT) and Average Turn-around Time (ATT) were used to assess the efficiency of the proposed randomized version against the existing ones. The results show that the randomized algorithm is the most attractive for implementation in PicOS, since it is the fairest and has the lowest AWT and ATT on average among the non-preemptive scheduling algorithms implemented in this paper.
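AWT and ATT for a given non-preemptive run order can be computed directly. A toy sketch comparing first-come-first-served with a randomized selection order (the burst times are hypothetical, and all processes are assumed to arrive at time zero — a simplification of the paper's simulation setup):

```python
import random

def schedule_metrics(bursts, order):
    """Average waiting time (AWT) and average turn-around time (ATT)
    for non-preemptive execution in the given order, all arrivals at t=0."""
    t = wait = turn = 0
    for i in order:
        wait += t        # time this process spent waiting before running
        t += bursts[i]
        turn += t        # completion time == turn-around when arrival is 0
    n = len(bursts)
    return wait / n, turn / n

bursts = [8, 3, 12, 1, 5]                # hypothetical CPU burst times
fcfs = schedule_metrics(bursts, range(len(bursts)))
order = list(range(len(bursts)))
random.Random(7).shuffle(order)          # randomized selection policy
rand = schedule_metrics(bursts, order)
print("FCFS  AWT/ATT:", fcfs)
print("Random AWT/ATT:", rand)
```

Averaging the randomized metrics over many shuffles and workloads, as the paper's simulations do, is what supports the on-average efficiency and fairness claims.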
Code of Federal Regulations, 2014 CFR
2014-01-01
... INSPECTION Standards Rules § 29.6104 Rule 18. Burn shall be determined as the average burning time of leaves selected at random from the sample. A minimum of 10 leaves shall be selected as representative regardless... on the same side of the leaf. The leaf shall be punctured to permit quick ignition when placed over a...
Code of Federal Regulations, 2010 CFR
2010-01-01
... INSPECTION Standards Rules § 29.6104 Rule 18. Burn shall be determined as the average burning time of leaves selected at random from the sample. A minimum of 10 leaves shall be selected as representative regardless... on the same side of the leaf. The leaf shall be punctured to permit quick ignition when placed over a...
Code of Federal Regulations, 2013 CFR
2013-01-01
... INSPECTION Standards Rules § 29.6104 Rule 18. Burn shall be determined as the average burning time of leaves selected at random from the sample. A minimum of 10 leaves shall be selected as representative regardless... on the same side of the leaf. The leaf shall be punctured to permit quick ignition when placed over a...
Code of Federal Regulations, 2011 CFR
2011-01-01
... INSPECTION Standards Rules § 29.6104 Rule 18. Burn shall be determined as the average burning time of leaves selected at random from the sample. A minimum of 10 leaves shall be selected as representative regardless... on the same side of the leaf. The leaf shall be punctured to permit quick ignition when placed over a...
Code of Federal Regulations, 2012 CFR
2012-01-01
... INSPECTION Standards Rules § 29.6104 Rule 18. Burn shall be determined as the average burning time of leaves selected at random from the sample. A minimum of 10 leaves shall be selected as representative regardless... on the same side of the leaf. The leaf shall be punctured to permit quick ignition when placed over a...
47 CFR 1.1602 - Designation for random selection.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 47 Telecommunication 1 2010-10-01 2010-10-01 false Designation for random selection. 1.1602 Section 1.1602 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL PRACTICE AND PROCEDURE Random Selection Procedures for Mass Media Services General Procedures § 1.1602 Designation for random selection...
47 CFR 1.1602 - Designation for random selection.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 47 Telecommunication 1 2011-10-01 2011-10-01 false Designation for random selection. 1.1602 Section 1.1602 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL PRACTICE AND PROCEDURE Random Selection Procedures for Mass Media Services General Procedures § 1.1602 Designation for random selection...
Unexpected substrate specificity of T4 DNA ligase revealed by in vitro selection
NASA Technical Reports Server (NTRS)
Harada, Kazuo; Orgel, Leslie E.
1993-01-01
We have used in vitro selection techniques to characterize DNA sequences that are ligated efficiently by T4 DNA ligase. We find that the ensemble of selected sequences ligates about 50 times as efficiently as the random mixture of sequences used as the input for selection. Surprisingly, many of the selected sequences failed to produce a match at or close to the ligation junction. None of the 20 selected oligomers that we sequenced produced a match two bases upstream from the ligation junction.
Physical key-protected one-time pad
Horstmeyer, Roarke; Judkewitz, Benjamin; Vellekoop, Ivo M.; Assawaworrarit, Sid; Yang, Changhuei
2013-01-01
We describe an encrypted communication principle that forms a secure link between two parties without electronically saving either of their keys. Instead, random cryptographic bits are kept safe within the unique mesoscopic randomness of two volumetric scattering materials. We demonstrate how a shared set of patterned optical probes can generate 10 gigabits of statistically verified randomness between a pair of unique 2 mm³ scattering objects. This shared randomness is used to facilitate information-theoretically secure communication following a modified one-time pad protocol. Benefits of volumetric physical storage over electronic memory include the inability to probe, duplicate or selectively reset any bits without fundamentally altering the entire key space. Our ability to securely couple the randomness contained within two unique physical objects can extend to strengthen hardware required by a variety of cryptographic protocols, which is currently a critically weak link in the security pipeline of our increasingly mobile communication culture. PMID:24345925
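The modified one-time pad protocol builds on the classic XOR pad. A minimal sketch of the classic construction (here the key bytes come from a software RNG; in the paper they would be derived from the shared randomness of the scattering media):

```python
import secrets

def otp_encrypt(plaintext: bytes, key: bytes) -> bytes:
    """Classic one-time pad: XOR each byte with a never-reused key byte."""
    assert len(key) >= len(plaintext), "pad must be at least message-length"
    return bytes(p ^ k for p, k in zip(plaintext, key))

msg = b"meet at dawn"
key = secrets.token_bytes(len(msg))   # stand-in for physically stored bits
ct = otp_encrypt(msg, key)
assert otp_encrypt(ct, key) == msg    # XOR is its own inverse
print(ct.hex())
```

Information-theoretic security holds only if the key is truly random, as long as the message, and never reused; the paper's contribution is storing such a key physically rather than electronically.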
Turhan, K S Cakar; Akmese, R; Ozkan, F; Okten, F F
2015-04-01
In the current prospective, randomized study, we aimed to compare the effects of low-dose selective spinal anesthesia with 5 mg of hyperbaric bupivacaine combined with single-shot femoral nerve block against conventional-dose selective spinal anesthesia in terms of intraoperative anesthesia characteristics, block recovery characteristics, and postoperative analgesic consumption. After obtaining institutional Ethics Committee approval, 52 ASA I-II patients aged 25-65, undergoing arthroscopic meniscus repair were randomly assigned to Group S (conventional-dose selective spinal anesthesia with 10 mg bupivacaine) and Group FS (low-dose selective spinal anesthesia with 5 mg bupivacaine + single-shot femoral block with 0.25% bupivacaine). Primary endpoints were time to reach T12 sensory block level, L2 regression, and complete motor block regression. Secondary endpoints were maximum sensory block level (MSBL); time to reach MSBL, time to first urination, time to first analgesic consumption and pain severity at the time of first mobilization. Demographic characteristics were similar in both groups (p > 0.05). MSBL and time to reach T12 sensory level were similar in both groups (p > 0.05). Time to reach L2 regression, complete motor block regression, and time to first micturition were significantly shorter; time to first analgesic consumption was significantly longer; and total analgesic consumption and severity of pain at time of first mobilization were significantly lower in Group FS (p < 0.05). The findings of the current study suggest that addition of single-shot femoral block to low-dose spinal anesthesia could be an alternative to conventional-dose spinal anesthesia in outpatient arthroscopic meniscus repair. NCT02322372.
47 CFR 1.1603 - Conduct of random selection.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 47 Telecommunication 1 2010-10-01 2010-10-01 false Conduct of random selection. 1.1603 Section 1.1603 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL PRACTICE AND PROCEDURE Random Selection Procedures for Mass Media Services General Procedures § 1.1603 Conduct of random selection. The...
47 CFR 1.1603 - Conduct of random selection.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 47 Telecommunication 1 2011-10-01 2011-10-01 false Conduct of random selection. 1.1603 Section 1.1603 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL PRACTICE AND PROCEDURE Random Selection Procedures for Mass Media Services General Procedures § 1.1603 Conduct of random selection. The...
Wampler, Peter J; Rediske, Richard R; Molla, Azizur R
2013-01-18
A remote sensing technique was developed which combines a Geographic Information System (GIS), Google Earth, and Microsoft Excel to identify home locations for a random sample of households in rural Haiti. The method was used to select homes for ethnographic and water quality research in a region of rural Haiti located within 9 km of a local hospital and source of health education in Deschapelles, Haiti. The technique does not require access to governmental records or ground based surveys to collect household location data and can be performed in a rapid, cost-effective manner. The random selection of households and the location of these households during field surveys were accomplished using GIS, Google Earth, Microsoft Excel, and handheld Garmin GPSmap 76CSx GPS units. Homes were identified and mapped in Google Earth, exported to ArcMap 10.0, and a random list of homes was generated using Microsoft Excel which was then loaded onto handheld GPS units for field location. The development and use of a remote sensing method were essential to the selection and location of random households. A total of 537 homes initially were mapped and a randomized subset of 96 was identified as potential survey locations. Over 96% of the homes mapped using Google Earth imagery were correctly identified as occupied dwellings. Only 3.6% of the occupants of mapped homes visited declined to be interviewed. 16.4% of the homes visited were not occupied at the time of the visit due to work away from the home or market days. A total of 55 households were located using this method during the 10 days of fieldwork in May and June of 2012. The method used to generate and field locate random homes for surveys and water sampling was an effective means of selecting random households in a rural environment lacking geolocation infrastructure. The success rate for locating households using a handheld GPS was excellent and only rarely was local knowledge required to identify and locate households.
This method provides an important technique that can be applied to other developing countries where a randomized study design is needed but infrastructure is lacking to implement more traditional participant selection methods.
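The random-subset step of this workflow is simple to reproduce once homes have been digitized. A sketch with hypothetical waypoint IDs standing in for the mapped homes (the study's actual draw was done in Microsoft Excel):

```python
import random

def select_households(mapped_ids, k, seed=42):
    """Randomly select k survey households, without replacement, from
    homes digitized in Google Earth (here just a list of waypoint IDs)."""
    rng = random.Random(seed)
    return sorted(rng.sample(mapped_ids, k))

homes = [f"HOME_{i:03d}" for i in range(537)]   # 537 mapped homes
survey = select_households(homes, 96)            # 96 survey targets
print(len(survey), survey[:3])
```

Fixing the seed makes the draw reproducible, which is useful when the selected list must be shared across field teams and GPS units.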
Final Report of the Montana Public School Students' Out-of-School Time Study. Research Report.
ERIC Educational Resources Information Center
Astroth, Kirk A.; Haynes, George W.
This paper reports on a study that explored the results of a statewide survey conducted in 21 randomly selected counties in Montana during fall 2000. Within each county, no more than 2 school districts were selected for further study, and within each school district, students in the 5th, 7th, and 9th grades were selected to participate in the…
Bayesian dynamic modeling of time series of dengue disease case counts
López-Quílez, Antonio; Torres-Prieto, Alexander
2017-01-01
The aim of this study is to model the association between weekly time series of dengue case counts and meteorological variables in a high-incidence city of Colombia, applying Bayesian hierarchical dynamic generalized linear models over the period January 2008 to August 2015. Additionally, we evaluate the models' short-term performance for predicting dengue cases. The methodology uses dynamic Poisson log-link models including constant or time-varying coefficients for the meteorological variables. Calendar effects were modeled using constant or first- or second-order random-walk time-varying coefficients. The meteorological variables were modeled using constant coefficients and first-order random-walk time-varying coefficients. We applied Markov chain Monte Carlo simulations for parameter estimation, and the deviance information criterion (DIC) for model selection. We assessed the short-term predictive performance of the selected final model at several time points within the study period using the mean absolute percentage error. The best model included first-order random-walk time-varying coefficients for the calendar trend and for the meteorological variables. Besides the computational challenges, interpreting the results implies a complete analysis of the time series of dengue with respect to the parameter estimates of the meteorological effects. We found small mean absolute percentage errors at one- or two-week out-of-sample predictions for most prediction points, associated with low-volatility periods in the dengue counts. We discuss the advantages and limitations of dynamic Poisson models for studying the association between time series of dengue disease and meteorological variables.
The key conclusion of the study is that dynamic Poisson models account for the dynamic nature of the variables involved in the modeling of time series of dengue disease, producing useful models for decision-making in public health. PMID:28671941
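The two key ingredients above — a Poisson log-link model with a first-order random-walk time-varying coefficient, and the mean absolute percentage error used to judge short-term forecasts — can be sketched on simulated data (not the dengue series; the "forecast" here is a naive one-step carry-forward, not the study's Bayesian predictive distribution):

```python
import math
import random

def poisson_draw(mu, rng):
    """Crude Poisson sampler by CDF inversion (fine for moderate mu)."""
    u, k = rng.random(), 0
    p = c = math.exp(-mu)
    while u > c and k < 500:
        k += 1
        p *= mu / k
        c += p
    return k

def mape(actual, forecast):
    """Mean absolute percentage error for short-term forecast checks."""
    return 100.0 * sum(abs(a - f) / a
                       for a, f in zip(actual, forecast)) / len(actual)

# Poisson log-link with a first-order random-walk intercept:
#   log(mu_t) = b_t,  b_t = b_{t-1} + N(0, 0.05)
rng = random.Random(0)
b, ys = math.log(20.0), []
for _ in range(100):
    b += rng.gauss(0.0, 0.05)       # random-walk evolution of the coefficient
    ys.append(poisson_draw(math.exp(b), rng))

print(round(mape(ys[1:], ys[:-1]), 1))   # naive one-step-ahead forecast
```

In the study, the random-walk states are estimated jointly with the observation model by MCMC rather than simulated forward as here.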
Ghayab, Hadi Ratham Al; Li, Yan; Abdulla, Shahab; Diykh, Mohammed; Wan, Xiangkui
2016-06-01
Electroencephalogram (EEG) signals are used broadly in the medical fields. The main applications of EEG signals are the diagnosis and treatment of diseases such as epilepsy, Alzheimer's disease and sleep problems. This paper presents a new method which extracts and selects features from multi-channel EEG signals. This research focuses on three main points. Firstly, the simple random sampling (SRS) technique is used to extract features from the time domain of EEG signals. Secondly, the sequential feature selection (SFS) algorithm is applied to select the key features and to reduce the dimensionality of the data. Finally, the features extracted and selected by SRS and SFS are forwarded to a least squares support vector machine (LS_SVM) classifier to classify the EEG signals. The experimental results show that the method achieves 99.90%, 99.80% and 100% for classification accuracy, sensitivity and specificity, respectively.
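The SRS-then-SFS pipeline can be sketched on toy data. Everything below (the feature summaries, the class-separation score, the synthetic two-class signals, and a greedy selector in place of LS_SVM-based evaluation) is an illustrative assumption, not the paper's exact implementation:

```python
import random
import statistics as st

def srs_features(signal, k, n_samples, rng):
    """Simple random sampling from the time domain: draw k random
    subsets of the signal and summarize each with basic statistics."""
    feats = []
    for _ in range(k):
        sub = rng.sample(signal, n_samples)
        feats.extend([st.mean(sub), st.stdev(sub), min(sub), max(sub)])
    return feats

def fisher_score(idx, feats, labels):
    """Class-separation score for a candidate feature subset."""
    s = 0.0
    for j in idx:
        a = [f[j] for f, l in zip(feats, labels) if l == 0]
        b = [f[j] for f, l in zip(feats, labels) if l == 1]
        s += abs(st.mean(a) - st.mean(b)) / (st.stdev(a) + st.stdev(b) + 1e-9)
    return s

def sfs(feats, labels, score, n_keep):
    """Greedy sequential forward selection of feature indices."""
    chosen, remaining = [], list(range(len(feats[0])))
    while len(chosen) < n_keep:
        best = max(remaining,
                   key=lambda j: score(chosen + [j], feats, labels))
        chosen.append(best)
        remaining.remove(best)
    return chosen

# toy two-class "EEG" segments differing in amplitude
rng = random.Random(1)
X, y = [], []
for cls in (0, 1):
    for _ in range(20):
        sig = [rng.gauss(0, 1 + 2 * cls) for _ in range(256)]
        X.append(srs_features(sig, k=4, n_samples=32, rng=rng))
        y.append(cls)
print(sfs(X, y, fisher_score, 3))
```

In the paper, the subset quality would instead be judged by the downstream LS_SVM classifier's performance rather than a filter score.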
He, Jie; Zhao, Yunfeng; Zhao, Jingli; Gao, Jin; Han, Dandan; Xu, Pao; Yang, Runqing
2017-11-02
Because of their high economic importance, growth traits in fish are under continuous improvement. For growth traits that are recorded at multiple time-points in life, the use of univariate and multivariate animal models is limited because of the variable and irregular timing of these measures. Thus, the univariate random regression model (RRM) was introduced for the genetic analysis of dynamic growth traits in fish breeding. We used a multivariate random regression model (MRRM) to analyze genetic changes in growth traits recorded at multiple time-point of genetically-improved farmed tilapia. Legendre polynomials of different orders were applied to characterize the influences of fixed and random effects on growth trajectories. The final MRRM was determined by optimizing the univariate RRM for the analyzed traits separately via penalizing adaptively the likelihood statistical criterion, which is superior to both the Akaike information criterion and the Bayesian information criterion. In the selected MRRM, the additive genetic effects were modeled by Legendre polynomials of three orders for body weight (BWE) and body length (BL) and of two orders for body depth (BD). By using the covariance functions of the MRRM, estimated heritabilities were between 0.086 and 0.628 for BWE, 0.155 and 0.556 for BL, and 0.056 and 0.607 for BD. Only heritabilities for BD measured from 60 to 140 days of age were consistently higher than those estimated by the univariate RRM. All genetic correlations between growth time-points exceeded 0.5 for either single or pairwise time-points. Moreover, correlations between early and late growth time-points were lower. Thus, for phenotypes that are measured repeatedly in aquaculture, an MRRM can enhance the efficiency of the comprehensive selection for BWE and the main morphological traits.
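Random regression models of this kind evaluate Legendre polynomials at ages standardized to [−1, 1]. A sketch of the (unnormalized) basis via the three-term recurrence, using the 60-140 day age range mentioned in the abstract; animal-breeding software often applies an additional normalization constant per polynomial, omitted here:

```python
def legendre_basis(age, age_min, age_max, order):
    """Legendre polynomials up to the given order, evaluated at an age
    standardized to [-1, 1], as used in random regression models."""
    x = -1.0 + 2.0 * (age - age_min) / (age_max - age_min)
    polys = [1.0, x]
    for n in range(1, order):
        # recurrence: (n+1) P_{n+1} = (2n+1) x P_n - n P_{n-1}
        polys.append(((2 * n + 1) * x * polys[n] - n * polys[n - 1]) / (n + 1))
    return polys[: order + 1]

# covariate rows for growth records at 60-140 days of age
for age in (60, 100, 140):
    print(age, [round(v, 3) for v in legendre_basis(age, 60, 140, 3)])
```

Each animal's additive genetic effect is then a linear combination of these basis values with animal-specific random regression coefficients, which is what lets heritability vary smoothly with age.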
NASA Astrophysics Data System (ADS)
Shea, Thomas; Krimer, Daniel; Costa, Fidel; Hammer, Julia
2014-05-01
One of the achievements of recent years in volcanology is the determination of time-scales of magmatic processes via diffusion in minerals, and its addition to the petrologists' and volcanologists' toolbox. The method typically requires one-dimensional modeling of randomly cut crystals from two-dimensional thin sections. Here we address the question of whether using 1D (traverse) or 2D (surface) datasets extracted from randomly cut 3D crystals introduces a bias or dispersion in the estimated time-scales, and how this error can be reduced or eliminated. Computational simulations were performed using a concentration-dependent, finite-difference solution to the diffusion equation in 3D. The starting numerical models involved simple geometries (spheres, parallelepipeds), Mg/Fe zoning patterns (either normal or reverse), and isotropic diffusion coefficients. Subsequent models progressively incorporated more complexity: 3D olivines possessing representative polyhedral morphologies, diffusion anisotropy along the different crystallographic axes, and more intricate core-rim zoning patterns. Sections and profiles used to compare 1D, 2D and 3D diffusion models were selected to be (1) parallel to the crystal axes, (2) randomly oriented but passing through the olivine center, or (3) randomly oriented and sectioned. Results show that time-scales estimated from randomly cut traverses (1D) or surfaces (2D) can be widely distributed around the actual durations of 3D diffusion (~0.2 to 10 times the true diffusion time). The magnitude of over- or underestimation of duration is a complex combination of the geometry of the crystal, the zoning pattern, the orientation of the cuts with respect to the crystallographic axes, and the degree of diffusion anisotropy. Errors on time-scales retrieved from such models may thus be significant.
Drastic reductions in the uncertainty of calculated diffusion times can be obtained by following some simple guidelines during the course of data collection (i.e. selection of crystals and concentration profiles, acquisition of crystallographic orientation data), thus allowing derivation of robust time-scales.
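The 1D diffusion modeling underlying such time-scale estimates can be sketched with an explicit finite-difference scheme. This sketch assumes a concentration-independent diffusivity and illustrative, dimensionless parameters, unlike the concentration-dependent 3D models used in the study:

```python
def diffuse_1d(c, D, dx, dt, steps):
    """Explicit finite-difference solution of 1D diffusion
    dc/dt = D * d2c/dx2 with no-flux boundaries."""
    assert D * dt / dx ** 2 <= 0.5, "explicit scheme stability limit"
    c = list(c)
    for _ in range(steps):
        new = c[:]
        for i in range(1, len(c) - 1):
            new[i] = c[i] + D * dt / dx ** 2 * (c[i - 1] - 2 * c[i] + c[i + 1])
        new[0], new[-1] = new[1], new[-2]   # no-flux (mirror) ends
        c = new
    return c

# step profile (e.g. Mg/Fe zoning across a crystal rim) relaxing over time
profile = [1.0] * 20 + [0.0] * 20
out = diffuse_1d(profile, D=1e-2, dx=1.0, dt=1.0, steps=500)
print(round(out[19], 3), round(out[20], 3))
```

Fitting the number of steps (i.e. the elapsed time) that best reproduces a measured profile is, in essence, how diffusion chronometry recovers a time-scale; the paper quantifies how that fit degrades when a 3D crystal is sampled by a random 1D traverse.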
Random phase detection in multidimensional NMR.
Maciejewski, Mark W; Fenwick, Matthew; Schuyler, Adam D; Stern, Alan S; Gorbatyuk, Vitaliy; Hoch, Jeffrey C
2011-10-04
Despite advances in resolution accompanying the development of high-field superconducting magnets, biomolecular applications of NMR require multiple dimensions in order to resolve individual resonances, and the achievable resolution is typically limited by practical constraints on measuring time. In addition to the need for measuring long evolution times to obtain high resolution, the need to distinguish the sign of the frequency constrains the ability to shorten measuring times. Sign discrimination is typically accomplished by sampling the signal with two different receiver phases or by selecting a reference frequency outside the range of frequencies spanned by the signal and then sampling at a higher rate. In the parametrically sampled (indirect) time dimensions of multidimensional NMR experiments, either method imposes an additional factor of 2 sampling burden for each dimension. We demonstrate that by using a single detector phase at each time sample point, but randomly altering the phase for different points, the sign ambiguity that attends fixed single-phase detection is resolved. Random phase detection enables a reduction in experiment time by a factor of 2 for each indirect dimension, amounting to a factor of 8 for a four-dimensional experiment, albeit at the cost of introducing sampling artifacts. Alternatively, for fixed measuring time, random phase detection can be used to double resolution in each indirect dimension. Random phase detection is complementary to nonuniform sampling methods, and their combination offers the potential for additional benefits. In addition to applications in biomolecular NMR, random phase detection could be useful in magnetic resonance imaging and other signal processing contexts.
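The sign ambiguity described above can be illustrated in a few lines (a schematic numpy sketch, not actual NMR processing): with a fixed detector phase, +omega and -omega produce identical samples, while known per-point random phases break the degeneracy:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.arange(64) * 0.01           # indirect-dimension evolution times
omega = 2 * np.pi * 25.0           # true frequency; its sign is the unknown

# Fixed single-phase (cosine) detection: +omega and -omega are identical,
# which is exactly the sign ambiguity described in the abstract
fixed_pos = np.cos(omega * t)
fixed_neg = np.cos(-omega * t)

# Random phase detection: a single, randomly chosen detector phase per point
phi = rng.uniform(0, 2 * np.pi, size=t.size)
rand_pos = np.cos(omega * t + phi)
rand_neg = np.cos(-omega * t + phi)
```

Cosine is an even function, so the fixed-phase records coincide; once a known random phase is mixed in at each sample point, the two signs yield different data and a reconstruction algorithm can tell them apart.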
Schwab, C R; Baas, T J; Stalder, K J; Nettleton, D
2009-09-01
A study was conducted to evaluate the efficacy of selection for intramuscular fat (IMF) in a population of purebred Duroc swine using real-time ultrasound. Forty gilts were purchased from US breeders and randomly mated for 2 generations to boars available in regional boar studs, resulting in a base population of 56 litters. Littermate pairs of gilts from this population were randomly assigned to a select line (SL) or control line (CL) and mated to the same sire to establish genetic ties between lines. At an average BW of 114 kg, a minimum of 4 longitudinal ultrasound images were collected 7 cm off-midline across the 10th to 13th ribs of all pigs for the prediction of IMF (UIMF). At least 1 barrow or gilt was slaughtered from each litter, and carcass data were collected. A sample of the LM from the 10th to 11th rib interface was analyzed for carcass IMF (CIMF). Breeding values for IMF were estimated by fitting a 2-trait (UIMF and CIMF) animal model in MATVEC. In the SL, selection in each subsequent generation was based on EBV for IMF with the top 10 boars and top 75 gilts used to produce the next generation. One boar from each sire family and 50 to 60 gilts representing all sire families were randomly selected to maintain the CL. Through 6 generations of selection, an 88% improvement in IMF has been realized (4.53% in SL vs. 2.41% in CL). Results of this study revealed no significant correlated responses in measures of growth performance. However, 6 generations of selection for IMF have yielded correlated effects of decreased loin muscle area and increased backfat. Additionally, the SL obtained more desirable objective measures of tenderness and sensory evaluations of flavor and off-flavor. Meat quality characteristics of pH, water holding capacity, and percent cooking loss were not significantly affected by selection for IMF. Selection for IMF using real-time ultrasound is effective but may be associated with genetic ramifications for carcass composition traits. 
Intramuscular fat may be used in purebred Duroc swine breeding programs as an indicator trait for sensory traits that influence consumer acceptance; however, rapid improvement should not be expected when simultaneous improvement in other trait categories is also pursued.
A Random Walk Approach to Query Informative Constraints for Clustering.
Abin, Ahmad Ali
2017-08-09
This paper presents a random walk approach to the problem of querying informative constraints for clustering. The proposed method is based on the properties of the commute time, that is, the expected time taken for a random walk to travel between two nodes and return, on the adjacency graph of the data. Commute time has the useful property that the more short paths connect two given nodes in a graph, the more similar those nodes are. Since computing the commute time takes the Laplacian eigenspectrum into account, we use this property in a recursive fashion to query informative constraints for clustering. At each recursion, the proposed method constructs the adjacency graph of the data and uses the spectral properties of the commute-time matrix to bipartition the adjacency graph. Thereafter, the method uses the commute-time distance on the graph to query informative constraints between partitions. This process iterates for each partition until the stopping condition is met. Experiments on real-world data show the efficiency of the proposed method for constraint selection.
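The commute-time quantity at the heart of this method can be computed directly from the Laplacian pseudoinverse; a minimal numpy sketch on a toy graph (the graph and weights are invented for illustration):

```python
import numpy as np

def commute_times(A):
    """Pairwise commute times on a graph with symmetric adjacency A:
    C(i, j) = vol(G) * (L+_ii + L+_jj - 2 L+_ij), where L+ is the
    Moore-Penrose pseudoinverse of the graph Laplacian L = D - A."""
    L = np.diag(A.sum(axis=1)) - A
    Lp = np.linalg.pinv(L)
    vol = A.sum()                     # sum of degrees
    d = np.diag(Lp)
    return vol * (d[:, None] + d[None, :] - 2 * Lp)

# Toy graph: two triangles (0-1-2 and 3-4-5) joined by one bridge edge 2-3
A = np.zeros((6, 6))
for i, j in [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]:
    A[i, j] = A[j, i] = 1.0
C = commute_times(A)
```

Many short paths connect nodes within a triangle, so their commute time is small; crossing the single bridge is expensive. This is exactly the property the querying strategy exploits when choosing constraints between partitions.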
A Faculty Development Needs Assessment of Noncredit Instruction.
ERIC Educational Resources Information Center
Sorcinelli, Mary Deane; Willis, Barry
Perceptions of Indiana University teachers of noncredit courses for adults and implications for faculty development programming were assessed. Of the 26 randomly selected instructors from the nine regional campuses, 73 percent identified their full-time occupation as being business-related, 19 percent were part- or full-time faculty, and 8 percent…
Black-Box System Testing of Real-Time Embedded Systems Using Random and Search-Based Testing
NASA Astrophysics Data System (ADS)
Arcuri, Andrea; Iqbal, Muhammad Zohaib; Briand, Lionel
Testing real-time embedded systems (RTES) is in many ways challenging. Thousands of test cases can be potentially executed on an industrial RTES. Given the magnitude of testing at the system level, only a fully automated approach can really scale up to test industrial RTES. In this paper we take a black-box approach and model the RTES environment using the UML/MARTE international standard. Our main motivation is to provide a more practical approach to the model-based testing of RTES by allowing system testers, who are often not familiar with the system design but know the application domain well enough, to model the environment to enable test automation. Environment models can support the automation of three tasks: the code generation of an environment simulator, the selection of test cases, and the evaluation of their expected results (oracles). In this paper, we focus on the second task (test case selection) and investigate three test automation strategies using inputs from UML/MARTE environment models: Random Testing (baseline), Adaptive Random Testing, and Search-Based Testing (using Genetic Algorithms). Based on one industrial case study and three artificial systems, we show that, in general, no technique is better than the others. Which test selection technique to use is determined by the failure rate (testing stage) and the execution time of test cases. Finally, we propose a practical process to combine the use of all three test strategies.
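The Adaptive Random Testing strategy compared here can be sketched as follows (a generic fixed-size-candidate-set variant over a unit input box, which is an assumption; the actual approach operates on inputs derived from UML/MARTE environment models):

```python
import numpy as np

rng = np.random.default_rng(1)

def art_select(executed, n_candidates=10, dim=2):
    """Fixed-size-candidate-set ART: draw a few random candidates and keep
    the one whose nearest previously executed test is farthest away."""
    candidates = rng.uniform(0.0, 1.0, size=(n_candidates, dim))
    if not executed:
        return candidates[0]
    ex = np.asarray(executed)
    # distance from each candidate to its nearest executed test case
    nearest = np.linalg.norm(
        candidates[:, None, :] - ex[None, :, :], axis=2).min(axis=1)
    return candidates[np.argmax(nearest)]

tests = []
for _ in range(20):
    tests.append(art_select(tests))
```

The effect is to spread test cases more evenly over the input space than pure random selection, which pays off when failing inputs are clustered in contiguous regions.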
FIBER OPTICS. ACOUSTOOPTICS: Compression of random pulses in fiber waveguides
NASA Astrophysics Data System (ADS)
Aleshkevich, Viktor A.; Kozhoridze, G. D.
1990-07-01
An investigation is made of the compression of randomly modulated signal + noise pulses during their propagation in a fiber waveguide. An allowance is made for a cubic nonlinearity and quadratic dispersion. The relationships governing the kinetics of transformation of the time envelope, and those which determine the duration and intensity of a random pulse are derived. The expressions for the optimal length of a fiber waveguide and for the maximum degree of compression are compared with the available data for regular pulses and the recommendations on selection of the optimal parameters are given.
Altstein, L.; Li, G.
2012-01-01
This paper studies a semiparametric accelerated failure time mixture model for estimation of a biological treatment effect on a latent subgroup of interest with a time-to-event outcome in randomized clinical trials. Latency is induced because membership is observable in one arm of the trial and unidentified in the other. This method is useful in randomized clinical trials with all-or-none noncompliance when patients in the control arm have no access to active treatment and in, for example, oncology trials when a biopsy used to identify the latent subgroup is performed only on subjects randomized to active treatment. We derive a computational method to estimate model parameters by iterating between an expectation step and a weighted Buckley-James optimization step. The bootstrap method is used for variance estimation, and the performance of our method is corroborated in simulation. We illustrate our method through an analysis of a multicenter selective lymphadenectomy trial for melanoma. PMID:23383608
Takakusagi, Yoichi; Kuramochi, Kouji; Takagi, Manami; Kusayanagi, Tomoe; Manita, Daisuke; Ozawa, Hiroko; Iwakiri, Kanako; Takakusagi, Kaori; Miyano, Yuka; Nakazaki, Atsuo; Kobayashi, Susumu; Sugawara, Fumio; Sakaguchi, Kengo
2008-11-15
Here, we report an efficient one-cycle affinity selection using a natural-protein or random-peptide T7 phage pool for identification of binding proteins or peptides specific for small molecules. The screening procedure involved a cuvette-type 27-MHz quartz-crystal microbalance (QCM) apparatus, with a self-assembled monolayer (SAM) introduced for immobilization of a specific small molecule on the gold electrode surface of a sensor chip. Using this apparatus, we attempted an affinity selection of proteins or peptides against a synthetic ligand for FK506-binding protein (SLF) or irinotecan (Iri, CPT-11). An affinity selection using SLF-SAM and a natural-protein T7 phage pool successfully detected FK506-binding protein 12 (FKBP12)-displaying T7 phage after an interaction time of only 10 min. Extensive exploration of time-consuming wash and/or elution conditions, together with several rounds of selection, was not required. Furthermore, in the selection using a 15-mer random-peptide T7 phage pool and subsequent analysis with the receptor ligand contact (RELIC) software, a subset of SLF-selected peptides clearly pinpointed several amino-acid residues within the binding site of FKBP12. Likewise, a subset of Iri-selected peptides pinpointed part of the positive amino-acid region of the Iri-binding sites of the well-known direct targets, acetylcholinesterase (AChE) and carboxylesterase (CE). Our findings demonstrate the effectiveness of this method and its general applicability to a wide range of small molecules.
Listeners modulate temporally selective attention during natural speech processing
Astheimer, Lori B.; Sanders, Lisa D.
2009-01-01
Spatially selective attention allows for the preferential processing of relevant stimuli when more information than can be processed in detail is presented simultaneously at distinct locations. Temporally selective attention may serve a similar function during speech perception by allowing listeners to allocate attentional resources to time windows that contain highly relevant acoustic information. To test this hypothesis, event-related potentials were compared in response to attention probes presented in six conditions during a narrative: concurrently with word onsets, beginning 50 and 100 ms before and after word onsets, and at random control intervals. Times for probe presentation were selected such that the acoustic environments of the narrative were matched for all conditions. Linguistic attention probes presented at and immediately following word onsets elicited larger amplitude N1s than control probes over medial and anterior regions. These results indicate that native speakers selectively process sounds presented at specific times during normal speech perception. PMID:18395316
NASA Astrophysics Data System (ADS)
Sirait, Kamson; Tulus; Budhiarti Nababan, Erna
2017-12-01
Clustering methods with high accuracy and time efficiency are necessary for the filtering process. One well-known and widely applied clustering method is K-Means Clustering. In its application, the choice of the initial cluster centers greatly affects the results of the K-Means algorithm. This research compares the results of K-Means Clustering when the starting centroids are determined randomly versus with a KD-Tree method. On a data set of 1000 student academic records used to classify potential dropouts, random initial centroid determination gave an SSE value of 952972 for the quality variable and 232.48 for the GPA variable, whereas initial centroid determination by KD-Tree gave an SSE value of 504302 for the quality variable and 214.37 for the GPA variable. The smaller SSE values indicate that K-Means Clustering with initial KD-Tree centroid selection has better accuracy than K-Means Clustering with random initial centroid selection.
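The SSE comparison above can be reproduced in miniature; a numpy sketch of Lloyd's algorithm with random seeding (the blob data and parameters are invented for illustration; a KD-Tree seeding would simply supply a different `random_init`):

```python
import numpy as np

rng = np.random.default_rng(42)

def kmeans_sse(X, centroids, n_iter=50):
    """Lloyd's algorithm from given initial centroids; returns the final
    sum of squared errors (SSE), the quality measure compared in the study."""
    centroids = centroids.copy()
    for _ in range(n_iter):
        d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)
        for k in range(len(centroids)):
            if (labels == k).any():
                centroids[k] = X[labels == k].mean(axis=0)
    d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
    return d.min(axis=1).sum()

# Three well-separated 2D blobs standing in for the student data
X = np.concatenate([rng.normal(m, 0.3, size=(100, 2)) for m in (0.0, 5.0, 10.0)])
random_init = X[rng.choice(len(X), size=3, replace=False)]
sse_random = kmeans_sse(X, random_init)
```

A lower final SSE indicates a better initialization, which is the basis of the random-versus-KD-Tree comparison reported in the abstract.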
DOT National Transportation Integrated Search
1975-02-01
Randomly selected drivers were stopped at the times and places of previous fatal crashes in Lincoln, Nebraska, and Dade County (Miami), Florida. Breath, urine, blood, and lip swab samples were requested for later analysis for drugs and medications. A c...
Detecting evolutionary forces in language change.
Newberry, Mitchell G; Ahern, Christopher A; Clark, Robin; Plotkin, Joshua B
2017-11-09
Both language and genes evolve by transmission over generations with opportunity for differential replication of forms. The understanding that gene frequencies change at random by genetic drift, even in the absence of natural selection, was a seminal advance in evolutionary biology. Stochastic drift must also occur in language as a result of randomness in how linguistic forms are copied between speakers. Here we quantify the strength of selection relative to stochastic drift in language evolution. We use time series derived from large corpora of annotated texts dating from the 12th to 21st centuries to analyse three well-known grammatical changes in English: the regularization of past-tense verbs, the introduction of the periphrastic 'do', and variation in verbal negation. We reject stochastic drift in favour of selection in some cases but not in others. In particular, we infer selection towards the irregular forms of some past-tense verbs, which is likely driven by changing frequencies of rhyming patterns over time. We show that stochastic drift is stronger for rare words, which may explain why rare forms are more prone to replacement than common ones. This work provides a method for testing selective theories of language change against a null model and reveals an underappreciated role for stochasticity in language evolution.
van den Bogert, Cornelis A.; van Soest-Poortvliet, Mirjam C.; Fazeli Farsani, Soulmaz; Otten, René H. J.; ter Riet, Gerben; Bouter, Lex M.
2018-01-01
Background Selective reporting is wasteful, leads to bias in the published record and harms the credibility of science. Studies on potential determinants of selective reporting currently lack a shared taxonomy and a causal framework. Objective To develop a taxonomy of determinants of selective reporting in science. Design Inductive qualitative content analysis of a random selection of the pertinent literature including empirical research and theoretical reflections. Methods Using search terms for bias and selection combined with terms for reporting and publication, we systematically searched the PubMed, Embase, PsycINFO and Web of Science databases up to January 8, 2015. Of the 918 articles identified, we screened a 25 percent random selection. From eligible articles, we extracted phrases that mentioned putative or possible determinants of selective reporting, which we used to create meaningful categories. We stopped when no new categories emerged in the most recently analyzed articles (saturation). Results Saturation was reached after analyzing 64 articles. We identified 497 putative determinants, of which 145 (29%) were supported by empirical findings. The determinants represented 12 categories (leaving 3% unspecified): focus on preferred findings (36%), poor or overly flexible research design (22%), high-risk area and its development (8%), dependence upon sponsors (8%), prejudice (7%), lack of resources including time (3%), doubts about reporting being worth the effort (3%), limitations in reporting and editorial practices (3%), academic publication system hurdles (3%), unfavorable geographical and regulatory environment (2%), relationship and collaboration issues (2%), and potential harm (0.4%). Conclusions We designed a taxonomy of putative determinants of selective reporting consisting of 12 categories. The taxonomy may help develop theory about causes of selection bias and guide policies to prevent selective reporting. PMID:29401492
Changing dietary patterns and body mass index over time in Canadian Inuit communities.
Sheikh, Nelofar; Egeland, Grace M; Johnson-Down, Louise; Kuhnlein, Harriet V
2011-01-01
The International Polar Year (IPY) Inuit Health Survey provided an opportunity to compare dietary and body mass index (BMI) data with data collected a decade earlier for the same communities. A dietary survey included 1,929 randomly selected participants aged 15 years or older, selected from 18 Inuit communities in 1998-1999. The IPY survey included 2,595 randomly selected participants aged 18 years or older, selected from 36 Inuit communities in 2007-2008. Data from the same 18 communities included in both surveys were compared for adults 20 years and older. Twenty-four-hour dietary recall data were analysed to assess the percentage of energy from traditional and market foods by sex and age groups. Body mass index (BMI) was assessed to establish the prevalence of obesity by sex and age groups in both surveys. There was a significant decrease (p≤0.05) in energy contribution from traditional food and a significant increase in market food consumption over time. Sugar-sweetened beverages, chips and pasta all increased as percentages of energy. BMI increased overall for women and for each age stratum evaluated (p<0.05). The nutrition transition continues in the Canadian Arctic with a concurrent increase in BMI.
NASA Astrophysics Data System (ADS)
Thanos, Konstantinos-Georgios; Thomopoulos, Stelios C. A.
2016-05-01
wayGoo is a fully functional application whose main functionalities include content geolocation, event scheduling, and indoor navigation. However, significant information about events does not reach users' attention, either because of the volume of this information or because some of it comes from real-time data sources. The purpose of this work is to facilitate event management operations by prioritizing the presented events based on users' interests, using both static and real-time data. Through the wayGoo interface, users select conceptual topics that interest them. These topics constitute a browsing-behavior vector which is used for learning users' interests implicitly, without being intrusive. The system then estimates user preferences and returns a list of events sorted from the most preferred to the least. User preferences are modeled via a Naïve Bayesian Network which consists of: a) the 'decision' random variable, corresponding to users' decision on attending an event; b) the 'distance' random variable, modeled by a linear regression that estimates the probability that the distance between a user and each event destination is not discouraging; c) the 'seat availability' random variable, modeled by a linear regression that estimates the probability that the seat availability is encouraging; and d) the 'relevance' random variable, modeled by clustering-based collaborative filtering, which determines the relevance of each event to users' interests. Finally, experimental results show that the proposed system contributes substantially to assisting users in browsing and selecting events to attend.
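The Naïve Bayes combination of the variables above can be sketched schematically (all input probabilities below are illustrative placeholders, not the paper's fitted regressions or collaborative-filtering outputs, and the event names are invented):

```python
def attend_score(p_distance, p_seats, p_relevance, prior=0.5):
    """Naive-Bayes-style combination of the abstract's variables: the
    factors are treated as conditionally independent given the attend
    decision. All probabilities here are illustrative inputs only."""
    p_yes = prior * p_distance * p_seats * p_relevance
    p_no = (1.0 - prior) * (1.0 - p_distance) * (1.0 - p_seats) * (1.0 - p_relevance)
    return p_yes / (p_yes + p_no)

events = {
    "concert": attend_score(0.9, 0.8, 0.7),   # close, seats free, relevant
    "lecture": attend_score(0.4, 0.9, 0.3),   # far and not very relevant
}
ranking = sorted(events, key=events.get, reverse=True)
```

Sorting events by this posterior produces the preference-ordered list the system presents to the user.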
Farmers as Consumers of Agricultural Education Services: Willingness to Pay and Spend Time
ERIC Educational Resources Information Center
Charatsari, Chrysanthi; Papadaki-Klavdianou, Afroditi; Michailidis, Anastasios
2011-01-01
This study assessed farmers' willingness to pay for and spend time attending an Agricultural Educational Program (AEP). Primary data on the demographic and socio-economic variables of farmers were collected from 355 farmers selected randomly from Northern Greece. Descriptive statistics and multivariate analysis methods were used in order to meet…
[Corifollitropin alfa in women stimulated for the first time in in vitro fertilization programme].
Vraná-Mardešićová, N; Vobořil, J; Melicharová, L; Jelínková, L; Vilímová, Š; Mardešić, T
2017-01-01
To compare results after stimulation with corifollitropin alfa (Elonva) in an unselected group of women entering an in vitro fertilization (IVF) programme for the first time with results from Phase III randomized trials in selected groups of women. Prospective study. Sanatorium Pronatal, Praha. 40 unselected women with adequate ovarian reserve entering an IVF programme for the first time were stimulated with corifollitropin alfa and GnRH antagonists. The average age in the study group was 32.8 years (29-42 years); women younger than 36 years and weighing less than 60 kg received Elonva 100 µg, all others (age > 36 years, weight > 60 kg) Elonva 150 µg. Five days after egg retrieval, one blastocyst was transferred (single embryo transfer, eSET). Our results were compared with the results in highly selected groups of women from Phase III randomized trials. After stimulation with corifollitropin alfa and GnRH antagonists, on average 10.6 (9.2 ± 4.2) eggs were retrieved, of which 7.3 (6.6 ± 3.9) were M II oocytes (68.9%), and the fertilisation rate was 84.6%. After the first embryo transfer ("fresh" embryos and embryos from "freeze all" cycles), 14 pregnancies were achieved (37.8%); three further pregnancies were achieved from transfer of frozen-thawed embryos (cumulative pregnancy rate 45.9%). There were three abortions. No severe hyperstimulation syndrome occurred. Our results in an unselected group of women stimulated for the first time in an IVF programme with corifollitropin alfa are fully comparable with the results published in randomized trials with selected groups of patients. Corifollitropin alfa in combination with daily GnRH antagonist can be successfully used in normal-responder patients stimulated for the first time in an IVF programme. Keywords: corifollitropin alfa, GnRH antagonists, ovarian stimulation, pregnancy.
Selection of examples in case-based computer-aided decision systems
Mazurowski, Maciej A.; Zurada, Jacek M.; Tourassi, Georgia D.
2013-01-01
Case-based computer-aided decision (CB-CAD) systems rely on a database of previously stored, known examples when classifying new, incoming queries. Such systems can be particularly useful since they do not need retraining every time a new example is deposited in the case base. The adaptive nature of case-based systems is well suited to the current trend of continuously expanding digital databases in the medical domain. To maintain efficiency, however, such systems need sophisticated strategies to effectively manage the available evidence database. In this paper, we discuss the general problem of building an evidence database by selecting the most useful examples to store while satisfying existing storage requirements. We evaluate three intelligent techniques for this purpose: genetic algorithm-based selection, greedy selection and random mutation hill climbing. These techniques are compared to a random selection strategy used as the baseline. The study is performed with a previously presented CB-CAD system applied for false positive reduction in screening mammograms. The experimental evaluation shows that when the development goal is to maximize the system’s diagnostic performance, the intelligent techniques are able to reduce the size of the evidence database to 37% of the original database by eliminating superfluous and/or detrimental examples while at the same time significantly improving the CAD system’s performance. Furthermore, if the case-base size is a main concern, the total number of examples stored in the system can be reduced to only 2–4% of the original database without a decrease in the diagnostic performance. Comparison of the techniques shows that random mutation hill climbing provides the best balance between the diagnostic performance and computational efficiency when building the evidence database of the CB-CAD system. PMID:18854606
Secure distance ranging by electronic means
Gritton, Dale G.
1992-01-01
A system for secure distance ranging between a reader 11 and a tag 12, wherein the distance between the two is determined by the time it takes for a signal to propagate from the reader to the tag and for a responsive signal to return, and in which this time is random and unpredictable, except to the reader, even though the distance between the reader and tag remains the same. A random number (19) is sent from the reader and encrypted (26) by the tag into a number having 16 segments of 4 bits each (28). A first tag signal (31) is sent after such encryption. In response, a start pulse (13) of random width is generated by the reader. When received at the tag, the width of the start pulse is measured (41) and a segment of the encrypted number is selected (42) in accordance with that width. A second tag pulse is generated at a time T after the start pulse arrives at the tag, the time T depending on the length of a variable time delay t_v, which is determined by the value of the bits in the selected segment of the encrypted number. At the reader, the total time from the beginning of the start pulse to the receipt of the second tag signal is measured (36, 37). The value of t_v (21, 22, 23, 34) is known at the reader, and the time T is subtracted (46) from the total time to find the actual propagation time t_p for signals to travel between the reader 11 and tag 12. The propagation time is then converted into distance (46).
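The timing protocol can be simulated in a few lines (a toy sketch: a keyed SHA-256 hash stands in for the patent's encryption, and the delay quantum SLOT is invented for illustration; the reference numerals in comments refer to the abstract above):

```python
import hashlib
import random

random.seed(0)
SLOT = 1e-9                         # delay quantum per code value (illustrative)

def segments_of(nonce, key):
    """Stand-in for the tag's encryption (26/28): derive 16 four-bit
    segments from a keyed hash of the reader's random number."""
    h = hashlib.sha256(key + nonce.to_bytes(8, "big")).digest()
    return [(h[i // 2] >> (4 * (i % 2))) & 0xF for i in range(16)]

key = b"shared-secret"
nonce = random.getrandbits(64)      # random number (19) sent by the reader
width = random.randrange(16)        # start-pulse width selects a segment (42)

t_p = 40e-9                                  # true one-way propagation time
t_v = segments_of(nonce, key)[width] * SLOT  # tag's variable delay
total = 2 * t_p + t_v                        # what the reader measures (36, 37)

# The reader knows key, nonce and width, so it can subtract t_v (46)
recovered = (total - segments_of(nonce, key)[width] * SLOT) / 2
```

An eavesdropper who does not know the key cannot predict t_v, so the measured round-trip time reveals nothing about the true distance, yet the reader recovers t_p exactly.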
Dual Fractal Dimension and Long-Range Correlation of Chinese Stock Prices
NASA Astrophysics Data System (ADS)
Chen, Chaoshi; Wang, Lei
2012-03-01
The recently developed modified inverse random midpoint displacement (mIRMD) and conventional detrended fluctuation analysis (DFA) algorithms are used to analyze tick-by-tick high-frequency time series of Chinese A-share stock prices and indexes. A dual-fractal structure with a crossover at about 10 min is observed. The majority of the selected time series show visible persistence within this time threshold but approach a random walk on longer time scales. The phenomenon is found to be industry-dependent, i.e., the crossover is much more prominent for stocks belonging to cyclical industries than for those belonging to noncyclical (defensive) industries. We have also shown that the sign series exhibit a similar dual-fractal structure, while, as is generally found, the magnitude series show much longer time persistence.
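A bare-bones order-1 DFA, the fluctuation-analysis half of the toolkit above, can be written as follows (a generic numpy sketch, not the authors' code; the scales and test series are illustrative). For uncorrelated noise the fitted exponent should come out near 0.5:

```python
import numpy as np

def dfa_exponent(x, scales=(8, 16, 32, 64)):
    """Order-1 detrended fluctuation analysis: the scaling exponent is the
    slope of log F(n) versus log n, where F(n) is the RMS of linearly
    detrended fluctuations of the integrated series in windows of size n."""
    y = np.cumsum(x - np.mean(x))                  # the profile
    F = []
    for n in scales:
        m = len(y) // n
        segs = y[: m * n].reshape(m, n)
        t = np.arange(n)
        ms = []
        for seg in segs:
            trend = np.polyval(np.polyfit(t, seg, 1), t)   # local linear fit
            ms.append(np.mean((seg - trend) ** 2))
        F.append(np.sqrt(np.mean(ms)))
    return np.polyfit(np.log(scales), np.log(F), 1)[0]

rng = np.random.default_rng(7)
alpha_white = dfa_exponent(rng.standard_normal(4096))  # ~0.5: no memory
```

An exponent near 0.5 corresponds to a random walk in the price (uncorrelated increments), while values above 0.5 indicate the persistence reported for the short-time regime.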
Comparative study of feature selection with ensemble learning using SOM variants
NASA Astrophysics Data System (ADS)
Filali, Ameni; Jlassi, Chiraz; Arous, Najet
2017-03-01
Ensemble learning has improved the stability and accuracy of clustering, but its runtime prevents it from scaling up to real-world applications. This study addresses the problem of selecting a subset of the most pertinent features for every cluster in a dataset. The proposed method is an extension of the Random Forests approach to unlabeled data, using self-organizing map (SOM) variants, that estimates out-of-bag feature importance from a set of partitions. Every partition is created using a different bootstrap sample and a random subset of the features. We then show that the internal estimates used to measure variable importance in Random Forests are also applicable to feature selection in unsupervised learning. The approach aims at dimensionality reduction, visualization, and cluster characterization at the same time. We provide empirical results on nineteen benchmark data sets indicating that RFS can lead to significant improvements in clustering accuracy over several state-of-the-art unsupervised methods, with a very limited subset of features. The approach shows promise for application to very broad domains.
Use of Bayes theorem to correct size-specific sampling bias in growth data.
Troynikov, V S
1999-03-01
The Bayesian decomposition of the posterior distribution was used to develop a likelihood function that corrects bias in estimates of population parameters from data collected randomly but with size-specific selectivity. Positive distributions with time as a parameter were used for the parametrization of growth data. Numerical illustrations are provided. Alternative applications of the likelihood to estimate selectivity parameters are discussed.
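The correction can be illustrated with a toy example (the logistic selectivity curve and the normal size distribution with known spread are both invented for illustration, not taken from the paper). Since the observed density is f_obs(x) proportional to s(x) f(x | mu), the corrected log-likelihood subtracts the normalizing integral of s times f:

```python
import numpy as np

rng = np.random.default_rng(3)

def selectivity(x, x50=55.0, k=0.3):
    """Illustrative size-selectivity curve: probability of retention."""
    return 1.0 / (1.0 + np.exp(-k * (x - x50)))

# True population: sizes ~ N(50, 10); large individuals are over-sampled
pop = rng.normal(50.0, 10.0, size=100_000)
kept = pop[rng.uniform(size=pop.size) < selectivity(pop)]

sigma = 10.0                              # assumed known, for simplicity
u = np.linspace(0.0, 120.0, 2001)
du = u[1] - u[0]
grid_mu = np.linspace(40.0, 70.0, 301)
ll = []
for mu in grid_mu:
    f_u = np.exp(-0.5 * ((u - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
    norm = np.sum(selectivity(u) * f_u) * du   # integral of s(u) f(u|mu)
    ll.append(np.sum(-0.5 * ((kept - mu) / sigma) ** 2) - kept.size * np.log(norm))
mu_hat = grid_mu[int(np.argmax(ll))]

naive = kept.mean()    # biased upward: large individuals are kept more often
```

The naive sample mean is pulled well above the true value of 50 by the selectivity, while the corrected maximum-likelihood estimate recovers it.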
Saltaji, Humam; Armijo-Olivo, Susan; Cummings, Greta G.; Amin, Maryam; Flores-Mir, Carlos
2017-01-01
Objectives To examine the risks of bias, risks of random errors, reporting quality, and methodological quality of randomized clinical trials of oral health interventions and the development of these aspects over time. Methods We included 540 randomized clinical trials from 64 selected systematic reviews. We extracted, in duplicate, details from each of the selected randomized clinical trials with respect to publication and trial characteristics, reporting and methodologic characteristics, and Cochrane risk of bias domains. We analyzed data using logistic regression and Chi-square statistics. Results Sequence generation was assessed to be inadequate (at unclear or high risk of bias) in 68% (n = 367) of the trials, while allocation concealment was inadequate in the majority of trials (n = 464; 85.9%). Blinding of participants and blinding of the outcome assessment were judged to be inadequate in 28.5% (n = 154) and 40.5% (n = 219) of the trials, respectively. A sample size calculation before the initiation of the study was not performed/reported in 79.1% (n = 427) of the trials, while the sample size was assessed as adequate in only 17.6% (n = 95) of the trials. Two thirds of the trials were not described as double blinded (n = 358; 66.3%), while the method of blinding was appropriate in 53% (n = 286) of the trials. We identified a significant decrease over time (1955–2013) in the proportion of trials assessed as having inadequately addressed methodological quality items (P < 0.05) in 30 out of the 40 quality criteria, or as being inadequate (at high or unclear risk of bias) in five domains of the Cochrane risk of bias tool: sequence generation, allocation concealment, incomplete outcome data, other sources of bias, and overall risk of bias. 
Conclusions The risks of bias, risks of random errors, reporting quality, and methodological quality of randomized clinical trials of oral health interventions have improved over time; however, further efforts that contribute to the development of more stringent methodology and detailed reporting of trials are still needed. PMID:29272315
Saltaji, Humam; Armijo-Olivo, Susan; Cummings, Greta G; Amin, Maryam; Flores-Mir, Carlos
2017-01-01
To examine the risks of bias, risks of random errors, reporting quality, and methodological quality of randomized clinical trials of oral health interventions and the development of these aspects over time. We included 540 randomized clinical trials from 64 selected systematic reviews. We extracted, in duplicate, details from each of the selected randomized clinical trials with respect to publication and trial characteristics, reporting and methodologic characteristics, and Cochrane risk of bias domains. We analyzed data using logistic regression and Chi-square statistics. Sequence generation was assessed to be inadequate (at unclear or high risk of bias) in 68% (n = 367) of the trials, while allocation concealment was inadequate in the majority of trials (n = 464; 85.9%). Blinding of participants and blinding of the outcome assessment were judged to be inadequate in 28.5% (n = 154) and 40.5% (n = 219) of the trials, respectively. A sample size calculation before the initiation of the study was not performed/reported in 79.1% (n = 427) of the trials, while the sample size was assessed as adequate in only 17.6% (n = 95) of the trials. Two thirds of the trials were not described as double blinded (n = 358; 66.3%), while the method of blinding was appropriate in 53% (n = 286) of the trials. We identified a significant decrease over time (1955-2013) in the proportion of trials assessed as having inadequately addressed methodological quality items (P < 0.05) in 30 out of the 40 quality criteria, or as being inadequate (at high or unclear risk of bias) in five domains of the Cochrane risk of bias tool: sequence generation, allocation concealment, incomplete outcome data, other sources of bias, and overall risk of bias. 
Reducing seed dependent variability of non-uniformly sampled multidimensional NMR data
NASA Astrophysics Data System (ADS)
Mobli, Mehdi
2015-07-01
The application of NMR spectroscopy to study the structure, dynamics and function of macromolecules requires the acquisition of several multidimensional spectra. The one-dimensional NMR time-response from the spectrometer is extended to additional dimensions by introducing incremented delays in the experiment that cause oscillation of the signal along "indirect" dimensions. For a given dimension, the delay is incremented at twice the rate of the maximum frequency (the Nyquist rate). Achieving high resolution requires the acquisition of long data records sampled at the Nyquist rate. This is typically prohibitive due to time constraints, resulting in sub-optimal data records to the detriment of subsequent analyses. The multidimensional NMR spectrum itself is typically sparse, and it has been shown that in such cases it is possible to use non-Fourier methods to reconstruct a high-resolution multidimensional spectrum from a random subset of non-uniformly sampled (NUS) data. For a given acquisition time, NUS has the potential to improve the sensitivity and resolution of a multidimensional spectrum compared to traditional uniform sampling. The improvements in sensitivity and/or resolution achieved by NUS depend heavily on the distribution of points in the random subset acquired. Typically, random points are selected from a probability density function (PDF) weighted according to the NMR signal envelope. In extreme cases, as little as 1% of the data is subsampled. The heavy under-sampling can result in poor reproducibility, i.e. when two experiments are carried out where the same number of random samples is selected from the same PDF but using different random seeds. Here, a jittered sampling approach is introduced that is shown to improve random-seed-dependent reproducibility of multidimensional spectra generated from NUS data, compared with commonly applied NUS methods. 
It is shown that this is achieved due to the low variability of the inherent sensitivity of the random subset chosen from a given PDF. Finally, it is demonstrated that metrics used to find optimal NUS distributions are heavily dependent on the inherent sensitivity of the random subset, and such optimisation is therefore less critical when using the proposed sampling scheme.
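The contrast between conventional weighted NUS and a jittered scheme can be sketched in a few lines. This is an illustrative reconstruction, not the author's implementation: the exponential envelope, the decay rate, and the rule of stratifying the grid into bins of equal probability mass are all assumptions.

```python
import numpy as np

def envelope_pdf(n_grid, decay=2.0):
    """Sampling density matched to an exponentially decaying signal
    envelope (the decay rate is an illustrative assumption)."""
    pdf = np.exp(-decay * np.arange(n_grid) / n_grid)
    return pdf / pdf.sum()

def nus_random(n_grid, n_samples, seed=None):
    """Conventional NUS: draw a random subset of indirect-dimension
    increments from the envelope-weighted PDF."""
    rng = np.random.default_rng(seed)
    return np.sort(rng.choice(n_grid, size=n_samples, replace=False,
                              p=envelope_pdf(n_grid)))

def nus_jittered(n_grid, n_samples, seed=None):
    """Jittered NUS: cut the grid into n_samples strata of equal
    probability mass and draw one point per stratum, so schedules from
    different seeds differ only within strata, not in overall coverage."""
    rng = np.random.default_rng(seed)
    cdf = np.cumsum(envelope_pdf(n_grid))
    edges = np.searchsorted(cdf, np.linspace(0, 1, n_samples + 1)[1:-1])
    bounds = np.concatenate(([0], edges, [n_grid]))
    return np.array([int(rng.integers(lo, hi))
                     for lo, hi in zip(bounds[:-1], bounds[1:]) if hi > lo])
```

With 32 of 256 increments (12.5% sampling), repeated calls with different seeds always place one point per envelope-weighted stratum, which is the source of the reduced seed-to-seed variability.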
Fitzgerald, John S; Johnson, LuAnn; Tomkinson, Grant; Stein, Jesse; Roemmich, James N
2018-05-01
Mechanography during the vertical jump may enhance screening and help determine the mechanistic causes underlying changes in physical performance. The utility of jump mechanography for evaluation is limited by scant test-retest reliability data on force-time variables. This study examined the test-retest reliability of eight jump execution variables assessed from mechanography. Thirty-two women (mean±SD: age 20.8 ± 1.3 yr) and 16 men (age 22.1 ± 1.9 yr) attended a familiarization session and two testing sessions, all one week apart. Participants performed two variations of the squat jump, with squat depth self-selected or controlled using a goniometer to 80° knee flexion. Test-retest reliability was quantified as the systematic error (using effect size between jumps), random error (using coefficients of variation), and test-retest correlations (using intra-class correlation coefficients). Overall, jump execution variables demonstrated acceptable reliability, evidenced by small systematic errors (mean±95%CI: 0.2 ± 0.07), moderate random errors (mean±95%CI: 17.8 ± 3.7%), and very strong test-retest correlations (range: 0.73-0.97). Differences in random errors between controlled and self-selected protocols were negligible (mean±95%CI: 1.3 ± 2.3%). Jump execution variables demonstrated acceptable reliability, with no meaningful differences between the controlled and self-selected jump protocols. To simplify testing, a self-selected jump protocol can be used to assess force-time variables with negligible impact on measurement error.
ERIC Educational Resources Information Center
Pierce, Thomas B., Jr.; And Others
1990-01-01
A survey assessed time spent in the community and/or on unstructured activities by randomly selected individuals in Intermediate Care Facilities for the Mentally Retarded (ICF/MR) (N=20) or minigroup home settings (N=20). Individuals in ICF/MR homes spent more time in the community with staff and made fewer choices of unstructured activities.…
Real-time measurement of quality during the compaction of subgrade soils.
DOT National Transportation Integrated Search
2012-12-01
Conventional quality control of subgrade soils during their compaction is usually performed by monitoring moisture content and dry density at a few discrete locations. However, randomly selected points do not adequately represent the entire compacted...
An evaluation of flow-stratified sampling for estimating suspended sediment loads
Robert B. Thomas; Jack Lewis
1995-01-01
Abstract - Flow-stratified sampling is a new method for sampling water quality constituents such as suspended sediment to estimate loads. As with selection-at-list-time (SALT) and time-stratified sampling, flow-stratified sampling is a statistical method requiring random sampling, and yielding unbiased estimates of load and variance. It can be used to estimate event...
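The unbiasedness claimed for flow-stratified sampling comes from classical stratified sampling theory. A generic sketch (not the SALT or field procedure itself): partition the population by flow class, sample randomly within each stratum, and expand each stratum's sample mean by its size.

```python
import numpy as np

def stratified_total(strata, n_per_stratum, seed=0):
    """Estimate a population total from stratified random sampling:
    stratum h with N_h members contributes N_h times its sample mean,
    which is unbiased because sampling within each stratum is random."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for pop in strata:
        pop = np.asarray(pop, float)
        n = min(n_per_stratum, len(pop))
        total += len(pop) * rng.choice(pop, size=n, replace=False).mean()
    return total
```

When strata are internally homogeneous (the goal of stratifying on flow), the estimate is exact regardless of which units are drawn.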
Measuring Child-Staff Ratios in Child Care Centers: Balancing Effort and Representativeness
ERIC Educational Resources Information Center
Le, Vi-Nhuan; Perlman, Michal; Zellman, Gail L.; Hamilton, Laura S.
2006-01-01
Child-staff ratios are an important quality indicator. They are often collected by observing one randomly selected classroom several times during a 2-h period on a single day. However, it is unclear whether these measures represent the ratios that children actually experience during most of their time in care. This study compared ratio data…
Acceptability of Adaptations for Struggling Writers: A National Survey with Primary-Grade Teachers
ERIC Educational Resources Information Center
Graham, Steve; Harris, Karen R.; Bartlett, Brendan J.; Popadopoulou, Eleni; Santoro, Julia
2016-01-01
One hundred twenty-five primary-grade teachers randomly selected from across the United States indicated how frequently they made 20 instructional adaptations for the struggling writers in their classroom. The measure of frequency ranged from never, several times a year, monthly, weekly, several times a week, and daily. Using a 6-point Likert-type…
NASA Astrophysics Data System (ADS)
Zou, Guang'an; Wang, Qiang; Mu, Mu
2016-09-01
Sensitive areas for prediction of the Kuroshio large meander using a 1.5-layer, shallow-water ocean model were investigated using the conditional nonlinear optimal perturbation (CNOP) and first singular vector (FSV) methods. A series of sensitivity experiments were designed to test the sensitivity of sensitive areas within the numerical model. The following results were obtained: (1) the effect of initial CNOP and FSV patterns in their sensitive areas is greater than that of the same patterns in randomly selected areas, with the effect of the initial CNOP patterns in CNOP sensitive areas being the greatest; (2) both CNOP- and FSV-type initial errors grow more quickly than random errors; (3) the effect of random errors superimposed on the sensitive areas is greater than that of random errors introduced into randomly selected areas, and initial errors in the CNOP sensitive areas have greater effects on final forecasts. These results reveal that the sensitive areas determined using the CNOP are more sensitive than those of FSV and other randomly selected areas. In addition, ideal hindcasting experiments were conducted to examine the validity of the sensitive areas. The results indicate that reduction (or elimination) of CNOP-type errors in CNOP sensitive areas at the initial time has a greater forecast benefit than the reduction (or elimination) of FSV-type errors in FSV sensitive areas. These results suggest that the CNOP method is suitable for determining sensitive areas in the prediction of the Kuroshio large-meander path.
Treatment selection in a randomized clinical trial via covariate-specific treatment effect curves.
Ma, Yunbei; Zhou, Xiao-Hua
2017-02-01
For time-to-event data in a randomized clinical trial, we proposed two new methods for selecting an optimal treatment for a patient based on the covariate-specific treatment effect curve, which is used to represent the clinical utility of a predictive biomarker. To select an optimal treatment for a patient with a specific biomarker value, we proposed pointwise confidence intervals for each covariate-specific treatment effect curve and the difference between covariate-specific treatment effect curves of two treatments. Furthermore, to select an optimal treatment for a future biomarker-defined subpopulation of patients, we proposed confidence bands for each covariate-specific treatment effect curve and the difference between each pair of covariate-specific treatment effect curves over a fixed interval of biomarker values. We constructed the confidence bands based on a resampling technique. We also conducted simulation studies to evaluate finite-sample properties of the proposed estimation methods. Finally, we illustrated the application of the proposed method in a real-world data set.
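The resampling construction can be illustrated at a single biomarker value: bootstrap the between-arm difference and take percentile limits. This is a toy sketch, not the authors' estimator (which smooths over the covariate and builds simultaneous bands for time-to-event outcomes).

```python
import numpy as np

def arm_difference_ci(treated, control, n_boot=2000, alpha=0.05, seed=0):
    """Percentile-bootstrap pointwise CI for the treatment-minus-control
    difference in mean outcome within one biomarker-defined subgroup."""
    rng = np.random.default_rng(seed)
    t = np.asarray(treated, float)
    c = np.asarray(control, float)
    reps = [rng.choice(t, t.size).mean() - rng.choice(c, c.size).mean()
            for _ in range(n_boot)]
    return np.quantile(reps, [alpha / 2, 1 - alpha / 2])
```

Repeating this over a grid of biomarker values gives pointwise intervals; simultaneous bands additionally need a multiplicity-adjusted critical value from the resampled curves.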
Record statistics of financial time series and geometric random walks
NASA Astrophysics Data System (ADS)
Sabir, Behlool; Santhanam, M. S.
2014-09-01
The study of record statistics of correlated series in physics, such as random walks, is gaining momentum, and several analytical results have been obtained in the past few years. In this work, we study the record statistics of correlated empirical data for which random walk models have relevance. We obtain results for the record statistics of select stock market data and the geometric random walk, primarily through simulations. We show that the distribution of the age of records is a power law with the exponent α lying in the range 1.5≤α≤1.8. Further, the longest record ages follow the Fréchet distribution of extreme value theory. The record statistics of the geometric random walk series are in good agreement with those obtained from empirical stock data.
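Record ages for any series can be extracted with a short routine and applied to a simulated geometric random walk. The drift and volatility below are arbitrary illustrative values, not parameters fitted to the stock data in the paper.

```python
import numpy as np

def record_ages(series):
    """Ages of upper records: how many steps each running maximum
    survives before it is broken (the last record's age runs to the
    end of the series), so the ages always sum to the series length."""
    series = np.asarray(series, float)
    record_times = [0]                      # index 0 is always a record
    for i in range(1, len(series)):
        if series[i] > series[record_times[-1]]:
            record_times.append(i)
    record_times.append(len(series))
    return np.diff(record_times)

# geometric random walk: i.i.d. multiplicative log-normal steps
rng = np.random.default_rng(1)
walk = 100.0 * np.cumprod(np.exp(0.001 + 0.02 * rng.standard_normal(5000)))
ages = record_ages(walk)
```

A histogram of `ages` on log-log axes is the quantity whose power-law tail the paper characterizes.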
Evaluation of Two Compressed Air Foam Systems for Culling Caged Layer Hens.
Benson, Eric R; Weiher, Jaclyn A; Alphin, Robert L; Farnell, Morgan; Hougentogler, Daniel P
2018-04-24
Outbreaks of avian influenza (AI) and other highly contagious poultry diseases continue to be a concern for those involved in the poultry industry. In the event of an outbreak, emergency depopulation of the birds involved is necessary. In this project, two compressed air foam systems (CAFS) were evaluated for mass emergency depopulation of layer hens in a manure-belt-equipped cage system. In both experiments, a randomized block design was used, with multiple commercial layer hens treated with one of three randomly selected depopulation methods: CAFS, CAFS with CO₂ gas, and CO₂ gas. In Experiment 1, a Rowe-manufactured CAFS was used, a selection of birds were instrumented, and the times to unconsciousness, brain death, altered terminal cardiac activity, and motion cessation were recorded. CAFS with and without CO₂ produced faster unconsciousness; however, differences in the other parameters were not statistically significant. In Experiment 2, a custom Hale-based CAFS was used to evaluate the impact of bird age, a selection of birds were instrumented, and the time to motion cessation was recorded. The difference in time to cessation of movement between pullets and spent hens using CAFS was not statistically significant. Both CAFS configurations depopulated caged layers; however, there was no benefit to including CO₂.
Au-Yeung, Stephanie S Y; Wang, Juliana; Chen, Ye; Chua, Eldrich
2014-12-01
The aim of this study was to determine whether transcranial direct current stimulation (tDCS) applied to the primary motor hand area modulates hand dexterity and selective attention after stroke. This study was a double-blind, placebo-controlled, randomized crossover trial involving subjects with chronic stroke. Ten stroke survivors with some pinch strength in the paretic hand received three different tDCS interventions assigned in random order in separate sessions-anodal tDCS targeting the primary motor area of the lesioned hemisphere (M1lesioned), cathodal tDCS applied to the contralateral hemisphere (M1nonlesioned), and sham tDCS-each for 20 mins. The primary outcome measures were Purdue pegboard test scores for hand dexterity and response time in the color-word Stroop test for selective attention. Pinch strength of the paretic hand was the secondary outcome. Cathodal tDCS to M1nonlesioned significantly improved affected hand dexterity (by 1.1 points on the Purdue pegboard unimanual test, P = 0.014) and selective attention (0.6 secs faster response time on the level 3 Stroop interference test for response inhibition, P = 0.017), but not pinch strength. The outcomes were not improved with anodal tDCS to M1lesioned or sham tDCS. Twenty minutes of cathodal tDCS to M1nonlesioned can promote both paretic hand dexterity and selective attention in people with chronic stroke.
Patching, Geoffrey R.; Rahm, Johan; Jansson, Märit; Johansson, Maria
2017-01-01
Accurate assessment of people’s preferences for different outdoor lighting applications is increasingly considered important in the development of new urban environments. Here a new method of random environmental walking is proposed to complement current methods of assessing urban lighting applications, such as self-report questionnaires. The procedure involves participants repeatedly walking between different lighting applications by random selection of a lighting application and preferred choice or by random selection of a lighting application alone. In this manner, participants are exposed to all lighting applications of interest more than once and participants’ preferences for the different lighting applications are reflected in the number of times they walk to each lighting application. On the basis of an initial simulation study, to explore the feasibility of this approach, a comprehensive field test was undertaken. The field test included random environmental walking and collection of participants’ subjective ratings of perceived pleasantness (PP), perceived quality, perceived strength, and perceived flicker of four lighting applications. The results indicate that random environmental walking can reveal participants’ preferences for different lighting applications that, in the present study, conformed to participants’ ratings of PP and perceived quality of the lighting applications. As a complement to subjectively stated environmental preferences, random environmental walking has the potential to expose behavioral preferences for different lighting applications. PMID:28337163
Palmer, Jaclyn Bradley; Lane, Deforia; Mayo, Diane; Schluchter, Mark; Leeming, Rosemary
2015-10-01
To investigate the effect of live and recorded perioperative music therapy on anesthesia requirements, anxiety levels, recovery time, and patient satisfaction in women experiencing surgery for diagnosis or treatment of breast cancer. Between 2012 and 2014, 207 female patients undergoing surgery for potential or known breast cancer were randomly assigned to receive either patient-selected live music (LM) preoperatively with therapist-selected recorded music intraoperatively (n=69), patient-selected recorded music (RM) preoperatively with therapist-selected recorded music intraoperatively (n=70), or usual care (UC) preoperatively with noise-blocking earmuffs intraoperatively (n=68). The LM and the RM groups did not differ significantly from the UC group in the amount of propofol required to reach moderate sedation. Compared with the UC group, both the LM and the RM groups had greater reductions (P<.001) in anxiety scores preoperatively (mean change [SD]: -30.9 [36.3], -26.8 [29.3], and 0.0 [22.7], respectively). The LM and RM groups did not differ from the UC group with respect to recovery time; however, the LM group had a shorter recovery time compared with the RM group (a difference of 12.4 minutes; 95% CI, 2.2 to 22.5; P=.018). Satisfaction scores for the LM and RM groups did not differ from those of the UC group. Including music therapy as a complementary modality with cancer surgery may help manage preoperative anxiety in a way that is safe, effective, time-efficient, and enjoyable. © 2015 by American Society of Clinical Oncology.
Lansberg, Maarten G; Bhat, Ninad S; Yeatts, Sharon D; Palesch, Yuko Y; Broderick, Joseph P; Albers, Gregory W; Lai, Tze L; Lavori, Philip W
2016-12-01
Adaptive trial designs that allow enrichment of the study population through subgroup selection can increase the chance of a positive trial when there is a differential treatment effect among patient subgroups. The goal of this study is to illustrate the potential benefit of adaptive subgroup selection in endovascular stroke studies. We simulated the performance of a trial design with adaptive subgroup selection and compared it with that of a traditional design. Outcome data were based on 90-day modified Rankin Scale scores, observed in IMS III (Interventional Management of Stroke III), among patients with a vessel occlusion on baseline computed tomographic angiography (n=382). Patients were categorized based on 2 methods: (1) according to location of the arterial occlusive lesion and onset-to-randomization time and (2) according to onset-to-randomization time alone. The power to demonstrate a treatment benefit was based on 10 000 trial simulations for each design. The treatment effect was relatively homogeneous across categories when patients were categorized based on arterial occlusive lesion and time. Consequently, the adaptive design had similar power (47%) compared with the fixed trial design (45%). There was a differential treatment effect when patients were categorized based on time alone, resulting in greater power with the adaptive design (82%) than with the fixed design (57%). These simulations, based on real-world patient data, indicate that adaptive subgroup selection has merit in endovascular stroke trials as it substantially increases power when the treatment effect differs among subgroups in a predicted pattern. © 2016 American Heart Association, Inc.
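The benefit of enrichment can be reproduced with a toy two-subgroup simulation. This is an illustrative sketch, not the IMS III analysis: normal outcomes replace modified Rankin Scale scores, the effect sizes and sample sizes are invented, and the final z-test naively pools stages without any multiplicity adjustment.

```python
import numpy as np

def simulate_power(effects=(0.0, 0.5), n_arm=100, adaptive=True,
                   n_sims=500, seed=0):
    """Toy two-subgroup trial.  Outcomes are N(effect, 1) under
    treatment and N(0, 1) under control.  Fixed design: enroll both
    subgroups equally for the whole trial.  Adaptive design: spend half
    the sample across both subgroups, keep the subgroup with the larger
    interim effect, and enroll the second half only there (the dropped
    subgroup's data are discarded).  Final test: two-sided z at 5%."""
    rng = np.random.default_rng(seed)
    wins = 0
    for _ in range(n_sims):
        if adaptive:
            n1 = n_arm // 4                 # stage 1: per subgroup, per arm
            t1 = [rng.normal(e, 1, n1) for e in effects]
            c1 = [rng.normal(0, 1, n1) for _ in effects]
            best = int(np.argmax([t.mean() - c.mean()
                                  for t, c in zip(t1, c1)]))
            n2 = n_arm // 2                 # stage 2: selected subgroup only
            trt = np.concatenate([t1[best], rng.normal(effects[best], 1, n2)])
            ctl = np.concatenate([c1[best], rng.normal(0, 1, n2)])
        else:
            n = n_arm // 2                  # per subgroup, per arm
            trt = np.concatenate([rng.normal(e, 1, n) for e in effects])
            ctl = rng.normal(0, 1, 2 * n)
        z = (trt.mean() - ctl.mean()) / np.sqrt(1 / trt.size + 1 / ctl.size)
        wins += abs(z) > 1.96
    return wins / n_sims
```

With a differential effect (here 0.5 vs 0.0), the adaptive design concentrates the remaining sample where the treatment works and its power clearly exceeds the fixed design's, mirroring the 82% vs 57% contrast reported above.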
Hamdan, Sadeque; Cheaitou, Ali
2017-08-01
This data article provides detailed optimization input and output datasets and the optimization code for the published research work titled "Dynamic green supplier selection and order allocation with quantity discounts and varying supplier availability" (Hamdan and Cheaitou, 2017, In press) [1]. Researchers may use these datasets as a baseline for future comparison and extensive analysis of the green supplier selection and order allocation problem with all-unit quantity discount and varying number of suppliers. More particularly, the datasets presented in this article allow researchers to generate the exact optimization outputs obtained by the authors of Hamdan and Cheaitou (2017, In press) [1] using the provided optimization code and then to use them for comparison with the outputs of other techniques or methodologies, such as heuristic approaches. Moreover, this article includes the randomly generated optimization input data and the related outputs that are used as input data for the statistical analysis presented in Hamdan and Cheaitou (2017, In press) [1], in which two different approaches for ranking potential suppliers are compared. This article also provides the time analysis data used in Hamdan and Cheaitou (2017, In press) [1] to study the effect of the problem size on the computation time, as well as an additional time analysis dataset. The input data for the time study are generated randomly, in which the problem size is changed, and then are used by the optimization problem to obtain the corresponding optimal outputs as well as the corresponding computation time.
This report is a description of field work and data analysis results comparing a design comparable to systematic site selection with one based on random selection of sites. The report is expected to validate the use of random site selection in the bioassessment program for the O...
A Graph Theory Practice on Transformed Image: A Random Image Steganography
Thanikaiselvan, V.; Arulmozhivarman, P.; Subashanthini, S.; Amirtharajan, Rengarajan
2013-01-01
The modern information age is enriched with advanced network communication, but at the same time it encounters endless security issues when dealing with secret and/or private information. The storage and transmission of secret information have become highly essential and have led to a deluge of research in this field. In this paper, an optimistic effort has been made to combine a graceful graph with the integer wavelet transform (IWT) to implement random image steganography for secure communication. The implementation begins with the conversion of the cover image into wavelet coefficients through the IWT, followed by embedding the secret image in randomly selected coefficients through graph theory. Finally, the stego-image is obtained by applying the inverse IWT. This method provides a maximum peak signal-to-noise ratio (PSNR) of 44 dB for 266646 bits. Thus, the proposed method gives high imperceptibility through a high PSNR value and high embedding capacity in the cover image due to the adaptive embedding scheme, and high robustness against blind attack through graph-theoretic random selection of coefficients. PMID:24453857
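Two building blocks of such a scheme, the imperceptibility metric and reproducible random selection of embedding positions, can be sketched generically. The graph-theoretic ordering and the IWT embedding themselves are not reproduced here; the key-seeded selection below is a standard stand-in for how sender and receiver agree on "random" positions.

```python
import numpy as np

def psnr(cover, stego, peak=255.0):
    """Peak signal-to-noise ratio (dB) between cover and stego images;
    higher PSNR means better imperceptibility."""
    diff = np.asarray(cover, float) - np.asarray(stego, float)
    mse = np.mean(diff ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def select_positions(shape, n_bits, key=42):
    """Key-seeded random selection of embedding positions: sender and
    receiver regenerate identical positions from the shared key, so no
    position map ever needs to be transmitted."""
    rng = np.random.default_rng(key)
    flat = rng.choice(shape[0] * shape[1], size=n_bits, replace=False)
    return np.unravel_index(flat, shape)
```

Sampling without replacement guarantees every selected coefficient is distinct, so no embedded bit overwrites another.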
Bronikowski, Michal; Bronikowska, Malgorzata
2011-11-01
In this paper we evaluate the sustainability of changes in involvement in physical activity. The paper examines the effectiveness of a model aimed at influencing the frequency of leisure-time physical activity, physical fitness, and body composition in youth. The baseline of this study was a randomly selected sample of 13-year-olds who participated in an intervention programme carried out in three schools in Poznan in 2005-08. From a total of 199 adolescent boys, a subsample of 38 individuals from the experimental group and 34 from the control group were followed for 15 months after the intervention programme finished. From 170 girls, a subsample of 33 from the experimental group and 32 from the control group were also randomly selected for the follow-up study. Among the variables monitored were physical fitness, body composition, and frequency of leisure-time physical activity. All the variables were monitored in pre-test, post-test, and follow-up examinations. It was established that 15 months after the end of the intervention programme, boys and girls from the intervention groups maintained a higher level of leisure-time physical activity than their control group peers, and similarly in the case of selected health-related components of physical fitness. No distinctive differences were found in body composition, though, apart from muscle mass and the sum of skinfolds in girls. The study revealed an increase in leisure-time physical activity over time and a positive influence on selected components of health-related variables. The findings confirm the effectiveness of a multi-level intervention programme involving self-determined out-of-school physical activity planning for school-age youths, indicating the importance of personal and social context.
Randomizing Roaches: Exploring the "Bugs" of Randomization in Experimental Design
ERIC Educational Resources Information Center
Wagler, Amy; Wagler, Ron
2014-01-01
Understanding the roles of random selection and random assignment in experimental design is a central learning objective in most introductory statistics courses. This article describes an activity, appropriate for a high school or introductory statistics course, designed to teach the concepts, values and pitfalls of random selection and assignment…
Agent Reward Shaping for Alleviating Traffic Congestion
NASA Technical Reports Server (NTRS)
Tumer, Kagan; Agogino, Adrian
2006-01-01
Traffic congestion problems provide a unique environment to study how multi-agent systems promote desired system-level behavior. What is particularly interesting in this class of problems is that no individual action is intrinsically "bad" for the system, but combinations of actions among agents lead to undesirable outcomes. As a consequence, agents need to learn how to coordinate their actions with those of other agents, rather than learn a particular set of "good" actions. This problem is ubiquitous in various traffic problems, including selecting departure times for commuters, routes for airlines, and paths for data routers. In this paper we present a multi-agent approach to two traffic problems, where for each driver, an agent selects the most suitable action using reinforcement learning. The agent rewards are based on concepts from collectives and aim to provide the agents with rewards that are both easy to learn from and that, if learned, lead to good system-level behavior. In the first problem, we study how agents learn the best departure times of drivers in a daily commuting environment and how following those departure times alleviates congestion. In the second problem, we study how agents learn to select desirable routes to improve traffic flow and minimize delays for all drivers. In both sets of experiments, agents using collective-based rewards produced near-optimal performance (93-96% of optimal), whereas agents using system rewards (63-68%) barely outperformed random action selection (62-64%), and agents using local rewards (48-72%) performed worse than random in some instances.
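The collective-based reward idea can be sketched with a toy congestion utility: each agent is scored by the change in global utility attributable to its own presence (a "difference" reward). The utility curve below is an assumed stand-in, not the paper's exact formulation.

```python
import numpy as np

def global_reward(slot_counts, capacity):
    """Toy system utility: each departure slot earns 1 per driver up to
    its capacity, then loses 0.3 per driver beyond it (congestion)."""
    c = np.asarray(slot_counts, float)
    return float(np.sum(np.minimum(c, capacity)
                        - 0.3 * np.maximum(c - capacity, 0.0)))

def difference_reward(slot_counts, agent_slot, capacity):
    """Difference reward D_i = G(z) - G(z without agent i): the agent's
    marginal contribution to the system, which is a far less noisy
    learning signal than the raw global reward G."""
    without = np.asarray(slot_counts, float).copy()
    without[agent_slot] -= 1
    return global_reward(slot_counts, capacity) - global_reward(without, capacity)
```

An agent in an over-capacity slot receives a negative difference reward, while one in a free slot receives +1, so greedy learners are steered away from congested choices even though no action is "bad" in isolation.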
A comparison of drug use in driver fatalities and similarly exposed drivers
DOT National Transportation Integrated Search
1977-07-01
Author's abstract: Crash information, urine, blood and bile samples from 900 fatally injured drivers were collected by medical examiners in 22 areas of the country. Randomly selected living drivers were interviewed at times and places of recent fatal...
ERIC Educational Resources Information Center
Lapsley, Daniel K.; Daytner, Katrina M.; Kelly, Ken; Maxwell, Scott E.
This large-scale evaluation of Indiana's Prime Time, a funding mechanism designed to reduce class size or pupil-teacher ratio (PTR) in grades K-3, examined the academic performance of nearly 11,000 randomly selected third graders on the state-mandated standardized achievement test as a function of class size, PTR, and presence of an instructional…
The Accuracy of Estimated Total Test Statistics. Final Report.
ERIC Educational Resources Information Center
Kleinke, David J.
In a post-mortem study of item sampling, 1,050 examinees were divided into ten groups 50 times. Each time, their papers were scored on four different sets of item samples from a 150-item test of academic aptitude. These samples were selected using (a) unstratified random sampling and stratification on (b) content, (c) difficulty, and (d) both.…
Optimal design of aperiodic, vertical silicon nanowire structures for photovoltaics.
Lin, Chenxi; Povinelli, Michelle L
2011-09-12
We design a partially aperiodic, vertically aligned silicon nanowire array that maximizes photovoltaic absorption. The optimal structure is obtained using a random walk algorithm with an electromagnetic forward solver based on the transfer matrix method. The optimal, aperiodic structure exhibits a 2.35-fold enhancement in ultimate efficiency compared to its periodic counterpart. The spectral behavior mimics that of a periodic array with a larger lattice constant. For our system, we find that randomly selected, aperiodic structures invariably outperform the periodic array.
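The optimization loop itself is a simple accept-if-better random walk. In this sketch the expensive transfer-matrix electromagnetic solver is replaced by a cheap analytic stand-in objective, so only the search strategy is illustrated, not the photovoltaic physics.

```python
import numpy as np

def random_walk_maximize(objective, x0, step=0.1, n_iters=500, seed=0):
    """Accept-if-better random walk: perturb the current design vector
    with Gaussian noise and keep the move only when the objective
    improves."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, float)
    best = objective(x)
    for _ in range(n_iters):
        cand = x + rng.normal(0.0, step, size=x.shape)
        val = objective(cand)
        if val > best:
            x, best = cand, val
    return x, best

# stand-in "ultimate efficiency" landscape with its optimum at (1, 2)
landscape = lambda p: -((p[0] - 1.0) ** 2 + (p[1] - 2.0) ** 2)
x_opt, f_opt = random_walk_maximize(landscape, np.zeros(2))
```

Because only improving moves are accepted, the objective value is monotonically non-decreasing across iterations; real design problems usually warm-start from the periodic structure.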
Wave propagation modeling in composites reinforced by randomly oriented fibers
NASA Astrophysics Data System (ADS)
Kudela, Pawel; Radzienski, Maciej; Ostachowicz, Wieslaw
2018-02-01
A new method for prediction of elastic constants in randomly oriented fiber composites is proposed. It is based on mechanics of composites, the rule of mixtures and total mass balance tailored to the spectral element mesh composed of 3D brick elements. Selected elastic properties predicted by the proposed method are compared with values obtained by another theoretical method. The proposed method is applied for simulation of Lamb waves in glass-epoxy composite plate reinforced by randomly oriented fibers. Full wavefield measurements conducted by the scanning laser Doppler vibrometer are in good agreement with simulations performed by using the time domain spectral element method.
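The rule of mixtures the method builds on is the standard Voigt estimate; a one-line sketch follows (the paper's tailoring to the spectral element mesh and total mass balance is not reproduced, and the example moduli are typical handbook values, not the paper's data).

```python
def rule_of_mixtures(e_fiber, e_matrix, v_fiber):
    """Voigt rule of mixtures: upper-bound longitudinal modulus of a
    fiber composite; randomly oriented mats are then scaled down from
    this value by orientation-efficiency factors."""
    return v_fiber * e_fiber + (1.0 - v_fiber) * e_matrix

# e.g. glass fibers (~70 GPa) in epoxy (~3 GPa) at 50% volume fraction
e_long = rule_of_mixtures(70e9, 3e9, 0.5)  # 36.5 GPa
```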
Application of random effects to the study of resource selection by animals
Gillies, C.S.; Hebblewhite, M.; Nielsen, S.E.; Krawchuk, M.A.; Aldridge, Cameron L.; Frair, J.L.; Saher, D.J.; Stevens, C.E.; Jerde, C.L.
2006-01-01
1. Resource selection estimated by logistic regression is used increasingly in studies to identify critical resources for animal populations and to predict species occurrence.2. Most frequently, individual animals are monitored and pooled to estimate population-level effects without regard to group or individual-level variation. Pooling assumes that both observations and their errors are independent, and resource selection is constant given individual variation in resource availability.3. Although researchers have identified ways to minimize autocorrelation, variation between individuals caused by differences in selection or available resources, including functional responses in resource selection, have not been well addressed.4. Here we review random-effects models and their application to resource selection modelling to overcome these common limitations. We present a simple case study of an analysis of resource selection by grizzly bears in the foothills of the Canadian Rocky Mountains with and without random effects.5. Both categorical and continuous variables in the grizzly bear model differed in interpretation, both in statistical significance and coefficient sign, depending on how a random effect was included. We used a simulation approach to clarify the application of random effects under three common situations for telemetry studies: (a) discrepancies in sample sizes among individuals; (b) differences among individuals in selection where availability is constant; and (c) differences in availability with and without a functional response in resource selection.6. We found that random intercepts accounted for unbalanced sample designs, and models with random intercepts and coefficients improved model fit given the variation in selection among individuals and functional responses in selection. 
Our empirical example and simulations demonstrate how including random effects in resource selection models can aid interpretation and address difficult assumptions limiting their generality. This approach will allow researchers to appropriately estimate marginal (population) and conditional (individual) responses, and account for complex grouping, unbalanced sample designs and autocorrelation.
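Point (a) above, unbalanced sample sizes among individuals, can be illustrated with a toy simulation: naive pooling lets heavily monitored animals dominate the population estimate, while averaging per-animal estimates (the effect a random intercept approximates) weights each individual once. Illustrative only; this is not the grizzly bear analysis, and all numbers below are invented.

```python
import numpy as np

rng = np.random.default_rng(7)

# each animal has its own true selection probability for a habitat
# type (the random effect), and telemetry effort is very unbalanced
true_mean = 0.6
animal_p = rng.normal(true_mean, 0.15, size=10).clip(0.05, 0.95)
n_locs = np.array([500, 400, 300, 200, 100, 20, 20, 10, 10, 5])
used = [rng.binomial(1, p, n) for p, n in zip(animal_p, n_locs)]

# naive pooling: every telemetry fix weighted equally, so the few
# heavily monitored animals dominate the "population" estimate
pooled = np.concatenate(used).mean()

# two-stage flavour of a random intercept: estimate each animal
# first, then average animals, so each individual counts once
per_animal = np.array([u.mean() for u in used])
population = per_animal.mean()
```

A full mixed-effects logistic model does this weighting adaptively (shrinking noisy individuals toward the population mean) rather than with the crude equal-weight average shown here.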
Application of random effects to the study of resource selection by animals.
Gillies, Cameron S; Hebblewhite, Mark; Nielsen, Scott E; Krawchuk, Meg A; Aldridge, Cameron L; Frair, Jacqueline L; Saher, D Joanne; Stevens, Cameron E; Jerde, Christopher L
2006-07-01
1. Resource selection estimated by logistic regression is used increasingly in studies to identify critical resources for animal populations and to predict species occurrence. 2. Most frequently, individual animals are monitored and pooled to estimate population-level effects without regard to group or individual-level variation. Pooling assumes that both observations and their errors are independent, and resource selection is constant given individual variation in resource availability. 3. Although researchers have identified ways to minimize autocorrelation, variation between individuals caused by differences in selection or available resources, including functional responses in resource selection, have not been well addressed. 4. Here we review random-effects models and their application to resource selection modelling to overcome these common limitations. We present a simple case study of an analysis of resource selection by grizzly bears in the foothills of the Canadian Rocky Mountains with and without random effects. 5. Both categorical and continuous variables in the grizzly bear model differed in interpretation, both in statistical significance and coefficient sign, depending on how a random effect was included. We used a simulation approach to clarify the application of random effects under three common situations for telemetry studies: (a) discrepancies in sample sizes among individuals; (b) differences among individuals in selection where availability is constant; and (c) differences in availability with and without a functional response in resource selection. 6. We found that random intercepts accounted for unbalanced sample designs, and models with random intercepts and coefficients improved model fit given the variation in selection among individuals and functional responses in selection. 
Our empirical example and simulations demonstrate how including random effects in resource selection models can aid interpretation and address difficult assumptions limiting their generality. This approach will allow researchers to appropriately estimate marginal (population) and conditional (individual) responses, and account for complex grouping, unbalanced sample designs and autocorrelation.
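The random-intercept intuition above can be illustrated with a minimal sketch (not the authors' code; the function name and numbers are invented for illustration). With unbalanced telemetry data, a pooled estimate is dominated by heavily-sampled animals, whereas averaging within individuals first, the simplest analogue of fitting a per-animal intercept, weights animals equally:

```python
def pooled_vs_individual(selection_by_animal):
    """Compare a pooled estimate of mean selection strength with an
    estimate that first averages within each animal (the intuition
    behind adding a random intercept per individual)."""
    # Pooled: every relocation counts equally, so heavily-sampled
    # animals dominate the estimate.
    all_obs = [x for obs in selection_by_animal for x in obs]
    pooled = sum(all_obs) / len(all_obs)
    # Two-stage: average within animals first, then across animals.
    per_animal = [sum(obs) / len(obs) for obs in selection_by_animal]
    individual = sum(per_animal) / len(per_animal)
    return pooled, individual
```

With one animal contributing 1000 relocations of selection strength 2.0 and nine animals contributing 10 relocations of 0.0, the pooled estimate is pulled toward the over-sampled animal while the two-stage estimate stays at the population mean of 0.2.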
Minimizing Statistical Bias with Queries.
1995-09-14
method for optimally selecting these points would offer enormous savings in time and money. An active learning system will typically attempt to select data...research in active learning assumes that the second term of Equation 2 is approximately zero, that is, that the learner is unbiased. If this is the case...outperforms the variance-minimizing algorithm and random exploration. and effective strategy for active learning. I have given empirical evidence that, with
Rib fractures in trauma patients: does operative fixation improve outcome?
Majak, Peter; Næss, Pål A
2016-12-01
Renewed interest in surgical fixation of rib fractures has emerged. However, conservative treatment is still preferred at most surgical departments. We wanted to evaluate whether operative treatment of rib fractures may benefit severely injured patients. Several studies report a reduction in mechanical ventilation time, ICU length of stay (LOS), hospital LOS, pneumonia, need for tracheostomy, pain and costs in operatively treated patients with multiple rib fractures compared with patients treated nonoperatively. Although patient selection and timing of the operation seem crucial for successful outcome, no consensus exists. Mortality reduction has only been shown in a few studies. Most studies are retrospective cohort and case-control studies. Only four randomized control trials exist. Conservative treatment, consisting of respiratory assistance and pain control, is still the treatment of choice in the vast majority of patients with multiple rib fractures. In selected patients, operative fixation of fractured ribs within 72 h postinjury may lead to better outcome. More randomized control trials are needed to further determine who benefits from surgical fixation of rib fractures.
Neustifter, Benjamin; Rathbun, Stephen L; Shiffman, Saul
2012-01-01
Ecological Momentary Assessment is an emerging method of data collection in behavioral research that may be used to capture the times of repeated behavioral events on electronic devices, and information on subjects' psychological states through the electronic administration of questionnaires at times selected from a probability-based design as well as the event times. A method for fitting a mixed Poisson point process model is proposed for the impact of partially-observed, time-varying covariates on the timing of repeated behavioral events. A random frailty is included in the point-process intensity to describe variation among subjects in baseline rates of event occurrence. Covariate coefficients are estimated using estimating equations constructed by replacing the integrated intensity in the Poisson score equations with a design-unbiased estimator. An estimator is also proposed for the variance of the random frailties. Our estimators are robust in the sense that no model assumptions are made regarding the distribution of the time-varying covariates or the distribution of the random effects. However, subject effects are estimated under gamma frailties using an approximate hierarchical likelihood. The proposed approach is illustrated using smoking data.
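The frailty-variance idea can be sketched with a moment estimator (an illustrative simplification, not the paper's estimating-equation approach): with gamma frailties of mean 1 and variance theta, event counts are overdispersed Poisson with Var = mu + theta * mu^2, so theta can be read off the excess of the sample variance over the mean.

```python
def frailty_variance_estimate(counts):
    """Moment-based sketch of frailty-variance estimation: recover
    theta from the overdispersion of per-subject event counts,
    truncating at zero when the data are under-dispersed."""
    n = len(counts)
    mean = sum(counts) / n
    var = sum((c - mean) ** 2 for c in counts) / (n - 1)
    return max(0.0, (var - mean) / mean ** 2)
```

When the sample variance does not exceed the mean (no evidence of between-subject heterogeneity), the estimate is clamped at zero.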
Greedy Gossip With Eavesdropping
NASA Astrophysics Data System (ADS)
Ustebay, Deniz; Oreshkin, Boris N.; Coates, Mark J.; Rabbat, Michael G.
2010-07-01
This paper presents greedy gossip with eavesdropping (GGE), a novel randomized gossip algorithm for distributed computation of the average consensus problem. In gossip algorithms, nodes in the network randomly communicate with their neighbors and exchange information iteratively. The algorithms are simple and decentralized, making them attractive for wireless network applications. In general, gossip algorithms are robust to unreliable wireless conditions and time varying network topologies. In this paper we introduce GGE and demonstrate that greedy updates lead to rapid convergence. We do not require nodes to have any location information. Instead, greedy updates are made possible by exploiting the broadcast nature of wireless communications. During the operation of GGE, when a node decides to gossip, instead of choosing one of its neighbors at random, it makes a greedy selection, choosing the node which has the value most different from its own. In order to make this selection, nodes need to know their neighbors' values. Therefore, we assume that all transmissions are wireless broadcasts and nodes keep track of their neighbors' values by eavesdropping on their communications. We show that the convergence of GGE is guaranteed for connected network topologies. We also study the rates of convergence and illustrate, through theoretical bounds and numerical simulations, that GGE consistently outperforms randomized gossip and performs comparably to geographic gossip on moderate-sized random geometric graph topologies.
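The GGE update rule described above is simple enough to sketch directly (a toy illustration, not the authors' implementation; eavesdropping is modeled by letting every node read its neighbors' current values):

```python
import random

def greedy_gossip(values, adjacency, steps, seed=1):
    """Greedy gossip with eavesdropping (GGE) sketch: at each step a
    randomly chosen node gossips with the neighbor whose value differs
    most from its own, and both replace their values with the pairwise
    average.  Pairwise averaging preserves the sum, so the network
    converges to the average consensus."""
    rng = random.Random(seed)
    x = list(values)
    for _ in range(steps):
        i = rng.randrange(len(x))
        # Greedy selection: the neighbor with the most different value.
        j = max(adjacency[i], key=lambda k: abs(x[k] - x[i]))
        x[i] = x[j] = (x[i] + x[j]) / 2.0
    return x
```

On a 4-node ring with values [0, 0, 4, 4], repeated greedy exchanges drive every node toward the global average of 2 while the total stays fixed.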
Zhang, Haixia; Zhao, Junkang; Gu, Caijiao; Cui, Yan; Rong, Huiying; Meng, Fanlong; Wang, Tong
2015-05-01
The study of medical expenditure and its influencing factors among students enrolled in Urban Resident Basic Medical Insurance (URBMI) in Taiyuan indicated that nonresponse bias and selection bias coexist in the dependent variable of the survey data. Unlike previous studies that focused on only one missing mechanism, this study suggests a two-stage method to deal with two missing mechanisms simultaneously, combining multiple imputation with a sample selection model. A total of 1 190 questionnaires were returned by the students (or their parents) selected in child care settings, schools and universities in Taiyuan by stratified cluster random sampling in 2012. In the returned questionnaires, 2.52% of the dependent-variable values were not missing at random (NMAR) and 7.14% were missing at random (MAR). First, multiple imputation was conducted for MAR using the complete data; then a sample selection model was used to correct NMAR in the multiple imputation, and a multiple-influencing-factor analysis model was established. Based on 1 000 resamplings, the best scheme for filling the randomly missing values was the predictive mean matching (PMM) method at the observed missing proportion. With this optimal scheme, the two-stage analysis was conducted. Finally, it was found that the influencing factors on annual medical expenditure among the students enrolled in URBMI in Taiyuan included population group, annual household gross income, affordability of medical insurance expenditure, chronic disease, seeking medical care in hospital, seeking medical care in a community health center or private clinic, hospitalization, hospitalization canceled for certain reasons, self-medication and acceptable proportion of self-paid medical expenditure. The two-stage method combining multiple imputation with a sample selection model can deal effectively with nonresponse bias and selection bias in the dependent variable of survey data.
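The PMM imputation step named above can be sketched in a few lines (a minimal single-predictor illustration with an invented function name, not the study's code): fit a regression on the complete cases, then fill each missing value with the observed value of a donor whose predicted mean is closest.

```python
import random

def pmm_impute(y, x, seed=0):
    """Predictive mean matching sketch: regress y on x over complete
    cases, then replace each missing y with the observed y of one of
    the 3 donors whose fitted values are nearest the case's own."""
    rng = random.Random(seed)
    obs = [i for i, v in enumerate(y) if v is not None]
    mis = [i for i, v in enumerate(y) if v is None]
    # Least-squares slope and intercept from the complete cases.
    n = len(obs)
    mx = sum(x[i] for i in obs) / n
    my = sum(y[i] for i in obs) / n
    sxx = sum((x[i] - mx) ** 2 for i in obs)
    b = sum((x[i] - mx) * (y[i] - my) for i in obs) / sxx
    a = my - b * mx
    pred = {i: a + b * x[i] for i in range(len(y))}
    out = list(y)
    for i in mis:
        # Donor pool: the 3 complete cases with the closest predictions.
        donors = sorted(obs, key=lambda j: abs(pred[j] - pred[i]))[:3]
        out[i] = y[rng.choice(donors)]
    return out
```

Because the imputed value is always drawn from observed donors, PMM never produces implausible out-of-range values, which is one reason it performed best here.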
Changes in Mobility of Children with Cerebral Palsy over Time and across Environmental Settings
ERIC Educational Resources Information Center
Tieman, Beth L.; Palisano, Robert J.; Gracely, Edward J.; Rosenbaum, Peter L.; Chiarello, Lisa A.; O'Neil, Margaret E.
2004-01-01
This study examined changes in mobility methods of children with cerebral palsy (CP) over time and across environmental settings. Sixty-two children with CP, ages 6-14 years and classified as levels II-IV on the Gross Motor Function Classification System, were randomly selected from a larger data base and followed for three to four years. On each…
What Effect Does Story Time Have on Toddlers' Social and Emotional Skills
ERIC Educational Resources Information Center
Betawi, I. A.
2015-01-01
The purpose of this study is to investigate the effect of story time and reading stories on the development of toddlers' social and emotional skills between 24 and 36 months of age. A sample of 10 toddlers was randomly selected from three different classes at the laboratory nursery of The University of Jordan. A pre-test and post-test were…
NASA Technical Reports Server (NTRS)
2000-01-01
The Mars Global Surveyor (MGS) Mars Orbiter Camera (MOC) orbits the red planet twelve times each day. The number of pictures that MOC can take varies from orbit to orbit, depending upon whether the data are being stored in MGS's onboard tape recorder for playback at a later time, or whether the data are being sent directly back to Earth via a real-time radio link. More data can be acquired during orbits with real-time downlink. During real-time orbits, the MOC team often will take a few random or semi-random pictures in between the carefully-selected, hand-targeted images. On rare occasions, one of these random pictures will surprise the MOC team. The picture shown here is an excellent example, because the high resolution view (top) is centered so nicely on a trough and an adjacent, shallow crater that it is as if someone very carefully selected the target for MOC. The high-resolution view covers an area only 1.1 km (0.7 mi) wide by 2.3 km (1.4 mi) long. Hitting a target such as this with such a small image is very difficult to do on purpose, because there are small uncertainties in the predicted orbit, the maps used to select targets, and the minor adjustments of spacecraft pointing at any given moment. Nevertheless, a very impressive image was received. The high resolution view crosses one of the troughs of the Sirenum Fossae near 31.2°S, 152.3°W. The context image (above) was acquired at the same time as the high resolution view on July 23, 2000. The small white box shows the location of the high resolution picture. The lines running diagonally across the context image from upper right toward lower left are the Sirenum Fossae troughs, formed by faults that are radial to the volcanic region of Tharsis. Both pictures are illuminated from the upper left. The scene shows part of the martian southern hemisphere as it neared autumn.
Bradley Palmer, Jaclyn; Lane, Deforia; Mayo, Diane; Schluchter, Mark; Leeming, Rosemary
2015-01-01
Purpose: To investigate the effect of live and recorded perioperative music therapy on anesthesia requirements, anxiety levels, recovery time, and patient satisfaction in women undergoing surgery for diagnosis or treatment of breast cancer. Patients and Methods: Between 2012 and 2014, 207 female patients undergoing surgery for potential or known breast cancer were randomly assigned to receive either patient-selected live music (LM) preoperatively with therapist-selected recorded music intraoperatively (n = 69), patient-selected recorded music (RM) preoperatively with therapist-selected recorded music intraoperatively (n = 70), or usual care (UC) preoperatively with noise-blocking earmuffs intraoperatively (n = 68). Results: The LM and the RM groups did not differ significantly from the UC group in the amount of propofol required to reach moderate sedation. Compared with the UC group, both the LM and the RM groups had greater reductions (P < .001) in anxiety scores preoperatively (mean changes [and standard deviations]: −30.9 [36.3], −26.8 [29.3], and 0.0 [22.7], respectively). The LM and RM groups did not differ from the UC group with respect to recovery time; however, the LM group had a shorter recovery time compared with the RM group (a difference of 12.4 minutes; 95% CI, 2.2 to 22.5; P = .018). Satisfaction scores for the LM and RM groups did not differ from those of the UC group. Conclusion: Including music therapy as a complementary modality with cancer surgery may help manage preoperative anxiety in a way that is safe, effective, time-efficient, and enjoyable. PMID:26282640
Research Activity and Scholarly Productivity among Counselor Educators.
ERIC Educational Resources Information Center
Walton, Joseph M.
1982-01-01
Investigated research and scholarly activities among a group of randomly selected counselor educators. Found high and low producers to be distinguishable by years in the present career, preferred professional activity, sex, academic rank, institutional affiliation, institutional size, when the first publication was produced, weekly time devoted to…
47 CFR 1.913 - Application and notification forms; electronic and manual filing.
Code of Federal Regulations, 2014 CFR
2014-10-01
... PRACTICE AND PROCEDURE Grants by Random Selection Wireless Radio Services Applications and Proceedings... Form 601, Application for Authorization in the Wireless Radio Services. FCC Form 601 and associated..., notifications, requests for extension of time, and administrative updates. (2) FCC Form 602, Wireless Radio...
Vector control of a wind turbine on the basis of the fuzzy selective neural net
NASA Astrophysics Data System (ADS)
Engel, E. A.; Kovalev, I. V.; Engel, N. E.
2016-04-01
This article describes vector control of a wind turbine based on a fuzzy selective neural net. Based on the wind turbine system's state, the fuzzy selective neural net tracks the maximum power point under random perturbations. Numerical simulations are performed to clarify the applicability and advantages of the proposed vector control of the wind turbine on the basis of the fuzzy selective neural net. The simulation results show that the proposed intelligent control of the wind turbine achieves real-time control speed and competitive performance, as compared to a classical control model with PID controllers based on a traditional maximum torque control strategy.
Taxi-Out Time Prediction for Departures at Charlotte Airport Using Machine Learning Techniques
NASA Technical Reports Server (NTRS)
Lee, Hanbong; Malik, Waqar; Jung, Yoon C.
2016-01-01
Predicting the taxi-out times of departures accurately is important for improving airport efficiency and takeoff time predictability. In this paper, we attempt to apply machine learning techniques to actual traffic data at Charlotte Douglas International Airport for taxi-out time prediction. To find the key factors affecting aircraft taxi times, surface surveillance data is first analyzed. From this data analysis, several variables, including terminal concourse, spot, runway, departure fix and weight class, are selected for taxi time prediction. Then, various machine learning methods such as linear regression, support vector machines, k-nearest neighbors, random forest, and neural network models are applied to actual flight data. Different traffic flow and weather conditions at Charlotte airport are also taken into account for more accurate prediction. The taxi-out time prediction results show that linear regression and random forest techniques can provide the most accurate prediction in terms of root-mean-square errors. We also discuss the operational complexity and uncertainties that make it difficult to predict the taxi times accurately.
Dynamic laser speckle analyzed considering inhomogeneities in the biological sample
NASA Astrophysics Data System (ADS)
Braga, Roberto A.; González-Peña, Rolando J.; Viana, Dimitri Campos; Rivera, Fernando Pujaico
2017-04-01
The dynamic laser speckle phenomenon allows a contactless and nondestructive way to monitor biological changes, quantified by second-order statistics applied to the images in time using a secondary matrix known as the time history of the speckle pattern (THSP). To avoid being time-consuming, the traditional way to build the THSP restricts the data to a line or column. Our hypothesis is that this spatial restriction of the information could compromise the results, particularly when undesirable and unexpected optical inhomogeneities occur, such as in cell culture media. We tested a spatially random approach to collecting the points that form a THSP. Cells in a culture medium and drying paint, representing homogeneous samples at different levels, were tested, and a comparison with the traditional method was carried out. An alternative random selection based on a Gaussian distribution around a desired position was also presented. The results showed that the traditional protocol presented higher variation than the random method. The higher the inhomogeneity of the activity map, the higher the efficiency of the proposed method using random points. The Gaussian distribution proved to be useful when there was a well-defined area to monitor.
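The two THSP-building steps can be sketched as follows (an illustrative toy, not the authors' code; frames are modeled as nested lists of pixel intensities): each selected pixel contributes one row whose columns are that pixel's intensity over time, and the proposed protocol draws the pixels at random rather than along a single line.

```python
import random

def thsp(frames, points):
    """Time history of the speckle pattern: one row per selected pixel,
    one column per frame (i.e., per instant in time)."""
    return [[frame[r][c] for frame in frames] for (r, c) in points]

def random_points(rows, cols, n, seed=0):
    """Spatially random pixel selection (the proposed protocol),
    instead of restricting the THSP to a single row or column."""
    rng = random.Random(seed)
    return [(rng.randrange(rows), rng.randrange(cols)) for _ in range(n)]
```

Swapping `random_points` for a fixed column is the only change needed to move between the traditional and the proposed protocol.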
Physical layer one-time-pad data encryption through synchronized semiconductor laser networks
NASA Astrophysics Data System (ADS)
Argyris, Apostolos; Pikasis, Evangelos; Syvridis, Dimitris
2016-02-01
Semiconductor lasers (SLs) have been proven to be a key device in the generation of ultrafast true random bit streams. Their potential to emit chaotic signals with desirable statistics establishes them as a low-cost solution to cover various needs, from large-volume key generation to real-time encrypted communications. Usually, only undemanding post-processing is needed to convert the acquired analog time series into digital sequences that pass all established tests of randomness. A novel architecture that can generate and exploit these true random sequences is a fiber network in which the nodes are semiconductor lasers coupled and synchronized to a central hub laser. In this work we show experimentally that laser nodes in such a star network topology can synchronize with each other through complex broadband signals that seed true random bit sequences (TRBS) generated at several Gb/s. The potential for each node to access random bit streams that are generated in real time and synchronized with the rest of the nodes, through the fiber-optic network, allows implementation of a one-time-pad encryption protocol that mixes the synchronized true random bit sequence with real data at Gb/s rates. Forward-error-correction methods are used to reduce the errors in the TRBS and the final error rate at the data-decoding level. An appropriate selection of the sampling methodology and properties, as well as of the physical properties of the chaotic seed signal through which the network locks in synchronization, allows error-free performance.
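The one-time-pad mixing step itself is just an XOR with the shared random stream, which is why both nodes must hold identical synchronized bit sequences. A minimal sketch (illustrative only; the real system operates on Gb/s optical bit streams, not Python bytes):

```python
def otp_xor(data: bytes, pad: bytes) -> bytes:
    """One-time-pad: XOR the message with a synchronized true random
    bit stream.  Applying the same pad twice recovers the plaintext,
    so sender and receiver need identical random sequences."""
    if len(pad) < len(data):
        raise ValueError("pad must be at least as long as the data")
    return bytes(d ^ p for d, p in zip(data, pad))
```

Encryption and decryption are the same operation; security rests entirely on the pad being truly random, as long as the data, and never reused.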
Probabilistic pathway construction.
Yousofshahi, Mona; Lee, Kyongbum; Hassoun, Soha
2011-07-01
Expression of novel synthesis pathways in host organisms amenable to genetic manipulations has emerged as an attractive metabolic engineering strategy to overproduce natural products, biofuels, biopolymers and other commercially useful metabolites. We present a pathway construction algorithm for identifying viable synthesis pathways compatible with balanced cell growth. Rather than exhaustive exploration, we investigate probabilistic selection of reactions to construct the pathways. Three different selection schemes are investigated for the selection of reactions: high metabolite connectivity, low connectivity and uniformly random. For all case studies, which involved a diverse set of target metabolites, the uniformly random selection scheme resulted in the highest average maximum yield. When compared to an exhaustive search enumerating all possible reaction routes, our probabilistic algorithm returned nearly identical distributions of yields, while requiring far less computing time (minutes vs. years). The pathways identified by our algorithm have previously been confirmed in the literature as viable, high-yield synthesis routes. Prospectively, our algorithm could facilitate the design of novel, non-native synthesis routes by efficiently exploring the diversity of biochemical transformations in nature. Copyright © 2011 Elsevier Inc. All rights reserved.
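The uniformly random selection scheme can be sketched as a backward walk from the target metabolite (a toy illustration with invented reaction records, not the published algorithm, which also checks compatibility with balanced cell growth):

```python
import random

def build_pathway(target, reactions, host_metabolites, seed=0, max_len=20):
    """Probabilistic pathway construction sketch: walk backwards from
    the target, at each step choosing uniformly at random among the
    reactions that produce the next metabolite not yet supplied by the
    host.  Returns the ordered reaction names, or None on a dead end."""
    rng = random.Random(seed)
    pathway, needed = [], [target]
    while needed and len(pathway) < max_len:
        m = needed.pop()
        if m in host_metabolites:
            continue  # the host already makes this metabolite
        candidates = [r for r in reactions if m in r["products"]]
        if not candidates:
            return None  # no reaction produces this metabolite
        r = rng.choice(candidates)
        pathway.append(r["name"])
        needed.extend(r["substrates"])
    return pathway if not needed else None
```

Running this sampler many times with different seeds yields a distribution of candidate pathways, mirroring how the probabilistic algorithm approximates the exhaustive enumeration at a fraction of the cost.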
High capacity low delay packet broadcasting multiaccess schemes for satellite repeater systems
NASA Astrophysics Data System (ADS)
Bose, S. K.
1980-12-01
Demand-assigned packet radio schemes using satellite repeaters can achieve high capacities but often exhibit relatively large delays under low traffic conditions when compared to random access. Several schemes which improve delay performance at low traffic while retaining high capacity are presented and analyzed. These schemes allow random access attempts by users who are waiting for channel assignments. Their performance is considered in the context of a multiple-point communication system carrying fixed-length messages between geographically distributed (ground) user terminals which are linked via a satellite repeater. Channel assignments are made following a BCC queueing discipline by a (ground) central controller on the basis of requests correctly received over a collision-type access channel. In TBACR Scheme A, some of the forward message channels are set aside for random access transmissions; the rest are used in a demand-assigned mode. Schemes B and C operate all their forward message channels in a demand-assignment mode but, by means of appropriate algorithms for trailer channel selection, allow random access attempts on unassigned channels. The latter scheme also introduces framing and slotting of the time axis to implement a more efficient algorithm for trailer channel selection than the former.
Cohen-Khait, Ruth; Schreiber, Gideon
2018-04-27
Protein-protein interactions mediate the vast majority of cellular processes. Though protein interactions obey basic chemical principles also within the cell, the in vivo physiological environment may not allow for equilibrium to be reached. Thus, in vitro measured thermodynamic affinity may not provide a complete picture of protein interactions in the biological context. Binding kinetics composed of the association and dissociation rate constants are relevant and important in the cell. Therefore, changes in protein-protein interaction kinetics have a significant impact on the in vivo activity of the proteins. The common protocol for the selection of tighter binders from a mutant library selects for protein complexes with slower dissociation rate constants. Here we describe a method to specifically select for variants with faster association rate constants by using pre-equilibrium selection, starting from a large random library. Toward this end, we refine the selection conditions of a TEM1-β-lactamase library against its natural nanomolar affinity binder β-lactamase inhibitor protein (BLIP). The optimal selection conditions depend on the ligand concentration and on the incubation time. In addition, we show that a second sort of the library helps to separate signal from noise, resulting in a higher percent of faster binders in the selected library. Fast associating protein variants are of particular interest for drug development and other biotechnological applications.
ERIC Educational Resources Information Center
Boothe, James W.; And Others
1994-01-01
Recent "Executive Educator" survey of 900 out of 6,200 randomly selected school executives found high school principals had the longest work week; 95.3% reported working over 50 hours weekly. Fully 78% of school executives are devoting more time to educational improvement changes. Despite stressors and salary complaints, most are content with…
Disengaging from Workplace Relationships. A Research Note
ERIC Educational Resources Information Center
Sias, Patricia M.; Perry, Tara
2004-01-01
A randomly-selected sample of 306 adults employed full-time rated how likely individuals would be to use various communication strategies (cost escalation, depersonalization, and state-of-the-relationship talk) to disengage from a workplace relationship (with the target of the deteriorations being a supervisor, a peer coworker, or a subordinate…
An Evaluation of Four Book Review Journals.
ERIC Educational Resources Information Center
Ream, Daniel
1979-01-01
Evaluates "Booklist," "Choice," "Library Journal," and "The New York Times Book Review" by measuring and comparing (1) total number of adult books reviewed, (2) total number of juvenile and young adult books reviewed, (3) percentage of randomly selected Notable Books reviewed, and (4) how quickly those Notable Books were reviewed. (Author)
Systematic Changes in Families Following Prevention Trials
ERIC Educational Resources Information Center
Patterson, Gerald R.; DeGarmo, David; Forgatch, Marion S.
2004-01-01
A selective prevention design was applied to 238 recently separated families. Of these, 153 mothers randomly assigned to the experimental (E) group participated in 14 group sessions focused on Parent Management Treatment (PMT). Prior analyses showed that, over time, the group of families in the untreated group deteriorated in both parenting…
Exact Markov chains versus diffusion theory for haploid random mating.
Tyvand, Peder A; Thorvaldsen, Steinar
2010-05-01
Exact discrete Markov chains are applied to the Wright-Fisher model and the Moran model of haploid random mating. Selection and mutations are neglected. At each discrete value of time t there is a given number n of diploid monoecious organisms. The evolution of the population distribution is given in diffusion variables, to compare the two models of random mating with their common diffusion limit. Only the Moran model converges uniformly to the diffusion limit near the boundary. The Wright-Fisher model allows the population size to change with the generations. Diffusion theory tends to under-predict the loss of genetic information when a population enters a bottleneck. 2010 Elsevier Inc. All rights reserved.
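The Wright-Fisher chain described above is easy to sketch (an illustrative neutral simulation with an invented function name, not the paper's exact-Markov-chain analysis): each generation, the focal allele's count is binomially resampled from its current frequency, and the boundaries 0 and n are absorbing.

```python
import random

def wright_fisher(n_genes, k0, generations, seed=0):
    """Neutral Wright-Fisher drift sketch: selection and mutation are
    neglected, so each generation the allele count is a binomial draw
    from the current frequency; 0 and n_genes are absorbing states."""
    rng = random.Random(seed)
    k = k0
    traj = [k]
    for _ in range(generations):
        p = k / n_genes
        # Binomial(n_genes, p) via n_genes Bernoulli trials.
        k = sum(1 for _ in range(n_genes) if rng.random() < p)
        traj.append(k)
    return traj
```

Running many such trajectories and tracking the decay of heterozygosity is the standard way to compare the discrete chain against its diffusion limit, including the bottleneck behavior discussed above.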
Pure-phase selective excitation in fast-relaxing systems.
Zangger, K; Oberer, M; Sterk, H
2001-09-01
Selective pulses have been used frequently for small molecules. However, their application to proteins and other macromolecules has been limited. The long duration of shaped-selective pulses and the short T(2) relaxation times in proteins often prohibited the use of highly selective pulses especially on larger biomolecules. A very selective excitation can be obtained within a short time by using the selective excitation sequence presented in this paper. Instead of using a shaped low-intensity radiofrequency pulse, a cluster of hard 90 degrees pulses, delays of free precession, and pulsed field gradients can be used to selectively excite a narrow chemical shift range within a relatively short time. Thereby, off-resonance magnetization, which is allowed to evolve freely during the free precession intervals, is destroyed by the gradient pulses. Off-resonance excitation artifacts can be removed by random variation of the interpulse delays. This leads to an excitation profile with selectivity as well as phase and relaxation behavior superior to that of commonly used shaped-selective pulses. Since the evolution of scalar coupling is inherently suppressed during the double-selective excitation of two different scalar-coupled nuclei, the presented pulse cluster is especially suited for simultaneous highly selective excitation of N-H and C-H fragments. Experimental examples are demonstrated on hen egg white lysozyme (14 kD) and the bacterial antidote ParD (19 kD). Copyright 2001 Academic Press.
Exploring the repetition bias in voluntary task switching.
Mittelstädt, Victor; Dignath, David; Schmidt-Ott, Magdalena; Kiesel, Andrea
2018-01-01
In the voluntary task-switching paradigm, participants are required to randomly select tasks. We reasoned that the consistent finding of a repetition bias (i.e., participants repeat tasks more often than expected by chance) reflects reasonable adaptive task-selection behavior that balances the goal of random task selection with the goals of minimizing the time and effort of task performance. We conducted two experiments in which participants were provided with a variable amount of preview of the non-chosen task stimuli (i.e., potential switch stimuli). We assumed that switch stimuli would initiate some pre-processing, resulting in improved performance in switch trials. Results showed that reduced switch costs due to extra preview in advance of each trial were accompanied by more task switches. This finding is in line with the characteristics of rational adaptive behavior. However, participants were not biased to switch tasks more often than chance despite large switch benefits. We suggest that participants might avoid the effortful additional control processes that modulate the effects of preview on task performance and task choice.
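The repetition bias is measured against a simple chance baseline: with k tasks chosen truly at random, the expected repetition rate is 1/k. A minimal sketch of the measurement (illustrative only, not the study's analysis code):

```python
def repetition_rate(choices):
    """Fraction of trials on which the chosen task repeats the
    previous one.  With k tasks selected truly at random the expected
    rate is 1/k, so a measured rate above 1/k is a repetition bias."""
    rep = sum(1 for a, b in zip(choices, choices[1:]) if a == b)
    return rep / (len(choices) - 1)
```

For two tasks, chance is 0.5; a sequence such as A A B B already shows a rate of 2/3, above chance.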
ERIC Educational Resources Information Center
Sabha, Raed Adel; Al-Assaf, Jamal Abdel-Fattah
2012-01-01
The study aims to investigate the extent of time-management awareness among the faculty members of Al-Balqa' Applied University and its relation to some variables. The study was conducted on (150) teachers selected randomly. To achieve the study goals, an appropriate instrument was built based on the educational literature and…
47 CFR 1.1604 - Post-selection hearings.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 47 Telecommunication 1 2010-10-01 2010-10-01 false Post-selection hearings. 1.1604 Section 1.1604 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL PRACTICE AND PROCEDURE Random Selection Procedures for Mass Media Services General Procedures § 1.1604 Post-selection hearings. (a) Following the random...
Valderrama, Joaquin T; de la Torre, Angel; Medina, Carlos; Segura, Jose C; Thornton, A Roger D
2016-03-01
The recording of auditory evoked potentials (AEPs) at fast rates allows the study of neural adaptation, improves accuracy in estimating hearing threshold and may help in diagnosing certain pathologies. Stimulation sequences used to record AEPs at fast rates must be designed with a certain jitter, i.e., they are not periodic. Some authors believe that stimuli from wide-jittered sequences may evoke auditory responses of different morphology, and therefore the time-invariance assumption would not hold. This paper describes a methodology that can be used to analyze the time-invariance assumption in jittered stimulation sequences. The proposed method [Split-IRSA] is based on an extended version of the iterative randomized stimulation and averaging (IRSA) technique, including selective processing of sweeps according to a predefined criterion. The fundamentals, the mathematical basis and relevant implementation guidelines of this technique are presented in this paper. The results of this study show that Split-IRSA presents adequate performance and that both fast and slow mechanisms of adaptation influence the evoked-response morphology; thus both mechanisms should be considered when time-invariance is assumed. The significance of these findings is discussed. Crown Copyright © 2016. Published by Elsevier B.V. All rights reserved.
Li, Zhi; Chen, Weidong; Lian, Feiyu; Ge, Hongyi; Guan, Aihong
2017-12-01
Quantitative analysis of component mixtures is an important application of terahertz time-domain spectroscopy (THz-TDS) and has attracted broad interest in recent research. Although the accuracy of quantitative analysis using THz-TDS is affected by a host of factors, wavelength selection from the sample's THz absorption spectrum is the most crucial component. The raw spectrum consists of signals from the sample and scattering and other random disturbances that can critically influence the quantitative accuracy. For precise quantitative analysis using THz-TDS, the signal from the sample needs to be retained while the scattering and other noise sources are eliminated. In this paper, a novel wavelength selection method based on differential evolution (DE) is investigated. By performing quantitative experiments on a series of binary amino acid mixtures using THz-TDS, we demonstrate the efficacy of the DE-based wavelength selection method, which yields an error rate below 5%.
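The differential evolution optimizer behind the wavelength selection can be sketched as the classic DE/rand/1/bin scheme (a generic textbook sketch, not the paper's implementation; the objective, bounds and control parameters here are invented for illustration):

```python
import random

def differential_evolution(f, bounds, pop_size=20, gens=100,
                           F=0.5, CR=0.9, seed=0):
    """Minimal DE/rand/1/bin sketch: mutate each candidate with a
    scaled difference of two other population members, binomially
    recombine with the current candidate, and keep the trial only if
    it does not worsen the objective f."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds]
           for _ in range(pop_size)]
    cost = [f(ind) for ind in pop]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            j_rand = rng.randrange(dim)  # guarantee one mutated gene
            trial = []
            for j in range(dim):
                if rng.random() < CR or j == j_rand:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])
                    lo, hi = bounds[j]
                    v = min(max(v, lo), hi)  # clip to the search box
                else:
                    v = pop[i][j]
                trial.append(v)
            tc = f(trial)
            if tc <= cost[i]:
                pop[i], cost[i] = trial, tc
    best = min(range(pop_size), key=lambda i: cost[i])
    return pop[best], cost[best]
```

In the wavelength-selection setting, the decision vector would encode which spectral points enter the quantitative model and f would score the resulting prediction error; here a sphere function suffices to show convergence.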
Simulation of 'hitch-hiking' genealogies.
Slade, P F
2001-01-01
An ancestral influence graph is derived, an analogue of the coalescent and a composite of Griffiths' (1991) two-locus ancestral graph and Krone and Neuhauser's (1997) ancestral selection graph. This generalizes their use of branching-coalescing random graphs so as to incorporate both selection and recombination into gene genealogies. Qualitative understanding of a 'hitch-hiking' effect on genealogies is pursued via diagrammatic representation of the genealogical process in a two-locus, two-allele haploid model. Extending the simulation technique of Griffiths and Tavare (1996), computational estimates of the expected times to the most recent common ancestor of samples of n genes under recombination and selection in two-locus, two-allele haploid and diploid models are presented. Such times are conditional on sample configuration. Monte Carlo simulations show that 'hitch-hiking' is a subtle effect that alters the conditional expected depth of the genealogy at the linked neutral locus depending on a mutation-selection-recombination balance.
Randomized Prediction Games for Adversarial Machine Learning.
Rota Bulo, Samuel; Biggio, Battista; Pillai, Ignazio; Pelillo, Marcello; Roli, Fabio
In spam and malware detection, attackers exploit randomization to obfuscate malicious data and increase their chances of evading detection at test time, e.g., malware code is typically obfuscated using random strings or byte sequences to hide known exploits. Interestingly, randomization has also been proposed to improve security of learning algorithms against evasion attacks, as it results in hiding information about the classifier to the attacker. Recent work has proposed game-theoretical formulations to learn secure classifiers, by simulating different evasion attacks and modifying the classification function accordingly. However, both the classification function and the simulated data manipulations have been modeled in a deterministic manner, without accounting for any form of randomization. In this paper, we overcome this limitation by proposing a randomized prediction game, namely, a noncooperative game-theoretic formulation in which the classifier and the attacker make randomized strategy selections according to some probability distribution defined over the respective strategy set. We show that our approach allows one to improve the tradeoff between attack detection and false alarms with respect to the state-of-the-art secure classifiers, even against attacks that are different from those hypothesized during design, on application examples including handwritten digit recognition, spam, and malware detection.
Blocking for Sequential Political Experiments
Moore, Sally A.
2013-01-01
In typical political experiments, researchers randomize a set of households, precincts, or individuals to treatments all at once, and characteristics of all units are known at the time of randomization. However, in many other experiments, subjects “trickle in” to be randomized to treatment conditions, usually via complete randomization. To take advantage of the rich background data that researchers often have (but underutilize) in these experiments, we develop methods that use continuous covariates to assign treatments sequentially. We build on biased coin and minimization procedures for discrete covariates and demonstrate that our methods outperform complete randomization, producing better covariate balance in simulated data. We then describe how we selected and deployed a sequential blocking method in a clinical trial and demonstrate the advantages of our having done so. Further, we show how that method would have performed in two larger sequential political trials. Finally, we compare causal effect estimates from differences in means, augmented inverse propensity weighted estimators, and randomization test inversion. PMID:24143061
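A minimal covariate-aware biased-coin assignment, in the spirit of the procedures described above, can be sketched as follows. The balance score and function names are our invention; the authors' method handles continuous covariates more carefully:

```python
import random

def sequential_assign(history, covariate, p=0.8, rng=None):
    """Assign an arriving unit to treatment (1) or control (0).

    Biased coin: compute the covariate imbalance that would result from
    each possible assignment, then pick the balancing arm with probability
    p (a fair coin breaks ties). `history` is a list of (arm, covariate)
    pairs for units already randomized.
    """
    rng = rng or random.Random()

    def imbalance_if(arm):
        # signed covariate sum: treatment counts +, control counts -
        s = sum(c if a == 1 else -c for a, c in history)
        return abs(s + (covariate if arm == 1 else -covariate))

    d1, d0 = imbalance_if(1), imbalance_if(0)
    if d1 == d0:
        return int(rng.random() < 0.5)       # tie: fair coin
    favored = 1 if d1 < d0 else 0
    return favored if rng.random() < p else 1 - favored
```

With p = 1 this reduces to deterministic minimization; p < 1 keeps some unpredictability in each assignment, which is the usual rationale for biased-coin designs.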
Method and apparatus for signal processing in a sensor system for use in spectroscopy
O'Connor, Paul [Bellport, NY]; DeGeronimo, Gianluigi [Nesconset, NY]; Grosholz, Joseph [Natrona Heights, PA]
2008-05-27
A method for processing pulses arriving randomly in time on at least one channel using multiple peak detectors includes asynchronously selecting a non-busy peak detector (PD) in response to a pulse-generated trigger signal, connecting the channel to the selected PD in response to the trigger signal, and detecting a pulse peak amplitude. Amplitude and time-of-arrival data are output in first-in first-out (FIFO) sequence. An apparatus includes trigger comparators to generate the trigger signal for the pulse-receiving channel, PDs, a switch for connecting the channel to the selected PD, and logic circuitry which maintains the write pointer. Also included are time-to-amplitude converters (TACs), which convert time of arrival to an analog voltage, and an analog multiplexer that provides the FIFO output. A multi-element sensor system for spectroscopy includes detector elements, channels, trigger comparators, PDs, a switch, and a logic circuit with an asynchronous write pointer. The system includes TACs, a multiplexer and an analog-to-digital converter.
NASA Technical Reports Server (NTRS)
Joseph, Robert D.; Hora, Joseph; Stockton, Alan; Hu, Esther; Sanders, David
1997-01-01
This report concerns one of the major observational studies in the ISO Central Programme, the ISO Normal Galaxy Survey. This is a survey of an unbiased sample of spiral and lenticular galaxies selected from the Revised Shapley-Ames Catalog. It is therefore optically-selected, with a brightness limit of blue magnitude = 12, and otherwise randomly chosen. The original sample included 150 galaxies, but this was reduced to 74 when the allocated observing time was expended because the ISO overheads encountered in flight were much larger than predicted.
Yoshida, Masao; Takizawa, Kohei; Suzuki, Sho; Koike, Yoshiki; Nonaka, Satoru; Yamasaki, Yasushi; Minagawa, Takeyoshi; Sato, Chiko; Takeuchi, Chihiro; Watanabe, Ko; Kanzaki, Hiromitsu; Morimoto, Hiroyuki; Yano, Takafumi; Sudo, Kosuke; Mori, Keita; Gotoda, Takuji; Ono, Hiroyuki
2018-05-01
The aim of this study was to clarify whether dental floss clip (DFC) traction improves the technical outcomes of endoscopic submucosal dissection (ESD). A superiority, randomized controlled trial was conducted at 14 institutions across Japan. Patients with a single gastric neoplasm meeting the indications of the Japanese guidelines for gastric treatment were enrolled and assigned to receive conventional ESD or DFC traction-assisted ESD (DFC-ESD). Randomization was performed according to a computer-generated random sequence with stratification by institution, tumor location, tumor size, and operator experience. The primary endpoint was ESD procedure time, defined as the time from the start of the submucosal injection to the end of the tumor removal procedure. Between July 2015 and September 2016, 640 patients underwent randomization. Of these, 316 patients who underwent conventional ESD and 319 patients who underwent DFC-ESD were included in our analysis. The mean ESD procedure time was 60.7 and 58.1 minutes for conventional ESD and DFC-ESD, respectively (P = .45). Perforation was less frequent in the DFC-ESD group (2.2% vs .3%, P = .04). For lesions located in the greater curvature of the upper or middle stomach, the mean procedure time was significantly shorter in the DFC-ESD group (104.1 vs 57.2 minutes, P = .01). Our findings suggest that DFC-ESD does not result in shorter procedure time in the overall patient population, but it can reduce the risk of perforation. When selectively applied to lesions located in the greater curvature of the upper or middle stomach, DFC-ESD provides a remarkable reduction in procedure time. Copyright © 2018 American Society for Gastrointestinal Endoscopy. Published by Elsevier Inc. All rights reserved.
Pourazar, Morteza; Mirakhori, Fatemeh; Hemayattalab, Rasool; Bagherzadeh, Fazlolah
2017-09-21
The purpose of this study was to investigate the training effects of a Virtual Reality (VR) intervention program on reaction time in children with cerebral palsy. Thirty boys ranging from 7 to 12 years (mean = 11.20; SD = .76) were selected by convenience sampling and randomly divided into experimental and control groups. Simple Reaction Time (SRT) and Discriminative Reaction Time (DRT) were measured at baseline and 1 day after completion of the VR intervention. Multivariate analysis of variance (MANOVA) and paired-sample t-tests were performed to analyze the results. The MANOVA revealed significant effects for group in the posttest phase, with lower reaction time on both measures for the experimental group. Based on the paired-sample t-test results, both RT measures significantly improved in the experimental group following the VR intervention program. This paper proposes VR as a promising tool in the rehabilitation process for improving reaction time in children with cerebral palsy.
The G matrix under fluctuating correlational mutation and selection.
Revell, Liam J
2007-08-01
Theoretical quantitative genetics provides a framework for reconstructing past selection and predicting future patterns of phenotypic differentiation. However, the usefulness of the equations of quantitative genetics for evolutionary inference relies on the evolutionary stability of the additive genetic variance-covariance matrix (G matrix). A fruitful new approach for exploring the evolutionary dynamics of G involves the use of individual-based computer simulations. Previous studies have focused on the evolution of the eigenstructure of G. An alternative approach employed in this paper uses the multivariate response-to-selection equation to evaluate the stability of G. In this approach, I measure similarity by the correlation between response-to-selection vectors due to random selection gradients. I analyze the dynamics of G under several conditions of correlational mutation and selection. As found in a previous study, the eigenstructure of G is stabilized by correlational mutation and selection. However, over broad conditions, instability of G did not result in a decreased consistency of the response to selection. I also analyze the stability of G when the correlation coefficients of correlational mutation and selection and the effective population size change through time. To my knowledge, no prior study has used computer simulations to investigate the stability of G when correlational mutation and selection fluctuate. Under these conditions, the eigenstructure of G is unstable under some simulation conditions. Different results are obtained if G matrix stability is assessed by eigenanalysis or by the response to random selection gradients. In this case, the response to selection is most consistent when certain aspects of the eigenstructure of G are least stable and vice versa.
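The response-to-selection comparison described in this abstract rests on the multivariate breeder's equation, Δz = Gβ. A sketch of the similarity measure follows; interpreting "correlation between response vectors" as the mean vector cosine across shared random gradients is our simplified reading of the procedure:

```python
import numpy as np

def response_similarity(g1, g2, n_gradients=1000, seed=0):
    """Mean correlation (vector cosine) between the responses of two
    G matrices to the same random selection gradients.

    Each response follows the multivariate breeder's equation:
    dz = G @ beta, where beta is a selection gradient.
    """
    rng = np.random.default_rng(seed)
    cors = []
    for _ in range(n_gradients):
        beta = rng.standard_normal(g1.shape[0])   # random selection gradient
        r1, r2 = g1 @ beta, g2 @ beta
        cors.append(r1 @ r2 / (np.linalg.norm(r1) * np.linalg.norm(r2)))
    return float(np.mean(cors))
```

Identical G matrices score 1 under this measure; matrices with different eigenstructure deflect the two responses away from each other and score lower.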
Species conservation profiles of a random sample of world spiders I: Agelenidae to Filistatidae
Seppälä, Sini; Henriques, Sérgio; Draney, Michael L; Foord, Stefan; Gibbons, Alastair T; Gomez, Luz A; Kariko, Sarah; Malumbres-Olarte, Jagoba; Milne, Marc; Vink, Cor J
2018-01-01
Abstract Background The IUCN Red List of Threatened Species is the most widely used information source on the extinction risk of species. One of the uses of the Red List is to evaluate and monitor the state of biodiversity and a possible approach for this purpose is the Red List Index (RLI). For many taxa, mainly hyperdiverse groups, it is not possible within available resources to assess all known species. In such cases, a random sample of species might be selected for assessment and the results derived from it extrapolated for the entire group - the Sampled Red List Index (SRLI). With the current contribution and the three following papers, we intend to create the first point in time of a future spider SRLI encompassing 200 species distributed across the world. New information A sample of 200 species of spiders were randomly selected from the World Spider Catalogue, an updated global database containing all recognised species names for the group. The 200 selected species were divided taxonomically at the family level and the families were ordered alphabetically. In this publication, we present the conservation profiles of 46 species belonging to the families alphabetically arranged between Agelenidae and Filistatidae, which encompassed Agelenidae, Amaurobiidae, Anyphaenidae, Araneidae, Archaeidae, Barychelidae, Clubionidae, Corinnidae, Ctenidae, Ctenizidae, Cyatholipidae, Dictynidae, Dysderidae, Eresidae and Filistatidae. PMID:29725239
Personal Bankruptcy After Traumatic Brain or Spinal Cord Injury: The Role of Medical Debt
Relyea-Chew, Annemarie; Hollingworth, William; Chan, Leighton; Comstock, Bryan A.; Overstreet, Karen A.; Jarvik, Jeffrey G.
2012-01-01
Objective To estimate the prevalence of medical debt among traumatic brain injury (TBI) and spinal cord injury (SCI) patients who discharged their debts through bankruptcy. Design A cross-sectional comparison of bankruptcy filings of injured versus randomly selected bankruptcy petitioners. Setting Patients hospitalized with SCI or TBI (1996–2002) and personal bankruptcy petitioners (2001–2004) in western Washington State. Participants Subjects (N=186) who filed for bankruptcy, comprised of 93 patients with previous SCI or TBI and 93 randomly selected bankruptcy petitioners. Interventions Not applicable. Main Outcome Measures Medical and nonmedical debt, assets, income, expenses, and employment recorded in the bankruptcy petition. Results Five percent of randomly selected petitioners and 26% of petitioners with TBI or SCI had substantial medical debt (debt that accounted for more than 20% of all unsecured debts). SCI and TBI petitioners had fewer assets and were more likely to be receiving government income assistance at the time of bankruptcy than controls. SCI and TBI patients with a higher blood alcohol content at injury were more likely to have substantial medical debts (odds ratio=2.70; 95% confidence interval, 1.04–7.00). Conclusions Medical debt plays an important role in some bankruptcies after TBI or SCI. We discuss policy options for reducing financial distress after serious injury. PMID:19254605
Personal bankruptcy after traumatic brain or spinal cord injury: the role of medical debt.
Relyea-Chew, Annemarie; Hollingworth, William; Chan, Leighton; Comstock, Bryan A; Overstreet, Karen A; Jarvik, Jeffrey G
2009-03-01
To estimate the prevalence of medical debt among traumatic brain injury (TBI) and spinal cord injury (SCI) patients who discharged their debts through bankruptcy. A cross-sectional comparison of bankruptcy filings of injured versus randomly selected bankruptcy petitioners. Patients hospitalized with SCI or TBI (1996-2002) and personal bankruptcy petitioners (2001-2004) in western Washington State. Subjects (N=186) who filed for bankruptcy, comprised of 93 patients with previous SCI or TBI and 93 randomly selected bankruptcy petitioners. Not applicable. Medical and nonmedical debt, assets, income, expenses, and employment recorded in the bankruptcy petition. Five percent of randomly selected petitioners and 26% of petitioners with TBI or SCI had substantial medical debt (debt that accounted for more than 20% of all unsecured debts). SCI and TBI petitioners had fewer assets and were more likely to be receiving government income assistance at the time of bankruptcy than controls. SCI and TBI patients with a higher blood alcohol content at injury were more likely to have substantial medical debts (odds ratio=2.70; 95% confidence interval, 1.04-7.00). Medical debt plays an important role in some bankruptcies after TBI or SCI. We discuss policy options for reducing financial distress after serious injury.
Species conservation profiles of a random sample of world spiders I: Agelenidae to Filistatidae.
Seppälä, Sini; Henriques, Sérgio; Draney, Michael L; Foord, Stefan; Gibbons, Alastair T; Gomez, Luz A; Kariko, Sarah; Malumbres-Olarte, Jagoba; Milne, Marc; Vink, Cor J; Cardoso, Pedro
2018-01-01
The IUCN Red List of Threatened Species is the most widely used information source on the extinction risk of species. One of the uses of the Red List is to evaluate and monitor the state of biodiversity and a possible approach for this purpose is the Red List Index (RLI). For many taxa, mainly hyperdiverse groups, it is not possible within available resources to assess all known species. In such cases, a random sample of species might be selected for assessment and the results derived from it extrapolated for the entire group - the Sampled Red List Index (SRLI). With the current contribution and the three following papers, we intend to create the first point in time of a future spider SRLI encompassing 200 species distributed across the world. A sample of 200 species of spiders were randomly selected from the World Spider Catalogue, an updated global database containing all recognised species names for the group. The 200 selected species were divided taxonomically at the family level and the families were ordered alphabetically. In this publication, we present the conservation profiles of 46 species belonging to the families alphabetically arranged between Agelenidae and Filistatidae, which encompassed Agelenidae, Amaurobiidae, Anyphaenidae, Araneidae, Archaeidae, Barychelidae, Clubionidae, Corinnidae, Ctenidae, Ctenizidae, Cyatholipidae, Dictynidae, Dysderidae, Eresidae and Filistatidae.
Froehlich, Tanya E; Antonini, Tanya N; Brinkman, William B; Langberg, Joshua M; Simon, John O; Adams, Ryan; Fredstrom, Bridget; Narad, Megan E; Kingery, Kathleen M; Altaye, Mekibib; Matheson, Heather; Tamm, Leanne; Epstein, Jeffery N
2014-01-01
Stimulant medications, such as methylphenidate (MPH), improve the academic performance of children with attention-deficit hyperactivity disorder (ADHD). However, the mechanism by which MPH exerts an effect on academic performance is unclear. We examined MPH effects on math performance and investigated possible mediation of MPH effects by changes in time on-task, inhibitory control, selective attention, and reaction time variability. Children with ADHD aged 7 to 11 years (N = 93) completed a timed math worksheet (with problems tailored to each individual's level of proficiency) and 2 neuropsychological tasks (Go/No-Go and Child Attention Network Test) at baseline, then participated in a 4-week, randomized, controlled, titration trial of MPH. Children were then randomly assigned to their optimal MPH dose or placebo for 1 week (administered double-blind) and repeated the math and neuropsychological tasks (posttest). Baseline and posttest videorecordings of children performing the math task were coded to assess time on-task. Children taking MPH completed 23 more math problems at posttest compared to baseline, whereas the placebo group completed 24 fewer problems on posttest versus baseline, but the effects on math accuracy (percent correct) did not differ. Path analyses revealed that only change in time on-task was a significant mediator of MPH's improvements in math productivity. MPH-derived math productivity improvements may be explained in part by increased time spent on-task, rather than improvements in neurocognitive parameters, such as inhibitory control, selective attention, or reaction time variability.
Kitamura, Katsuya; Yamamiya, Akira; Ishii, Yu; Sato, Yoshiki; Iwata, Tomoyuki; Nomoto, Tomohiro; Ikegami, Akitoshi; Yoshida, Hitoshi
2015-01-01
AIM: To compare the clinical outcomes between 0.025-inch and 0.035-inch guide wires (GWs) when used in wire-guided cannulation (WGC). METHODS: A single center, randomized study was conducted between April 2011 and March 2013. This study was approved by the Medical Ethics Committee at our hospital. Informed, written consent was obtained from each patient prior to study enrollment. Three hundred and twenty-two patients with a naïve papilla of Vater who underwent endoscopic retrograde cholangiopancreatography (ERCP) for the purpose of selective bile duct cannulation with WGC were enrolled in this study. Fifty-three patients were excluded based on the exclusion criteria, and 269 patients were randomly allocated to two groups by a computer and analyzed: the 0.025-inch GW group (n = 109) and the 0.035-inch GW group (n = 160). The primary endpoint was the success rate of selective bile duct cannulation with WGC. Secondary endpoints were the success rates of the pancreatic GW technique and precutting, selective bile duct cannulation time, ERCP procedure time, the rate of pancreatic duct stent placement, the final success rate of selective bile duct cannulation, and the incidence of post-ERCP pancreatitis (PEP). RESULTS: The primary success rates of selective bile duct cannulation with WGC were 80.7% (88/109) and 86.3% (138/160) for the 0.025-inch and the 0.035-inch groups, respectively (P = 0.226). There were no statistically significant differences in the success rates of selective bile duct cannulation using the pancreatic duct GW technique (46.7% vs 52.4% for the 0.025-inch and 0.035-inch groups, respectively; P = 0.884) or in the success rates of selective bile duct cannulation using precutting (66.7% vs 63.6% for the 0.025-inch and 0.035-inch groups, respectively; P = 0.893). The final success rates for selective bile duct cannulation using these procedures were 92.7% (101/109) and 97.5% (156/160) for the 0.025-inch and 0.035-inch groups, respectively (P = 0.113). 
There were no significant differences in selective bile duct cannulation time (median ± interquartile range: 3.7 ± 13.9 min vs 4.0 ± 11.2 min for the 0.025-inch and 0.035-inch groups, respectively; P = 0.851), ERCP procedure time (median ± interquartile range: 32 ± 29 min vs 30 ± 25 min for the 0.025-inch and 0.035-inch groups, respectively; P = 0.184) or in the rate of pancreatic duct stent placement (14.7% vs 15.6% for the 0.025-inch and 0.035-inch groups, respectively; P = 0.832). The incidence of PEP was 2.8% (3/109) and 2.5% (4/160) for the 0.025-inch and 0.035-inch groups, respectively (P = 0.793). CONCLUSION: The thickness of the GW for WGC does not appear to affect either the success rate of selective bile duct cannulation or the incidence of PEP. PMID:26290646
Pleiotropic Models of Polygenic Variation, Stabilizing Selection, and Epistasis
Gavrilets, S.; de-Jong, G.
1993-01-01
We show that in polymorphic populations many polygenic traits pleiotropically related to fitness are expected to be under apparent "stabilizing selection" independently of the real selection acting on the population. This occurs, for example, if the genetic system is at a stable polymorphic equilibrium determined by selection and the nonadditive contributions of the loci to the trait value either are absent, or are random and independent of those to fitness. Stabilizing selection is also observed if the polygenic system is at an equilibrium determined by a balance between selection and mutation (or migration) when both additive and nonadditive contributions of the loci to the trait value are random and independent of those to fitness. We also compare different viability models that can maintain genetic variability at many loci with respect to their ability to account for the strong stabilizing selection on an additive trait. Let V(m) be the genetic variance supplied by mutation (or migration) each generation, V(g) be the genotypic variance maintained in the population, and n be the number of the loci influencing fitness. We demonstrate that in mutation (migration)-selection balance models the strength of apparent stabilizing selection is of order V(m)/V(g). In the overdominant model and in the symmetric viability model the strength of apparent stabilizing selection is approximately 1/(2n) that of total selection on the whole phenotype. We show that a selection system that involves pairwise additive-by-additive epistasis in maintaining variability can lead to a lower genetic load and genetic variance in fitness (approximately 1/(2n) times) than an equivalent selection system that involves overdominance. We show that, in the epistatic model, the apparent stabilizing selection on an additive trait can be as strong as the total selection on the whole phenotype. PMID:8325491
Bayesian Genomic Prediction with Genotype × Environment Interaction Kernel Models
Cuevas, Jaime; Crossa, José; Montesinos-López, Osval A.; Burgueño, Juan; Pérez-Rodríguez, Paulino; de los Campos, Gustavo
2016-01-01
The phenomenon of genotype × environment (G × E) interaction in plant breeding decreases selection accuracy, thereby negatively affecting genetic gains. Several genomic prediction models incorporating G × E have been recently developed and used in genomic selection of plant breeding programs. Genomic prediction models for assessing multi-environment G × E interaction are extensions of a single-environment model, and have advantages and limitations. In this study, we propose two multi-environment Bayesian genomic models: the first model considers genetic effects (u) that can be assessed by the Kronecker product of variance–covariance matrices of genetic correlations between environments and genomic kernels through markers under two linear kernel methods, linear (genomic best linear unbiased predictors, GBLUP) and Gaussian (Gaussian kernel, GK). The other model has the same genetic component as the first model (u) plus an extra component, f, that captures random effects between environments that were not captured by the random effects u. We used five CIMMYT data sets (one maize and four wheat) that were previously used in different studies. Results show that models with G × E always have higher prediction ability than single-environment models, and the higher prediction ability of multi-environment models with u and f over the multi-environment model with only u occurred 85% of the time with GBLUP and 45% of the time with GK across the five data sets. The latter result indicated that including the random effect f is still beneficial for increasing prediction ability after adjusting by the random effect u. PMID:27793970
Bayesian Genomic Prediction with Genotype × Environment Interaction Kernel Models.
Cuevas, Jaime; Crossa, José; Montesinos-López, Osval A; Burgueño, Juan; Pérez-Rodríguez, Paulino; de Los Campos, Gustavo
2017-01-05
The phenomenon of genotype × environment (G × E) interaction in plant breeding decreases selection accuracy, thereby negatively affecting genetic gains. Several genomic prediction models incorporating G × E have been recently developed and used in genomic selection of plant breeding programs. Genomic prediction models for assessing multi-environment G × E interaction are extensions of a single-environment model, and have advantages and limitations. In this study, we propose two multi-environment Bayesian genomic models: the first model considers genetic effects (u) that can be assessed by the Kronecker product of variance-covariance matrices of genetic correlations between environments and genomic kernels through markers under two linear kernel methods, linear (genomic best linear unbiased predictors, GBLUP) and Gaussian (Gaussian kernel, GK). The other model has the same genetic component as the first model (u) plus an extra component, f, that captures random effects between environments that were not captured by the random effects u. We used five CIMMYT data sets (one maize and four wheat) that were previously used in different studies. Results show that models with G × E always have higher prediction ability than single-environment models, and the higher prediction ability of multi-environment models with u and f over the multi-environment model with only u occurred 85% of the time with GBLUP and 45% of the time with GK across the five data sets. The latter result indicated that including the random effect f is still beneficial for increasing prediction ability after adjusting by the random effect u. Copyright © 2017 Cuevas et al.
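The two genomic kernels named in this abstract have standard forms in the genomic-prediction literature; a sketch is below. Scaling the Gaussian bandwidth by the median squared distance is one common convention and an assumption on our part, not necessarily the scaling used in the paper:

```python
import numpy as np

def gblup_kernel(x):
    """Linear GBLUP kernel K = Xc Xc' / p from an n x p marker matrix."""
    xc = x - x.mean(axis=0)            # center each marker column
    return xc @ xc.T / x.shape[1]

def gaussian_kernel(x, h=1.0):
    """Gaussian kernel GK: K_ij = exp(-h * d_ij^2 / median(d^2))."""
    d2 = ((x[:, None, :] - x[None, :, :]) ** 2).sum(axis=-1)
    med = np.median(d2[d2 > 0])        # bandwidth scale: median squared distance
    return np.exp(-h * d2 / med)
```

Either n x n kernel can serve as the covariance of the genetic effects u in a Bayesian multi-environment model, combined across environments via a Kronecker product with the environmental correlation matrix.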
Evaluation of new collision-pair selection models in DSMC
NASA Astrophysics Data System (ADS)
Akhlaghi, Hassan; Roohi, Ehsan
2017-10-01
The current paper investigates new collision-pair selection procedures in the direct simulation Monte Carlo (DSMC) method. Collision-partner selection based on random selection among nearest-neighbor particles and on deterministic selection of the nearest neighbor has already been introduced as a pair of schemes that provide accurate results in a wide range of problems. In the current research, new collision-pair selections based on the time spacing and direction of the relative movement of particles are introduced and evaluated. Comparisons between the new and existing algorithms are made considering appropriate test cases including fluctuations in a homogeneous gas, 2D equilibrium flow, and the Fourier flow problem. Distribution functions for the number of particles and collisions in a cell, velocity components, and collisional parameters (collision separation, time spacing, relative velocity, and the angle between the relative movements of particles) are investigated and compared with existing analytical relations for each model. The capability of each model in predicting the heat flux in the Fourier problem at different cell numbers, numbers of particles, and time steps is examined. For the new and existing collision-pair selection schemes, the effect of an alternative formula for the number of collision-pair selections and of avoiding repetitive collisions is investigated via prediction of the Fourier heat flux. The simulation results demonstrate the advantages and weaknesses of each model in different test cases.
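The nearest-neighbor idea can be illustrated with a toy partner-selection routine for particles in one cell. This is a sketch only; production DSMC codes add sub-cells, repeat-collision checks, and the acceptance-rejection step on relative velocity:

```python
import random

def select_collision_pair(positions, rng=None):
    """Pick the first particle at random, then take its nearest
    neighbor as the collision partner.

    `positions` is a list of (x, y) tuples for particles in one cell.
    Returns the index pair (i, j).
    """
    rng = rng or random.Random()
    i = rng.randrange(len(positions))                 # random first particle
    xi, yi = positions[i]
    j = min((k for k in range(len(positions)) if k != i),
            key=lambda k: (positions[k][0] - xi) ** 2
                        + (positions[k][1] - yi) ** 2)  # nearest neighbor
    return i, j
```

The time-spacing and relative-movement criteria evaluated in the paper would replace the distance key above with a different ranking of candidate partners.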
Vietnamese Amerasians: Practical Implications of Current Research.
ERIC Educational Resources Information Center
Felsman, J. Kirk.; And Others
This study was conducted to examine the experiences of Amerasians from Vietnam who resettled in the United States and to explore coping and adaptation among Vietnamese Amerasians over time. The initial data collection phase involved in-camp assessment of a randomly selected sample of Amerasian adolescents (N=259). The assessment is to be repeated…
Findings from a Randomized Experiment of Playworks: Selected Results from Cohort 1
ERIC Educational Resources Information Center
Bleeker, Martha; James-Burdumy, Susanne; Beyler, Nicholas; Dodd, Allison Hedley; London, Rebecca A.; Westrich, Lisa; Stokes-Guinan, Katie; Castrechini, Sebastian
2012-01-01
Recess periods often lack the structure needed to support physical activity and positive social development (Robert Wood Johnson Foundation 2010). The Playworks program places full-time coaches in low-income schools to provide opportunities for organized play during recess and throughout the school day. Playworks activities are designed to engage…
Code of Federal Regulations, 2014 CFR
2014-10-01
... 42 Public Health 3 2014-10-01 2014-10-01 false Surveys. 416.140 Section 416.140 Public Health... Furnished Before January 1, 2008 § 416.140 Surveys. (a) Timing, purpose, and procedures. (1) No more often than once a year, CMS conducts a survey of a randomly selected sample of participating ASCs to collect...
ERIC Educational Resources Information Center
Zatzick, Douglas F.; Grossman, David C.; Russo, Joan; Pynoos, Robert; Berliner, Lucy; Jurkovich, Gregory; Sabin, Janice A.; Katon, Wayne; Ghesquiere, Angela; McCauley, Elizabeth; Rivara, Frederick P.
2006-01-01
Objective: Adolescents constitute a high-risk population for traumatic physical injury, yet few longitudinal investigations have assessed the development of posttraumatic stress disorder (PTSD) symptoms over time in representative samples. Method: Between July 2002 and August 2003, 108 randomly selected injured adolescent patients ages 12 to 18 and…
Many States Include Evolution Questions on Assessments
ERIC Educational Resources Information Center
Cavanagh, Sean
2005-01-01
The theory of evolution, pioneered most famously by Charles Darwin, posits that humans and other living creatures have descended from common ancestors over time through a process of random mutation and natural selection. It is widely considered to be a pillar of modern biology. Over the past year, however, public education has been roiled by…
77 FR 40345 - Proposed Agency Information Collection
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-09
... and Security Act and signed into law on December 19, 2007, was funded for the first time by the ARRA... applied to select 350 grants for study. That random sample of projects will be taken from the six Broad... program impacts and other metrics. Also contains a series of questions related to grant performance not...
NASA Astrophysics Data System (ADS)
Rochman, Auliya Noor; Prasetyo, Hari; Nugroho, Munajat Tri
2017-06-01
Vehicle Routing Problem (VRP) often arises when manufacturers need to distribute their products to customers/outlets. The distribution process is typically restricted by the capacity of the vehicle and the working hours at the distributor. This type of VRP is known as the Capacitated Vehicle Routing Problem with Time Windows (CVRPTW). A Biased Random Key Genetic Algorithm (BRKGA) was designed and coded in MATLAB to solve a CVRPTW case of soft drink distribution. The standard BRKGA was then modified by applying chromosome insertion into the initial population and defining chromosome gender for parents undergoing the crossover operation. The performance of the established algorithms was then compared to a heuristic procedure for the same soft drink distribution problem. Two findings are revealed: (1) the total distribution cost of BRKGA with insertion (BRKGA-I) represents a cost saving of 39% compared to the total cost of the heuristic method; (2) BRKGA with gender selection (BRKGA-GS) also improves on the heuristic method, although it tends to yield worse results than the standard BRKGA.
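A minimal sketch of the two BRKGA ingredients named in this abstract: a random-key decoder that turns a chromosome into capacitated routes, and the biased crossover that copies each gene from the elite parent with a fixed probability. The decoder, the 0.7 bias, and all names are illustrative assumptions; the paper's MATLAB implementation, time windows, and the insertion/gender variants are not reproduced.

```python
import random

def decode(keys, demands, capacity):
    """Decode a random-key chromosome into capacitated routes:
    customers are visited in increasing key order, and a new route is
    opened whenever vehicle capacity would be exceeded."""
    order = sorted(range(len(keys)), key=lambda c: keys[c])
    routes, current, load = [], [], 0
    for c in order:
        if current and load + demands[c] > capacity:
            routes.append(current)
            current, load = [], 0
        current.append(c)
        load += demands[c]
    if current:
        routes.append(current)
    return routes

def crossover(elite, other, bias=0.7):
    """Biased crossover: each gene is taken from the elite parent with
    probability `bias`, otherwise from the other parent."""
    return [e if random.random() < bias else o for e, o in zip(elite, other)]
```

Because fitness lives entirely in the decoder, any BRKGA variant (insertion, gendered parent selection) only changes how the key vectors are produced, not how they are interpreted.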
Monte Carlo investigation of thrust imbalance of solid rocket motor pairs
NASA Technical Reports Server (NTRS)
Sforzini, R. H.; Foster, W. A., Jr.
1976-01-01
The Monte Carlo method of statistical analysis is used to investigate the theoretical thrust imbalance of pairs of solid rocket motors (SRMs) firing in parallel. Sets of the significant variables are selected using a random sampling technique and the imbalance calculated for a large number of motor pairs using a simplified, but comprehensive, model of the internal ballistics. The treatment of burning surface geometry allows for the variations in the ovality and alignment of the motor case and mandrel as well as those arising from differences in the basic size dimensions and propellant properties. The analysis is used to predict the thrust-time characteristics of 130 randomly selected pairs of Titan IIIC SRMs. A statistical comparison of the results with test data for 20 pairs shows the theory underpredicts the standard deviation in maximum thrust imbalance by 20% with variability in burning times matched within 2%. The range in thrust imbalance of Space Shuttle type SRM pairs is also estimated using applicable tolerances and variabilities and a correction factor based on the Titan IIIC analysis.
Pandis, Nikolaos; Polychronopoulou, Argy; Eliades, Theodore
2011-12-01
Randomization is a key step in reducing selection bias during the treatment allocation phase of randomized clinical trials. The process of randomization follows specific steps, which include generation of the randomization list, allocation concealment, and implementation of randomization. In the dental and orthodontic literature, treatment allocation is frequently characterized as random; however, the randomization procedures followed are often not appropriate. Randomization methods assign treatment to the trial arms at random, without foreknowledge of allocation by either the participants or the investigators, thus reducing selection bias. Randomization entails generation of the random allocation sequence, allocation concealment, and the actual methodology of implementing treatment allocation randomly and unpredictably. The most popular randomization methods include some form of restricted and/or stratified randomization. This article introduces the reasons that make randomization an integral part of solid clinical trial methodology and presents the main randomization schemes applicable to clinical trials in orthodontics.
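A minimal sketch of the restricted randomization the article discusses: permuted-block generation of a randomization list, which keeps arm sizes balanced while leaving individual assignments unpredictable. Block size and arm labels are illustrative; in practice the list must be paired with proper allocation concealment.

```python
import random

def blocked_randomization(n_blocks, block_size=4, arms=("A", "B")):
    """Permuted-block randomization list: each block contains every arm
    equally often in shuffled order, so group sizes stay balanced."""
    per_arm = block_size // len(arms)
    sequence = []
    for _ in range(n_blocks):
        block = list(arms) * per_arm
        random.shuffle(block)
        sequence.extend(block)
    return sequence
```

Stratified randomization simply runs one such list per stratum (e.g., per center or severity class).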
Borak, T B
1986-04-01
Periodic grab sampling in combination with time-of-occupancy surveys has been the accepted procedure for estimating the annual exposure of underground uranium miners to radon daughters. Temporal variations in the concentration of potential alpha energy in the mine generate uncertainties in this process. A system to randomize the selection of locations for measurement is described which can reduce uncertainties and eliminate systematic biases in the data. In general, a sampling frequency of 50 measurements per year is sufficient to satisfy the criterion that the annual exposure, in working level months, be determined to within +/- 50% of the true value with a 95% level of confidence. Suggestions for implementing this randomization scheme are presented.
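A sketch of one way such a randomized plan could be drawn up: sample measurement days without replacement across the year and assign each a randomly selected location, so neither timing nor place follows a systematic pattern. The function and location names are illustrative assumptions, not the paper's procedure.

```python
import random

def sampling_plan(locations, n_samples=50, seed=None):
    """Randomized grab-sampling plan: n_samples distinct days of the
    year, each paired with a randomly chosen measurement location."""
    rng = random.Random(seed)
    days = rng.sample(range(1, 366), n_samples)  # days without replacement
    return sorted((day, rng.choice(locations)) for day in days)
```

The default of 50 samples per year mirrors the frequency the abstract reports as sufficient for the +/- 50% / 95% confidence criterion.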
Abstinence versus Moderation Goals in Brief Motivational Treatment for Pathological Gambling.
Stea, Jonathan N; Hodgins, David C; Fung, Tak
2015-09-01
The present study examined the nature and impact of participant goal selection (abstinence versus moderation) in brief motivational treatment for pathological gambling via secondary analyses from a randomized controlled trial. The results demonstrated that the pattern of goal selection over time could be characterized by both fluidity and stability: almost half of participants switched their goal at least once, over 25% selected an unchanging goal of 'quit most problematic type of gambling', almost 20% selected an unchanging goal of 'quit all types of gambling', and approximately 10% selected an unchanging goal of 'gamble in a controlled manner.' The results also demonstrated that pretreatment goal selection was uniquely associated with three variables: compared to participants who selected the goal to 'cut back on problem gambling', those who selected the goal to 'quit problem gambling' were more likely to have greater gambling problem severity, to have identified video lottery terminal play as problematic, and to have greater motivation to overcome their gambling problem. Finally, the results demonstrated that goal selection over time had an impact on the average number of days gambled over the course of treatment: those with abstinence-based goals gambled significantly fewer days than those with moderation-based goals. Nevertheless, goal selection over time was not related to dollars gambled, dollars per day gambled, or perceived goal achievement. The findings do not support the contention that abstinence-based goals are more advantageous than moderation goals and are discussed in relation to the broader alcohol treatment literature.
Complete Numerical Solution of the Diffusion Equation of Random Genetic Drift
Zhao, Lei; Yue, Xingye; Waxman, David
2013-01-01
A numerical method is presented to solve the diffusion equation for the random genetic drift that occurs at a single unlinked locus with two alleles. The method was designed to conserve probability, and the resulting numerical solution represents a probability distribution whose total probability is unity. We describe solutions of the diffusion equation whose total probability is unity as complete. Thus the numerical method introduced in this work produces complete solutions, and such solutions have the property that whenever fixation and loss can occur, they are automatically included within the solution. This feature demonstrates that the diffusion approximation can describe not only internal allele frequencies, but also the boundary frequencies zero and one. The numerical approach presented here constitutes a single inclusive framework from which to perform calculations for random genetic drift. It has a straightforward implementation, allowing it to be applied to a wide variety of problems, including those with time-dependent parameters, such as changing population sizes. As tests and illustrations of the numerical method, it is used to determine: (i) the probability density and time-dependent probability of fixation for a neutral locus in a population of constant size; (ii) the probability of fixation in the presence of selection; and (iii) the probability of fixation in the presence of selection and demographic change, the latter in the form of a changing population size. PMID:23749318
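The "complete solution" idea in this abstract, carrying the absorbing boundary frequencies 0 and 1 inside the same probability distribution, can be illustrated with the discrete Wright-Fisher chain rather than the diffusion solver itself (which is not reproduced here). Total probability is conserved and the neutral fixation probability equals the initial allele frequency.

```python
import math
import numpy as np

def wright_fisher_step(p_dist, N):
    """One neutral Wright-Fisher generation acting on the complete
    probability distribution over allele counts 0..2N. The absorbing
    states (loss at 0, fixation at 2N) live inside the same vector,
    so total probability is conserved."""
    M = 2 * N
    freqs = np.arange(M + 1) / M
    # binomial transition matrix: row for frequency p -> Binomial(M, p)
    T = np.array([[math.comb(M, j) * p**j * (1 - p)**(M - j)
                   for j in range(M + 1)] for p in freqs])
    return p_dist @ T
```

Iterating this step shows probability mass draining from interior frequencies onto the two boundaries, exactly the behavior the complete diffusion solution is designed to capture in the continuum limit.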
Koloušková, Pavla; Stone, James D.
2017-01-01
Accurate gene expression measurements are essential in studies of both crop and wild plants. Reverse transcription quantitative real-time PCR (RT-qPCR) has become a preferred tool for gene expression estimation. A selection of suitable reference genes for the normalization of transcript levels is an essential prerequisite of accurate RT-qPCR results. We evaluated the expression stability of eight candidate reference genes across roots, leaves, flower buds and pollen of Silene vulgaris (bladder campion), a model plant for the study of gynodioecy. As random priming of cDNA is recommended for the study of organellar transcripts and poly(A) selection is indicated for nuclear transcripts, we estimated gene expression with both random-primed and oligo(dT)-primed cDNA. Accordingly, we determined reference genes that perform well with oligo(dT)- and random-primed cDNA, making it possible to estimate levels of nucleus-derived transcripts in the same cDNA samples as used for organellar transcripts, a key benefit in studies of cyto-nuclear interactions. Gene expression variance was estimated by RefFinder, which integrates four different analytical tools. The SvACT and SvGAPDH genes were the most stable candidates across various organs of S. vulgaris, regardless of whether pollen was included or not. PMID:28817728
Dietrich, Stefan; Floegel, Anna; Troll, Martina; Kühn, Tilman; Rathmann, Wolfgang; Peters, Anette; Sookthai, Disorn; von Bergen, Martin; Kaaks, Rudolf; Adamski, Jerzy; Prehn, Cornelia; Boeing, Heiner; Schulze, Matthias B; Illig, Thomas; Pischon, Tobias; Knüppel, Sven; Wang-Sattler, Rui; Drogan, Dagmar
2016-10-01
The application of metabolomics in prospective cohort studies is statistically challenging. Given the importance of appropriate statistical methods for the selection of disease-associated metabolites in highly correlated complex data, we combined random survival forest (RSF) with an automated backward elimination procedure that addresses such issues. Our RSF approach was illustrated with data from the European Prospective Investigation into Cancer and Nutrition (EPIC)-Potsdam study, with concentrations of 127 serum metabolites as exposure variables and time to development of type 2 diabetes mellitus (T2D) as the outcome variable. An analysis of this data set using Cox regression with a stepwise selection method was recently published. The methodological comparison (RSF versus Cox regression) was replicated in two independent cohorts. Finally, the R code for implementing the metabolite selection procedure in the RSF syntax is provided. The application of the RSF approach in EPIC-Potsdam resulted in the identification of 16 incident T2D-associated metabolites, which slightly improved prediction of T2D when used in addition to traditional T2D risk factors and also when used together with classical biomarkers. The identified metabolites partly agreed with previous findings using Cox regression, though RSF selected a higher number of highly correlated metabolites. The RSF method appeared to be a promising approach for the identification of disease-associated variables in complex data with time to event as the outcome. The demonstrated RSF approach provides comparable findings to the generally used Cox regression, but also addresses the problem of multicollinearity and is suitable for high-dimensional data. © The Author 2016; all rights reserved. Published by Oxford University Press on behalf of the International Epidemiological Association.
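The automated backward elimination wrapped around the forest can be sketched generically: refit, rank variables by importance, drop the weakest fraction, repeat. The drop fraction and stopping rule here are illustrative assumptions (the paper provides the actual procedure as R code), and `importance_fn` stands in for a freshly grown random survival forest's variable importance, which is not implemented here.

```python
def backward_eliminate(variables, importance_fn, drop_frac=0.2, min_vars=5):
    """Automated backward elimination: on each pass, importance_fn(vars)
    returns {var: importance} from a model refitted on `vars`; the
    lowest-ranked drop_frac of variables is removed until min_vars remain."""
    current = list(variables)
    while len(current) > min_vars:
        imp = importance_fn(current)
        n_drop = min(max(1, int(len(current) * drop_frac)),
                     len(current) - min_vars)
        weakest = set(sorted(current, key=lambda v: imp[v])[:n_drop])
        current = [v for v in current if v not in weakest]
    return current
```

Refitting on every pass is what lets correlated variables re-rank once their competitors are removed, the multicollinearity advantage the abstract highlights over single-pass stepwise Cox selection.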
Intrathecal morphine for analgesia in children undergoing selective dorsal rhizotomy.
Dews, T E; Schubert, A; Fried, A; Ebrahim, Z; Oswalt, K; Paranandi, L
1996-03-01
Selective dorsal root rhizotomy is performed for relief of spasticity in children with cerebral palsy. Postoperative pain relief can be provided by intrathecal morphine administered at the time of the procedure. We sought to define an optimal dose of intrathecal morphine in children undergoing selective rhizotomy, through a randomized, double-blinded prospective trial. After institutional approval and parental written informed consent, 27 patients, ages 3-10 years, were randomized to receive 10, 20, or 30 micrograms.kg-1 (Groups A, B, and C, respectively) of preservative-free morphine administered intrathecally by the surgeon after dural closure. Postoperatively, vital signs, pulse oximetry, and pain intensity scores were recorded hourly for 24 hr. Supplemental intravenous morphine was administered postoperatively according to a predetermined schedule based on pain scores. There was considerable individual variability in the time to initial morphine dosing and cumulative supplemental morphine dose. Time to first supplemental morphine dose was not different between groups. When compared to Groups A and B, cumulative 6-hr supplemental morphine dose was significantly lower in Group C (38.6 +/- 47 micrograms versus 79.1 +/- 74 and 189.6 +/- 126 for Groups A and B, respectively). By 12 hr, cumulative supplemental morphine dose was similar in Groups A and C. Group B consistently had a higher supplemental dose requirement than Groups A and C at 6, 12, and 18 hr. By 24 hr, there was no difference in cumulative dose among groups. Postoperative pain scores and the incidence of respiratory events, nausea, vomiting and pruritus were comparable among groups. These data suggest that intrathecal morphine at 30 micrograms.kg-1 provides the most intense analgesia at 6 hr following selective dorsal root rhizotomy, but was otherwise comparable to the 10 micrograms.kg-1 dose.
Crime Victimization in Adults With Severe Mental Illness
Teplin, Linda A.; McClelland, Gary M.; Abram, Karen M.; Weiner, Dana A.
2006-01-01
Context: Since deinstitutionalization, most persons with severe mental illness (SMI) now live in the community, where they are at great risk for crime victimization. Objectives: To determine the prevalence and incidence of crime victimization among persons with SMI by sex, race/ethnicity, and age, and to compare rates with general population data (the National Crime Victimization Survey), controlling for income and demographic differences between the samples. Design: Epidemiologic study of persons in treatment. Independent master's-level clinical research interviewers administered the National Crime Victimization Survey to randomly selected patients sampled from 16 randomly selected mental health agencies. Setting: Sixteen agencies providing outpatient, day, and residential treatment to persons with SMI in Chicago, Ill. Participants: Randomly selected, stratified sample of 936 patients aged 18 or older (483 men, 453 women) who were African American (n = 329), non-Hispanic white (n = 321), Hispanic (n = 270), or other race/ethnicity (n = 22). The comparison group comprised 32,449 participants in the National Crime Victimization Survey. Main Outcome Measure: National Crime Victimization Survey, developed by the Bureau of Justice Statistics. Results: More than one quarter of persons with SMI had been victims of a violent crime in the past year, a rate more than 11 times higher than the general population rate even after controlling for demographic differences between the 2 samples (P<.001). The annual incidence of violent crime in the SMI sample (168.2 incidents per 1000 persons) is more than 4 times higher than the general population rate (39.9 incidents per 1000 persons) (P<.001). Depending on the type of violent crime (rape/sexual assault, robbery, assault, and their subcategories), prevalence was 6 to 23 times greater among persons with SMI than among the general population.
Conclusions: Crime victimization is a major public health problem among persons with SMI who are treated in the community. We recommend directions for future research, propose modifications in public policy, and suggest how the mental health system can respond to reduce victimization and its consequences. PMID:16061769
Clustering of financial time series with application to index and enhanced index tracking portfolio
NASA Astrophysics Data System (ADS)
Dose, Christian; Cincotti, Silvano
2005-09-01
A stochastic-optimization technique based on time-series cluster analysis is described for index tracking and enhanced index tracking problems. Our methodology solves the problem in two steps, i.e., by first selecting a subset of stocks and then setting the weight of each stock as the result of an optimization process (asset allocation). The present formulation takes into account constraints on the number of stocks and on the fraction of capital invested in each of them, whilst not including transaction costs. Computational results based on clustering selection are compared to those of random techniques and show the importance of clustering in noise reduction and robust forecasting applications, in particular for enhanced index tracking.
Sugarman, R.M.
1960-08-30
An oscilloscope is designed for displaying transient signal waveforms having random time and amplitude distributions. The oscilloscope is a sampling device that selects for display a portion of only those waveforms having a particular range of amplitudes. For this purpose a pulse-height analyzer is provided to screen the pulses. A variable voltage-level shifter and a time-scale ramp-voltage generator measure the pulse height relative to the start of the waveform. The variable voltage shifter produces a voltage level raised one step for each sequential signal waveform to be sampled, and this results in an unsmeared record of input signal waveforms. Appropriate delay devices permit each sampled waveform to pass its peak amplitude before the circuit selects it for display.
Nasejje, Justine B; Mwambi, Henry; Dheda, Keertan; Lesosky, Maia
2017-07-28
Random survival forest (RSF) models have been identified as alternative methods to the Cox proportional hazards model for analysing time-to-event data. These methods, however, have been criticised for the bias that results from favouring covariates with many split-points, and hence conditional inference forests have been suggested for time-to-event data. Conditional inference forests (CIF) are known to correct the bias in RSF models by separating the search for the best covariate to split on from the search for the best split point for the selected covariate. In this study, we compare the random survival forest model to the conditional inference forest (CIF) model using twenty-two simulated time-to-event datasets. We also analysed two real time-to-event datasets. The first dataset is based on the survival of children under five years of age in Uganda and consists of categorical covariates, most of them having more than two levels (many split-points). The second dataset is based on the survival of patients with extensively drug-resistant tuberculosis (XDR TB) and consists mainly of categorical covariates with two levels (few split-points). The study findings indicate that the conditional inference forest model is superior to random survival forest models for analysing time-to-event data whose covariates have many split-points, based on the bootstrap cross-validated estimates of the integrated Brier score. However, conditional inference forests perform comparably to random survival forest models when the covariates have fewer split-points. Although survival forests are promising methods for analysing time-to-event data, it is important to identify the best forest model for an analysis based on the nature of the covariates of the dataset in question.
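The split-point bias this abstract describes can be demonstrated with a small self-contained experiment (a sketch, not an RSF or CIF implementation): on pure-noise data, a 20-level categorical feature usually attains a larger best-split gain than a 2-level one simply because it offers more candidate splits, which is why naive split search over-selects such covariates.

```python
import random

def best_split_gain(levels, y):
    """Best squared-error reduction from any two-way grouping of a
    categorical feature's levels, searched along the ordering of levels
    by mean response (optimal for regression splits)."""
    groups = {}
    for lv, yi in zip(levels, y):
        groups.setdefault(lv, []).append(yi)
    ordered = sorted(groups, key=lambda lv: sum(groups[lv]) / len(groups[lv]))
    total, n = sum(y), len(y)
    best, left_sum, left_n = 0.0, 0.0, 0
    for lv in ordered[:-1]:
        left_sum += sum(groups[lv])
        left_n += len(groups[lv])
        gain = (left_sum**2 / left_n
                + (total - left_sum)**2 / (n - left_n)
                - total**2 / n)
        best = max(best, gain)
    return best

# Both features are pure noise, yet the many-level feature usually finds
# a bigger (spurious) split -- the bias that CIF's separate covariate
# selection step corrects.
rng = random.Random(0)
wins = 0
for _ in range(30):
    y = [rng.gauss(0, 1) for _ in range(200)]
    two = [rng.randrange(2) for _ in range(200)]
    twenty = [rng.randrange(20) for _ in range(200)]
    wins += best_split_gain(twenty, y) > best_split_gain(two, y)
```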
Enhancing the Selection of Backoff Interval Using Fuzzy Logic over Wireless Ad Hoc Networks
Ranganathan, Radha; Kannan, Kathiravan
2015-01-01
IEEE 802.11 is the de facto standard for medium access over wireless ad hoc networks. The collision avoidance mechanism (i.e., random binary exponential backoff, BEB) of IEEE 802.11 DCF (distributed coordination function) is inefficient and unfair, especially under heavy load. In the literature, many algorithms have been proposed to tune the contention window (CW) size. However, these algorithms make every node select its backoff interval between [0, CW] in a random and uniform manner. This randomness is incorporated to avoid collisions among the nodes. But this random backoff interval can change the optimal order and frequency of channel access among competing nodes, which results in unfairness and increased delay. In this paper, we propose an algorithm that schedules the medium access in a fair and effective manner. This algorithm enhances IEEE 802.11 DCF with an additional level of contention resolution that prioritizes the contending nodes according to queue length and waiting time. Each node computes its unique backoff interval using fuzzy logic based on input parameters collected from contending nodes through overhearing. We evaluate our algorithm against the IEEE 802.11 and GDCF (gentle distributed coordination function) protocols using the ns-2.35 simulator and show that our algorithm achieves good performance. PMID:25879066
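A minimal sketch of the idea of deriving a deterministic, priority-based backoff from queue length and waiting time via fuzzy memberships. The shoulder breakpoints, the max-aggregation, and the linear defuzzification are illustrative assumptions; the paper's actual rule base and overheard inputs are not reproduced.

```python
def high(x, lo, hi):
    """Right-shoulder fuzzy membership: 0 below lo, 1 above hi."""
    if x <= lo:
        return 0.0
    if x >= hi:
        return 1.0
    return (x - lo) / (hi - lo)

def fuzzy_backoff(queue_len, wait_time, cw_max=1024):
    """Map a node's queue length and head-of-line waiting time (s) to an
    urgency degree, then to a backoff interval: more urgent nodes draw
    shorter backoffs instead of uniform-random ones."""
    urgency = max(high(queue_len, 0, 20), high(wait_time, 0.0, 2.0))
    return int(cw_max * (1.0 - urgency)) + 1
```

Because the mapping is deterministic in the node's state, two nodes with different loads pick different slots, restoring the access ordering that uniform-random backoff scrambles.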
Collaborative emitter tracking using Rao-Blackwellized random exchange diffusion particle filtering
NASA Astrophysics Data System (ADS)
Bruno, Marcelo G. S.; Dias, Stiven S.
2014-12-01
We introduce in this paper the fully distributed, random exchange diffusion particle filter (ReDif-PF) to track a moving emitter using multiple received signal strength (RSS) sensors. We consider scenarios with both known and unknown sensor model parameters. In the unknown parameter case, a Rao-Blackwellized (RB) version of the random exchange diffusion particle filter, referred to as the RB ReDif-PF, is introduced. In a simulated scenario with a partially connected network, the proposed ReDif-PF outperformed a PF tracker that assimilates local neighboring measurements only and also outperformed a linearized random exchange distributed extended Kalman filter (ReDif-EKF). Furthermore, the novel ReDif-PF matched the tracking error performance of alternative suboptimal distributed PFs based respectively on iterative Markov chain move steps and selective average gossiping with an inter-node communication cost that is roughly two orders of magnitude lower than the corresponding cost for the Markov chain and selective gossip filters. Compared to a broadcast-based filter which exactly mimics the optimal centralized tracker or its equivalent (exact) consensus-based implementations, ReDif-PF showed a degradation in steady-state error performance. However, compared to the optimal consensus-based trackers, ReDif-PF is better suited for real-time applications since it does not require iterative inter-node communication between measurement arrivals.
Time Series Analysis for Spatial Node Selection in Environment Monitoring Sensor Networks
Bhandari, Siddhartha; Jurdak, Raja; Kusy, Branislav
2017-01-01
Wireless sensor networks are widely used in environmental monitoring. The number of sensor nodes to be deployed will vary depending on the desired spatio-temporal resolution. Selecting an optimal number, position and sampling rate for an array of sensor nodes in environmental monitoring is a challenging question. Most of the current solutions are either theoretical or simulation-based where the problems are tackled using random field theory, computational geometry or computer simulations, limiting their specificity to a given sensor deployment. Using an empirical dataset from a mine rehabilitation monitoring sensor network, this work proposes a data-driven approach where co-integrated time series analysis is used to select the number of sensors from a short-term deployment of a larger set of potential node positions. Analyses conducted on temperature time series show 75% of sensors are co-integrated. Using only 25% of the original nodes can generate a complete dataset within a 0.5 °C average error bound. Our data-driven approach to sensor position selection is applicable for spatiotemporal monitoring of spatially correlated environmental parameters to minimize deployment cost without compromising data resolution. PMID:29271880
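A simplified sketch of the data-driven selection idea in this abstract: from a short full deployment, drop any sensor whose series can be reconstructed from the already-kept sensors within the error bound. Plain least-squares reconstruction is used here as a stand-in for the paper's co-integration analysis; the threshold mirrors the reported 0.5 °C bound.

```python
import numpy as np

def select_nodes(series, max_err=0.5):
    """Greedy spatial-node selection: sensor i is dropped when its series
    can be predicted from the kept sensors (plus an intercept) by
    least-squares regression with mean absolute error under max_err."""
    keep = [0]
    for i in range(1, len(series)):
        X = np.column_stack([series[j] for j in keep]
                            + [np.ones(len(series[i]))])
        coef, *_ = np.linalg.lstsq(X, series[i], rcond=None)
        if np.mean(np.abs(X @ coef - series[i])) > max_err:
            keep.append(i)
    return keep
```

Sensors that are merely offset or scaled copies of kept ones are pruned, while sensors carrying genuinely new dynamics survive.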
Valuing Equal Protection in Aviation Security Screening.
Nguyen, Kenneth D; Rosoff, Heather; John, Richard S
2017-12-01
The growing number of anti-terrorism policies has elevated public concerns about discrimination. Within the context of airport security screening, the current study examines how American travelers value the principle of equal protection by quantifying the "equity premium" that they are willing to sacrifice to avoid screening procedures that result in differential treatment. In addition, we applied the notion of procedural justice to explore the effect of alternative selective screening procedures on the value of equal protection. Two hundred twenty-two respondents were randomly assigned to one of three selective screening procedures: (1) random, (2) based on behavioral indicators, or (3) based on demographic characteristics. They were asked to choose between airlines using either an equal or a discriminatory screening procedure. While the former requires all passengers to be screened in the same manner, the latter mandates that all passengers undergo a quick primary screening and, in addition, some passengers are selected for a secondary screening based on a predetermined selection criterion. Equity premiums were quantified in terms of monetary cost, wait time, convenience, and safety compromise. Results show that equity premiums varied greatly across respondents, with many indicating little willingness to sacrifice to avoid inequitable screening, and a smaller minority willing to sacrifice anything to avoid the discriminatory screening. The selective screening manipulation was effective in that equity premiums were greater under selection by demographic characteristics compared with the other two procedures. © 2017 Society for Risk Analysis.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sheng, Y; Li, T; Yoo, S
2016-06-15
Purpose: To enable near-real-time (<20 sec) and interactive planning without compromising quality for whole-breast RT treatment planning using tangential fields. Methods: Whole-breast RT plans from 20 patients treated with single energy (SE, 6MV, 10 patients) or mixed energy (ME, 6/15MV, 10 patients) were randomly selected for model training. An additional 20 cases were used as the validation cohort. The planning process for a new case consists of three fully automated steps: 1. Energy Selection. A classification model automatically selects the energy level. To build the energy selection model, principal component analysis (PCA) was applied to the digital reconstructed radiographs (DRRs) of the training cases to extract the anatomy-energy relationship. 2. Fluence Estimation. Once energy is selected, a random forest (RF) model generates the initial fluence. This model summarizes the relationship between shape-based features of the patient anatomy and the output fluence. 3. Fluence Fine-tuning. This step balances the overall dose contribution throughout the whole breast tissue by automatically selecting reference points and applying centrality correction. Fine-tuning works at the beamlet level until the dose distribution meets clinical objectives. Prior to finalization, physicians can also make patient-specific trade-offs between target coverage and high-dose volumes. The proposed method was validated by comparing auto-plans with manually generated clinical plans using the Wilcoxon signed-rank test. Results: In 19/20 cases the model suggested the same energy combination as the clinical plans. The target volume coverage V100% was 78.1±4.7% for auto-plans and 79.3±4.8% for clinical plans (p=0.12). Volumes receiving 105% of Rx were 69.2±78.0 cc for auto-plans compared to 83.9±87.2 cc for clinical plans (p=0.13). The mean V10Gy and V20Gy of the ipsilateral lung were 24.4±6.7% and 18.6±6.0% for auto-plans and 24.6±6.7% and 18.9±6.1% for clinical plans (p=0.04, <0.001). Total computational time for auto-plans was <20 s.
Conclusion: We developed an automated method that generates breast radiotherapy plans with accurate energy selection, similar target volume coverage, reduced hotspot volumes, and a significant reduction in planning time, allowing for near-real-time planning.
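The energy-selection step (step 1) can be illustrated with a heavily simplified stand-in: the paper's PCA-on-DRR classifier is reduced here to a nearest-class-mean rule over two hypothetical scalar anatomy features (separation and field width, invented for illustration; the real model works on DRR images).

```python
import math

# Toy stand-in for step 1 (energy selection): classify a new case by the
# distance of its anatomy-derived feature vector to each class mean.
# The paper applies PCA to DRRs; here hypothetical scalar features
# replace the DRR projection entirely.

def class_means(training):
    """training: dict label -> list of feature vectors."""
    means = {}
    for label, vectors in training.items():
        n = len(vectors)
        dim = len(vectors[0])
        means[label] = [sum(v[i] for v in vectors) / n for i in range(dim)]
    return means

def select_energy(features, means):
    """Return the label whose class mean is nearest (Euclidean) to `features`."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(means, key=lambda label: dist(features, means[label]))

# Hypothetical training data: [separation_cm, field_width_cm]
training = {
    "6MV":    [[18.0, 8.0], [19.5, 8.5], [17.0, 7.5]],
    "6/15MV": [[24.0, 10.0], [26.0, 11.0], [25.0, 10.5]],
}
means = class_means(training)
print(select_energy([25.5, 10.2], means))  # -> 6/15MV (thick separation)
```

A thicker separation pushes the case toward the mixed-energy class, mirroring the clinical intuition that larger separations need higher energy.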
Random walks on activity-driven networks with attractiveness
NASA Astrophysics Data System (ADS)
Alessandretti, Laura; Sun, Kaiyuan; Baronchelli, Andrea; Perra, Nicola
2017-05-01
Virtually all real-world networks are dynamical entities. In social networks, the propensity of nodes to engage in social interactions (activity) and their chances to be selected by active nodes (attractiveness) are heterogeneously distributed. Here, we present a time-varying network model where each node and the dynamical formation of ties are characterized by these two features. We study how these properties affect random-walk processes unfolding on the network when the time scales describing the process and the network evolution are comparable. We derive analytical solutions for the stationary state and the mean first-passage time of the process, and we study cases informed by empirical observations of social networks. Our work shows that previously disregarded properties of real social systems, such as heterogeneous distributions of activity and attractiveness as well as the correlations between them, substantially affect the dynamical process unfolding on the network.
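The two ingredients of the model can be sketched in a toy simulation (a simplification of the paper's analytical model, with invented parameter values): each node activates with probability given by its activity, active nodes draw partners with probability proportional to attractiveness, and a walker hops on the resulting instantaneous edges.

```python
import random

# Minimal activity-driven network with attractiveness: at each step,
# node i activates with probability a[i]; each active node draws m
# partners with probability proportional to attractiveness b[j]; a
# walker then hops uniformly over the edges attached to its current
# node (or stays put if it is isolated at that instant).

def step(walker, a, b, m, rng):
    n = len(a)
    edges = {i: set() for i in range(n)}
    nodes = list(range(n))
    for i in range(n):
        if rng.random() < a[i]:
            for j in rng.choices(nodes, weights=b, k=m):
                if j != i:
                    edges[i].add(j)
                    edges[j].add(i)
    neighbours = list(edges[walker])
    return rng.choice(neighbours) if neighbours else walker

rng = random.Random(42)
n = 50
a = [rng.uniform(0.05, 0.5) for _ in range(n)]   # heterogeneous activity
b = [rng.uniform(0.1, 1.0) for _ in range(n)]    # heterogeneous attractiveness
walker = 0
visits = [0] * n
for _ in range(2000):
    walker = step(walker, a, b, m=2, rng=rng)
    visits[walker] += 1
# In the stationary state, occupation correlates with both activity
# and attractiveness, the effect the paper derives analytically.
```

Because the network is regenerated at every step, the walk and the network evolve on comparable time scales, which is exactly the regime the abstract studies.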
The clinical evaluation of platelet-rich plasma on free gingival graft's donor site wound healing.
Samani, Mahmoud Khosravi; Saberi, Bardia Vadiati; Ali Tabatabaei, S M; Moghadam, Mahdjoube Goldani
2017-01-01
It has been proved that platelet-rich plasma (PRP) can promote wound healing. PRP can therefore be advantageous in periodontal plastic surgeries, free gingival graft (FGG) surgery being one of them. In this randomized split-mouth controlled trial, 10 patients who needed bilateral FGG were selected, and the two donor sites were randomly assigned to either natural healing or healing assisted by PRP. The outcome was assessed by comparing the extent of wound closure, the Manchester scale, the Landry healing scale, the visual analog scale, and tissue thickness between the study groups at different time intervals. Repeated-measures analysis of variance and the paired t-test were used. Statistical significance was set at P ≤ 0.05. Significant differences between the study groups, and also across time intervals, were seen in all parameters except the changes in tissue thickness. PRP accelerates the healing process of wounds and reduces the healing time.
An Overview of Randomization and Minimization Programs for Randomized Clinical Trials
Saghaei, Mahmoud
2011-01-01
Randomization is an essential component of sound clinical trials, which prevents selection biases and helps in blinding the allocations. Randomization is a process by which subsequent subjects are enrolled into trial groups only by chance, which essentially eliminates selection biases. A serious possible consequence of randomization is severe imbalance among the treatment groups with respect to some prognostic factors, which may invalidate the trial results or necessitate complex and usually unreliable secondary analyses to eradicate the source of the imbalances. Minimization, on the other hand, tends to allocate in such a way as to minimize the differences among groups with respect to prognostic factors. Pure minimization is therefore completely deterministic; that is, one can predict the allocation of the next subject by knowing the factor levels of the previously enrolled subjects and the characteristics of the next subject. To reduce this predictability, it is necessary to include some element of randomness in the minimization algorithms. In this article, brief descriptions of randomization and minimization are presented, followed by an introduction to selected randomization and minimization programs. PMID:22606659
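The trade-off the abstract describes can be sketched as a Pocock-Simon-style minimization with a biased coin: the next subject goes to the arm that minimizes marginal imbalance only with probability p, which restores some unpredictability. A minimal sketch (arm names, factors, and p are illustrative, not taken from any of the reviewed programs):

```python
import random

# Minimization with a random element: assign the incoming subject to
# the arm that minimizes total imbalance over its factor levels, but
# only with probability p; otherwise pick a different arm at random.

def minimize_assign(counts, subject, arms, p=0.8, rng=random):
    """counts[arm][factor][level] -> number already assigned.
    subject: dict factor -> level for the incoming subject."""
    def imbalance(arm):
        # Marginal totals over this subject's factor levels if added to `arm`.
        return sum(counts[arm][f][subject[f]] + 1 for f in subject)
    best = min(arms, key=imbalance)
    others = [a for a in arms if a != best]
    arm = best if rng.random() < p else rng.choice(others)
    for f in subject:
        counts[arm][f][subject[f]] += 1
    return arm

arms = ["treatment", "control"]
factors = {"sex": ["m", "f"], "age": ["<60", ">=60"]}
counts = {a: {f: {lv: 0 for lv in lvs} for f, lvs in factors.items()}
          for a in arms}
rng = random.Random(7)
for _ in range(40):
    subject = {f: rng.choice(lvs) for f, lvs in factors.items()}
    minimize_assign(counts, subject, arms, p=0.8, rng=rng)
# Group sizes and factor margins stay close to balanced, yet no single
# assignment is fully predictable because of the biased coin.
```

With p = 1 this reduces to pure (deterministic) minimization; with p = 0.5 it degenerates to simple randomization, so p tunes the balance/predictability trade-off.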
Selecting Random Distributed Elements for HIFU using Genetic Algorithm
NASA Astrophysics Data System (ADS)
Zhou, Yufeng
2011-09-01
As an effective and noninvasive therapeutic modality for tumor treatment, high-intensity focused ultrasound (HIFU) has attracted attention from both physicians and patients. New generations of HIFU systems with the ability to electronically steer the HIFU focus using phased-array transducers have been under development. The presence of side and grating lobes may cause undesired thermal accumulation at the interface of the coupling medium (i.e., water) and skin, or in the intervening tissue. Although sparse randomly distributed piston elements could reduce the amplitude of grating lobes, there are theoretically no grating lobes with the use of concave elements in the new phased-array HIFU. A new HIFU transmission strategy is proposed in this study: firing a subset of the elements for a certain period and then changing to another group for the next firing sequence. The advantages are: (1) the asymmetric positions of the active elements may reduce the side lobes, and (2) each element has some resting time during the entire HIFU ablation (up to several hours for some clinical applications), so the loss of transducer efficiency due to thermal accumulation is minimized. A genetic algorithm was used to select the randomly distributed active elements in the HIFU array, with the amplitudes of the first side lobes at the focal plane as the fitness value in the optimization. Overall, it is suggested that the proposed strategy could reduce the side lobes and the consequent side effects, and that the genetic algorithm is effective in selecting randomly distributed elements in a HIFU array.
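A toy version of this optimization can make the idea concrete. Here a 1-D array and the array-factor magnitude at one fixed off-axis angle stand in for the paper's 2-D HIFU array and its focal-plane side-lobe computation; all geometry values are invented.

```python
import cmath
import math
import random

# Genetic-algorithm sketch for choosing k active elements out of n in a
# linear array. Fitness is the array-factor magnitude at a fixed
# off-axis angle (a crude proxy for the first side lobe); lower is fitter.

def side_lobe(active, pitch_mm=1.0, wavelength_mm=1.0, theta=0.2):
    k = 2 * math.pi / wavelength_mm
    s = sum(cmath.exp(1j * k * i * pitch_mm * math.sin(theta)) for i in active)
    return abs(s)

def ga_select(n, k, pop_size=30, generations=60, rng=random):
    def random_individual():
        return tuple(sorted(rng.sample(range(n), k)))
    def mutate(ind):
        ind = set(ind)
        out = rng.choice(sorted(ind))                       # drop one element
        inn = rng.choice([i for i in range(n) if i not in ind])  # add another
        ind.remove(out)
        ind.add(inn)
        return tuple(sorted(ind))
    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=side_lobe)                 # lower side lobe = fitter
        survivors = pop[: pop_size // 2]        # truncation selection (elitist)
        pop = survivors + [mutate(rng.choice(survivors))
                           for _ in range(pop_size - len(survivors))]
    return min(pop, key=side_lobe)

rng = random.Random(1)
best = ga_select(n=32, k=16, rng=rng)
# `best` is a 16-element firing pattern with a suppressed off-axis response.
```

The paper's fitness (first-side-lobe amplitude on the focal plane from a full acoustic-field computation) would simply replace `side_lobe` here; the GA machinery is unchanged.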
Single-mode SOA-based 1kHz-linewidth dual-wavelength random fiber laser.
Xu, Yanping; Zhang, Liang; Chen, Liang; Bao, Xiaoyi
2017-07-10
Narrow-linewidth multi-wavelength fiber lasers are of significant interest for fiber-optic sensors, spectroscopy, optical communications, and microwave generation. A novel narrow-linewidth dual-wavelength random fiber laser with single-mode operation, based on semiconductor optical amplifier (SOA) gain, is achieved in this work for the first time, to the best of our knowledge. A simplified theoretical model is established to characterize this kind of random fiber laser. The inhomogeneous gain in the SOA significantly mitigates the mode competition and alleviates the laser instability, both of which are frequently encountered in multi-wavelength fiber lasers with Erbium-doped fiber gain. The enhanced random distributed feedback from a 5 km non-uniform fiber provides coherent feedback, acting as a mode-selection element to ensure single-mode operation with a narrow linewidth of ~1 kHz. The laser noise is also comprehensively investigated, showing that the proposed random fiber laser has suppressed intensity and frequency noise.
Extended observability of linear time-invariant systems under recurrent loss of output data
NASA Technical Reports Server (NTRS)
Luck, Rogelio; Ray, Asok; Halevi, Yoram
1989-01-01
Recurrent loss of sensor data in integrated control systems of an advanced aircraft may occur under different operating conditions that include detected frame errors and queue saturation in computer networks, and bad data suppression in signal processing. This paper presents an extension of the concept of observability based on a set of randomly selected nonconsecutive outputs in finite-dimensional, linear, time-invariant systems. Conditions for testing extended observability have been established.
Cohen-Khait, Ruth; Schreiber, Gideon
2016-01-01
Protein–protein interactions occur via well-defined interfaces on the protein surface. Whereas the location of homologous interfaces is conserved, their composition varies, suggesting that multiple solutions may support high-affinity binding. In this study, we examined the plasticity of the interface of TEM1 β-lactamase with its protein inhibitor BLIP by low-stringency selection of a random TEM1 library using yeast surface display. Our results show that most interfacial residues could be mutated without a loss in binding affinity, protein stability, or enzymatic activity, suggesting plasticity in the interface composition supporting high-affinity binding. Interestingly, many of the selected mutations promoted faster association. Further selection for faster binders was achieved by drastically decreasing the library–ligand incubation time to 30 s. Preequilibrium selection as suggested here is a novel methodology for specifically selecting faster-associating protein complexes. PMID:27956635
Temporally selective attention supports speech processing in 3- to 5-year-old children.
Astheimer, Lori B; Sanders, Lisa D
2012-01-01
Recent event-related potential (ERP) evidence demonstrates that adults employ temporally selective attention to preferentially process the initial portions of words in continuous speech. Doing so is an effective listening strategy since word-initial segments are highly informative. Although the development of this process remains unexplored, directing attention to word onsets may be important for speech processing in young children who would otherwise be overwhelmed by the rapidly changing acoustic signals that constitute speech. We examined the use of temporally selective attention in 3- to 5-year-old children listening to stories by comparing ERPs elicited by attention probes presented at four acoustically matched times relative to word onsets: concurrently with a word onset, 100 ms before, 100 ms after, and at random control times. By 80 ms, probes presented at and after word onsets elicited a larger negativity than probes presented before word onsets or at control times. The latency and distribution of this effect is similar to temporally and spatially selective attention effects measured in adults and, despite differences in polarity, spatially selective attention effects measured in children. These results indicate that, like adults, preschool aged children modulate temporally selective attention to preferentially process the initial portions of words in continuous speech. Copyright © 2011 Elsevier Ltd. All rights reserved.
Piecewise SALT sampling for estimating suspended sediment yields
Robert B. Thomas
1989-01-01
A probability sampling method called SALT (Selection At List Time) has been developed for collecting and summarizing data on delivery of suspended sediment in rivers. It is based on sampling and estimating yield using a suspended-sediment rating curve for high discharges and simple random sampling for low flows. The method gives unbiased estimates of total yield and...
The Factorial Survey: Design Selection and its Impact on Reliability and Internal Validity
ERIC Educational Resources Information Center
Dülmer, Hermann
2016-01-01
The factorial survey is an experimental design consisting of varying situations (vignettes) that have to be judged by respondents. For more complex research questions, it quickly becomes impossible for an individual respondent to judge all vignettes. To overcome this problem, random designs are recommended most of the time, whereas quota designs…
Effects of Female Education on Economic Growth: A Cross Country Empirical Study
ERIC Educational Resources Information Center
Oztunc, Hakan; Oo, Zar Chi; Serin, Zehra Vildan
2015-01-01
This study examines the extent to which women's education affects long-term economic growth in the Asia Pacific region. It focuses on the time period between 1990 and 2010, using data collected in randomly selected Asia Pacific countries: Bangladesh, Cambodia, China, India, Indonesia, Lao PDR, Malaysia, Myanmar, Philippines, Thailand, and Vietnam.…
Need to Improve Efficiency of Reserve Training. Report to the Congress.
ERIC Educational Resources Information Center
Comptroller General of the U.S., Washington, DC.
The report discusses the need to vary the training of Reserve and Guard units by skill and readiness requirements and to make more efficient use of training time. It contains recommendations to the Secretaries of Defense, Transportation, Army, Navy, and Air Force. The review was based on questionnaires mailed to 2,209 randomly selected reservists…
An automatic camera device for measuring waterfowl use
Cowardin, L.M.; Ashe, J.E.
1965-01-01
A Yashica Sequelle camera was modified and equipped with a timing device so that it would take pictures automatically at 15-minute intervals. Several of these cameras were used to photograph randomly selected quadrats located in different marsh habitats. The number of birds photographed in the different areas was used as an index of waterfowl use.
Boone-Heinonen, Janne; Guilkey, David K; Evenson, Kelly R; Gordon-Larsen, Penny
2010-10-04
Built environment research is dominated by cross-sectional designs, which are particularly vulnerable to residential self-selection bias resulting from health-related attitudes, neighborhood preferences, or other unmeasured characteristics related to both neighborhood choice and health-related outcomes. We used cohort data from the National Longitudinal Study of Adolescent Health (United States; Wave I, 1994-95; Wave III, 2001-02; n = 12,701) and a time-varying geographic information system. Longitudinal relationships between moderate to vigorous physical activity (MVPA) bouts and built and socioeconomic environment measures (landcover diversity, pay and public physical activity facilities per 10,000 population, street connectivity, median household income, and crime rate) from adolescence to young adulthood were estimated using random effects models (which are biased by unmeasured confounders) and fixed effects models (a within-person estimator, which adjusts for unmeasured confounders that are stable over time). Random effects models yielded null associations except for negative crime-MVPA associations [coefficient (95% CI): -0.056 (-0.083, -0.029) in males, -0.061 (-0.090, -0.033) in females]. After controlling for measured and time-invariant unmeasured characteristics using within-person estimators, MVPA was higher with greater availability of pay physical activity facilities in males [coefficient (95% CI): 0.024 (0.006, 0.042)], and lower with higher crime rates in males [coefficient (95% CI): -0.107 (-0.140, -0.075)] and females [coefficient (95% CI): -0.046 (-0.083, -0.009)]. Other associations were null or in the counter-intuitive direction. Comparison of within-person estimates to estimates unadjusted for unmeasured characteristics suggests that residential self-selection can bias associations toward the null, as opposed to its typical characterization as a positive confounder.
Differential environment-MVPA associations by residential relocation suggest that studies examining changes following residential relocation may be vulnerable to selection bias. The authors discuss complexities of adjusting for residential self-selection and residential relocation, particularly during the adolescent to young adult transition.
The RANDOM computer program: A linear congruential random number generator
NASA Technical Reports Server (NTRS)
Miles, R. F., Jr.
1986-01-01
The RANDOM Computer Program is a FORTRAN program for generating random number sequences and testing linear congruential random number generators (LCGs). The linear congruential form of random number generator is discussed, and the selection of parameters of an LCG for a microcomputer is described. This document describes the following: (1) the RANDOM Computer Program; (2) RANDOM.MOD, the computer code needed to implement an LCG in a FORTRAN program; and (3) the RANCYCLE and ARITH Computer Programs that provide computational assistance in the selection of parameters for an LCG. The RANDOM, RANCYCLE, and ARITH Computer Programs are written in Microsoft FORTRAN for the IBM PC microcomputer and its compatibles. With only minor modifications, the RANDOM Computer Program and its LCG can be run on most microcomputers or mainframe computers.
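The linear congruential form the report discusses is x_{n+1} = (a·x_n + c) mod m. A minimal sketch (the multiplier and increment below are well-known textbook constants, not necessarily the parameters the RANDOM program would select):

```python
# Minimal linear congruential generator: x_{n+1} = (a * x_n + c) mod m.
# The default a, c, m are widely used 32-bit full-period constants
# (Numerical-Recipes style), chosen here purely for illustration.

class LCG:
    def __init__(self, seed, a=1664525, c=1013904223, m=2**32):
        self.state = seed % m
        self.a, self.c, self.m = a, c, m

    def next_int(self):
        self.state = (self.a * self.state + self.c) % self.m
        return self.state

    def next_float(self):
        """Uniform in [0, 1)."""
        return self.next_int() / self.m

rng = LCG(seed=12345)
sample = [rng.next_float() for _ in range(5)]
# With these parameters the LCG has full period: it visits every one of
# the 2**32 states before repeating, which is the kind of property the
# report's RANCYCLE/ARITH parameter-selection aids are meant to check.
```

Parameter selection matters because a poor (a, c, m) choice shortens the period or produces strong lattice structure in the output.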
Finite-time scaling at the Anderson transition for vibrations in solids
NASA Astrophysics Data System (ADS)
Beltukov, Y. M.; Skipetrov, S. E.
2017-11-01
A model in which a three-dimensional elastic medium is represented by a network of identical masses connected by springs of random strengths and allowed to vibrate only along a selected axis of the reference frame exhibits an Anderson localization transition. To study this transition, we assume that the dynamical matrix of the network is given by a product of a sparse random matrix with real, independent, Gaussian-distributed nonzero entries and its transpose. A finite-time scaling analysis of the system's response to an initial excitation allows us to estimate the critical parameters of the localization transition. The critical exponent is found to be ν = 1.57 ± 0.02, in agreement with previous studies of the Anderson transition belonging to the three-dimensional orthogonal universality class.
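The point of building the dynamical matrix as M = AAᵀ with A sparse and Gaussian is that any such product is symmetric positive semidefinite, so all squared vibrational frequencies are non-negative and the model is mechanically stable. A small sketch (matrix sizes and sparsity are illustrative):

```python
import random

# Dynamical-matrix construction M = A A^T with A a sparse random matrix
# of independent Gaussian nonzero entries. For any x,
# x^T M x = |A^T x|^2 >= 0, so M's spectrum is non-negative.

def sparse_gaussian(n, m, density, rng):
    return [[rng.gauss(0.0, 1.0) if rng.random() < density else 0.0
             for _ in range(m)] for _ in range(n)]

def aat(a):
    """Return A @ A^T for a list-of-lists matrix A."""
    n, m = len(a), len(a[0])
    return [[sum(a[i][k] * a[j][k] for k in range(m)) for j in range(n)]
            for i in range(n)]

rng = random.Random(0)
A = sparse_gaussian(20, 25, density=0.2, rng=rng)
M = aat(A)

# Check the quadratic form on a random vector: it is non-negative up to
# floating-point error, confirming positive semidefiniteness.
x = [rng.gauss(0.0, 1.0) for _ in range(20)]
quad = sum(x[i] * M[i][j] * x[j] for i in range(20) for j in range(20))
```

Sparsity of A plays the role of short-ranged springs; the Gaussian entries give the random spring strengths.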
Maessen, J G; Phelps, B; Dekker, A L A J; Dijkman, B
2004-05-01
To optimize resynchronization in biventricular pacing with epicardial leads, mapping to determine the best pacing site is a prerequisite. A port-access surgical mapping technique was developed that allowed multiple pacing-site selection and reproducible lead evaluation and implantation. Pressure-volume loop analysis was used for real-time guidance in targeting epicardial lead placement. Even the smallest changes in lead position revealed significantly different functional results. Optimizing the pacing site with this technique allowed functional improvement of up to 40% versus random pacing-site selection.
Pareto genealogies arising from a Poisson branching evolution model with selection.
Huillet, Thierry E
2014-02-01
We study a class of coalescents derived from a sampling procedure out of N i.i.d. Pareto(α) random variables, normalized by their sum, including β-size-biasing on total length effects (β < α). Depending on the range of α, we derive the large-N limit coalescent structure, leading either to a discrete-time Poisson-Dirichlet(α, −β) Ξ-coalescent (α ∈ [0, 1)), or to a family of continuous-time Beta(2 − α, α − β) Λ-coalescents (α ∈ [1, 2)), or to the Kingman coalescent (α ≥ 2). We indicate that this class of coalescent processes (and their scaling limits) may be viewed as the genealogical processes of some forward-in-time evolving branching population models including selection effects. In such constant-size population models, the reproduction step, which is based on a fitness-dependent Poisson point process with scaling power-law(α) intensity, is coupled to a selection step consisting of sorting out the N fittest individuals issued from the reproduction step.
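The basic sampling step can be sketched directly: draw N i.i.d. Pareto(α) weights and normalize them by their sum to get one generation's offspring-probability vector (a minimal illustration; the size-biasing and the coalescent construction itself are omitted).

```python
import random

# N i.i.d. Pareto(alpha) weights, normalized by their sum. Small alpha
# means heavy tails: a handful of individuals capture most of the
# reproduction probability, which is what drives the multiple-merger
# (non-Kingman) coalescent regimes.

def pareto_weights(n, alpha, rng):
    # Inverse-transform sampling: for U uniform on (0, 1],
    # X = U**(-1/alpha) is Pareto(alpha) on [1, infinity).
    xs = [(1.0 - rng.random()) ** (-1.0 / alpha) for _ in range(n)]
    total = sum(xs)
    return [x / total for x in xs]

rng = random.Random(3)
maxima = {}
for alpha in (0.5, 1.5, 3.0):
    w = pareto_weights(1000, alpha, rng)
    maxima[alpha] = max(w)
# maxima[0.5] is far larger than maxima[3.0]: heavier tails concentrate
# reproduction on few individuals, matching the alpha regimes above.
```

For α ≥ 2 no single weight dominates as N grows, which is why that regime collapses to the Kingman coalescent.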
Group Counseling With Emotionally Disturbed School Children in Taiwan.
ERIC Educational Resources Information Center
Chiu, Peter
The application of group counseling to emotionally disturbed school children in Chinese culture was examined. Two junior high schools located in Tao-Yuan Province were randomly selected with two eighth-grade classes randomly selected from each school. Ten emotionally disturbed students were chosen from each class and randomly assigned to two…
Sample Selection in Randomized Experiments: A New Method Using Propensity Score Stratified Sampling
ERIC Educational Resources Information Center
Tipton, Elizabeth; Hedges, Larry; Vaden-Kiernan, Michael; Borman, Geoffrey; Sullivan, Kate; Caverly, Sarah
2014-01-01
Randomized experiments are often seen as the "gold standard" for causal research. Despite the fact that experiments use random assignment to treatment conditions, units are seldom selected into the experiment using probability sampling. Very little research on experimental design has focused on how to make generalizations to well-defined…
Intelligent Control of a Sensor-Actuator System via Kernelized Least-Squares Policy Iteration
Liu, Bo; Chen, Sanfeng; Li, Shuai; Liang, Yongsheng
2012-01-01
In this paper, a new framework, called Compressive Kernelized Reinforcement Learning (CKRL), for computing near-optimal policies in sequential decision making under uncertainty is proposed by incorporating non-adaptive, data-independent Random Projections and nonparametric Kernelized Least-Squares Policy Iteration (KLSPI). Random Projections are a fast, non-adaptive dimensionality-reduction technique in which high-dimensional data are projected onto a random lower-dimensional subspace via spherically random rotation and coordinate sampling. KLSPI introduces the kernel trick into the LSPI framework for reinforcement learning, often achieving faster convergence and providing automatic feature selection via various kernel sparsification approaches. In this approach, policies are computed in a low-dimensional subspace generated by projecting the high-dimensional features onto a set of random basis vectors. We first show how Random Projections constitute an efficient sparsification technique and how our method often converges faster than regular LSPI, at lower computational cost. The theoretical foundation underlying this approach is a fast approximation of the singular value decomposition (SVD). Finally, simulation results on benchmark MDP domains confirm gains both in computation time and in performance in large feature spaces. PMID:22736969
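The Random Projections step can be sketched with a plain Gaussian projection matrix (a generic Johnson-Lindenstrauss-style construction standing in for the paper's spherically-random-rotation variant): pairwise distances in the projected space approximately match those in the original space.

```python
import math
import random

# Project d-dimensional feature vectors onto a k-dimensional subspace
# via a Gaussian random matrix scaled by 1/sqrt(k). By
# Johnson-Lindenstrauss-type results, pairwise distances are
# approximately preserved with high probability.

def random_projection_matrix(k, d, rng):
    scale = 1.0 / math.sqrt(k)
    return [[rng.gauss(0.0, 1.0) * scale for _ in range(d)] for _ in range(k)]

def project(R, x):
    return [sum(row[j] * x[j] for j in range(len(x))) for row in R]

def dist(a, b):
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))

rng = random.Random(11)
d, k = 500, 50                       # 10x dimensionality reduction
x = [rng.gauss(0, 1) for _ in range(d)]
y = [rng.gauss(0, 1) for _ in range(d)]
R = random_projection_matrix(k, d, rng)
ratio = dist(project(R, x), project(R, y)) / dist(x, y)
# ratio is close to 1: the projection roughly preserves the distance,
# so policy iteration in the k-dimensional space loses little geometry.
```

In the CKRL setting, the kernel features would be projected through R before the least-squares policy-evaluation step, cutting the cost of each iteration.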
Rotman, B. L.; Sullivan, A. N.; McDonald, T.; DeSmedt, P.; Goodnature, D.; Higgins, M.; Suermondt, H. J.; Young, C. Y.; Owens, D. K.
1995-01-01
We are performing a randomized, controlled trial of a Physician's Workstation (PWS), an ambulatory care information system developed for use in the General Medical Clinic (GMC) of the Palo Alto VA. Goals for the project include selecting appropriate outcome variables and developing a statistically powerful experimental design with a limited number of subjects. As PWS provides real-time drug-ordering advice, we retrospectively examined drug costs and drug-drug interactions in order to select outcome variables sensitive to our short-term intervention as well as to estimate the statistical efficiency of alternative design possibilities. Drug cost data revealed that the mean daily cost per physician per patient was 99.3 ± 13.4 cents, with a range from $0.77 to $1.37. The rate of major interactions per prescription for each physician was 2.9% ± 1%, with a range from 1.5% to 4.8%. Based on these baseline analyses, we selected a two-period parallel design for the evaluation, which maximized statistical power while minimizing sources of bias. PMID:8563376
Menon, Venu; Pearte, Camille A.; Buller, Christopher E.; Steg, Ph. Gabriel; Forman, Sandra A.; White, Harvey D.; Marino, Paolo N.; Katritsis, Demosthenes G.; Caramori, Paulo; Lasevitch, Ricardo; Loboz-Grudzien, Krystyna; Zurakowski, Aleksander; Lamas, Gervasio A.; Hochman, Judith S.
2009-01-01
Aims The Occluded Artery Trial (OAT) (n = 2201) showed no benefit for routine percutaneous coronary intervention (PCI) (n = 1101) over medical therapy (MED) (n = 1100) on the combined endpoint of death, myocardial infarction (MI), and class IV heart failure (congestive heart failure) in stable post-MI patients with late occluded infarct-related arteries (IRAs). We evaluated the potential for selective benefit with PCI over MED for patients enrolled early in OAT. Methods and results We explored outcomes with PCI over MED in patients randomized in the ≤3 calendar day and ≤7 calendar day post-MI time windows. Earlier times to randomization in OAT were associated with higher rates of the combined endpoint (adjusted HR 1.04 per day; 99% CI 1.01–1.06; P < 0.001). The 48-month event rates for patients enrolled ≤3 days and ≤7 days post-MI were similar for PCI vs. MED for the combined and individual endpoints. There was no interaction between time to randomization, defined either as a continuous variable (P = 0.55) or as a categorical variable with a cut-point of 3 days (P = 0.98) or 7 days (P = 0.64) post-MI, and treatment effect. Conclusion Consistent with the overall OAT findings, patients enrolled in the ≤3 day and ≤7 day post-MI time windows derived no benefit from PCI over MED, with no interaction between time to randomization and treatment effect. Our findings do not support routine PCI of the occluded IRA in trial-eligible patients even in the earliest 24–72 h time window. PMID:19028780
Eighty routes to a ribonucleotide world; dispersion and stringency in the decisive selection.
Yarus, Michael
2018-05-21
We examine the initial emergence of genetics; that is, of an inherited chemical capability. The crucial actors are ribonucleotides, occasionally meeting in a prebiotic landscape. Previous work identified six influential variables during such random ribonucleotide pooling. Geochemical pools can be in periodic danger (e.g., from tides) or constant danger (e.g., from unfavorable weather). Such pools receive Gaussian nucleotide amounts sporadically, at random times, or get varying substrates simultaneously. Pools use cross-templated RNA synthesis (5'-5' product from 5'-3' template) or para-templated (5'-5' product from 5'-5' template) synthesis. Pools can undergo mild or strong selection, and be recently initiated (early) or late in age. Considering >80 combinations of these variables, selection calculations identify a superior route. Most likely, an early, sporadically fed, cross-templating pool in constant danger, receiving ≥1 mM nucleotides while under strong selection for a coenzyme-like product, will host selection of the first encoded biochemical functions. Predominantly templated products emerge from a critical event, the starting bloc selection, which exploits inevitable differences among early pools. Favorable selection has a simple rationale; it is increased by product dispersion (sd/mean), by selection intensity (mild or strong), or by combining these factors as stringency, the reciprocal of the fraction of pools selected (1/sfsel). To summarize: chance utility, acting via a preference for disperse, templated coenzyme-like dinucleotides, uses stringent starting bloc selection to quickly establish majority encoded/genetic expression. Despite its computational origin, starting bloc selection is largely independent of specialized assumptions. This ribodinucleotide route to inheritance may also have facilitated 5'-3' chemical RNA replication. Published by Cold Spring Harbor Laboratory Press for the RNA Society.
Zhou, Fuqun; Zhang, Aining
2016-01-01
Nowadays, various time-series Earth Observation data with multiple bands are freely available, such as Moderate Resolution Imaging Spectroradiometer (MODIS) datasets, including 8-day composites from NASA and 10-day composites from the Canada Centre for Remote Sensing (CCRS). It is challenging to use these time-series MODIS datasets efficiently for long-term environmental monitoring due to their vast volume and information redundancy. This challenge will be greater when Sentinel 2-3 data become available. Another challenge that researchers face is the lack of in-situ data for supervised modelling, especially for time-series data analysis. In this study, we attempt to tackle these two important issues in a case study of land cover mapping using CCRS 10-day MODIS composites with the help of two Random Forests features: variable importance and outlier identification. The variable importance feature is used to analyze and select optimal subsets of time-series MODIS imagery for efficient land cover mapping, and the outlier identification feature is utilized to transfer sample data available from one year to an adjacent year for supervised classification modelling. The results of the case study of agricultural land cover classification at a regional scale show that using only about half of the variables we can achieve land cover classification accuracy close to that generated using the full dataset. The proposed simple but effective solution of sample transferring could make supervised modelling possible for applications lacking sample data. PMID:27792152
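The variable-importance idea behind the band selection can be illustrated with permutation importance on synthetic data, with a 1-nearest-neighbour classifier standing in for the Random Forest (the paper uses the forest's built-in importance measure; the data and model here are invented for illustration).

```python
import random

# Permutation importance: permute one feature at a time in the test set
# and measure the drop in accuracy. Features whose permutation hurts
# most carry the most information, and the rest can be dropped, which
# is the logic behind selecting a subset of MODIS variables.

def knn1_accuracy(train, test):
    def predict(x):
        nearest = min(train, key=lambda t: sum((a - b) ** 2
                                               for a, b in zip(t[0], x)))
        return nearest[1]
    return sum(predict(x) == y for x, y in test) / len(test)

def permutation_importance(train, test, n_features, rng):
    base = knn1_accuracy(train, test)
    importances = []
    for f in range(n_features):
        shuffled = [row[0][f] for row in test]
        rng.shuffle(shuffled)
        permuted = [(row[0][:f] + [v] + row[0][f + 1:], row[1])
                    for row, v in zip(test, shuffled)]
        importances.append(base - knn1_accuracy(train, permuted))
    return importances

# Synthetic data: feature 0 determines the class, feature 1 is noise.
rng = random.Random(5)
def make(n):
    data = []
    for _ in range(n):
        y = rng.randint(0, 1)
        data.append(([y + rng.gauss(0, 0.2), rng.gauss(0, 1)], y))
    return data

imp = permutation_importance(make(80), make(40), n_features=2, rng=rng)
# imp[0] >> imp[1]: only the informative feature matters, so the noise
# feature could be discarded with little loss of accuracy.
```

Applied to time-series imagery, each "feature" is one composite band/date, and low-importance dates are the redundant half that the study drops.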
Patel, C G; Kornhauser, D; Vachharajani, N; Komoroski, B; Brenner, E; Handschuh del Corral, M; Li, L; Boulton, D W
2011-07-01
To evaluate the pharmacokinetic interactions of the potent, selective, dipeptidyl peptidase-4 inhibitor, saxagliptin, in combination with metformin, glyburide or pioglitazone. To assess the effect of co-administration of saxagliptin with oral antidiabetic drugs (OADs) on the pharmacokinetics and tolerability of saxagliptin, 5-hydroxy saxagliptin, metformin, glyburide, pioglitazone and hydroxy-pioglitazone, analyses of variance were performed on maximum (peak) plasma drug concentration (C(max)), area under the plasma concentration-time curve from time zero to infinity (AUC(∞)) [saxagliptin + metformin (study 1) and saxagliptin + glyburide (study 2)] and area under the concentration-time curve from time 0 to time t (AUC) [saxagliptin + pioglitazone (study 3)] for each analyte in the respective studies. Studies 1 and 2 were open-label, randomized, three-period, three-treatment, crossover studies, and study 3 was an open-label, non-randomized, sequential study in healthy subjects. Co-administration of saxagliptin with metformin, glyburide or pioglitazone did not result in clinically meaningful alterations in the pharmacokinetics of saxagliptin or its metabolite, 5-hydroxy saxagliptin. Following co-administration of saxagliptin, there were no clinically meaningful alterations in the pharmacokinetics of metformin, glyburide, pioglitazone or hydroxy-pioglitazone. Saxagliptin was generally safe and well tolerated when administered alone or in combination with metformin, glyburide or pioglitazone. Saxagliptin can be co-administered with metformin, glyburide or pioglitazone without a need for dose adjustment of either saxagliptin or these OADs. © 2011 Blackwell Publishing Ltd.
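The two pharmacokinetic endpoints compared in those analyses can be illustrated with a short sketch: Cmax is the peak observed concentration, and AUC over the sampling interval is computed by the linear trapezoidal rule (the concentration data below are invented; AUC(∞) would additionally require terminal-phase extrapolation, omitted here).

```python
# Cmax = peak observed plasma concentration; AUC(0-t) = area under the
# concentration-time curve by the linear trapezoidal rule.

def cmax(times, conc):
    return max(conc)

def auc_0_t(times, conc):
    return sum((t2 - t1) * (c1 + c2) / 2.0
               for (t1, c1), (t2, c2) in zip(zip(times, conc),
                                             zip(times[1:], conc[1:])))

# Hypothetical concentration-time profile (h, ng/mL):
times = [0, 0.5, 1, 2, 4, 8, 12]
conc  = [0, 40, 95, 80, 50, 20, 5]
print(cmax(times, conc))       # 95
print(auc_0_t(times, conc))    # 451.25
```

A drug-drug interaction study like this one declares "no clinically meaningful alteration" when the ratios of these quantities with and without the co-administered drug fall inside pre-specified equivalence bounds.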
Design of a mobile brain computer interface-based smart multimedia controller.
Tseng, Kevin C; Lin, Bor-Shing; Wong, Alice May-Kuen; Lin, Bor-Shyh
2015-03-06
Music is a way of expressing our feelings and emotions. Suitable music can positively affect people. However, current multimedia control methods, such as manual selection or automatic random mechanisms, which are now applied broadly in MP3 and CD players, cannot adaptively select suitable music according to the user's physiological state. In this study, a brain computer interface-based smart multimedia controller was proposed to select music in different situations according to the user's physiological state. Here, a commercial mobile tablet was used as the multimedia platform, and a wireless multi-channel electroencephalograph (EEG) acquisition module was designed for real-time EEG monitoring. A smart multimedia control program built into the multimedia platform was developed to analyze the user's EEG features and select music according to his/her state. The relationship between the user's state and music, sorted by listener preference, was also examined in this study. The experimental results show that real-time music biofeedback according to a user's EEG features may positively improve the user's attention state.
Evolution of tag-based cooperation on Erdős-Rényi random graphs
NASA Astrophysics Data System (ADS)
Lima, F. W. S.; Hadzibeganovic, Tarik; Stauffer, Dietrich
2014-12-01
Here, we study an agent-based model of the evolution of tag-mediated cooperation on Erdős-Rényi random graphs. In our model, agents with heritable phenotypic traits play pairwise Prisoner's Dilemma-like games and follow one of four possible strategies: ethnocentric, altruistic, egoistic, and cosmopolitan. The ethnocentric and cosmopolitan strategies are conditional, i.e. their choice of action depends on the phenotypic similarity between interacting agents. The remaining two strategies are unconditional: egoists always defect, while altruists always cooperate. Our simulations revealed that ethnocentrism can win in both early and later evolutionary stages on directed random graphs when reproduction of artificial agents was asexual; under the sexual mode of reproduction on a directed random graph, however, we found that altruists dominate initially for a rather short period of time, whereas ethnocentrics and egoists suppress other strategists and compete for dominance in the intermediate and later evolutionary stages. Among our results, we also find surprisingly regular oscillations that are not damped over time, even after half a million Monte Carlo steps. Unlike most previous studies, our findings highlight conditions under which ethnocentrism is less stable or is suppressed by other competing strategies.
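The four strategy rules can be sketched in a few lines (a toy illustration of the standard tag-based rules named in the abstract, not the authors' simulation code):

```python
# Minimal sketch of the four strategies in tag-based Prisoner's Dilemma models:
# an agent decides whether to cooperate based on its strategy and on whether
# its tag matches the partner's tag.
def cooperates(strategy: str, own_tag: int, partner_tag: int) -> bool:
    """Return True if the focal agent cooperates with this partner."""
    same = own_tag == partner_tag
    if strategy == "ethnocentric":    # conditional: cooperate with same tag only
        return same
    if strategy == "cosmopolitan":    # conditional: cooperate with different tags only
        return not same
    if strategy == "altruistic":      # unconditional cooperation
        return True
    if strategy == "egoistic":        # unconditional defection
        return False
    raise ValueError(f"unknown strategy: {strategy}")

print(cooperates("ethnocentric", 1, 1))  # → True
print(cooperates("ethnocentric", 1, 2))  # → False
```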
Early Visual Cortex Dynamics during Top-Down Modulated Shifts of Feature-Selective Attention.
Müller, Matthias M; Trautmann, Mireille; Keitel, Christian
2016-04-01
Shifting attention from one color to another color or from color to another feature dimension such as shape or orientation is imperative when searching for a certain object in a cluttered scene. Most attention models that emphasize feature-based selection implicitly assume that all shifts in feature-selective attention underlie identical temporal dynamics. Here, we recorded time courses of behavioral data and steady-state visual evoked potentials (SSVEPs), an objective electrophysiological measure of neural dynamics in early visual cortex to investigate temporal dynamics when participants shifted attention from color or orientation toward color or orientation, respectively. SSVEPs were elicited by four random dot kinematograms that flickered at different frequencies. Each random dot kinematogram was composed of dashes that uniquely combined two features from the dimensions color (red or blue) and orientation (slash or backslash). Participants were cued to attend to one feature (such as color or orientation) and respond to coherent motion targets of the to-be-attended feature. We found that shifts toward color occurred earlier after the shifting cue compared with shifts toward orientation, regardless of the original feature (i.e., color or orientation). This was paralleled in SSVEP amplitude modulations as well as in the time course of behavioral data. Overall, our results suggest different neural dynamics during shifts of attention from color and orientation and the respective shifting destinations, namely, either toward color or toward orientation.
Need for specific sugar-sweetened beverage lessons for fourth- and fifth-graders.
Bea, Jennifer W; Jacobs, Laurel; Waits, Juanita; Hartz, Vern; Martinez, Stephanie H; Standfast, Rebecca D; Farrell, Vanessa A; Bawden, Margine; Whitmer, Evelyn; Misner, Scottie
2015-01-01
Consumption of sugar-sweetened beverages (SSB) is linked to obesity. The authors hypothesized that school-based nutrition education would decrease SSB consumption. Design: self-selected interventional cohort with random selection for pre and post measurements. Setting: Arizona Supplemental Nutrition Assistance Program-Education Program-eligible schools. Participants: randomly selected (9%) fourth- and fifth-grade classroom students. Intervention: the University of Arizona Nutrition Network provided general nutrition education training and materials to teachers, to be delivered to their students, and administered behavioral questionnaires to students in both fall and spring. Main outcome measure: change in SSB consumption. Analysis: descriptive statistics were computed for student demographics and beverage consumption on the day before testing; paired t tests evaluated change in classroom averages; linear regression assessed potential correlates of SSB consumption. Results: fall mean SSB consumption was 1.1 (± 0.2) times; mean milk and water intake were 1.6 (± 0.2) and 5.2 (± 0.7) times, respectively. Beverage consumption increased (3.2%) in springtime, with increased SSBs (14.4%) accounting for the majority (P = .006). Change in SSB consumption was negatively associated with baseline SSB and water consumption but positively associated with baseline milk fat (P ≤ .05). The results suggest the need for beverage-specific education to encourage children to consume more healthful beverages in warmer weather. Copyright © 2015 Society for Nutrition Education and Behavior. Published by Elsevier Inc. All rights reserved.
Engen, Steinar; Saether, Bernt-Erik
2014-03-01
We analyze the stochastic components of the Robertson-Price equation for the evolution of quantitative characters, which enables decomposition of the selection differential into components due to demographic and environmental stochasticity. We show how these two types of stochasticity affect the evolution of multivariate quantitative characters by defining demographic and environmental variances as components of individual fitness. The exact covariance formula for selection is decomposed into three components: the deterministic mean value, and stochastic demographic and environmental components. We show that demographic and environmental stochasticity generate random genetic drift and fluctuating selection, respectively. This provides a common theoretical framework for linking ecological and evolutionary processes. Demographic stochasticity can cause random variation in selection differentials independent of fluctuating selection caused by environmental variation. We use this model of selection to illustrate that the effect on the expected selection differential of random variation in individual fitness depends on population size, and that the strength of fluctuating selection is affected by how environmental variation affects the covariance in Malthusian fitness between individuals with different phenotypes. Thus, our approach enables us to partition out the effects of fluctuating selection from the effects of selection due to random variation in individual fitness caused by demographic stochasticity. © 2013 The Author(s). Evolution © 2013 The Society for the Study of Evolution.
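The covariance formula for selection that this abstract refers to is the standard Robertson-Price identity (the notation here is generic and not taken from the paper): for a phenotype z, individual fitness w, and mean fitness w-bar, the selection differential is

```latex
% Robertson--Price identity: the selection differential equals the
% fitness--phenotype covariance scaled by mean fitness.
S \;=\; \frac{\operatorname{Cov}(w, z)}{\bar{w}}
```

The paper's contribution, as described above, is to split this covariance into a deterministic mean term plus demographic and environmental stochastic terms.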
Optimal timing in biological processes
Williams, B.K.; Nichols, J.D.
1984-01-01
A general approach for obtaining solutions to a class of biological optimization problems is provided. The general problem is one of determining the appropriate time to take some action, when the action can be taken only once during some finite time frame. The approach can also be extended to cover a number of other problems involving animal choice (e.g., mate selection, habitat selection). Returns (assumed to index fitness) are treated as random variables with time-specific distributions, and can be either observable or unobservable at the time action is taken. In the case of unobservable returns, the organism is assumed to base decisions on some ancillary variable that is associated with returns. Optimal policies are derived for both situations and their properties are discussed. Various extensions are also considered, including objective functions based on functions of returns other than the mean; nonmonotonic relationships between the observable variable and returns; possible death of the organism before action is taken; and discounting of future returns. A general feature of the optimal solutions for many of these problems is that an organism should be very selective (i.e., should act only when returns or expected returns are relatively high) at the beginning of the time frame and should become less and less selective as time progresses. An example of the application of optimal timing to a problem involving the timing of bird migration is discussed, and a number of other examples for which the approach is applicable are described.
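The backward-induction logic for the observable-returns case can be sketched as follows (a toy illustration, not the paper's derivation: the horizon, the time-specific exponential return distributions, and their means are all invented). The organism acts when the observed return exceeds the expected value of waiting:

```python
# Backward induction for a one-shot timing decision with observable returns:
# act now if the observed return beats the expected value of continuing.
import math

T = 10                                    # finite time frame (illustrative)
means = [5, 6, 7, 8, 9, 8, 6, 4, 3, 2]    # time-specific mean returns (invented)

# V[t] = expected return from behaving optimally from time t onward, when the
# return at time t is Exponential with mean means[t].  For an exponential X
# with mean m and continuation value c:  E[max(X, c)] = c + m * exp(-c / m).
V = [0.0] * (T + 1)                       # V[T] = 0: no action possible after T
for t in range(T - 1, -1, -1):
    c = V[t + 1]                          # value of waiting
    m = means[t]
    V[t] = c + m * math.exp(-c / m)       # act iff the observed return x > c

# The acceptance threshold at time t is V[t+1]; it declines toward the end of
# the time frame, i.e. the organism becomes less and less selective.
thresholds = V[1:]
print(thresholds)
```

The monotonically declining threshold reproduces the general feature noted in the abstract: high selectivity early in the time frame, low selectivity near its end.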
NASA Astrophysics Data System (ADS)
Eckert, Sandra
2016-08-01
The SPOT-5 Take 5 campaign provided SPOT time series data of an unprecedented spatial and temporal resolution. We analysed 29 scenes acquired between May and September 2015 of a semi-arid region in the foothills of Mount Kenya, with two aims: first, to distinguish rainfed from irrigated cropland and cropland from natural vegetation covers, which show similar reflectance patterns; and second, to identify individual crop types. We tested several input data sets in different combinations: the spectral bands and the normalized difference vegetation index (NDVI) time series, principal components of NDVI time series, and selected NDVI time series statistics. For the classification we used random forests (RF). In the test differentiating rainfed cropland, irrigated cropland, and natural vegetation covers, the best classification accuracies were achieved using spectral bands. For the differentiation of crop types, we analysed the phenology of selected crop types based on NDVI time series. First results are promising.
Effectiveness of nursing management information systems: a systematic review.
Choi, Mona; Yang, You Lee; Lee, Sun-Mi
2014-10-01
The purpose of this study was to review evaluation studies of nursing management information systems (NMISs) and their outcome measures to examine system effectiveness. For the systematic review, a literature search of the PubMed, CINAHL, Embase, and Cochrane Library databases was conducted to retrieve original articles published between 1970 and 2014. Medical Subject Headings (MeSH) terms included informatics, medical informatics, nursing informatics, medical informatics application, and management information systems for information systems, and evaluation studies and nursing evaluation research for evaluation research. Additionally, manag(*), admin(*), and nurs(*) were combined. Title, abstract, and full-text reviews were completed by two reviewers. The year, author, type of management system, study purpose, study design, data source, system users, study subjects, and outcomes were then extracted from the selected articles. The quality and risk of bias of the finally selected studies were assessed with the Risk of Bias Assessment Tool for Non-randomized Studies (RoBANS) criteria. Out of the 2,257 retrieved articles, a total of six were selected: two scheduling programs, two nursing cost-related programs, and two patient care management programs. The outcome measurements included usefulness, time saving, satisfaction, cost, attitude, usability, data quality/completeness/accuracy, and personnel work patterns. User satisfaction, time saving, and usefulness mostly showed positive findings. The study results suggest that NMISs were effective in saving time and useful in nursing care. Because the reviewed studies lacked quality, well-designed research, such as randomized controlled trials, should be conducted to evaluate the effectiveness of NMISs more objectively.
The Coalescent Process in Models with Selection
Kaplan, N. L.; Darden, T.; Hudson, R. R.
1988-01-01
Statistical properties of the process describing the genealogical history of a random sample of genes are obtained for a class of population genetics models with selection. For models with selection, in contrast to models without selection, the distribution of this process, the coalescent process, depends on the distribution of the frequencies of alleles in the ancestral generations. If the ancestral frequency process can be approximated by a diffusion, then the mean and the variance of the number of segregating sites due to selectively neutral mutations in random samples can be numerically calculated. The calculations are greatly simplified if the frequencies of the alleles are tightly regulated. If the mutation rates between alleles maintained by balancing selection are low, then the number of selectively neutral segregating sites in a random sample of genes is expected to substantially exceed the number predicted under a neutral model. PMID:3066685
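The neutral baseline against which the abstract's excess is measured is standard coalescent theory: under neutrality, the expected number of segregating sites in a sample of n genes is theta times the (n-1)th harmonic number (Watterson's formula; the numerical values below are illustrative, not from the paper):

```python
# Expected number of segregating sites E[S] in a random sample of n genes
# under the standard neutral coalescent (Watterson):
#   E[S] = theta * a_n,  where  a_n = sum_{i=1}^{n-1} 1/i.
# Balancing selection with low mutation rates between the maintained alleles
# is expected to push the observed S well above this neutral expectation.
def expected_segregating_sites(n: int, theta: float) -> float:
    a_n = sum(1.0 / i for i in range(1, n))
    return theta * a_n

print(expected_segregating_sites(2, 1.0))   # → 1.0  (a_2 = 1)
print(expected_segregating_sites(10, 5.0))  # a_10 ≈ 2.829, so ≈ 14.14
```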
Petersen, James H.; DeAngelis, Donald L.
1992-01-01
The behavior of individual northern squawfish (Ptychocheilus oregonensis) preying on juvenile salmonids was modeled to address questions about capture rate and the timing of prey captures (random versus contagious). Prey density, predator weight, prey weight, temperature, and diel feeding pattern were first incorporated into predation equations analogous to Holling Type 2 and Type 3 functional response models. The Type 2 and Type 3 equations fit field data from the Columbia River equally well, and both models predicted predation rates on five of seven independent dates. Selecting a functional response type may be complicated by variable predation rates, analytical methods, and the assumptions of the model equations. Using the Type 2 functional response, random versus contagious timing of prey capture was tested using two related models. In the simpler model, salmon captures were assumed to be controlled by a Poisson renewal process; in the second model, several salmon captures were assumed to occur during brief "feeding bouts", modeled with a compound Poisson process. Salmon captures by individual northern squawfish were clustered through time, rather than random, based on comparison of model simulations and field data. The contagious-feeding result suggests that salmonids may be encountered as patches or schools in the river.
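The two functional response forms compared above have standard disc-equation expressions (a generic sketch; the parameter values are illustrative and not fitted to the Columbia River data):

```python
# Holling Type 2 and Type 3 functional responses: captures per predator per
# unit time as a function of prey density N, attack rate a, handling time h.
def holling_type2(prey_density: float, attack_rate: float, handling_time: float) -> float:
    """Hyperbolic, saturating response: a*N / (1 + a*h*N)."""
    a, h, N = attack_rate, handling_time, prey_density
    return a * N / (1.0 + a * h * N)

def holling_type3(prey_density: float, attack_rate: float, handling_time: float) -> float:
    """Sigmoid response: capture rate accelerates at low prey density."""
    a, h, N = attack_rate, handling_time, prey_density
    return a * N**2 / (1.0 + a * h * N**2)

# Both saturate at 1/h as prey density grows large, which is one reason the
# two forms can fit the same field data equally well at moderate densities.
print(round(holling_type2(1e9, 0.5, 2.0), 3))  # → 0.5
print(round(holling_type3(1e9, 0.5, 2.0), 3))  # → 0.5
```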
Chen, Zhaoxue; Yu, Haizhong; Chen, Hao
2013-12-01
To address the problem that traditional K-means clustering selects its initial cluster centers randomly, we propose a new K-means segmentation algorithm that robustly selects the 'peaks' corresponding to white matter, gray matter, and cerebrospinal fluid in the multi-peak gray-level histogram of an MRI brain image. The new algorithm takes the gray values of the selected histogram 'peaks' as the initial K-means cluster centers and segments the MRI brain image into the three tissue classes more effectively, accurately, and stably. Extensive experiments show that the proposed algorithm overcomes many shortcomings of traditional K-means clustering, such as low efficiency, poor accuracy and robustness, and long running time. The histogram 'peak'-selection idea underlying the proposed segmentation method also has broader applicability.
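The initialization idea can be sketched as follows (a minimal illustration, not the paper's implementation: the smoothing, bin count, local-maximum rule, and the synthetic three-tissue gray values are all assumptions):

```python
# Sketch of histogram-peak initialization for K-means: take the gray values of
# the k tallest local maxima of a smoothed gray-level histogram as initial
# cluster centers, instead of choosing them randomly.
import numpy as np

def histogram_peak_centers(values: np.ndarray, n_bins: int = 64, k: int = 3) -> np.ndarray:
    """Return k initial centers: gray values of the k tallest histogram peaks."""
    hist, edges = np.histogram(values, bins=n_bins)
    hist = np.convolve(hist, np.ones(3) / 3.0, mode="same")   # light smoothing
    bin_centers = 0.5 * (edges[:-1] + edges[1:])
    # Local maxima: strictly taller than both neighbouring bins.
    is_peak = (hist[1:-1] > hist[:-2]) & (hist[1:-1] > hist[2:])
    peak_idx = np.where(is_peak)[0] + 1
    tallest = peak_idx[np.argsort(hist[peak_idx])[::-1][:k]]
    return np.sort(bin_centers[tallest])

rng = np.random.default_rng(1)
# Synthetic 3-tissue image: CSF ~ 30, gray matter ~ 100, white matter ~ 170.
pixels = np.concatenate([
    rng.normal(30, 8, 3000), rng.normal(100, 8, 4000), rng.normal(170, 8, 3000),
])
init = histogram_peak_centers(pixels, k=3)
print(init)  # ≈ one center near each tissue mode
```

These centers then seed a standard K-means run, removing the random-initialization instability the abstract describes.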
What is the Optimal Strategy for Adaptive Servo-Ventilation Therapy?
Imamura, Teruhiko; Kinugawa, Koichiro
2018-05-23
Clinical advantages of adaptive servo-ventilation (ASV) therapy have been reported in selected heart failure patients with or without sleep-disordered breathing, whereas multicenter randomized controlled trials have failed to demonstrate such advantages. Given this discrepancy, optimal patient selection and device settings may be key to successful ASV therapy. Hemodynamic and echocardiographic parameters indicating pulmonary congestion, such as elevated pulmonary capillary wedge pressure, have been reported as predictors of a good response to ASV therapy. Recently, parameters indicating right ventricular dysfunction have also been reported as good predictors. Optimal device settings, with appropriate pressure delivered over an appropriate period, may also be key. A large-scale prospective trial with optimal patient selection and optimal device settings is warranted.
Effects of Selected Meditative Asanas on Kinaesthetic Perception and Speed of Movement
ERIC Educational Resources Information Center
Singh, Kanwaljeet; Bal, Baljinder S.; Deol, Nishan S.
2009-01-01
Study aim: To assess the effects of selected meditative "asanas" on kinesthetic perception and movement speed. Material and methods: Thirty randomly selected male students aged 18-24 years volunteered to participate in the study. They were randomly assigned into two groups: A (meditative) and B (control). The Nelson's movement speed and…
Model Selection with the Linear Mixed Model for Longitudinal Data
ERIC Educational Resources Information Center
Ryoo, Ji Hoon
2011-01-01
Model building or model selection with linear mixed models (LMMs) is complicated by the presence of both fixed effects and random effects. The fixed effects structure and random effects structure are codependent, so selection of one influences the other. Most presentations of LMM in psychology and education are based on a multilevel or…
Austin, Peter C; Schuster, Tibor; Platt, Robert W
2015-10-15
Estimating statistical power is an important component of the design of both randomized controlled trials (RCTs) and observational studies. Methods for estimating statistical power in RCTs have been well described and can be implemented simply. In observational studies, statistical methods must be used to remove the effects of confounding that can occur due to non-random treatment assignment. Inverse probability of treatment weighting (IPTW) using the propensity score is an attractive method for estimating the effects of treatment using observational data. However, sample size and power calculations have not been adequately described for these methods. We used an extensive series of Monte Carlo simulations to compare the statistical power of an IPTW analysis of an observational study with time-to-event outcomes with that of an analysis of a similarly-structured RCT. We examined the impact of four factors on the statistical power function: number of observed events, prevalence of treatment, the marginal hazard ratio, and the strength of the treatment-selection process. We found that, on average, an IPTW analysis had lower statistical power compared to an analysis of a similarly-structured RCT. The difference in statistical power increased as the magnitude of the treatment-selection model increased. The statistical power of an IPTW analysis tended to be lower than the statistical power of a similarly-structured RCT.
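The weighting idea at the heart of the comparison can be illustrated with a simplified simulation (a sketch with a continuous outcome rather than the paper's time-to-event setting; the confounder strength, treatment effect, and sample size are all invented). Each subject is weighted by the inverse of the probability of the treatment actually received:

```python
# IPTW sketch: confounded treatment assignment biases the naive comparison;
# inverse-probability-of-treatment weights recover the true effect.
# (Continuous outcome for simplicity; the paper studies time-to-event outcomes.)
import numpy as np

rng = np.random.default_rng(42)
n = 20_000
x = rng.normal(size=n)                        # confounder
p = 1.0 / (1.0 + np.exp(-1.5 * x))            # true propensity score P(T=1|x)
t = rng.binomial(1, p)                        # non-random treatment assignment
y = 2.0 * t + 3.0 * x + rng.normal(size=n)    # true treatment effect = 2.0

naive = y[t == 1].mean() - y[t == 0].mean()   # confounded estimate

w = t / p + (1 - t) / (1 - p)                 # inverse-probability weights
mu1 = np.sum(w * t * y) / np.sum(w * t)
mu0 = np.sum(w * (1 - t) * y) / np.sum(w * (1 - t))
iptw = mu1 - mu0                              # approximately unbiased

print(round(naive, 2), round(iptw, 2))
```

The extra variability introduced by the weights is one intuition for the paper's finding that an IPTW analysis tends to have lower power than a similarly structured RCT, especially as the treatment-selection model grows stronger.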
Population differentiation in Pacific salmon: local adaptation, genetic drift, or the environment?
Adkison, Milo D.
1995-01-01
Morphological, behavioral, and life-history differences between Pacific salmon (Oncorhynchus spp.) populations are commonly thought to reflect local adaptation, and it is likewise common to assume that salmon populations separated by small distances are locally adapted. Two alternatives to local adaptation exist: random genetic differentiation owing to genetic drift and founder events, and genetic homogeneity among populations, in which differences reflect differential trait expression in differing environments. Population genetics theory and simulations suggest that both alternatives are possible. With selectively neutral alleles, genetic drift can result in random differentiation despite many strays per generation. Even weak selection can prevent genetic drift in stable populations; however, founder effects can result in random differentiation despite selective pressures. Overlapping generations reduce the potential for random differentiation. Genetic homogeneity can occur despite differences in selective regimes when straying rates are high. In sum, localized differences in selection should not always result in local adaptation. Local adaptation is favored when population sizes are large and stable, selection is consistent over large areas, selective differentials are large, and straying rates are neither too high nor too low. Consideration of alternatives to local adaptation would improve both biological research and salmon conservation efforts.
ERIC Educational Resources Information Center
von Raffler-Engel, Walburga
A study of randomly selected "Donahue" shows revealed how host Phil Donahue interacts with several parties at one time and how he subordinates various interactions to suit the hierarchy of importance he attributes to each party, with the television viewer being the most important. Donahue organizes his body movement mainly for television…
ERIC Educational Resources Information Center
Francis, Robert M.
A survey of 50 randomly selected, full-time, day students at Herkimer County Community College, New York, was conducted to identify those areas of college governance that students felt they should be involved in and to determine the percentage of students who actually participated in student government. The students were asked to indicate: (1)…
Drinking and Driving among College Students: The Influence of Alcohol-Control Policies
ERIC Educational Resources Information Center
Journal of American College Health, 2005
2005-01-01
Randomly selected full-time college students attending four-year colleges in 39 states completed a questionnaire about alcohol consumption and driving. The results revealed that 29 percent of the students drove after drinking some amount of alcohol, 10 percent drove after drinking five or more drinks, and 23 percent rode with a driver who was high…
An Approach to the Analysis of Panel Data: The Watergate Hearings and Political Socialization.
ERIC Educational Resources Information Center
Davis, Dennis K.; Lee, Jae-won
The purpose of this study was to provide a tool for designing and executing future research on panel data in which relationships between pairs of variables are observed over time so that contingent conditions can be controlled. The 360 subjects were selected from the telephone directory and surveyed at random about their responses to the…
ERIC Educational Resources Information Center
Malone, Molly
2012-01-01
Most middle school students comprehend that organisms have adaptations that enable their survival and that successful adaptations prevail in a population over time. Yet they often miss that those bird beaks, moth-wing colors, or whatever traits are the result of random, normal genetic variations that just happen to confer a negative, neutral, or…
Charting the expansion of strategic exploratory behavior during adolescence.
Somerville, Leah H; Sasse, Stephanie F; Garrad, Megan C; Drysdale, Andrew T; Abi Akar, Nadine; Insel, Catherine; Wilson, Robert C
2017-02-01
Although models of exploratory decision making implicate a suite of strategies that guide the pursuit of information, the developmental emergence of these strategies remains poorly understood. This study takes an interdisciplinary perspective, merging computational decision making and developmental approaches to characterize age-related shifts in exploratory strategy from adolescence to young adulthood. Participants were 149 12-28-year-olds who completed a computational explore-exploit paradigm that manipulated reward value, information value, and decision horizon (i.e., the utility that information holds for future choices). Strategic directed exploration, defined as information seeking selective for long time horizons, emerged during adolescence and maintained its level through early adulthood. This age difference was partially driven by adolescents valuing immediate reward over new information. Strategic random exploration, defined as stochastic choice behavior selective for long time horizons, was invoked at comparable levels over the age range, and predicted individual differences in attitudes toward risk taking in daily life within the adolescent portion of the sample. Collectively, these findings reveal an expansion of the diversity of strategic exploration over development, implicate distinct mechanisms for directed and random exploratory strategies, and suggest novel mechanisms for adolescent-typical shifts in decision making. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
Yu, Clare C W; Au, Chun T; Lee, Frank Y F; So, Raymond C H; Wong, John P S; Mak, Gary Y K; Chien, Eric P; McManus, Alison M
2015-09-01
Overweight, obesity, and cardiovascular disease risk factors are prevalent among firefighters in some developed countries. It is unclear whether physical activity and cardiopulmonary fitness reduce cardiovascular disease risk and the cardiovascular workload at work in firefighters. The present study investigated the relationship between leisure-time physical activity, cardiopulmonary fitness, cardiovascular disease risk factors, and cardiovascular workload at work in firefighters in Hong Kong. Male firefighters (n = 387) were randomly selected from serving firefighters in Hong Kong (n = 5,370) for the assessment of cardiovascular disease risk factors (obesity, hypertension, diabetes mellitus, dyslipidemia, smoking, known cardiovascular diseases). One-third (Target Group) were randomly selected for the assessment of off-duty leisure-time physical activity using the short version of the International Physical Activity Questionnaire. Maximal oxygen uptake was assessed, as well as cardiovascular workload using heart rate monitoring for each firefighter for four "normal" 24-hour working shifts and during real-situation simulated scenarios. Overall, 33.9% of the firefighters had at least two cardiovascular disease risk factors. In the Target Group, firefighters who had higher leisure-time physical activity had a lower resting heart rate and a lower average working heart rate, and spent a smaller proportion of time working at a moderate-intensity cardiovascular workload. Firefighters who had moderate aerobic fitness and high leisure-time physical activity had a lower peak working heart rate during the mountain rescue scenario compared with firefighters who had low leisure-time physical activities. Leisure-time physical activity conferred significant benefits during job tasks of moderate cardiovascular workload in firefighters in Hong Kong.
NASA Technical Reports Server (NTRS)
Tomberlin, T. J.
1985-01-01
Research studies of residents' responses to noise consist of interviews with samples of individuals who are drawn from a number of different compact study areas. The statistical techniques developed provide a basis for those sample design decisions. These techniques are suitable for a wide range of sample survey applications. A sample may consist of a random sample of residents selected from a sample of compact study areas, or in a more complex design, of a sample of residents selected from a sample of larger areas (e.g., cities). The techniques may be applied to estimates of the effects on annoyance of noise level, numbers of noise events, the time-of-day of the events, ambient noise levels, or other factors. Methods are provided for determining, in advance, how accurately these effects can be estimated for different sample sizes and study designs. Using a simple cost function, they also provide for optimum allocation of the sample across the stages of the design for estimating these effects. These techniques are developed via a regression model in which the regression coefficients are assumed to be random, with components of variance associated with the various stages of a multi-stage sample design.
Schweizer, Marin L.; Braun, Barbara I.; Milstone, Aaron M.
2016-01-01
Quasi-experimental studies evaluate the association between an intervention and an outcome using experiments in which the intervention is not randomly assigned. Quasi-experimental studies are often used to evaluate rapid responses to outbreaks or other patient safety problems requiring prompt non-randomized interventions. Quasi-experimental studies can be categorized into three major types: interrupted time series designs, designs with control groups, and designs without control groups. This methods paper highlights key considerations for quasi-experimental studies in healthcare epidemiology and antimicrobial stewardship including study design and analytic approaches to avoid selection bias and other common pitfalls of quasi-experimental studies. PMID:27267457
Defining fitness in an uncertain world.
Crewe, Paul; Gratwick, Richard; Grafen, Alan
2018-04-01
The recently elucidated definition of fitness employed by Fisher in his fundamental theorem of natural selection is combined with reproductive values as appropriately defined in the context of both random environments and continuing fluctuations in the distribution over classes in a class-structured population. We obtain astonishingly simple results, generalisations of the Price Equation and the fundamental theorem, that show natural selection acting only through the arithmetic expectation of fitness over all uncertainties, in contrast to previous studies with fluctuating demography, in which natural selection looks rather complicated. Furthermore, our setting permits each class to have its characteristic ploidy, thus covering haploidy, diploidy and haplodiploidy at the same time, and allows arbitrary classes, including continuous variables such as condition. The simplicity is achieved by focussing just on the effects of natural selection on genotype frequencies: while other causes are present in the model, and the effect of natural selection is assessed in their presence, these causes will have their own further effects on genotype frequencies that are not assessed here. Also, Fisher's uses of reproductive value are shown to have two ambivalences, and a new axiomatic foundation for reproductive value is endorsed. The results continue the formal darwinism project, and extend support for the individual-as-maximising-agent analogy to finite populations with random environments and fluctuating class-distributions. The model may also lead to improved ways to measure fitness in real populations.
Open-field behavior of house mice selectively bred for high voluntary wheel-running.
Bronikowski, A M; Carter, P A; Swallow, J G; Girard, I A; Rhodes, J S; Garland, T
2001-05-01
Open-field behavioral assays are commonly used to test both locomotor activity and emotionality in rodents. We performed open-field tests on house mice (Mus domesticus) from four replicate lines genetically selected for high voluntary wheel-running for 22 generations and from four replicate random-bred control lines. Individual mice were recorded by video camera for 3 min in a 1-m² open-field arena on 2 consecutive days. Mice from selected lines showed no statistical differences from control mice with respect to distance traveled, defecation, time spent in the interior, or average distance from the center of the arena during the trial. Thus, we found little evidence that open-field behavior, as traditionally defined, is genetically correlated with wheel-running behavior. This result is a useful converse test of classical studies that report no increased wheel-running in mice selected for increased open-field activity. However, mice from selected lines turned less in their travel paths than did control-line mice, and females from selected lines had slower travel times (longer latencies) to reach the wall. We discuss these results in the context of the historical open-field test and newly defined measures of open-field activity.
Random sampling of elementary flux modes in large-scale metabolic networks.
Machado, Daniel; Soons, Zita; Patil, Kiran Raosaheb; Ferreira, Eugénio C; Rocha, Isabel
2012-09-15
The description of a metabolic network in terms of elementary (flux) modes (EMs) provides an important framework for metabolic pathway analysis. However, their application to large networks has been hampered by the combinatorial explosion in the number of modes. In this work, we develop a method for generating random samples of EMs without computing the whole set. Our algorithm is an adaptation of the canonical basis approach, where we add an additional filtering step which, at each iteration, selects a random subset of the new combinations of modes. In order to obtain an unbiased sample, all candidates are assigned the same probability of getting selected. This approach avoids the exponential growth of the number of modes during computation, thus generating a random sample of the complete set of EMs within reasonable time. We generated samples of different sizes for a metabolic network of Escherichia coli, and observed that they preserve several properties of the full EM set. It is also shown that EM sampling can be used for rational strain design. A well distributed sample, that is representative of the complete set of EMs, should be suitable to most EM-based methods for analysis and optimization of metabolic networks. Source code for a cross-platform implementation in Python is freely available at http://code.google.com/p/emsampler. dmachado@deb.uminho.pt Supplementary data are available at Bioinformatics online.
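The filtering step can be pictured generically (a toy sketch, not the authors' algorithm: plain integers stand in for elementary modes, and "combination" is simple addition). At each iteration the newly generated candidates are subsampled uniformly, so every candidate has the same probability of being retained and the working set never grows beyond a fixed size:

```python
# Toy sketch of sampling-based growth control: cap combinatorial explosion by
# keeping a uniform random subset of new candidates at each iteration.
import random

random.seed(0)

def grow_with_sampling(seed_items, n_iters, sample_size):
    current = list(seed_items)
    for _ in range(n_iters):
        # Combinatorial step: all pairwise "combinations" of current items.
        candidates = [a + b for a in current for b in current]
        # Filtering step: every candidate has the same selection probability,
        # so the retained subset is an unbiased sample of the candidates.
        current = random.sample(candidates, min(sample_size, len(candidates)))
    return current

sample = grow_with_sampling([1, 2, 3], n_iters=4, sample_size=50)
print(len(sample))  # stays bounded at 50 instead of squaring every iteration
```

In the real algorithm the combination step is the canonical-basis update over flux vectors, but the growth-control idea is the same: unbiased subsampling keeps the computation tractable while preserving the statistical properties of the full set.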
Pai, Lee-Wen; Li, Tsai-Chung; Hwu, Yueh-Juen; Chang, Shu-Chuan; Chen, Li-Li; Chang, Pi-Ying
2016-03-01
The objective of this study was to systematically review the effectiveness of different types of regular leisure-time physical activities and to pool the effect sizes of those activities on long-term glycemic control in people with type 2 diabetes compared with routine care. This review included randomized controlled trials from 1960 to May 2014. A total of 10 Chinese and English databases were searched; following selection and critical appraisal, 18 randomized controlled trials with 915 participants were included. The standardized mean difference was reported as the summary statistic for the overall effect size in a random effects model. The results indicated yoga was the most effective in lowering glycated haemoglobin A1c (HbA1c) levels. Meta-analysis also revealed that the decrease in HbA1c levels of the subjects who took part in regular leisure-time physical activities was 0.60% more than that of control group participants. A higher frequency of regular leisure-time physical activities was found to be more effective in reducing HbA1c levels. The results of this review provide evidence of the benefits associated with regular leisure-time physical activities compared with routine care for lowering HbA1c levels in people with type 2 diabetes. Copyright © 2016 The Authors. Published by Elsevier Ireland Ltd. All rights reserved.
DeVoe, Jennifer E; Marino, Miguel; Angier, Heather; O'Malley, Jean P; Crawford, Courtney; Nelson, Christine; Tillotson, Carrie J; Bailey, Steffani R; Gallia, Charles; Gold, Rachel
2015-01-01
In the United States, health insurance is not universal. Observational studies show an association between uninsured parents and children. This association persisted even after expansions in child-only public health insurance. Oregon's randomized Medicaid expansion for adults, known as the Oregon Experiment, created a rare opportunity to assess causality between parent and child coverage. To estimate the effect on a child's health insurance coverage status when (1) a parent randomly gains access to health insurance and (2) a parent obtains coverage. Oregon Experiment randomized natural experiment assessing the results of Oregon's 2008 Medicaid expansion. We used generalized estimating equation models to examine the longitudinal effect of a parent randomly selected to apply for Medicaid on their child's Medicaid or Children's Health Insurance Program (CHIP) coverage (intent-to-treat analyses). We used per-protocol analyses to understand the impact on children's coverage when a parent was randomly selected to apply for and obtained Medicaid. Participants included 14409 children aged 2 to 18 years whose parents participated in the Oregon Experiment. For intent-to-treat analyses, the date a parent was selected to apply for Medicaid was considered the date the child was exposed to the intervention. In per-protocol analyses, exposure was defined as whether a selected parent obtained Medicaid. Children's Medicaid or CHIP coverage, assessed monthly and in 6-month intervals relative to their parent's selection date. In the immediate period after selection, the number of covered children whose parents were selected to apply increased significantly from 3830 (61.4%) to 4152 (66.6%), compared with a nonsignificant change from 5049 (61.8%) to 5044 (61.7%) among children whose parents were not selected to apply.
Children whose parents were randomly selected to apply for Medicaid had 18% higher odds of being covered in the first 6 months after parent's selection compared with children whose parents were not selected (adjusted odds ratio [AOR]=1.18; 95% CI, 1.10-1.27). The effect remained significant during months 7 to 12 (AOR=1.11; 95% CI, 1.03-1.19); months 13 to 18 showed a positive but not significant effect (AOR=1.07; 95% CI, 0.99-1.14). Children whose parents were selected and obtained coverage had more than double the odds of having coverage compared with children whose parents were not selected and did not gain coverage (AOR=2.37; 95% CI, 2.14-2.64). Children's odds of having Medicaid or CHIP coverage increased when their parents were randomly selected to apply for Medicaid. Children whose parents were selected and subsequently obtained coverage benefited most. This study demonstrates a causal link between parents' access to Medicaid coverage and their children's coverage.
Vu, Michael M. K.; Jameson, Nora E.; Masuda, Stuart J.; Lin, Dana; Larralde-Ridaura, Rosa; Lupták, Andrej
2012-01-01
Aptamers are structured macromolecules evolved in vitro to bind molecular targets, whereas in nature they form the ligand-binding domains of riboswitches. Adenosine aptamers of a single structural family have been isolated several times from random pools, but they have not been identified in genomic sequences. We used two unbiased methods, structure-based bioinformatics and human genome-based in vitro selection, to identify aptamers that form the same adenosine-binding structure in a bacterium and several vertebrates, including humans. Two of the human aptamers map to introns of the RAB3C and FGD3 genes. The RAB3C aptamer binds ATP with dissociation constants about ten times lower than physiological ATP concentration, while the minimal FGD3 aptamer binds ATP only co-transcriptionally. PMID:23102219
Antihypertensive effect of Iranian Crataegus curvisepala Lind.: a randomized, double-blind study.
Asgary, S; Naderi, G H; Sadeghi, M; Kelishadi, R; Amiri, M
2004-01-01
The aim of the present study was to investigate the potential antihypertensive effects of extracts of the flavonoid-rich Iranian flower, Crataegus curvisepala Lind., a member of the Rosaceae family. The hydroalcoholic extract of the leaves and flowers were studied in a double-blind, placebo-controlled clinical trial to determine its effects. A total of 92 men and women with primary mild hypertension, aged 40-60 years, were selected and divided randomly into two groups, receiving either hydroalcoholic extract of C. curvisepala Lind. or placebo three times daily for more than 4 months. Blood pressure (BP) was measured each month. Statistical analysis was carried out using Student's t-test. The results obtained showed a significant decrease in both systolic and diastolic BP after 3 months (p < 0.05). C. curvisepala has a time-dependent antihypertensive effect.
Random covering of the circle: the configuration-space of the free deposition process
NASA Astrophysics Data System (ADS)
Huillet, Thierry
2003-12-01
Consider a circle of circumference 1. Throw at random n points, sequentially, on this circle and append clockwise an arc (or rod) of length s to each such point. The resulting random set (the free gas of rods) is a collection of a random number of clusters with random sizes. It models a free deposition process on a 1D substrate. For such processes, we shall consider the occurrence times (number of rods) and probabilities, as n grows, of the following configurations: those avoiding rod overlap (the hard-rod gas), those for which the largest gap is smaller than rod length s (the packing gas), those (parking configurations) for which hard-rod and packing constraints are both fulfilled, and covering configurations. Special attention is paid to the statistical properties of each such (rare) configuration in the asymptotic density domain when ns = ρ, for some finite density ρ of points. Using results from spacings in the random division of the circle, explicit large deviation rate functions can be computed in each case from state equations. Lastly, a process consisting of selecting at random one of these specific equilibrium configurations (called the observable) can be modelled. When particularized to the parking model, this system produces parking configurations differently from Rényi's random sequential adsorption model.
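The hard-rod configuration described above is easy to test by Monte Carlo simulation. The sketch below is an illustration of the setup, not the paper's analysis; function names and parameter values are invented:

```python
import random

def is_hard_rod(points, s):
    """True if arcs of length s appended clockwise to `points` on a
    circle of circumference 1 avoid overlap (hard-rod configuration)."""
    pts = sorted(points)
    n = len(pts)
    if n == 1:
        return s <= 1.0
    # every clockwise gap between consecutive points must fit one rod;
    # the modulo handles the wrap-around gap from the last point to the first
    return all((pts[(i + 1) % n] - pts[i]) % 1.0 >= s for i in range(n))

def hard_rod_probability(n, s, trials=2000, seed=1):
    """Monte Carlo estimate of the probability that n sequentially
    thrown rods of length s avoid overlap."""
    rng = random.Random(seed)
    hits = sum(is_hard_rod([rng.random() for _ in range(n)], s)
               for _ in range(trials))
    return hits / trials
```

As n grows at fixed ns = ρ, the estimated probability decays exponentially, which is what the large deviation rate functions in the paper quantify.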
The variability of software scoring of the CDMAM phantom associated with a limited number of images
NASA Astrophysics Data System (ADS)
Yang, Chang-Ying J.; Van Metter, Richard
2007-03-01
Software scoring approaches provide an attractive alternative to human evaluation of CDMAM images from digital mammography systems, particularly for annual quality control testing as recommended by the European Protocol for the Quality Control of the Physical and Technical Aspects of Mammography Screening (EPQCM). Methods for correlating CDCOM-based results with human observer performance have been proposed. A common feature of all methods is the use of a small number (at most eight) of CDMAM images to evaluate the system. This study focuses on the potential variability in the estimated system performance that is associated with these methods. Sets of 36 CDMAM images were acquired under carefully controlled conditions from three different digital mammography systems. The threshold visibility thickness (TVT) for each disk diameter was determined using previously reported post-analysis methods from the CDCOM scorings for a randomly selected group of eight images for one measurement trial. This random selection process was repeated 3000 times to estimate the variability in the resulting TVT values for each disk diameter. The results from using different post-analysis methods, different random selection strategies and different digital systems were compared. Additional variability for the 0.1 mm diameter disks was explored by comparing the results from two different image data sets acquired under the same conditions from the same system. The magnitude and type of error estimated for experimental data were explained through modeling. The modeled results also suggest a limitation in the current phantom design for the 0.1 mm diameter disks. Through modeling, it was also found that, because of the binomial statistical nature of the CDMAM test, the true variability of the test could be underestimated by the commonly used method of random re-sampling.
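The repeated random-selection procedure (draw eight of the 36 images, score, repeat 3000 times) can be sketched as follows. The per-image scores are invented toy data and the simple mean stands in for the actual TVT post-analysis:

```python
import random
import statistics

# Hypothetical per-image scores for 36 phantom images (toy data)
scores = [0.9 + 0.01 * (i % 6) for i in range(36)]

def resampling_spread(scores, k=8, trials=3000, seed=0):
    """Repeatedly score a randomly selected subset of k images and
    report the mean and the trial-to-trial standard deviation of the
    result, mirroring the selection procedure in the study."""
    rng = random.Random(seed)
    means = [statistics.mean(rng.sample(scores, k)) for _ in range(trials)]
    return statistics.mean(means), statistics.stdev(means)

center, spread = resampling_spread(scores)
```

The `spread` value is the kind of measurement-trial variability the study estimates; with only eight images per trial it can be a substantial fraction of the score itself.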
Controllability of social networks and the strategic use of random information.
Cremonini, Marco; Casamassima, Francesca
2017-01-01
This work is aimed at studying realistic social control strategies for social networks based on the introduction of random information into the state of selected driver agents. Deliberately exposing selected agents to random information is a technique that has already been tried in recommender systems and search engines, and represents one of the few options for influencing the behavior of a social context that could be accepted as ethical, could be fully disclosed to members, and does not involve the use of force or deception. Our research is based on a model of knowledge diffusion applied to a time-varying adaptive network and considers two well-known strategies for influencing social contexts: One is the selection of few influencers for manipulating their actions in order to drive the whole network to a certain behavior; the other, instead, drives the network behavior acting on the state of a large subset of ordinary, scarcely influencing users. The two approaches have been studied in terms of network and diffusion effects. The network effect is analyzed through the changes induced on network average degree and clustering coefficient, while the diffusion effect is based on two ad hoc metrics which are defined to measure the degree of knowledge diffusion and skill level, as well as the polarization of agent interests. The results, obtained through simulations on synthetic networks, show rich dynamics and strong effects on the communication structure and on the distribution of knowledge and skills. These findings support our hypothesis that the strategic use of random information could represent a realistic approach to social network controllability, and that with both strategies, in principle, the control effect could be remarkable.
Xiao, Yongling; Abrahamowicz, Michal
2010-03-30
We propose two bootstrap-based methods to correct the standard errors (SEs) from Cox's model for within-cluster correlation of right-censored event times. The cluster-bootstrap method resamples, with replacement, only the clusters, whereas the two-step bootstrap method resamples (i) the clusters, and (ii) individuals within each selected cluster, with replacement. In simulations, we evaluate both methods and compare them with the existing robust variance estimator and the shared gamma frailty model, which are available in statistical software packages. We simulate clustered event time data, with latent cluster-level random effects, which are ignored in the conventional Cox's model. For cluster-level covariates, both proposed bootstrap methods yield accurate SEs, and type I error rates, and acceptable coverage rates, regardless of the true random effects distribution, and avoid serious variance under-estimation by conventional Cox-based standard errors. However, the two-step bootstrap method over-estimates the variance for individual-level covariates. We also apply the proposed bootstrap methods to obtain confidence bands around flexible estimates of time-dependent effects in a real-life analysis of cluster event times.
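The cluster-bootstrap idea (resample whole clusters with replacement, recompute the statistic, take the spread of the replicates) can be sketched as follows. This is an illustration of the resampling scheme on a simple mean, not the authors' Cox-model implementation; the toy data are invented:

```python
import random
import statistics

def cluster_bootstrap_se(clusters, stat, n_boot=500, seed=42):
    """Cluster-bootstrap standard error: resample whole clusters with
    replacement, so within-cluster correlation is carried intact into
    every bootstrap replicate."""
    rng = random.Random(seed)
    replicates = []
    for _ in range(n_boot):
        # draw as many clusters as the original sample, with replacement
        resampled = [rng.choice(clusters) for _ in clusters]
        pooled = [x for cluster in resampled for x in cluster]
        replicates.append(stat(pooled))
    return statistics.stdev(replicates)

# Toy clustered data: 6 clusters of 3 strongly correlated observations
clusters = [[float(i), i + 0.1, i + 0.2] for i in range(6)]
se = cluster_bootstrap_se(clusters, statistics.mean)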
40 CFR 761.355 - Third level of sample selection.
Code of Federal Regulations, 2012 CFR
2012-07-01
... of sample selection further reduces the size of the subsample to 100 grams which is suitable for the... procedures in § 761.353 of this part into 100 gram portions. (b) Use a random number generator or random number table to select one 100 gram size portion as a sample for a procedure used to simulate leachate...
NASA Astrophysics Data System (ADS)
Yuvchenko, S. A.; Ushakova, E. V.; Pavlova, M. V.; Alonova, M. V.; Zimnyakov, D. A.
2018-04-01
We consider the practical realization of a new optical probe method for random media, defined as reference-free path-length interferometry with intensity-moments analysis. A peculiarity in the statistics of the spectrally selected fluorescence radiation in a laser-pumped dye-doped random medium is discussed. Previously established correlations between the second- and third-order moments of the intensity fluctuations in the random interference patterns, the coherence function of the probe radiation, and the path-difference probability density for the interfering partial waves in the medium are confirmed; the correlations were verified through statistical analysis of the spectrally selected fluorescence radiation emitted by such a medium. An aqueous solution of Rhodamine 6G was used as the fluorescent dopant for ensembles of densely packed silica grains, pumped by the 532 nm radiation of a solid-state laser. The spectrum of the mean path length for a random medium was reconstructed.
Noise sensitivity of portfolio selection in constant conditional correlation GARCH models
NASA Astrophysics Data System (ADS)
Varga-Haszonits, I.; Kondor, I.
2007-11-01
This paper investigates the efficiency of minimum variance portfolio optimization for stock price movements following the Constant Conditional Correlation GARCH process proposed by Bollerslev. Simulations show that the quality of portfolio selection can be improved substantially by computing optimal portfolio weights from conditional covariances instead of unconditional ones. Measurement noise can be further reduced by applying some filtering method on the conditional correlation matrix (such as Random Matrix Theory based filtering). As an empirical support for the simulation results, the analysis is also carried out for a time series of S&P500 stock prices.
Ma, Li; Fan, Suohai
2017-03-14
The random forests algorithm is a type of classifier with prominent universality, a wide application range, and robustness against overfitting. But there are still some drawbacks to random forests. Therefore, to improve the performance of random forests, this paper seeks to improve imbalanced data processing, feature selection and parameter optimization. We propose the CURE-SMOTE algorithm for the imbalanced data classification problem. Experiments on imbalanced UCI data reveal that combining Clustering Using Representatives (CURE) with the original synthetic minority oversampling technique (SMOTE) is effective compared with the classification results on the original data using random sampling, Borderline-SMOTE1, safe-level SMOTE, C-SMOTE, and k-means-SMOTE. Additionally, hybrid RF (random forests) algorithms are proposed for feature selection and parameter optimization, which use the minimum out-of-bag (OOB) data error as the objective function. Simulation results on binary and higher-dimensional data indicate that the proposed hybrid RF algorithms, the hybrid genetic-random forests algorithm, hybrid particle swarm-random forests algorithm and hybrid fish swarm-random forests algorithm, can achieve the minimum OOB error and show the best generalization ability. The training set produced from the proposed CURE-SMOTE algorithm is closer to the original data distribution because it contains minimal noise. Thus, better classification results are produced from this feasible and effective algorithm. Moreover, the hybrid algorithms' F-value, G-mean, AUC and OOB scores surpass those of the original RF algorithm. Hence, these hybrid algorithms provide a new way to perform feature selection and parameter optimization.
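The tuning objective used by the hybrid RF methods, minimizing out-of-bag error over candidate parameter settings, can be sketched in a few lines. Plain grid search stands in here for the genetic, particle-swarm and fish-swarm heuristics of the paper, and the error function is an invented toy:

```python
def tune_by_oob(param_grid, oob_error):
    """Select the parameter setting with the smallest out-of-bag (OOB)
    error. `oob_error` is any callable mapping a parameter dict to an
    OOB error estimate (e.g. from a fitted random forest)."""
    return min(param_grid, key=oob_error)

# Toy objective (invented): error is minimized at 100 trees
grid = [{"n_trees": n} for n in (10, 50, 100, 200, 500)]
best = tune_by_oob(grid, lambda p: (p["n_trees"] - 100) ** 2)
```

Because OOB error is computed from trees that did not see each sample, it serves as a built-in validation estimate, so no separate hold-out set is needed during the search.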
Territory surveillance and prey management: Wolves keep track of space and time.
Schlägel, Ulrike E; Merrill, Evelyn H; Lewis, Mark A
2017-10-01
Identifying behavioral mechanisms that underlie observed movement patterns is difficult when animals employ sophisticated cognitive-based strategies. Such strategies may arise when timing of return visits is important, for instance to allow for resource renewal or territorial patrolling. We fitted spatially explicit random-walk models to GPS movement data of six wolves (Canis lupus; Linnaeus, 1758) from Alberta, Canada to investigate the importance of the following: (1) territorial surveillance likely related to renewal of scent marks along territorial edges, to reduce intraspecific risk among packs, and (2) delay in return to recently hunted areas, which may be related to anti-predator responses of prey under varying prey densities. The movement models incorporated the spatiotemporal variable "time since last visit," which acts as a wolf's memory index of its travel history and is integrated into the movement decision along with its position in relation to territory boundaries and information on local prey densities. We used a model selection framework to test hypotheses about the combined importance of these variables in wolf movement strategies. Time-dependent movement for territory surveillance was supported by all wolf movement tracks. Wolves generally avoided territory edges, but this avoidance was reduced as time since last visit increased. Time-dependent prey management was weak except in one wolf. This wolf selected locations with longer time since last visit and lower prey density, which led to a longer delay in revisiting high prey density sites. Our study shows that we can use spatially explicit random walks to identify behavioral strategies that merge environmental information and explicit spatiotemporal information on past movements (i.e., "when" and "where") to make movement decisions. The approach allows us to better understand cognition-based movement in relation to dynamic environments and resources.
Multidimensional density shaping by sigmoids.
Roth, Z; Baram, Y
1996-01-01
An estimate of the probability density function of a random vector is obtained by maximizing the output entropy of a feedforward network of sigmoidal units with respect to the input weights. Classification problems can be solved by selecting the class associated with the maximal estimated density. Newton's optimization method, applied to the estimated density, yields a recursive estimator for a random variable or a random sequence. A constrained connectivity structure yields a linear estimator, which is particularly suitable for "real time" prediction. A Gaussian nonlinearity yields a closed-form solution for the network's parameters, which may also be used for initializing the optimization algorithm when other nonlinearities are employed. A triangular connectivity between the neurons and the input, which is naturally suggested by the statistical setting, reduces the number of parameters. Applications to classification and forecasting problems are demonstrated.
Hazard Function Estimation with Cause-of-Death Data Missing at Random.
Wang, Qihua; Dinse, Gregg E; Liu, Chunling
2012-04-01
Hazard function estimation is an important part of survival analysis. Interest often centers on estimating the hazard function associated with a particular cause of death. We propose three nonparametric kernel estimators for the hazard function, all of which are appropriate when death times are subject to random censorship and censoring indicators can be missing at random. Specifically, we present a regression surrogate estimator, an imputation estimator, and an inverse probability weighted estimator. All three estimators are uniformly strongly consistent and asymptotically normal. We derive asymptotic representations of the mean squared error and the mean integrated squared error for these estimators and we discuss a data-driven bandwidth selection method. A simulation study, conducted to assess finite sample behavior, demonstrates that the proposed hazard estimators perform relatively well. We illustrate our methods with an analysis of some vascular disease data.
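The inverse-probability-weighting idea behind the third estimator can be sketched on a simpler target than a hazard. This is an illustration of the IPW principle only, not the paper's kernel estimator; the data and probabilities are invented:

```python
def ipw_mean(values, observed, p_obs):
    """Inverse-probability-weighted (Horvitz-Thompson) estimator of a
    mean: each complete case is up-weighted by 1 / P(indicator observed)
    to compensate for indicators that are missing at random."""
    total = sum(v / p for v, o, p in zip(values, observed, p_obs) if o)
    return total / len(values)

# With no missingness and p = 1 the estimator is the ordinary mean
full = ipw_mean([1.0, 2.0, 3.0, 4.0], [True] * 4, [1.0] * 4)
# Half the indicators missing at random with P(observed) = 0.5
half = ipw_mean([2.0, 2.0, 2.0, 2.0], [True, False, True, False], [0.5] * 4)
```

In the paper the same weighting is applied inside a kernel-smoothed hazard estimator, with the observation probabilities estimated from the data rather than known.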
Data processing for GPS common view time comparison between remote clocks
NASA Astrophysics Data System (ADS)
Li, Bian
2004-12-01
The GPS CV method will play an important role in the JATC (joint atomic time of China) system which is being rebuilt. The selection of common view data and the methods of filtering the random noise from the observed data are introduced. The methods to correct ionospheric delay and geometric delay for GPS CV comparison are expounded. The calculation results for the data of CV comparison between NTSC (National Time Service Center, the Chinese Academy of Sciences) and CRL (Communications Research Laboratory, which has been renamed National Institute of Information and Communications Technology) are presented.
Gossip and Distributed Kalman Filtering: Weak Consensus Under Weak Detectability
NASA Astrophysics Data System (ADS)
Kar, Soummya; Moura, José M. F.
2011-04-01
The paper presents the gossip interactive Kalman filter (GIKF) for distributed Kalman filtering for networked systems and sensor networks, where inter-sensor communication and observations occur at the same time-scale. The communication among sensors is random; each sensor occasionally exchanges its filtering state information with a neighbor depending on the availability of the appropriate network link. We show that under a weak distributed detectability condition: 1. the GIKF error process remains stochastically bounded, irrespective of the instability properties of the random process dynamics; and 2. the network achieves weak consensus, i.e., the conditional estimation error covariance at a (uniformly) randomly selected sensor converges in distribution to a unique invariant measure on the space of positive semi-definite matrices (independent of the initial state). To prove these results, we interpret the filtered states (estimates and error covariances) at each node in the GIKF as stochastic particles with local interactions. We analyze the asymptotic properties of the error process by studying as a random dynamical system the associated switched (random) Riccati equation, the switching being dictated by a non-stationary Markov chain on the network graph.
Differing antidepressant maintenance methodologies.
Safer, Daniel J
2017-10-01
The principal evidence that antidepressant medication (ADM) is an effective maintenance treatment for adults with major depressive disorder (MDD) is from placebo substitution trials. These trials enter responders from ADM efficacy trials into randomized, double-blind placebo-controlled (RDBPC) effectiveness trials to measure the rate of MDD relapse over time. However, other randomized maintenance trial methodologies merit consideration and comparison. A systematic review of ADM randomized maintenance trials included research reports from multiple databases. Relapse rate was the main effectiveness outcome assessed. Five ADM randomized maintenance methodologies for MDD responders are described and compared for outcome. These effectiveness trials include: placebo-substitution, ADM/placebo extension, ADM extension, ADM vs. psychotherapy, and treatment as usual. The placebo-substitution trials for those abruptly switched to placebo resulted in unusually high (46%) rates of relapse over 6-12 months, twice the continuing ADM rate. These trials were characterized by selective screening, high attrition, an anxious anticipation of a switch to placebo, and a risk of drug withdrawal symptoms. Selectively screened ADM efficacy responders who entered into 4-12 month extension trials experienced relapse rates averaging ~10% with a low attrition rate. Non-industry sponsored randomized trials of adults with multiple prior MDD episodes who were treated with ADM maintenance for 1-2 years experienced relapse rates averaging 40%. Placebo substitution trial methodology represents only one approach to assess ADM maintenance. Antidepressant maintenance research for adults with MDD should be evaluated for industry sponsorship, attrition, the impact of the switch to placebo, and major relapse differences in MDD subpopulations. Copyright © 2017. Published by Elsevier Inc.
Random ensemble learning for EEG classification.
Hosseini, Mohammad-Parsa; Pompili, Dario; Elisevich, Kost; Soltanian-Zadeh, Hamid
2018-01-01
Real-time detection of seizure activity in epilepsy patients is critical in averting seizure activity and improving patients' quality of life. Accurate evaluation, presurgical assessment, seizure prevention, and emergency alerts all depend on the rapid detection of seizure onset. A new method of feature selection and classification for rapid and precise seizure detection is discussed wherein informative components of electroencephalogram (EEG)-derived data are extracted and an automatic method is presented using infinite independent component analysis (I-ICA) to select independent features. The feature space is divided into subspaces via random selection and multichannel support vector machines (SVMs) are used to classify these subspaces. The result of each classifier is then combined by majority voting to establish the final output. In addition, a random subspace ensemble using a combination of SVM, multilayer perceptron (MLP) neural network and an extended k-nearest neighbors (k-NN), called extended nearest neighbor (ENN), is developed for the EEG and electrocorticography (ECoG) big data problem. To evaluate the solution, a benchmark ECoG of eight patients with temporal and extratemporal epilepsy was implemented in a distributed computing framework as a multitier cloud-computing architecture. Using leave-one-out cross-validation, the accuracy, sensitivity, specificity, and both false positive and false negative ratios of the proposed method were found to be 0.97, 0.98, 0.96, 0.04, and 0.02, respectively. Application of the solution to cases under investigation with ECoG has also been effected to demonstrate its utility. Copyright © 2017 Elsevier B.V. All rights reserved.
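The random-subspace and majority-voting steps described in the abstract can be sketched as follows. This is an illustration of the ensemble structure only; the per-subspace classifiers themselves (SVM, MLP, ENN) are omitted, and the names and labels are invented:

```python
import random
from collections import Counter

def random_subspaces(n_features, n_subspaces, subspace_size, seed=0):
    """Divide the feature space by random selection: each classifier in
    the ensemble sees its own randomly chosen subset of feature indices."""
    rng = random.Random(seed)
    return [sorted(rng.sample(range(n_features), subspace_size))
            for _ in range(n_subspaces)]

def majority_vote(predictions):
    """Combine per-subspace classifier outputs by majority voting."""
    return Counter(predictions).most_common(1)[0][0]

# Toy usage: 5 classifiers over a 20-dimensional feature space
subspaces = random_subspaces(n_features=20, n_subspaces=5, subspace_size=4)
label = majority_vote(["seizure", "normal", "seizure", "seizure", "normal"])
```

Each subspace would feed one classifier; the final detection is the label most classifiers agree on, which dampens the effect of any single noisy feature subset.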
Affect-Aware Adaptive Tutoring Based on Human-Automation Etiquette Strategies.
Yang, Euijung; Dorneich, Michael C
2018-06-01
We investigated adapting the interaction style of intelligent tutoring system (ITS) feedback based on human-automation etiquette strategies. Most ITSs adapt the content difficulty level, adapt the feedback timing, or provide extra content when they detect cognitive or affective decrements. Our previous work demonstrated that changing the interaction style via different feedback etiquette strategies has differential effects on students' motivation, confidence, satisfaction, and performance. The best etiquette strategy was also determined by user frustration. Based on these findings, a rule set was developed that systemically selected the proper etiquette strategy to address one of four learning factors (motivation, confidence, satisfaction, and performance) under two different levels of user frustration. We explored whether etiquette strategy selection based on this rule set (systematic) or random changes in etiquette strategy for a given level of frustration affected the four learning factors. Participants solved mathematics problems under different frustration conditions with feedback that adapted dynamic changes in etiquette strategies either systematically or randomly. The results demonstrated that feedback with etiquette strategies chosen systematically via the rule set could selectively target and improve motivation, confidence, satisfaction, and performance more than changing etiquette strategies randomly. The systematic adaptation was effective no matter the level of frustration for the participant. If computer tutors can vary the interaction style to effectively mitigate negative emotions, then ITS designers would have one more mechanism in which to design affect-aware adaptations that provide the proper responses in situations where human emotions affect the ability to learn.
Turnbull, Alison E; O'Connor, Cristi L; Lau, Bryan; Halpern, Scott D; Needham, Dale M
2015-07-29
Survey response rates among physicians are declining, and determining an appropriate level of compensation to motivate participation poses a major challenge. To estimate the effect of permitting intensive care physicians to select their preferred level of compensation for completing a short Web-based survey on physician (1) response rate, (2) survey completion rate, (3) time to response, and (4) time spent completing the survey. A total of 1850 US intensivists from an existing database were randomized to receive a survey invitation email with or without an Amazon.com incentive available to the first 100 respondents. The incentive could be instantly redeemed for an amount chosen by the respondent, up to a maximum of US $50. The overall response rate was 35.90% (630/1755). Among the 35.4% (111/314) of eligible participants choosing the incentive, 80.2% (89/111) selected the maximum value. Among intensivists offered an incentive, the response was 6.0% higher (95% CI 1.5-10.5, P=.01), survey completion was marginally greater (807/859, 94.0% vs 892/991, 90.0%; P=.06), and the median number of days to survey response was shorter (0.8, interquartile range [IQR] 0.2-14.4 vs 6.6, IQR 0.3-22.3; P=.001), with no difference in time spent completing the survey. Permitting intensive care physicians to determine compensation level for completing a short Web-based survey modestly increased response rate and substantially decreased response time without decreasing the time spent on survey completion.
Laxmaiah, A; Meshram, I I; Arlappa, N; Balakrishna, N; Rao, K Mallikharjuna; Reddy, Ch Gal; Ravindranath, M; Kumar, Sharad; Kumar, Hari; Brahmam, G N V
2015-05-01
An increase in the prevalence of hypertension has been observed in all ethnic groups in India. The objective of the present study was to estimate the prevalence and determinants of hypertension among tribal populations, and their awareness, treatment practices and risk behaviours, in nine States of India. A community-based cross-sectional study adopting a multistage random sampling procedure was carried out. About 120 Integrated Tribal Development Authority villages were selected randomly from each State. From each village, 40 households were covered randomly. All men and women ≥ 20 yr of age in the selected households were included for various investigations. A total of 21,141 men and 26,260 women participated in the study. The prevalence of hypertension after age adjustment was 27.1 and 26.4 per cent among men and women, respectively. It was highest in the States of Odisha (50-54.4%) and Kerala (36.7-45%) and lowest in Gujarat (7-11.5%). The risk of hypertension was 6-8 times higher in elderly people, and 2-3 times higher in those aged 35-59 yr, compared with those aged 20-34 yr. Only <10 per cent of men and women were known hypertensives, and more than half of them were on treatment (55-68%). Men with general and abdominal obesity were at 1.69 (CI: 1.43-2.01) and 2.42 (CI: 2.01-2.91) times higher risk of hypertension, respectively, while the risk was 2.03 (CI: 1.77-2.33) and 2.35 (CI: 2.12-2.60) times higher in women. Those using tobacco and consuming alcohol were at higher risk of hypertension compared with non-users. The study revealed a high prevalence of hypertension among tribal populations in India. Age, literacy, physical activity, consumption of tobacco and alcohol, and obesity were significantly associated with hypertension. Awareness and knowledge about hypertension and health-seeking behaviour were low. Appropriate intervention strategies need to be adopted to increase awareness and treatment of hypertension among these populations.
Sperm competition games: sperm selection by females.
Ball, M A; Parker, G A
2003-09-07
We analyse a co-evolutionary sexual conflict game, in which males compete for fertilizations (sperm competition) and females operate sperm selection against unfavourable ejaculates (cryptic female choice). For simplicity, each female mates with two males per reproductive event, and the competing ejaculates are of two types, favourable (having high viability or success) or unfavourable (where progeny are less successful). Over evolutionary time, females can increase their level of sperm selection (measured as the proportion of unfavourable sperm eliminated) by paying a fecundity cost. Males can regulate sperm allocations depending on whether they will be favoured or disfavoured, but increasing sperm allocation reduces their mating rate. The resolution of this game depends on whether males are equal or unequal. Males could be equal: each is favoured with probability p, reflecting the proportion of females in the population that favour his ejaculate (the 'random-roles' model); different males are favoured by different sets of females. Alternatively, males could be unequal: particular males are perceived consistently by all females as one of two distinct types, favoured or disfavoured, where p is now the frequency of the favoured male type in the population (the 'constant-types' model). In both cases, the evolutionarily stable strategy (ESS) is for females initially to increase sperm selection from zero as the viability of offspring from unfavourable ejaculates falls below that of favourable ejaculates. But in the random-roles model, sperm selection decreases again towards zero as the unfavourable ejaculates become disastrous (i.e. as their progeny viability decreases towards zero). This occurs because males avoid expenditure in unfavourable matings, to conserve sperm for matings in the favoured role where their offspring have high viability, thus allowing females to relax sperm selection.
If sperm selection is costly to females, ESS sperm selection is high across a region of intermediate viabilities. If it is uncostly, there is no ESS in this region unless sperm limitation (i.e. some eggs fail to be fertilized because sperm numbers are too low) is incorporated into the model. In the constant-types model, no relaxation of sperm selection occurs at very low viabilities of disfavoured male progeny. If sperm selection is sufficiently costly, ESS sperm selection increases as progeny viability decreases towards zero; but if it is uncostly, there is no ESS at the lowest viabilities, and unlike in the random-roles model, this cannot be stabilized by including sperm limitation. Sperm allocations in the ESS regions differ between the two models. With random roles, males always allocate more sperm in the favoured role. With constant types, the male type that is favoured allocates less sperm than the disfavoured type. These results suggest that empiricists studying cryptic female choice and sperm allocation patterns need to determine whether sperm selection is applied differently, or consistently, on given males by different females in the same population.
Dynamics of Tree Species Diversity in Unlogged and Selectively Logged Malaysian Forests.
Shima, Ken; Yamada, Toshihiro; Okuda, Toshinori; Fletcher, Christine; Kassim, Abdul Rahman
2018-01-18
Selective logging, as commonly conducted in tropical forests, may change tree species diversity. In rarely disturbed tropical forests, locally rare species exhibit higher survival rates. If this non-random process occurs in a logged forest, the forest will rapidly recover its tree species diversity. Here we determined whether a forest in the Pasoh Forest Reserve, Malaysia, which was selectively logged 40 years ago, has recovered its original species diversity (species richness and composition). To explore this, we compared the dynamics of species diversity between an unlogged forest plot (18.6 ha) and a logged forest plot (5.4 ha). We found that 40 years are not sufficient to recover species diversity after logging. Unlike in the unlogged forest, tree deaths and recruitments did not contribute to increased diversity in the selectively logged forest. Our results predict that selectively logged forests require longer than our 40-year observation period to regain their diversity.
The Selection of a National Random Sample of Teachers for Experimental Curriculum Evaluation.
ERIC Educational Resources Information Center
Welch, Wayne W.; and others
Members of the evaluation section of Harvard Project Physics, describing what is said to be the first attempt to select a national random sample of (high school physics) teachers, list the steps as (1) purchase of a list of physics teachers from the National Science Teachers Association (most complete available), (2) selection of 136 names by a…
ERIC Educational Resources Information Center
Ng, Daniel; Supaporn, Potibut
A study investigated trends in the informativeness of current U.S. television commercials by comparing its results with Alan Resnik and Bruce Stern's benchmark study conducted in 1977. A systematic random sampling procedure was used to select viewing dates and times of commercials from the three national networks. Ultimately, a total of 550…
The Effect of Task Repetition on Fluency and Accuracy of EFL Saudi Female Learners' Oral Performance
ERIC Educational Resources Information Center
Gashan, Amani K.; Almohaisen, Fahad M.
2014-01-01
This study aimed to examine the effect of task repetition on foreign language output. Twenty-eight Saudi female students in the Preparatory Year (PY) at King Saud University were randomly selected to conduct an oral information-gap task. The participants were asked to perform the task twice, with a two-week interval between the two performances.…
ERIC Educational Resources Information Center
Athanasou, James A.
The topic of repeated judgments of interest in vocational education was examined in a study in which 10 female full-time technical and further education (TAFE) students (aged 15-60 years) were handed 120 randomly selected real profiles of TAFE students who had completed subject interest surveys in a previous study. The 10 TAFE students judged how…
The adjusting factor method for weight-scaling truckloads of mixed hardwood sawlogs
Edward L. Adams
1976-01-01
A new method of weight-scaling truckloads of mixed hardwood sawlogs systematically adjusts for changes in the weight/volume ratio of logs coming into a sawmill. It uses a conversion factor based on the running average of weight/volume ratios of randomly selected sample loads. A test of the method indicated that over a period of time the weight-scaled volume should...
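The adjusting-factor idea reduces to maintaining a running average of weight/volume ratios from sampled loads and dividing each load's weight by it. A minimal sketch; the function names and the example weights/volumes are illustrative, not taken from the report:

```python
def update_factor(sample_ratios, load_weight, load_volume):
    """Add one randomly selected sample load (both weighed and volume-scaled)
    and return the running-average weight/volume conversion factor."""
    sample_ratios.append(load_weight / load_volume)
    return sum(sample_ratios) / len(sample_ratios)

def weight_scaled_volume(load_weight, factor):
    """Estimate a truckload's log volume from its weight alone."""
    return load_weight / factor

# Illustrative units: pounds and board feet.
ratios = []
factor = update_factor(ratios, 40000, 4000)   # first sample load
factor = update_factor(ratios, 48000, 4000)   # ratio drifts, factor adjusts
```

As the weight/volume ratio of incoming logs drifts, each new sample load pulls the conversion factor toward the current ratio, which is what keeps long-run scaled volume unbiased.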
Unbiased feature selection in learning random forests for high-dimensional data.
Nguyen, Thanh-Tung; Huang, Joshua Zhexue; Nguyen, Thuy Thi
2015-01-01
Random forests (RFs) have been widely used as a powerful classification method. However, with the randomization in both bagging samples and feature selection, the trees in the forest tend to select uninformative features for node splitting, which degrades RF accuracy on high-dimensional data. RFs also exhibit bias in the feature selection process, favoring multivalued features. Aiming at debiasing feature selection in RFs, we propose a new RF algorithm, called xRF, to select good features in learning RFs for high-dimensional data. We first remove the uninformative features using p-value assessment, and the subset of unbiased features is then selected based on some statistical measures. This feature subset is then partitioned into two subsets, and a feature weighting sampling technique is used to sample features from these two subsets for building trees. This approach enables one to generate more accurate trees while reducing dimensionality and the amount of data needed for learning RFs. An extensive set of experiments has been conducted on 47 high-dimensional real-world datasets, including image datasets. The experimental results show that RFs with the proposed approach outperformed existing random forests on both accuracy and AUC measures.
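The core idea (filter out uninformative features, then sample the survivors with informativeness-proportional weights when growing each tree) can be sketched as follows. This is not the xRF implementation: a two-sample t-statistic with a fixed threshold stands in for the paper's p-value assessment, and the threshold value is an assumption:

```python
import math, random

def t_stat(col, y):
    """|two-sample t| of one feature between classes 0 and 1."""
    a = [v for v, c in zip(col, y) if c == 0]
    b = [v for v, c in zip(col, y) if c == 1]
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    return abs(ma - mb) / math.sqrt(va / len(a) + vb / len(b) + 1e-12)

def weighted_feature_sample(X, y, mtry, threshold=1.0, rng=random):
    """Drop weak features (crude stand-in for p-value filtering), then
    sample `mtry` of the rest with weights for one tree's node splits."""
    scores = {j: t_stat([row[j] for row in X], y) for j in range(len(X[0]))}
    kept = {j: s for j, s in scores.items() if s > threshold}
    feats, weights = list(kept), [kept[j] for j in kept]
    return rng.choices(feats, weights=weights, k=min(mtry, len(feats)))
```

Each tree then restricts its split-variable search to the sampled features, so uninformative columns rarely reach a node split.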
Applications of random forest feature selection for fine-scale genetic population assignment.
Sylvester, Emma V A; Bentzen, Paul; Bradbury, Ian R; Clément, Marie; Pearce, Jon; Horne, John; Beiko, Robert G
2018-02-01
Genetic population assignment used to inform wildlife management and conservation efforts requires panels of highly informative genetic markers and sensitive assignment tests. We explored the utility of machine-learning algorithms (random forest, regularized random forest and guided regularized random forest) compared with F_ST ranking for selection of single nucleotide polymorphisms (SNPs) for fine-scale population assignment. We applied these methods to an unpublished SNP data set for Atlantic salmon (Salmo salar) and a published SNP data set for Alaskan Chinook salmon (Oncorhynchus tshawytscha). In each species, we identified the minimum panel size required to obtain a self-assignment accuracy of at least 90%, using each method to create panels of 50-700 markers. Panels of SNPs identified using random forest-based methods performed up to 7.8 and 11.2 percentage points better than F_ST-selected panels of similar size for the Atlantic salmon and Chinook salmon data, respectively. Self-assignment accuracy ≥90% was obtained with panels of 670 and 384 SNPs for each data set, respectively, a level of accuracy never reached for these species using F_ST-selected panels. Our results demonstrate a role for machine-learning approaches in marker selection across large genomic data sets to improve assignment for management and conservation of exploited populations.
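The F_ST-ranking baseline the machine-learning panels are compared against can be sketched for the simplest case of one biallelic SNP and two populations, using Wright's heterozygosity-based F_ST. This is a sketch under simplifying assumptions (equal sample sizes, no sample-size correction), not the estimator the authors used:

```python
def fst(p1, p2):
    """Wright's F_ST for one biallelic SNP from the allele frequencies
    in two populations (equal sample sizes assumed, no bias correction)."""
    p_bar = (p1 + p2) / 2
    h_t = 2 * p_bar * (1 - p_bar)                  # total expected heterozygosity
    h_s = (2 * p1 * (1 - p1) + 2 * p2 * (1 - p2)) / 2  # mean within-pop heterozygosity
    return 0.0 if h_t == 0 else (h_t - h_s) / h_t

def fst_panel(freqs, panel_size):
    """Rank SNPs by F_ST and keep the top `panel_size` as the baseline panel."""
    ranked = sorted(range(len(freqs)), key=lambda i: fst(*freqs[i]), reverse=True)
    return ranked[:panel_size]
```

A SNP with strongly diverged frequencies (e.g. 0.1 vs 0.9) scores far higher than one with similar frequencies, which is why F_ST ranking favors individually diverged markers; the random forest methods instead score markers by their joint contribution to assignment accuracy.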
High-Tg Polynorbornene-Based Block and Random Copolymers for Butanol Pervaporation Membranes
NASA Astrophysics Data System (ADS)
Register, Richard A.; Kim, Dong-Gyun; Takigawa, Tamami; Kashino, Tomomasa; Burtovyy, Oleksandr; Bell, Andrew
Vinyl addition polymers of substituted norbornene (NB) monomers possess desirably high glass transition temperatures (Tg); however, until very recently, the lack of an applicable living polymerization chemistry has precluded the synthesis of such polymers with controlled architecture, or copolymers with controlled sequence distribution. We have recently synthesized block and random copolymers of NB monomers bearing hydroxyhexafluoroisopropyl and n-butyl substituents (HFANB and BuNB) via living vinyl addition polymerization with Pd-based catalysts. Both series of polymers were cast into the selective skin layers of thin film composite (TFC) membranes, and these organophilic membranes investigated for the isolation of n-butanol from dilute aqueous solution (model fermentation broth) via pervaporation. The block copolymers show well-defined microphase-separated morphologies, both in bulk and as the selective skin layers on TFC membranes, while the random copolymers are homogeneous. Both block and random vinyl addition copolymers are effective as n-butanol pervaporation membranes, with the block copolymers showing a better flux-selectivity balance. While polyHFANB has much higher permeability and n-butanol selectivity than polyBuNB, incorporating BuNB units into the polymer (in either a block or random sequence) limits the swelling of the polyHFANB and thereby improves the n-butanol pervaporation selectivity.
Owens, Scott R; Wiehagen, Luke T; Kelly, Susan M; Piccoli, Anthony L; Lassige, Karen; Yousem, Samuel A; Dhir, Rajiv; Parwani, Anil V
2010-09-01
We recently implemented a novel pre-sign-out quality assurance tool in our subspecialty-based surgical pathology practice at the University of Pittsburgh Medical Center. It randomly selects an adjustable percentage of cases for review by a second pathologist at the time the originating pathologist's electronic signature is entered and requires that the review be completed within 24 hours, before release of the final report. The tool replaced a retrospective audit system and it has been in successful use since January 2009. We report our initial experience for the first 14 months of its service. During this time, the disagreement numbers and levels were similar to those identified using the retrospective system, case turnaround time was not significantly affected, and the number of case amendments generated decreased. The tool is a useful quality assurance instrument and its prospective nature allows for the potential prevention of some serious errors.
Tanash, Hanan A; Ekström, Magnus; Rönmark, Eva; Lindberg, Anne; Piitulainen, Eeva
2017-09-01
Knowledge about the natural history of severe alpha 1-antitrypsin (AAT) deficiency (PiZZ) is limited. Our aim was to compare the survival of PiZZ individuals with randomly selected controls from the Swedish general population. The PiZZ subjects (n=1585) were selected from the Swedish National AATD Register. The controls (n=5999) were randomly selected from the Swedish population register. Smoking habits were known for all subjects. Median follow-up times for the PiZZ subjects (731 never-smokers) and controls (3179 never-smokers) were 12 and 17 years, respectively (p<0.001). During follow-up, 473 PiZZ subjects (30%) and 747 controls (12%) died. The PiZZ subjects had a significantly shorter survival time than the controls (p<0.001). After adjustment for gender, age, smoking habits and presence of respiratory symptoms, the risk of death was still significantly higher for the PiZZ individuals than for the controls, hazard ratio (HR) 3.2 (95% CI 2.8-3.6; p<0.001). By contrast, the risk of death was not increased in never-smoking PiZZ individuals identified by screening, compared to never-smoking controls, HR 1.2 (95% CI 0.6-2.2). The never-smoking PiZZ individuals identified by screening had a similar life expectancy to never-smokers in the Swedish general population. Early diagnosis of AAT deficiency is of utmost importance. Copyright ©ERS 2017.
Systematic review: Outcome reporting bias is a problem in high impact factor neurology journals
Scott, Jared T.; Blubaugh, Mark; Roepke, Brie; Scheckel, Caleb; Vassar, Matt
2017-01-01
Background Selective outcome reporting is a significant methodological concern. Comparisons between the outcomes reported in clinical trial registrations and those later published allow investigators to understand the extent of selection bias among trialists. We examined the possibility of selective outcome reporting in randomized controlled trials (RCTs) published in neurology journals. Methods We searched PubMed for randomized controlled trials from Jan 1, 2010 –Dec 31, 2015 published in the top 3 impact factor neurology journals. These articles were screened according to specific inclusion criteria. Each author individually extracted data from trials following a standardized protocol. A second author verified each extracted element and discrepancies were resolved. Consistency between registered and published outcomes was evaluated and correlations between discrepancies and funding, journal, and temporal trends were examined. Results 180 trials were included for analysis. 10 (6%) primary outcomes were demoted, 38 (21%) primary outcomes were omitted from the publication, and 61 (34%) unregistered primary outcomes were added to the published report. There were 18 (10%) cases of secondary outcomes being upgraded to primary outcomes in the publication, and there were 53 (29%) changes in timing of assessment. Of 82 (46%) major discrepancies with reported p-values, 54 (66.0%) favored publication of statistically significant results. Conclusion Across trials, we found 180 major discrepancies. 66% of major discrepancies with a reported p-value (n = 82) favored statistically significant results. These results suggest a need within neurology to provide more consistent and timely registration of outcomes. PMID:28727834
Tryjanowski, Piotr; Adamski, Zbigniew
2007-10-01
Fluctuating asymmetry (FA), which reflects randomly directed deviations from bilateral symmetry, has been shown to increase in organisms exposed to environmental and/or genetic stress. We studied fluctuating asymmetry in the head and prothorax of the chewing louse Docophorulus coarctatus, a parasite of the great grey shrike Lanius excubitor, to investigate associations between parasite body size and fluctuating asymmetry. Samples of ten individual lice (five females, five males) were randomly collected for measurements from 32 shrikes. Relative FA (scaled to trait size) was estimated for head and prothorax. Sex and trait differences in FA were very distinct (all differences significant at P<0.001). However, relative FA of the head was not a predictor of relative FA of the prothorax in either sex. Moreover, relative FA measurements of females and males from the same host were not significantly correlated, contrary to expectation if hosts imposed similar selection pressures on parasites of the two sexes. Relative FA of a trait was negatively related to its size, except for the relationship between relative FA of the prothorax and prothorax size in females. Differences in FA between traits may be explained by the timing of when host condition affects louse development, with the head forming over a very short period during the larval and nymph stages. The sex difference in asymmetry was probably related to different selection pressures by hosts on the sexes of the parasite, with females generally being under more intense selection.
Robust portfolio selection based on asymmetric measures of variability of stock returns
NASA Astrophysics Data System (ADS)
Chen, Wei; Tan, Shaohua
2009-10-01
This paper addresses a new uncertainty set, the interval random uncertainty set, for robust optimization. The form of the interval random uncertainty set makes it suitable for capturing the downside and upside deviations of real-world data. These deviation measures capture distributional asymmetry and lead to better optimization results. We also apply interval random chance-constrained programming to robust mean-variance portfolio selection under interval random uncertainty sets in the elements of the mean vector and covariance matrix. Numerical experiments with real market data indicate that our approach results in better portfolio performance.
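The asymmetric variability measures the abstract refers to can be sketched as root-mean-square deviations below and above the mean; treating an asset's uncertain return as the interval [μ − d⁻, μ + d⁺] conveys the flavor of an interval uncertainty set. This is a sketch of the deviation measures only, not the paper's chance-constrained construction:

```python
def semideviations(returns):
    """Downside and upside RMS deviations of a return series about its mean."""
    mu = sum(returns) / len(returns)
    n = len(returns)
    down = (sum(min(r - mu, 0.0) ** 2 for r in returns) / n) ** 0.5
    up = (sum(max(r - mu, 0.0) ** 2 for r in returns) / n) ** 0.5
    return down, up

# A positively skewed series has more upside than downside variability,
# which a single symmetric standard deviation would hide.
d, u = semideviations([0.2, -0.1, -0.1])
```

A robust model built on such intervals can then penalize only the downside width when constraining portfolio risk, which is what "capturing distributional asymmetry" buys over symmetric variance.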
Straudi, Sofia; Severini, Giacomo; Sabbagh Charabati, Amira; Pavarelli, Claudia; Gamberini, Giulia; Scotti, Anna; Basaglia, Nino
2017-05-10
Patients with traumatic brain injury often have balance and attention disorders. Video game therapy (VGT) has been proposed as a new intervention to improve mobility and attention through a reward-learning approach. In this pilot randomized controlled trial, we tested the effects of VGT, compared with balance platform therapy (BPT), on balance, mobility and selective attention in chronic traumatic brain injury patients. We enrolled chronic traumatic brain injury patients (n = 21) who randomly received VGT or BPT for 3 sessions per week for 6 weeks. The clinical outcome measures included: i) the Community Balance & Mobility Scale (CB&M); ii) the Unified Balance Scale (UBS); iii) the Timed Up and Go test (TUG); iv) static balance; and v) selective visual attention evaluation (Go/Nogo task). Both groups improved in CB&M scores, but only the VGT group improved on the UBS and TUG, with between-group significance (p < 0.05). Selective attention improved significantly in the VGT group (p < 0.01). Video game therapy is an option for the management of chronic traumatic brain injury patients to ameliorate balance and attention deficits. NCT01883830, April 5, 2013.
Tucker, Jalie A.; Reed, Geoffrey M.
2008-01-01
This paper examines the utility of evidentiary pluralism, a research strategy that selects methods in service of content questions, in the context of rehabilitation psychology. Hierarchical views that favor randomized controlled clinical trials (RCTs) over other evidence are discussed, and RCTs are considered as they intersect with issues in the field. RCTs are vital for establishing treatment efficacy, but whether they are uniformly the best evidence to inform practice is critically evaluated. We argue that because treatment is only one of several variables that influence functioning, disability, and participation over time, an expanded set of conceptual and data analytic approaches should be selected in an informed way to support an expanded research agenda that investigates therapeutic and extra-therapeutic influences on rehabilitation processes and outcomes. The benefits of evidentiary pluralism are considered, including helping close the gap between the narrower clinical rehabilitation model and a public health disability model. KEY WORDS: evidence-based practice, evidentiary pluralism, rehabilitation psychology, randomized controlled trials PMID:19649150
Adaptive consensus of scale-free multi-agent system by randomly selecting links
NASA Astrophysics Data System (ADS)
Mou, Jinping; Ge, Huafeng
2016-06-01
This paper investigates an adaptive consensus problem for distributed scale-free multi-agent systems (SFMASs) with randomly selected links, where the degree of each node follows a power-law distribution. The random link selection is based on the assumption that every agent decides which links to its neighbours to select, according to the received data, with a certain probability. Accordingly, a novel consensus protocol based on the range of the received data is developed, and each node updates its state according to the protocol. Using an iterative method and the Cauchy inequality, the theoretical analysis shows that all errors among agents converge to zero, and several criteria for consensus are obtained. A numerical example demonstrates the reliability of the proposed method.
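The mechanism can be sketched with a toy simulation: each round, every agent keeps each neighbour link with some probability and averages over what it received. This is a simplified stand-in for the paper's protocol (which updates using the range of the received data, on a genuinely scale-free graph); the star topology and plain averaging here are assumptions for illustration:

```python
import random

def consensus_step(x, neighbours, p_select, rng):
    """One protocol round: each agent randomly selects links to its
    neighbours with probability p_select and averages its own state
    with the states received over the selected links."""
    new = []
    for i, xi in enumerate(x):
        picked = [j for j in neighbours[i] if rng.random() < p_select]
        vals = [xi] + [x[j] for j in picked]
        new.append(sum(vals) / len(vals))
    return new

rng = random.Random(0)
# Toy star topology standing in for a scale-free network (hub = node 0).
nbrs = {0: [1, 2, 3], 1: [0], 2: [0], 3: [0]}
x = [0.0, 1.0, 2.0, 3.0]
for _ in range(200):
    x = consensus_step(x, nbrs, p_select=0.7, rng=rng)
```

After enough rounds the spread max(x) − min(x) contracts toward zero, i.e. the agents reach consensus even though only a random subset of links is active each round.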
Effectiveness of Nursing Management Information Systems: A Systematic Review
Choi, Mona; Yang, You Lee
2014-01-01
Objectives The purpose of this study was to review evaluation studies of nursing management information systems (NMISs) and their outcome measures to examine system effectiveness. Methods For the systematic review, a literature search of the PubMed, CINAHL, Embase, and Cochrane Library databases was conducted to retrieve original articles published between 1970 and 2014. Medical Subject Headings (MeSH) terms included informatics, medical informatics, nursing informatics, medical informatics application, and management information systems for information systems, and evaluation studies and nursing evaluation research for evaluation research. Additionally, the truncation terms manag*, admin*, and nurs* were combined. Title, abstract, and full-text reviews were completed by two reviewers. Year, author, type of management system, study purpose, study design, data source, system users, study subjects, and outcomes were then extracted from the selected articles. The quality and risk of bias of the studies that were finally selected were assessed with the Risk of Bias Assessment Tool for Non-randomized Studies (RoBANS) criteria. Results Out of the 2,257 retrieved articles, a total of six articles were selected. These included two scheduling programs, two nursing cost-related programs, and two patient care management programs. The outcome measurements included usefulness, time saving, satisfaction, cost, attitude, usability, data quality/completeness/accuracy, and personnel work patterns. User satisfaction, time saving, and usefulness mostly showed positive findings. Conclusions The study results suggest that NMISs were effective in saving time and useful in nursing care. Because the reviewed studies lacked methodological quality, well-designed research, such as randomized controlled trials, should be conducted to more objectively evaluate the effectiveness of NMISs. PMID:25405060
Wright, Marvin N; Dankowski, Theresa; Ziegler, Andreas
2017-04-15
The most popular approach for analyzing survival data is the Cox regression model. The Cox model may, however, be misspecified, and its proportionality assumption may not always be fulfilled. An alternative approach for survival prediction is random forests for survival outcomes. The standard split criterion for random survival forests is the log-rank test statistic, which favors splitting variables with many possible split points. Conditional inference forests avoid this split variable selection bias. However, linear rank statistics are utilized by default in conditional inference forests to select the optimal splitting variable, and these cannot detect non-linear effects in the independent variables. An alternative is to use maximally selected rank statistics for split point selection. As in conditional inference forests, splitting variables are compared on the p-value scale. However, instead of the conditional Monte-Carlo approach used in conditional inference forests, p-value approximations are employed. We describe several p-value approximations and the implementation of the proposed random forest approach. A simulation study demonstrates that unbiased split variable selection is possible; however, there is a trade-off between unbiased split variable selection and runtime. In benchmark studies of prediction performance on simulated and real datasets, the new method performs better than random survival forests if informative dichotomous variables are combined with uninformative variables with more categories, and better than conditional inference forests if non-linear covariate effects are included. In a runtime comparison, the method proves to be computationally faster than both alternatives if a simple p-value approximation is used. Copyright © 2017 John Wiley & Sons, Ltd.
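A maximally selected rank statistic can be sketched for one candidate split variable: standardize the rank-sum of the left-hand group at every possible cutpoint and take the maximum. This simplified sketch uses plain ranks and the raw maximum; the survival-forest method described above would use log-rank scores and compare variables via a p-value approximation for the maximum, which is what removes the bias toward variables with many split points:

```python
def max_selected_rank_stat(x, ranks):
    """Maximally selected (simplified) rank statistic for one candidate
    split variable x: standardized rank-sum of the left group, maximised
    over all cutpoints. Assumes untied ranks 1..n in some order."""
    n = len(x)
    order = sorted(range(n), key=lambda i: x[i])
    total = sum(ranks)
    best, left_sum = 0.0, 0.0
    for k in range(n - 1):                  # candidate split after k+1 smallest x
        left_sum += ranks[order[k]]
        m = k + 1
        mean = m * total / n                # E[rank-sum] under the null
        var = m * (n - m) * (n + 1) / 12.0  # variance for untied ranks
        best = max(best, abs(left_sum - mean) / var ** 0.5)
    return best
```

A variable whose ordering matches the outcome ranks yields a larger maximum than one whose ranks are shuffled, so the statistic ranks candidate splitting variables by how cleanly some cutpoint separates the outcome.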
Mixing rates and limit theorems for random intermittent maps
NASA Astrophysics Data System (ADS)
Bahsoun, Wael; Bose, Christopher
2016-04-01
We study random transformations built from intermittent maps on the unit interval that share a common neutral fixed point. We focus mainly on random selections of Pomeau-Manneville-type maps {T_α}, using the full parameter range 0 < α < ∞ in general. We derive a number of results around a common theme that illustrates in detail how the constituent map that is fastest mixing (i.e. smallest α), combined with details of the randomizing process, determines the asymptotic properties of the random transformation. Our key result (theorem 1.1) establishes sharp estimates on the position of return time intervals for the quenched dynamics. The main applications of this estimate are to limit laws (in particular, CLT and stable laws, depending on the parameters chosen in the range 0 < α < 1) for the associated skew product; these are detailed in theorem 3.2. Since our estimates in theorem 1.1 also hold for 1 ≤ α < ∞, we study a second class of random transformations derived from piecewise affine Gaspard-Wang maps, prove existence of an infinite (σ-finite) invariant measure and study the corresponding correlation asymptotics. To the best of our knowledge, this latter kind of result is completely new in the setting of random transformations.
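The quenched (random-composition) dynamics can be illustrated numerically with the Liverani-Saussol-Vaienti parametrization, a standard representative of Pomeau-Manneville-type maps with a neutral fixed point at 0. The particular parameter values and the i.i.d. randomizing process below are illustrative assumptions, not the paper's exact setup:

```python
import random

def lsv_map(x, alpha):
    """Liverani-Saussol-Vaienti form of a Pomeau-Manneville-type map:
    intermittent branch x(1 + 2^alpha x^alpha) on [0, 1/2), expanding
    branch 2x - 1 on [1/2, 1). The fixed point 0 is neutral."""
    if x < 0.5:
        return x * (1.0 + (2.0 ** alpha) * x ** alpha)
    return 2.0 * x - 1.0

def random_orbit(x0, alphas, n, rng):
    """Quenched orbit of the random transformation: a fresh alpha is drawn
    i.i.d. from `alphas` at every step, composing randomly selected maps."""
    x, orbit = x0, []
    for _ in range(n):
        x = lsv_map(x, rng.choice(alphas))
        orbit.append(x)
    return orbit
```

Orbits linger near the neutral fixed point at 0 and escape at a rate governed by the smallest (fastest-mixing) α in the selection, the phenomenon the paper's return-time estimates quantify.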
Effect of Expanding Medicaid for Parents on Children’s Health Insurance Coverage
DeVoe, Jennifer E.; Marino, Miguel; Angier, Heather; O’Malley, Jean P.; Crawford, Courtney; Nelson, Christine; Tillotson, Carrie J.; Bailey, Steffani R.; Gallia, Charles; Gold, Rachel
2016-01-01
IMPORTANCE In the United States, health insurance is not universal. Observational studies show an association between uninsured parents and children. This association persisted even after expansions in child-only public health insurance. Oregon’s randomized Medicaid expansion for adults, known as the Oregon Experiment, created a rare opportunity to assess causality between parent and child coverage. OBJECTIVE To estimate the effect on a child’s health insurance coverage status when (1) a parent randomly gains access to health insurance and (2) a parent obtains coverage. DESIGN, SETTING, AND PARTICIPANTS Oregon Experiment randomized natural experiment assessing the results of Oregon’s 2008 Medicaid expansion. We used generalized estimating equation models to examine the longitudinal effect of a parent randomly selected to apply for Medicaid on their child’s Medicaid or Children’s Health Insurance Program (CHIP) coverage (intent-to-treat analyses). We used per-protocol analyses to understand the impact on children’s coverage when a parent was randomly selected to apply for and obtained Medicaid. Participants included 14 409 children aged 2 to 18 years whose parents participated in the Oregon Experiment. EXPOSURES For intent-to-treat analyses, the date a parent was selected to apply for Medicaid was considered the date the child was exposed to the intervention. In per-protocol analyses, exposure was defined as whether a selected parent obtained Medicaid. MAIN OUTCOMES AND MEASURES Children’s Medicaid or CHIP coverage, assessed monthly and in 6-month intervals relative to their parent’s selection date. RESULTS In the immediate period after selection, the number of covered children whose parents were selected to apply increased significantly from 3830 (61.4%) to 4152 (66.6%), compared with a nonsignificant change from 5049 (61.8%) to 5044 (61.7%) for children whose parents were not selected to apply.
Children whose parents were randomly selected to apply for Medicaid had 18% higher odds of being covered in the first 6 months after parent’s selection compared with children whose parents were not selected (adjusted odds ratio [AOR] = 1.18; 95% CI, 1.10–1.27). The effect remained significant during months 7 to 12 (AOR = 1.11; 95% CI, 1.03–1.19); months 13 to 18 showed a positive but not significant effect (AOR = 1.07; 95% CI, 0.99–1.14). Children whose parents were selected and obtained coverage had more than double the odds of having coverage compared with children whose parents were not selected and did not gain coverage (AOR = 2.37; 95% CI, 2.14–2.64). CONCLUSIONS AND RELEVANCE Children’s odds of having Medicaid or CHIP coverage increased when their parents were randomly selected to apply for Medicaid. Children whose parents were selected and subsequently obtained coverage benefited most. This study demonstrates a causal link between parents’ access to Medicaid coverage and their children’s coverage. PMID:25561041
Design of a Mobile Brain Computer Interface-Based Smart Multimedia Controller
Tseng, Kevin C.; Lin, Bor-Shing; Wong, Alice May-Kuen; Lin, Bor-Shyh
2015-01-01
Music is a way of expressing our feelings and emotions. Suitable music can positively affect people. However, current multimedia control methods, such as manual selection or automatic random mechanisms, which are now applied broadly in MP3 and CD players, cannot adaptively select suitable music according to the user’s physiological state. In this study, a brain computer interface-based smart multimedia controller was proposed to select music in different situations according to the user’s physiological state. Here, a commercial mobile tablet was used as the multimedia platform, and a wireless multi-channel electroencephalograph (EEG) acquisition module was designed for real-time EEG monitoring. A smart multimedia control program built into the multimedia platform was developed to analyze the user’s EEG features and select music according to his/her state. The relationship between the user’s state and music sorted by listener preference was also examined in this study. The experimental results show that real-time music biofeedback according to a user’s EEG features may positively improve the user’s attention state. PMID:25756862
Pugliese, Laura; Woodriff, Molly; Crowley, Olga; Lam, Vivian; Sohn, Jeremy; Bradley, Scott
2016-03-16
Rising rates of smartphone ownership highlight opportunities for improved mobile application usage in clinical trials. While current methods call for device provisioning, the "bring your own device" (BYOD) model permits participants to use personal phones, allowing for improved patient engagement and lowered operational costs. However, more evidence is needed to demonstrate the BYOD model's feasibility in research settings. The aim was to assess whether CentrosHealth, a mobile application designed to support trial compliance, produces different outcomes in medication adherence and application engagement when distributed through study-provisioned devices compared to the BYOD model. 87 participants were randomly assigned to use the mobile application or no intervention for a 28-day pilot study at a 2:1 randomization ratio (2 intervention:1 control) and asked to consume a twice-daily probiotic supplement. The application users were further randomized into two groups, receiving the application on either a personal ("BYOD") or study-provided smartphone. In-depth interviews were performed in a randomly selected subset of the intervention group (five BYOD and five study-provided smartphone users). The BYOD subgroup showed significantly greater engagement than study-provided phone users, as shown by higher application use frequency and duration over the study period. The BYOD subgroup also demonstrated a significant effect of engagement on medication adherence for the number of application sessions (unstandardized regression coefficient beta=0.0006, p=0.02) and time spent therein (beta=0.00001, p=0.03). Study-provided phone users showed higher initial adherence rates but a greater decline (5.7%) than BYOD users (0.9%) over the study period. In-depth interviews revealed that participants preferred the BYOD model over using study-provided devices.
Results indicate that the BYOD model is feasible in health research settings and improves participant experience, calling for further assessment of the BYOD model's validity. Although group differences in the decline of medication adherence were not statistically significant, the greater trend of decline among provisioned-device users warrants further investigation to determine whether the trend reaches significance over time. Significantly higher application engagement rates, and a significant effect of engagement on medication adherence, in the BYOD subgroup similarly imply that greater application engagement may correlate with better medication adherence over time.
Humphreys, Keith; Blodgett, Janet C.; Wagner, Todd H.
2014-01-01
Background Observational studies of Alcoholics Anonymous’ (AA) effectiveness are vulnerable to self-selection bias because individuals choose whether or not to attend AA. The present study therefore employed an innovative statistical technique to derive a selection bias-free estimate of AA’s impact. Methods Six datasets from 5 National Institutes of Health-funded randomized trials (one with two independent parallel arms) of AA facilitation interventions were analyzed using instrumental variables models. Alcohol dependent individuals in one of the datasets (n = 774) were analyzed separately from the rest of sample (n = 1582 individuals pooled from 5 datasets) because of heterogeneity in sample parameters. Randomization itself was used as the instrumental variable. Results Randomization was a good instrument in both samples, effectively predicting increased AA attendance that could not be attributed to self-selection. In five of the six data sets, which were pooled for analysis, increased AA attendance that was attributable to randomization (i.e., free of self-selection bias) was effective at increasing days of abstinence at 3-month (B = .38, p = .001) and 15-month (B = 0.42, p = .04) follow-up. However, in the remaining dataset, in which pre-existing AA attendance was much higher, further increases in AA involvement caused by the randomly assigned facilitation intervention did not affect drinking outcome. Conclusions For most individuals seeking help for alcohol problems, increasing AA attendance leads to short and long term decreases in alcohol consumption that cannot be attributed to self-selection. However, for populations with high pre-existing AA involvement, further increases in AA attendance may have little impact. PMID:25421504
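The instrumental-variables idea used in this study, with randomization itself as the instrument, reduces in the simplest binary-assignment case to the Wald estimator: the intent-to-treat effect on the outcome divided by the effect of assignment on attendance. The sketch below illustrates that calculation only; the function name and toy data are hypothetical, and the study itself fitted fuller instrumental-variables regression models.

```python
def wald_iv_estimate(z, a, y):
    """Wald instrumental-variables estimate of the effect of attendance
    `a` on outcome `y`, using binary random assignment `z` (0/1) as the
    instrument: (E[Y|Z=1] - E[Y|Z=0]) / (E[A|Z=1] - E[A|Z=0])."""
    def mean(v):
        return sum(v) / len(v)
    y1 = mean([yi for zi, yi in zip(z, y) if zi == 1])
    y0 = mean([yi for zi, yi in zip(z, y) if zi == 0])
    a1 = mean([ai for zi, ai in zip(z, a) if zi == 1])
    a0 = mean([ai for zi, ai in zip(z, a) if zi == 0])
    return (y1 - y0) / (a1 - a0)  # outcome change per unit of attendance
```

Because assignment is random, the numerator is free of self-selection bias, and dividing by the assignment-induced change in attendance attributes that effect to attendance itself — the logic the abstract describes.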
Defining an essence of structure determining residue contacts in proteins.
Sathyapriya, R; Duarte, Jose M; Stehr, Henning; Filippis, Ioannis; Lappe, Michael
2009-12-01
The network of native non-covalent residue contacts determines the three-dimensional structure of a protein. However, not all contacts are of equal structural significance, and little knowledge exists about a minimal, yet sufficient, subset required to define the global features of a protein. Characterisation of this "structural essence" has remained elusive so far: no algorithmic strategy has been devised to date that could outperform a random selection in terms of 3D reconstruction accuracy (measured as the Cα RMSD). It is not only of theoretical interest (i.e., for design of advanced statistical potentials) to identify the number and nature of essential native contacts; such a subset of spatial constraints is very useful in a number of novel experimental methods (like EPR) which rely heavily on constraint-based protein modelling. To derive accurate three-dimensional models from distance constraints, we implemented a reconstruction pipeline using distance geometry. We selected a test-set of 12 protein structures from the four major SCOP fold classes and performed our reconstruction analysis. As a reference set, series of random subsets (ranging from 10% to 90% of native contacts) were generated for each protein, and the reconstruction accuracy was computed for each subset. We have developed a rational strategy, termed "cone-peeling", that combines sequence features and network descriptors to select minimal subsets that outperform the reference sets. We present, for the first time, a rational strategy to derive a structural essence of residue contacts and provide an estimate of the size of this minimal subset. Our algorithm computes sparse subsets capable of determining the tertiary structure at approximately 4.8 Å Cα RMSD with as little as 8% of the native contacts (Cα-Cα and Cβ-Cβ). At the same time, a randomly chosen subset of native contacts needs about twice as many contacts to reach the same level of accuracy.
This "structural essence" opens new avenues in the fields of structure prediction, empirical potentials and docking.
Extinction from a paleontological perspective
NASA Technical Reports Server (NTRS)
Raup, D. M.
1993-01-01
Extinction of widespread species is common in evolutionary time (millions of years) but rare in ecological time (hundreds or thousands of years). In the fossil record, there appears to be a smooth continuum between background and mass extinction; and the clustering of extinctions at mass extinctions cannot be explained by the chance coincidence of independent events. Although some extinction is selective, much is apparently random in that survivors have no recognizable superiority over victims. Extinction certainly plays an important role in evolution, but whether it is constructive or destructive has not yet been determined.
Heßelmann, Andreas
2015-04-14
Molecular excitation energies have been calculated with time-dependent density-functional theory (TDDFT) using random-phase approximation Hessians augmented with exact exchange contributions in various orders. It has been observed that this approach yields fairly accurate local valence excitations if combined with accurate asymptotically corrected exchange-correlation potentials used in the ground-state Kohn-Sham calculations. The inclusion of long-range particle-particle with hole-hole interactions in the kernel leads to errors of 0.14 eV only for the lowest excitations of a selection of three alkene, three carbonyl, and five azabenzene molecules, thus surpassing the accuracy of a number of common TDDFT and even some wave function correlation methods. In the case of long-range charge-transfer excitations, the method typically underestimates accurate reference excitation energies by 8% on average, which is better than with standard hybrid-GGA functionals but worse compared to range-separated functional approximations.
Spielman, Stephanie J; Wilke, Claus O
2016-11-01
The mutation-selection model of coding sequence evolution has received renewed attention for its use in estimating site-specific amino acid propensities and selection coefficient distributions. Two computationally tractable mutation-selection inference frameworks have been introduced: One framework employs a fixed-effects, highly parameterized maximum likelihood approach, whereas the other employs a random-effects Bayesian Dirichlet Process approach. While both implementations follow the same model, they appear to make distinct predictions about the distribution of selection coefficients. The fixed-effects framework estimates a large proportion of highly deleterious substitutions, whereas the random-effects framework estimates that all substitutions are either nearly neutral or weakly deleterious. It remains unknown, however, how accurately each method infers evolutionary constraints at individual sites. Indeed, selection coefficient distributions pool all site-specific inferences, thereby obscuring a precise assessment of site-specific estimates. Therefore, in this study, we use a simulation-based strategy to determine how accurately each approach recapitulates the selective constraint at individual sites. We find that the fixed-effects approach, despite its extensive parameterization, consistently and accurately estimates site-specific evolutionary constraint. By contrast, the random-effects Bayesian approach systematically underestimates the strength of natural selection, particularly for slowly evolving sites. We also find that, despite the strong differences between their inferred selection coefficient distributions, the fixed- and random-effects approaches yield surprisingly similar inferences of site-specific selective constraint. We conclude that the fixed-effects mutation-selection framework provides the more reliable software platform for model application and future development. © The Author 2016. 
Published by Oxford University Press on behalf of the Society for Molecular Biology and Evolution. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Using trading strategies to detect phase transitions in financial markets.
Forró, Z; Woodard, R; Sornette, D
2015-04-01
We show that the log-periodic power law singularity model (LPPLS), a mathematical embodiment of positive feedbacks between agents and of their hierarchical dynamical organization, has a significant predictive power in financial markets. We find that LPPLS-based strategies significantly outperform the randomized ones and that they are robust with respect to a large selection of assets and time periods. The dynamics of prices thus markedly deviate from randomness in certain pockets of predictability that can be associated with bubble market regimes. Our hybrid approach, marrying finance with the trading strategies, and critical phenomena with LPPLS, demonstrates that targeting information related to phase transitions enables the forecast of financial bubbles and crashes punctuating the dynamics of prices.
Evolving artificial metalloenzymes via random mutagenesis
NASA Astrophysics Data System (ADS)
Yang, Hao; Swartz, Alan M.; Park, Hyun June; Srivastava, Poonam; Ellis-Guardiola, Ken; Upp, David M.; Lee, Gihoon; Belsare, Ketaki; Gu, Yifan; Zhang, Chen; Moellering, Raymond E.; Lewis, Jared C.
2018-03-01
Random mutagenesis has the potential to optimize the efficiency and selectivity of protein catalysts without requiring detailed knowledge of protein structure; however, introducing synthetic metal cofactors complicates the expression and screening of enzyme libraries, and activity arising from free cofactor must be eliminated. Here we report an efficient platform to create and screen libraries of artificial metalloenzymes (ArMs) via random mutagenesis, which we use to evolve highly selective dirhodium cyclopropanases. Error-prone PCR and combinatorial codon mutagenesis enabled multiplexed analysis of random mutations, including at sites distal to the putative ArM active site that are difficult to identify using targeted mutagenesis approaches. Variants that exhibited significantly improved selectivity for each of the cyclopropane product enantiomers were identified, and higher activity than previously reported ArM cyclopropanases obtained via targeted mutagenesis was also observed. This improved selectivity carried over to other dirhodium-catalysed transformations, including N-H, S-H and Si-H insertion, demonstrating that ArMs evolved for one reaction can serve as starting points to evolve catalysts for others.
Application of random coherence order selection in gradient-enhanced multidimensional NMR
NASA Astrophysics Data System (ADS)
Bostock, Mark J.; Nietlispach, Daniel
2016-03-01
Development of multidimensional NMR is essential to many applications, for example in high-resolution structural studies of biomolecules. Multidimensional techniques enable separation of NMR signals over several dimensions, improving signal resolution, whilst also allowing identification of new connectivities. However, these advantages come at a significant cost. The Fourier transform theorem requires acquisition of a grid of regularly spaced points to satisfy the Nyquist criterion, while frequency discrimination and acquisition of a pure-phase spectrum require acquisition of both quadrature components for each time point in every indirect (non-acquisition) dimension, multiplying the number of free-induction decays which must be acquired by a factor of 2^(N-1), where N is the number of dimensions. Compressed sensing (CS) ℓ1-norm minimisation in combination with non-uniform sampling (NUS) has been shown to be extremely successful in overcoming the Nyquist criterion. Previously, maximum entropy reconstruction has also been used to overcome the limitation of frequency discrimination, processing data acquired with only one quadrature component at a given time interval, known as random phase detection (RPD), allowing a factor-of-two reduction in the number of points for each indirect dimension (Maciejewski et al. 2011 PNAS 108 16640). However, whilst this approach can be easily applied in situations where the quadrature components are acquired as amplitude-modulated data, the same principle is not easily extended to phase-modulated (P-/N-type) experiments, where data are acquired in the form exp(iωt) or exp(-iωt), and which make up many of the multidimensional experiments used in modern NMR. Here we demonstrate a modification of the CS ℓ1-norm approach to allow random coherence order selection (RCS) for phase-modulated experiments; we generalise the nomenclature for RCS and RPD as random quadrature detection (RQD).
With this method, the power of RQD can be extended to the full suite of experiments available to modern NMR spectroscopy, allowing resolution enhancements for all indirect dimensions; alone or in combination with NUS, RQD can be used to improve experimental resolution, or shorten experiment times, of considerable benefit to the challenging applications undertaken by modern NMR.
Reference Conditions for Streams in the Grand Prairie Natural Division of Illinois
NASA Astrophysics Data System (ADS)
Sangunett, B.; Dewalt, R.
2005-05-01
As part of the Critical Trends Assessment Program (CTAP) of the Illinois Department of Natural Resources (IDNR), 12 potential reference-quality stream sites in the Grand Prairie Natural Division were evaluated in May 2004. This agriculturally dominated region, located in east central Illinois, is the most highly modified in the state. The quality of these sites was assessed using a modified Hilsenhoff Biotic Index, species richness of the Ephemeroptera, Plecoptera, and Trichoptera (EPT) insect orders, and a 12-parameter Habitat Quality Index (HQI). Illinois EPA high-quality fish stations, Illinois Natural History Survey insect collection data, and best professional knowledge were used to choose which streams to evaluate. For analysis, reference-quality streams were compared to 37 randomly selected meandering streams and 26 randomly selected channelized streams which were assessed by CTAP between 1997 and 2001. The results showed that the reference streams exceeded the randomly selected streams in the region in both taxa richness and habitat quality. Both random meandering sites and reference-quality sites increased in taxa richness and HQI as stream width increased. Randomly selected channelized streams had about the same taxa richness and HQI regardless of width.
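The classical (unmodified) Hilsenhoff Biotic Index underlying the assessment above is an abundance-weighted mean of taxon tolerance values, HBI = Σ nᵢtᵢ / N. The study used a modified form whose details are not given here; this sketch shows only the classical calculation, with hypothetical abundances and tolerance values.

```python
def hilsenhoff_biotic_index(counts_and_tolerances):
    """Classical Hilsenhoff Biotic Index: abundance-weighted mean of
    taxon tolerance values (0 = intolerant of organic pollution,
    10 = highly tolerant). Lower HBI indicates better water quality."""
    total = sum(n for n, _ in counts_and_tolerances)
    return sum(n * t for n, t in counts_and_tolerances) / total

# hypothetical sample: (abundance, tolerance value) for each taxon
sample = [(12, 1.5), (30, 4.0), (8, 7.0)]
hbi = hilsenhoff_biotic_index(sample)
```

A stream dominated by low-tolerance EPT taxa would score a low HBI, consistent with the reference-quality sites outperforming the randomly selected ones.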
Key Aspects of Nucleic Acid Library Design for in Vitro Selection
Vorobyeva, Maria A.; Davydova, Anna S.; Vorobjev, Pavel E.; Pyshnyi, Dmitrii V.; Venyaminova, Alya G.
2018-01-01
Nucleic acid aptamers capable of selectively recognizing their target molecules have nowadays been established as powerful and tunable tools for biospecific applications, be it therapeutics, drug delivery systems or biosensors. It is now generally acknowledged that in vitro selection enables one to generate aptamers to almost any target of interest. However, the success of selection and the affinity of the resulting aptamers depend to a large extent on the nature and design of an initial random nucleic acid library. In this review, we summarize and discuss the most important features of the design of nucleic acid libraries for in vitro selection such as the nature of the library (DNA, RNA or modified nucleotides), the length of a randomized region and the presence of fixed sequences. We also compare and contrast different randomization strategies and consider computer methods of library design and some other aspects. PMID:29401748
Efficient Constant-Time Complexity Algorithm for Stochastic Simulation of Large Reaction Networks.
Thanh, Vo Hong; Zunino, Roberto; Priami, Corrado
2017-01-01
Exact stochastic simulation is an indispensable tool for the quantitative study of biochemical reaction networks. The simulation realizes the time evolution of the model by randomly choosing a reaction to fire, with probability proportional to the reaction propensity, and updating the system state accordingly. Two computationally expensive tasks in simulating large biochemical networks are the selection of next reaction firings and the update of reaction propensities due to state changes. We present in this work a new exact algorithm to optimize both of these simulation bottlenecks. Our algorithm employs composition-rejection on the propensity bounds of reactions to select the next reaction firing. The selection of next reaction firings is independent of the number of reactions, while the update of propensities is skipped and performed only when necessary. It therefore provides favorable scaling of the computational complexity in simulating large reaction networks. We benchmark our new algorithm against the state-of-the-art algorithms available in the literature to demonstrate its applicability and efficiency.
Improved knowledge diffusion model based on the collaboration hypernetwork
NASA Astrophysics Data System (ADS)
Wang, Jiang-Pan; Guo, Qiang; Yang, Guang-Yong; Liu, Jian-Guo
2015-06-01
The process of absorbing knowledge is an essential element of innovation in firms and of adapting to changes in the competitive environment. In this paper, we present an improved knowledge diffusion hypernetwork (IKDH) model based on the idea that knowledge will spread from the target node to all its neighbors in terms of the hyperedge and knowledge stock. We apply the average knowledge stock V(t), the variance σ²(t), and the variance coefficient c(t) to evaluate the performance of knowledge diffusion. By analyzing different knowledge diffusion ways, selection ways of the highly knowledgeable nodes, hypernetwork sizes, and hypernetwork structures for the performance of knowledge diffusion, results show that the diffusion speed of the IKDH model is 3.64 times faster than that of the traditional knowledge diffusion (TKDH) model. Besides, it is three times faster to diffuse knowledge by randomly selecting "expert" nodes than by selecting large-hyperdegree nodes as "expert" nodes. Furthermore, either a closer network structure or a smaller network size results in faster knowledge diffusion.
Methods and analysis of realizing randomized grouping.
Hu, Liang-Ping; Bao, Xiao-Lei; Wang, Qi
2011-07-01
Randomization is one of the four basic principles of research design. The meaning of randomization includes two aspects: one is to randomly select samples from the population, which is known as random sampling; the other is to randomly group all the samples, which is called randomized grouping. Randomized grouping can be subdivided into three categories: completely, stratified and dynamically randomized grouping. This article mainly introduces the steps of complete randomization, the definition of dynamic randomization and the realization of random sampling and grouping by SAS software.
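The complete (simple) randomization described in this article, which it illustrates with SAS, can be sketched equivalently in Python: shuffle the sample, then deal subjects into groups. The function name and seed below are illustrative.

```python
import random

def complete_randomization(subject_ids, n_groups=2, seed=2011):
    """Completely randomized grouping: shuffle all subjects with a
    reproducible seed, then deal them into groups in round-robin
    order (a Python analogue of the SAS procedure in the article)."""
    rng = random.Random(seed)
    shuffled = list(subject_ids)
    rng.shuffle(shuffled)
    return [shuffled[i::n_groups] for i in range(n_groups)]
```

Stratified and dynamic randomization, the other two categories the article distinguishes, would respectively apply this step within each stratum or adapt assignment probabilities as enrollment proceeds.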
NASA Astrophysics Data System (ADS)
Johnson, Traci L.; Sharon, Keren
2016-11-01
Until now, systematic errors in strong gravitational lens modeling have been acknowledged but have never been fully quantified. Here, we launch an investigation into the systematics induced by constraint selection. We model the simulated cluster Ares 362 times using random selections of image systems with and without spectroscopic redshifts and quantify the systematics using several diagnostics: image predictability, accuracy of model-predicted redshifts, enclosed mass, and magnification. We find that for models with >15 image systems, the image plane rms does not decrease significantly when more systems are added; however, the rms values quoted in the literature may be misleading as to the ability of a model to predict new multiple images. The mass is well constrained near the Einstein radius in all cases, and systematic error drops to <2% for models using >10 image systems. Magnification errors are smallest along the straight portions of the critical curve, and the value of the magnification is systematically lower near curved portions. For >15 systems, the systematic error on magnification is ∼2%. We report no trend in magnification error with the fraction of spectroscopic image systems when selecting constraints at random; however, when using the same selection of constraints, increasing this fraction up to ∼0.5 will increase model accuracy. The results suggest that the selection of constraints, rather than quantity alone, determines the accuracy of the magnification. We note that spectroscopic follow-up of at least a few image systems is crucial because models without any spectroscopic redshifts are inaccurate across all of our diagnostics.
Longitudinal analyses of correlated response efficiencies of fillet traits in Nile tilapia.
Turra, E M; Fernandes, A F A; de Alvarenga, E R; Teixeira, E A; Alves, G F O; Manduca, L G; Murphy, T W; Silva, M A
2018-03-01
Recent studies with Nile tilapia have shown divergent results regarding the possibility of selecting on morphometric measurements to promote indirect genetic gains in fillet yield (FY). Indirect selection for fillet traits is important because these traits are only measurable after harvest. Random regression models are a powerful tool in association studies to identify the best time point to measure and select animals. Random regression models can also be applied in a multiple-trait approach to analyze indirect response to selection, which avoids the need to sacrifice candidate fish. Therefore, the aim of this study was to investigate the genetic relationships between several body measurements, weight, and fillet traits throughout the growth period and to evaluate the possibility of indirect selection for fillet traits in Nile tilapia. Data were collected from 2042 fish and were divided into two subsets. The first subset was used to estimate genetic parameters, including the permanent environmental effect for BW and body measurements (8758 records for each body measurement, as each fish was individually weighed and measured a maximum of six times). The second subset (2042 records for each trait) was used to estimate genetic correlations and heritabilities, which enabled the calculation of correlated response efficiencies between body measurements and the fillet traits. Heritability estimates across ages ranged from 0.05 to 0.5 for height, 0.02 to 0.48 for corrected length (CL), 0.05 to 0.68 for width, 0.08 to 0.57 for fillet weight (FW), and 0.12 to 0.42 for FY. All genetic correlation estimates between body measurements and FW were positive and strong (0.64 to 0.98). The estimates of genetic correlation between body measurements and FY were positive (except for CL at some ages), but weak to moderate (-0.08 to 0.68). These estimates resulted in correlated response efficiencies that were strong and favorable for FW but only moderate for FY.
These results indicate the possibility of achieving indirect genetic gains for FW by selecting for morphometric traits, but low efficiency for FY compared with direct selection.
Thermodynamic method for generating random stress distributions on an earthquake fault
Barall, Michael; Harris, Ruth A.
2012-01-01
This report presents a new method for generating random stress distributions on an earthquake fault, suitable for use as initial conditions in a dynamic rupture simulation. The method employs concepts from thermodynamics and statistical mechanics. A pattern of fault slip is considered to be analogous to a micro-state of a thermodynamic system. The energy of the micro-state is taken to be the elastic energy stored in the surrounding medium. Then, the Boltzmann distribution gives the probability of a given pattern of fault slip and stress. We show how to decompose the system into independent degrees of freedom, which makes it computationally feasible to select a random state. However, due to the equipartition theorem, straightforward application of the Boltzmann distribution leads to a divergence which predicts infinite stress. To avoid equipartition, we show that the finite strength of the fault acts to restrict the possible states of the system. By analyzing a set of earthquake scaling relations, we derive a new formula for the expected power spectral density of the stress distribution, which allows us to construct a computer algorithm free of infinities. We then present a new technique for controlling the extent of the rupture by generating a random stress distribution thousands of times larger than the fault surface, and selecting a portion which, by chance, has a positive stress perturbation of the desired size. Finally, we present a new two-stage nucleation method that combines a small zone of forced rupture with a larger zone of reduced fracture energy.
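The final step, drawing a random stress distribution whose power spectral density follows a prescribed formula, is an instance of spectral synthesis: shape white Gaussian noise in the Fourier domain, then transform back. The sketch below is a generic 1-D version with an assumed power-law PSD exponent, not the paper's own PSD formula (which is derived from earthquake scaling relations):

```python
import numpy as np

def random_field_with_psd(n, psd_exponent=-2.0, seed=0):
    """Spectral synthesis: shape white Gaussian noise so its power spectral
    density follows k**psd_exponent, then transform back to space."""
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(n)
    spec = np.fft.rfft(noise)
    k = np.fft.rfftfreq(n)
    k[0] = k[1]                      # avoid divide-by-zero at the mean (k = 0)
    amp = k ** (psd_exponent / 2.0)  # amplitude is the square root of the PSD
    field = np.fft.irfft(spec * amp, n)
    return field - field.mean()      # zero-mean stress perturbation

stress = random_field_with_psd(1024)
```

The paper's oversampling trick then amounts to generating a field much larger than the fault and keeping a sub-window that happens to contain a positive perturbation of the desired size.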
Hurtado, R; Celani, M; Geber, S
2016-10-01
To evaluate the effect of short-term hormone replacement therapy with 0.625 mg conjugated estrogens daily on endothelial function of healthy postmenopausal women, using flow-mediated dilation (FMD) of the brachial artery. We performed a double-blinded, randomized, controlled trial over 3 years. Randomization was performed using computer-generated sorting. All participants were blinded to the use of conjugated equine estrogens (CEE) or placebo, and FMD was assessed by a blinded examiner before and after 28 days of medication. A total of 64 healthy postmenopausal women were selected and randomly assigned to two treatment groups: 0.625 mg of CEE or placebo. FMD values were statistically different between the groups (p = 0.025): the group receiving CEE showed an FMD value of 0.011, compared to -0.082 in the placebo group. The two groups were additionally evaluated for homogeneity, using the Shapiro-Wilk test, with respect to variables that could interfere with endothelial function such as age (p = 0.729), body mass index (p = 0.891), and time since menopause (p = 0.724). Other variables, such as chronic vascular conditions, smoking, and sedentary lifestyle, were excluded during selection of the participants. Our results demonstrate that the administration of 0.625 mg CEE for 28 days is effective in improving vascular nitric oxide-dependent dilation assessed by FMD of the brachial artery in postmenopausal women. NCT01482416.
Vortex-Core Reversal Dynamics: Towards Vortex Random Access Memory
NASA Astrophysics Data System (ADS)
Kim, Sang-Koog
2011-03-01
An energy-efficient, ultrahigh-density, ultrafast, and nonvolatile solid-state universal memory is a long-held dream in the field of information-storage technology. Magnetic random access memory (MRAM) with a spin-transfer-torque switching mechanism is a strong candidate for realizing that dream, given its nonvolatility, infinite endurance, and fast random access. Magnetic vortices in patterned soft magnetic dots promise ground-breaking applications in information-storage devices, owing to the very stable twofold ground states of their upward or downward core magnetization orientation and plausible core switching by in-plane alternating magnetic fields or spin-polarized currents. However, the two most technologically important and challenging issues, low-power recording and reliable selection of each memory cell within existing cross-point architectures, have not yet been resolved for the basic operations of information storage, that is, writing (recording) and readout. Here, we experimentally demonstrate a magnetic vortex random access memory (VRAM) in the basic cross-point architecture. This unique VRAM offers reliable cell selection and low-power-consumption control of the switching of out-of-plane core magnetizations using specially designed rotating magnetic fields generated by two orthogonal, unipolar Gaussian-pulse currents with optimized pulse width and time delay. Our achievement of a new device based on a new material, that is, a medium composed of patterned vortex-state disks, together with new physics on ultrafast vortex-core switching dynamics, can stimulate further fruitful research on MRAMs based on vortex-state dot arrays.
Reward and uncertainty in exploration programs
NASA Technical Reports Server (NTRS)
Kaufman, G. M.; Bradley, P. G.
1971-01-01
A set of variables which are crucial to the economic outcome of petroleum exploration are discussed. These are treated as random variables; the values they assume indicate the number of successes that occur in a drilling program and determine, for a particular discovery, the unit production cost and net economic return if that reservoir is developed. In specifying the joint probability law for those variables, extreme and probably unrealistic assumptions are made. In particular, the different random variables are assumed to be independently distributed. Using postulated probability functions and specified parameters, values are generated for selected random variables, such as reservoir size. From this set of values the economic magnitudes of interest, net return and unit production cost are computed. This constitutes a single trial, and the procedure is repeated many times. The resulting histograms approximate the probability density functions of the variables which describe the economic outcomes of an exploratory drilling program.
Hazard Function Estimation with Cause-of-Death Data Missing at Random
Wang, Qihua; Dinse, Gregg E.; Liu, Chunling
2010-01-01
Hazard function estimation is an important part of survival analysis. Interest often centers on estimating the hazard function associated with a particular cause of death. We propose three nonparametric kernel estimators for the hazard function, all of which are appropriate when death times are subject to random censorship and censoring indicators can be missing at random. Specifically, we present a regression surrogate estimator, an imputation estimator, and an inverse probability weighted estimator. All three estimators are uniformly strongly consistent and asymptotically normal. We derive asymptotic representations of the mean squared error and the mean integrated squared error for these estimators and we discuss a data-driven bandwidth selection method. A simulation study, conducted to assess finite sample behavior, demonstrates that the proposed hazard estimators perform relatively well. We illustrate our methods with an analysis of some vascular disease data. PMID:22267874
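As background for the three estimators, the baseline case with fully observed censoring indicators is a Ramlau-Hansen type estimator: smooth the Nelson-Aalen increments with a kernel. The sketch below assumes no ties and complete cause-of-death data; the paper's contribution is extending this to indicators missing at random:

```python
import numpy as np

def kernel_hazard(times, events, grid, bandwidth):
    """Ramlau-Hansen type kernel hazard estimate with a Gaussian kernel:
    smooth the Nelson-Aalen increments d_i / Y(t_i) over the time grid.
    times: observed times (event or censoring); events: 1 = death from cause of interest.
    """
    times = np.asarray(times, dtype=float)
    events = np.asarray(events, dtype=float)
    order = np.argsort(times)
    times, events = times[order], events[order]
    n = len(times)
    at_risk = n - np.arange(n)        # Y(t_i) for the sorted times
    increments = events / at_risk     # Nelson-Aalen jumps
    est = np.zeros(len(grid))
    for j, t in enumerate(grid):
        u = (t - times) / bandwidth
        kern = np.exp(-0.5 * u ** 2) / np.sqrt(2 * np.pi)
        est[j] = np.sum(kern * increments) / bandwidth
    return est

grid = np.linspace(0.5, 4.5, 9)
haz = kernel_hazard([1, 2, 2.5, 3, 4], [1, 0, 1, 1, 0], grid, bandwidth=1.0)
```

The bandwidth plays the role discussed in the abstract's data-driven selection step; too small gives a spiky estimate, too large oversmooths.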
Predicting the random drift of MEMS gyroscope based on K-means clustering and OLS RBF Neural Network
NASA Astrophysics Data System (ADS)
Wang, Zhen-yu; Zhang, Li-jie
2017-10-01
Measurement error of a sensor can be effectively compensated through prediction. To address the large random drift error of the MEMS (Micro Electro Mechanical System) gyroscope, an improved learning algorithm for the Radial Basis Function (RBF) Neural Network (NN), based on K-means clustering and Orthogonal Least Squares (OLS), is proposed in this paper. The algorithm first selects typical samples as the initial cluster centers of the RBF NN, then generates candidate centers with the K-means algorithm, and finally optimizes the candidate centers with the OLS algorithm, which makes the network structure simpler and the prediction performance better. Experimental results show that the proposed K-means clustering OLS learning algorithm can predict the random drift of the MEMS gyroscope effectively, with a prediction error of 9.8019e-7 °/s and a prediction time of 2.4169e-6 s.
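A simplified sketch of the pipeline: a few Lloyd iterations of k-means place the RBF centers, and the output weights are then fitted by plain least squares. This stands in for the paper's OLS forward-selection step, which additionally prunes candidate centers; the kernel width and center count here are illustrative:

```python
import numpy as np

def kmeans_centers(x, k, iters=20, seed=0):
    """A few Lloyd iterations to place k RBF centers on 1-D training inputs."""
    rng = np.random.default_rng(seed)
    centers = rng.choice(x, size=k, replace=False)
    for _ in range(iters):
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = x[labels == j].mean()
    return np.sort(centers)

def rbf_fit_predict(x_train, y_train, x_test, k=5, width=0.5):
    """Gaussian RBF features at k-means centers; output weights by least squares."""
    centers = kmeans_centers(x_train, k)
    phi = lambda x: np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))
    w, *_ = np.linalg.lstsq(phi(x_train), y_train, rcond=None)
    return phi(x_test) @ w

# toy stand-in for a drift signal: a smooth trend plus noise
x = np.linspace(0, 2 * np.pi, 80)
y = np.sin(x) + 0.05 * np.random.default_rng(1).standard_normal(80)
pred = rbf_fit_predict(x, y, x)
```

In the paper's setting, the input would be a window of past drift samples and the target the next sample, so the fitted network acts as a one-step-ahead predictor.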
Evolution of basic equations for nearshore wave field
ISOBE, Masahiko
2013-01-01
In this paper, a systematic, overall view of theories for periodic waves of permanent form, such as Stokes and cnoidal waves, is described first with their validity ranges. To deal with random waves, a method for estimating directional spectra is given. Then, various wave equations are introduced according to the assumptions included in their derivations. The mild-slope equation is derived for combined refraction and diffraction of linear periodic waves. Various parabolic approximations and time-dependent forms are proposed to include randomness and nonlinearity of waves as well as to simplify numerical calculation. Boussinesq equations are the equations developed for calculating nonlinear wave transformations in shallow water. Nonlinear mild-slope equations are derived as a set of wave equations to predict transformation of nonlinear random waves in the nearshore region. Finally, wave equations are classified systematically for a clear theoretical understanding and appropriate selection for specific applications. PMID:23318680
Observational studies of patients in the emergency department: a comparison of 4 sampling methods.
Valley, Morgan A; Heard, Kennon J; Ginde, Adit A; Lezotte, Dennis C; Lowenstein, Steven R
2012-08-01
We evaluate the ability of 4 sampling methods to generate representative samples of the emergency department (ED) population. We analyzed the electronic records of 21,662 consecutive patient visits at an urban, academic ED. From this population, we simulated different models of study recruitment in the ED by using 2 sample sizes (n=200 and n=400) and 4 sampling methods: true random, random 4-hour time blocks by exact sample size, random 4-hour time blocks by a predetermined number of blocks, and convenience or "business hours." For each method and sample size, we obtained 1,000 samples from the population. Using χ² tests, we measured the number of statistically significant differences between the sample and the population for 8 variables (age, sex, race/ethnicity, language, triage acuity, arrival mode, disposition, and payer source). Then, for each variable, method, and sample size, we compared the proportion of the 1,000 samples that differed from the overall ED population to the expected proportion (5%). Only the true random samples represented the population with respect to sex, race/ethnicity, triage acuity, mode of arrival, language, and payer source in at least 95% of the samples. Patient samples obtained using random 4-hour time blocks and business hours sampling systematically differed from the overall ED patient population for several important demographic and clinical variables. However, the magnitude of these differences was not large. Common sampling strategies selected for ED-based studies may affect parameter estimates for several representative population variables. However, the potential for bias for these variables appears small. Copyright © 2012. Published by Mosby, Inc.
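Two of the four recruitment models can be sketched on a toy visit census. The `hour` field and the triangular arrival-time distribution are illustrative assumptions, not the study's data:

```python
import random

def true_random_sample(visits, n, seed=0):
    """Simple random sample of n visits from the full ED census."""
    return random.Random(seed).sample(visits, n)

def time_block_sample(visits, n, seed=0):
    """Random 4-hour time blocks: keep drawing blocks until n visits accrue."""
    rng = random.Random(seed)
    out = []
    while len(out) < n:
        start = rng.randrange(0, 24, 4)   # block start hour: 0, 4, ..., 20
        out.extend(v for v in visits if start <= v["hour"] < start + 4)
    return out[:n]

# toy census: arrival hours drawn nonuniformly to mimic a daytime peak
rng = random.Random(42)
visits = [{"hour": min(23, int(rng.triangular(0, 24, 14)))} for _ in range(2000)]
sample_a = true_random_sample(visits, 200)
sample_b = time_block_sample(visits, 200)
```

Because block sampling over-represents whichever hours the drawn blocks cover, any patient characteristic correlated with arrival time (acuity, payer source) can drift from the population value, which is the bias the study quantifies.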
Chen, Kai; Lynen, Frédéric; De Beer, Maarten; Hitzel, Laure; Ferguson, Paul; Hanna-Brown, Melissa; Sandra, Pat
2010-11-12
Stationary phase optimized selectivity liquid chromatography (SOSLC) is a promising technique to optimize the selectivity of a given separation by using a combination of different stationary phases. Previous work has shown that SOSLC offers excellent possibilities for method development, especially after the recent modification towards linear gradient SOSLC. The present work is aimed at developing and extending the SOSLC approach towards selectivity optimization and method development for green chromatography. Contrary to current LC practices, a green mobile phase (water/ethanol/formic acid) is hereby preselected and the composition of the stationary phase is optimized under a given gradient profile to obtain baseline resolution of all target solutes in the shortest possible analysis time. With the algorithm adapted to the high viscosity property of ethanol, the principle is illustrated with a fast, full baseline resolution for a randomly selected mixture composed of sulphonamides, xanthine alkaloids and steroids. Copyright © 2010 Elsevier B.V. All rights reserved.
An active learning representative subset selection method using net analyte signal.
He, Zhonghai; Ma, Zhenhe; Luan, Jingmin; Cai, Xi
2018-05-05
To guarantee accurate predictions, representative samples are needed when building a calibration model for spectroscopic measurements. However, in general, it is not known whether a sample is representative prior to measuring its concentration, which is both time-consuming and expensive. In this paper, a method to determine whether a sample should be selected into a calibration set is presented. The selection is based on the difference of Euclidean norm of net analyte signal (NAS) vector between the candidate and existing samples. First, the concentrations and spectra of a group of samples are used to compute the projection matrix, NAS vector, and scalar values. Next, the NAS vectors of candidate samples are computed by multiplying projection matrix with spectra of samples. Scalar value of NAS is obtained by norm computation. The distance between the candidate set and the selected set is computed, and samples with the largest distance are added to selected set sequentially. Last, the concentration of the analyte is measured such that the sample can be used as a calibration sample. Using a validation test, it is shown that the presented method is more efficient than random selection. As a result, the amount of time and money spent on reference measurements is greatly reduced. Copyright © 2018 Elsevier B.V. All rights reserved.
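The selection loop can be sketched in a simplified form: build an orthogonal projector that removes the interferent contributions, score each candidate by the norm of its NAS, and greedily add the candidate farthest (in NAS norm) from the already-selected set. All shapes and data below are synthetic, and the projector construction via pseudoinverse is one common formulation rather than necessarily the paper's exact one:

```python
import numpy as np

def nas_projection(interferent_spectra):
    """Projector onto the orthogonal complement of the interferent spectra,
    so P @ s keeps only the part of spectrum s unique to the analyte (its NAS)."""
    Z = np.atleast_2d(interferent_spectra).T   # columns span the interferent space
    return np.eye(Z.shape[0]) - Z @ np.linalg.pinv(Z)

def select_representatives(candidate_spectra, interferents, n_select):
    """Greedily add the candidate whose NAS norm is farthest from those
    of the already-selected samples."""
    P = nas_projection(interferents)
    norms = np.linalg.norm(candidate_spectra @ P.T, axis=1)
    selected = [int(np.argmax(norms))]         # start from the largest NAS
    while len(selected) < n_select:
        dist = [min(abs(norms[i] - norms[j]) for j in selected)
                for i in range(len(norms))]
        for j in selected:
            dist[j] = -1.0                     # never re-pick a sample
        selected.append(int(np.argmax(dist)))
    return selected

rng = np.random.default_rng(0)
spectra = rng.random((30, 12))                 # 30 candidates, 12 wavelengths
interferents = rng.random((2, 12))             # two interfering components
picked = select_representatives(spectra, interferents, 5)
```

Only the samples in `picked` would then need reference concentration measurements, which is where the claimed savings over random selection come from.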
Early Discharge and Home Care After Unplanned Cesarean Birth: Nursing Care Time
Brooten, Dorothy; Knapp, Helen; Borucki, Lynne; Jacobsen, Barbara; Finkler, Steven; Arnold, Lauren; Mennuti, Michael
2013-01-01
Objective: This study examined the mean nursing time spent providing discharge planning and home care to women who delivered by unplanned cesarean birth and examined differences in nursing time required by women with and without morbidity. Design: A secondary analysis of nursing time from a randomized trial of transitional care (discharge planning and home follow-up) provided to women after cesarean delivery. Setting: An urban tertiary-care hospital. Patients: The sample (N = 61) of black and white women who had unplanned cesarean births and their full-term newborns was selected randomly. Forty-four percent of the women had experienced pregnancy complications. Interventions: Advanced practice nurses provided discharge planning and 8-week home follow-up consisting of home visits, telephone outreach, and daily telephone availability. Outcome Measure: Nursing time required was dictated by patient need and provider judgment rather than by reimbursement plan. Results: More than half of the women required more than two home visits; mean home visit time was 1 hour. For women who experienced morbidity, mean discharge planning time was 20 minutes more and mean home visit time 40 minutes more. Conclusions: Current health care services that provide one or two 1-hour home visits to childbearing women at high risk may not be meeting the education and resource needs of this group. PMID:8892128
On the information content of hydrological signatures and their relationship to catchment attributes
NASA Astrophysics Data System (ADS)
Addor, Nans; Clark, Martyn P.; Prieto, Cristina; Newman, Andrew J.; Mizukami, Naoki; Nearing, Grey; Le Vine, Nataliya
2017-04-01
Hydrological signatures, which are indices characterizing hydrologic behavior, are increasingly used for the evaluation, calibration and selection of hydrological models. Their key advantage is to provide more direct insights into specific hydrological processes than aggregated metrics (e.g., the Nash-Sutcliffe efficiency). A plethora of signatures now exists, which enable characterizing a variety of hydrograph features, but also makes the selection of signatures for new studies challenging. Here we propose that the selection of signatures should be based on their information content, which we estimated using several approaches, all leading to similar conclusions. To explore the relationship between hydrological signatures and the landscape, we extended a previously published data set of hydrometeorological time series for 671 catchments in the contiguous United States, by characterizing the climatic conditions, topography, soil, vegetation and stream network of each catchment. This new catchment attributes data set will soon be in open access, and we are looking forward to introducing it to the community. We used this data set in a data-learning algorithm (random forests) to explore whether hydrological signatures could be inferred from catchment attributes alone. We find that some signatures can be predicted remarkably well by random forests and, interestingly, the same signatures are well captured when simulating discharge using a conceptual hydrological model. We discuss what this result reveals about our understanding of hydrological processes shaping hydrological signatures. We also identify which catchment attributes exert the strongest control on catchment behavior, in particular during extreme hydrological events. Overall, climatic attributes have the most significant influence, and strongly condition how well hydrological signatures can be predicted by random forests and simulated by the hydrological model. 
In contrast, soil characteristics at the catchment scale are not found to be significant predictors by random forests, which raises questions on how to best use soil data for hydrological modeling, for instance for parameter estimation. We finally demonstrate that signatures with high spatial variability are poorly captured by random forests and model simulations, which makes their regionalization delicate. We conclude with a ranking of signatures based on their information content, and propose that the signatures with high information content are best suited for model calibration, model selection and understanding hydrologic similarity.
Crampin, A C; Mwinuka, V; Malema, S S; Glynn, J R; Fine, P E
2001-01-01
Selection bias, particularly of controls, is common in case-control studies and may materially affect the results. Methods of control selection should be tailored both for the risk factors and disease under investigation and for the population being studied. We present here a control selection method devised for a case-control study of tuberculosis in rural Africa (Karonga, northern Malawi) that selects an age/sex frequency-matched random sample of the population, with a geographical distribution in proportion to the population density. We also present an audit of the selection process, and discuss the potential of this method in other settings.
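The age/sex frequency-matching step of such a control selection can be sketched as follows. This covers only the frequency matching, not the geographic weighting by population density; the dictionary keys and strata are hypothetical:

```python
import random
from collections import Counter

def frequency_matched_controls(cases, population, ratio=1, seed=0):
    """Draw controls so their age-group/sex frequencies match the cases'.
    cases, population: lists of dicts with 'age_group' and 'sex' keys."""
    rng = random.Random(seed)
    need = Counter((c["age_group"], c["sex"]) for c in cases)
    controls = []
    for stratum, n_cases in need.items():
        pool = [p for p in population
                if (p["age_group"], p["sex"]) == stratum and p not in cases]
        controls.extend(rng.sample(pool, n_cases * ratio))
    return controls

strata = [(a, s) for a in ("15-34", "35-54", "55+") for s in ("F", "M")]
population = [{"age_group": a, "sex": s, "id": 100 * k + j}
              for k, (a, s) in enumerate(strata) for j in range(100)]
cases = [population[0], population[150], population[301], population[550]]
controls = frequency_matched_controls(cases, population, ratio=2)
```

Within each stratum the draw is a simple random sample, which preserves the representativeness the authors audit for.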
Grant, Edward M.; Young, Deborah Rohm; Wu, Tong Tong
2015-01-01
We examined associations among longitudinal, multilevel variables and girls' physical activity to determine the important predictors of physical activity change at different adolescent ages. The Trial of Activity for Adolescent Girls 2 study (Maryland) contributed participants from 8th (2009) to 11th grade (2011) (n=561). Questionnaires were used to obtain demographic and psychosocial information (individual- and social-level variables); height, weight, and triceps skinfold to assess body composition; interviews and surveys for school-level data; and self-report for neighborhood-level variables. Moderate to vigorous physical activity minutes were assessed from accelerometers. A doubly regularized linear mixed effects model was used for the longitudinal multilevel data to identify the most important covariates for physical activity. Three fixed effects at the individual level and one random effect at the school level were chosen from an initial total of 66 variables, consisting of 47 fixed effects and 19 random effects, in addition to the time effect. Self-management strategies, perceived barriers, and social support from friends were the three selected fixed effects, and whether intramural or interscholastic programs were offered in middle school was the selected random effect. Psychosocial factors and friend support, plus a school's physical activity environment, affect adolescent girls' moderate to vigorous physical activity longitudinally. PMID:25928064
Papageorgiou, Spyridon N; Kloukos, Dimitrios; Petridis, Haralampos; Pandis, Nikolaos
2015-01-01
The objective of this study was to assess the risk of bias of randomized controlled trials (RCTs) published in prosthodontic and implant dentistry journals. The last 30 issues of 9 journals in the field of prosthodontic and implant dentistry (Clinical Implant Dentistry and Related Research, Clinical Oral Implants Research, Implant Dentistry, International Journal of Oral & Maxillofacial Implants, International Journal of Periodontics and Restorative Dentistry, International Journal of Prosthodontics, Journal of Dentistry, Journal of Oral Rehabilitation, and Journal of Prosthetic Dentistry) were hand-searched for RCTs. Risk of bias was assessed using the Cochrane Collaboration's risk of bias tool and analyzed descriptively. From the 3,667 articles screened, a total of 147 RCTs were identified and included. The number of published RCTs increased with time. The overall distribution of a high risk of bias assessment varied across the domains of the Cochrane risk of bias tool: 8% for random sequence generation, 18% for allocation concealment, 41% for masking, 47% for blinding of outcome assessment, 7% for incomplete outcome data, 12% for selective reporting, and 41% for other biases. The distribution of high risk of bias for RCTs published in the selected prosthodontic and implant dentistry journals varied among journals and ranged from 8% to 47%, which can be considered as substantial.
A Probabilistic Approach to Uncertainty Analysis in NTPR Radiation Dose Assessments
2009-11-01
The approach is Monte Carlo: by generating large numbers of simulated case histories, and on each history selecting random values from each of the probability density functions, "numerical experiments" can be conducted to derive the uncertainty in the dose estimates. The underlying calculations cover the production and decay of fission products, activation products, and actinides. The report acknowledges the members of the Subcommittee on Dose Reconstruction of the Veterans' Advisory Board on Dose Reconstruction (VBDR) for their critical review.
ERIC Educational Resources Information Center
Bloom, Howard S.; Rico, James A.
This paper describes a place-based research demonstration program to promote and sustain employment among residents of selected public housing developments in U.S. cities. Because all eligible residents of the participating public housing developments were free to take part in the program, it was not possible to study its impacts in a classical…
ERIC Educational Resources Information Center
Lutz, William D.
The question of whether a significant amount of time could be saved if freshman composition were taught with a programed text was studied. Two sections of English I were randomly selected from the regular class schedule. Class A was taught using the usual syllabus and texts. Class B was taught using the same syllabus and texts with one exception.…
Effect of the federal estate tax on nonindustrial private forest holdings
John L. Greene; Steven H. Bullard; Tamara L. Cushing; Theodore Beauvais
2006-01-01
Data for this study were collected using a questionnaire mailed to randomly selected members of two forest owner organizations. Among the key findings is that 38% of forest estates owed federal estate tax, a rate many times higher than US estates in general. In 28% of the cases where estate tax was due, timber or land was sold because other assets were not adequate. In...
Factors that Can Help Select the Timing for Decompressive Hemicraniectomy for Malignant MCA Stroke.
Kamran, Saadat; Salam, Abdul; Akhtar, Naveed; Alboudi, Ayman; Kamran, Kainat; Singh, Rajvir; Amir, Numan; Inshasi, Jihad; Qidwai, Uwais; Malik, Rayaz A; Shuaib, Ashfaq
2018-03-06
In patients with malignant middle cerebral artery (MMCA) stroke, a vital clinically relevant question is determining how quickly the infarction evolves in order to select the time for decompressive hemicraniectomy [DHC]. A retrospective, multicenter, cross-sectional study of patients referred for DHC, based on the criteria of randomized controlled trials, was undertaken to identify factors for selecting the timing of DHC in MMCA stroke, stratified by time [< 48, 48-72, > 72 h]. Infarction volume and infarct growth rate [IGR] were measured on all CT scans. One hundred eighty-two patients [135 underwent DHC and 47 survived without DHC] were included in the analysis. After multivariate adjustment, the factors showing the strongest independent association with DHC were age < 55 years, septum pellucidum deviation, temporal lobe involvement, MCA with additional infarcts, and IGR on the second CT. Of the five factors identified, different combinations of determining factors were observed in each subgroup. Both the first and second IGRs were highest in the < 48 h group, followed by the 48-72 h and > 72 h groups [p < 0.001]. Patients who survived without surgery had the slowest IGRs. There was no association between time to DHC and infarct volume, although infarct volume was lower in patients who survived without DHC compared to the DHC subgroups. We identify the major risk factors associated with DHC in time-stratified subgroups of patients with MMCA stroke. Evaluation of IGRs between the first and second scans and, when possible, the second and third scans can help in selecting the timing of hemicraniectomy.
SSRIs and ejaculation: a double-blind, randomized, fixed-dose study with paroxetine and citalopram.
Waldinger, M D; Zwinderman, A H; Olivier, B
2001-12-01
Selective serotonin reuptake inhibitors (SSRIs) are known to induce delayed orgasm and ejaculation. However, different SSRIs may differentially delay ejaculation. A double-blind, fixed-dose study in healthy men with lifelong rapid ejaculation was performed to evaluate potential differences between clinically relevant doses of two selective serotonin reuptake inhibitors, paroxetine and citalopram, in their effects on ejaculation. Thirty men with an intravaginal ejaculation latency time (IELT) of less than 1 minute were randomly assigned to receive paroxetine (20 mg/day) or citalopram (20 mg/day) for 5 weeks, after taking half the dosage in the first week. During the 1-month baseline and 6-week treatment period, IELTs were measured at home by using a stopwatch procedure. The trial was completed by 23 men. Analysis of variance revealed a between-group difference in the evolution of IELT delay over time (p = 0.0004); the IELT after paroxetine and citalopram gradually increased from 18 and 21 seconds to approximately 170 and 44 seconds, respectively. Paroxetine 20 mg/day exerted a strong delay (8.9-fold increase), whereas citalopram 20 mg/day mildly delayed ejaculation (1.8-fold increase). These results indicate that paroxetine leads to a significant delay in orgasm and ejaculation, whereas citalopram seems to have less of an effect.
Ash, A; Schwartz, M; Payne, S M; Restuccia, J D
1990-11-01
Medical record review is increasing in importance as the need to identify and monitor utilization and quality-of-care problems grows. To conserve resources, reviews are usually performed on a subset of cases. If judgment is used to identify subgroups for review, this raises the following questions: How should subgroups be determined, particularly since the locus of problems can change over time? What standard of comparison should be used in interpreting rates of problems found in subgroups? How can population problem rates be estimated from observed subgroup rates? How can the bias be avoided that arises because reviewers know that selected cases are suspected of having problems? How can changes in problem rates over time be interpreted when evaluating intervention programs? Simple random sampling, an alternative to subgroup review, overcomes the problems implied by these questions but is inefficient. The Self-Adapting Focused Review System (SAFRS), introduced and described here, provides an adaptive approach to record selection that is based upon model-weighted probability sampling. It retains the desirable inferential properties of random sampling while allowing reviews to be concentrated on cases currently thought most likely to be problematic. Model development and evaluation are illustrated using hospital data to predict inappropriate admissions.
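The core idea of model-weighted probability sampling can be sketched as follows: each record is included with a known, model-informed probability, and a Horvitz-Thompson style weighting then recovers an unbiased population estimate despite the oversampling of suspect cases. This is a generic sketch of that principle, not SAFRS itself; the probabilities and problem rates are synthetic:

```python
import random

def weighted_review_sample(probs, seed=0):
    """Model-weighted probability sampling: include record i with its own
    known probability probs[i] (higher for cases the model flags as risky)."""
    rng = random.Random(seed)
    return [i for i, p in enumerate(probs) if rng.random() < p]

def estimate_problem_rate(problem_flags, probs, sampled):
    """Horvitz-Thompson estimate of the population problem rate:
    weight each reviewed problem by 1 / its inclusion probability."""
    total = sum(problem_flags[i] / probs[i] for i in sampled)
    return total / len(probs)

# toy population: 1000 records, true ~10% problem rate; in practice the
# inclusion probabilities would come from a predictive model, not the truth
rng = random.Random(1)
flags = [1 if rng.random() < 0.10 else 0 for _ in range(1000)]
probs = [0.6 if f else 0.15 for f in flags]  # oversample suspected problems
sampled = weighted_review_sample(probs)
rate = estimate_problem_rate(flags, probs, sampled)
```

Because the inclusion probabilities are known, the estimate stays unbiased even though most review effort goes to high-risk records, which is the property the abstract credits SAFRS with retaining.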
The Effect of Very High versus Very Low Sustained Loading on the Lower Back and Knees in Middle Life
Milgrom, Yael; Constantini, Naama; Applbaum, Yaakov; Radeva-Petrova, Denitsa; Finestone, Aharon S.
2013-01-01
To evaluate the effect of the extremes of long-term high and low physical activity on musculoskeletal health in middle age, a historical cohort study was performed. The MRI knee and back findings of 25 randomly selected subjects who were inducted into the armed forces in 1983 and served at least 3 years as elite infantry soldiers were compared 25 years later with those of 20 randomly selected subjects who were deferred from army service for full-time religious studies at the same time. Both cohorts were drawn from the same genetic background. The two primary outcome measures were degenerative lumbar disc disease evaluated by the Pfirrmann score and degenerative knee changes evaluated by the WORMS score. At the 25-year follow-up, the mean Pfirrmann score (8.6) for the L1 to S1 level of the elite infantry group was significantly higher than that of the sedentary group (6.7) (P = 0.003). There was no statistically significant difference in WORMS knee scores between the two cohorts (P = 0.7). In spite of the much greater musculoskeletal loading history of the elite infantry cohort, only their lumbar spines, but not their knees, showed increased degenerative changes at middle age by MRI criteria. PMID:24093109
Random forest (RF) modeling has emerged as an important statistical learning method in ecology due to its exceptional predictive performance. However, for large and complex ecological datasets there is limited guidance on variable selection methods for RF modeling. Typically, e...
Odegård, J; Klemetsdal, G; Heringstad, B
2005-04-01
Several selection criteria for reducing incidence of mastitis were developed from a random regression sire model for test-day somatic cell score (SCS). For comparison, sire transmitting abilities were also predicted based on a cross-sectional model for lactation mean SCS. Only first-crop daughters were used in genetic evaluation of SCS, and the different selection criteria were compared based on their correlation with incidence of clinical mastitis in second-crop daughters (measured as mean daughter deviations). Selection criteria were predicted based on both complete and reduced first-crop daughter groups (261 or 65 daughters per sire, respectively). For complete daughter groups, predicted transmitting abilities at around 30 d in milk showed the best predictive ability for incidence of clinical mastitis, closely followed by average predicted transmitting abilities over the entire lactation. Both of these criteria were derived from the random regression model. These selection criteria improved accuracy of selection by approximately 2% relative to a cross-sectional model. However, for reduced daughter groups, the cross-sectional model yielded increased predictive ability compared with the selection criteria based on the random regression model. This result may be explained by the cross-sectional model being more robust, i.e., less sensitive to precision of (co)variance components estimates and effects of data structure.
Ensemble-type numerical uncertainty information from single model integrations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rauser, Florian, E-mail: florian.rauser@mpimet.mpg.de; Marotzke, Jochem; Korn, Peter
2015-07-01
We suggest an algorithm that quantifies the discretization error of time-dependent physical quantities of interest (goals) for numerical models of geophysical fluid dynamics. The goal discretization error is estimated using a sum of weighted local discretization errors. The key feature of our algorithm is that these local discretization errors are interpreted as realizations of a random process. The random process is determined by the model and the flow state. From a class of local error random processes we select a suitable specific random process by integrating the model over a short time interval at different resolutions. The weights of the influences of the local discretization errors on the goal are modeled as goal sensitivities, which are calculated via automatic differentiation. The integration of the weighted realizations of local error random processes yields a posterior ensemble of goal approximations from a single run of the numerical model. From the posterior ensemble we derive the uncertainty information of the goal discretization error. This algorithm bypasses the requirement of detailed knowledge about the model's discretization to generate numerical error estimates. The algorithm is evaluated for the spherical shallow-water equations. For two standard test cases we successfully estimate the error of regional potential energy, track its evolution, and compare it to standard ensemble techniques. The posterior ensemble shares linear-error-growth properties with ensembles of multiple model integrations when comparably perturbed. The posterior ensemble numerical error estimates are of comparable size as those of a stochastic physics ensemble.
ERIC Educational Resources Information Center
Kariuki, Patrick N. K.; Bush, Elizabeth Danielle
2008-01-01
The purpose of this study was to examine the effects of Total Physical Response by Storytelling and the traditional teaching method on a foreign language in a selected high school. The sample consisted of 30 students who were randomly selected and randomly assigned to experimental and control group. The experimental group was taught using Total…
Hindersin, Laura; Traulsen, Arne
2015-11-01
We analyze evolutionary dynamics on graphs, where the nodes represent individuals of a population. The links of a node describe which other individuals can be displaced by the offspring of the individual on that node. Amplifiers of selection are graphs for which the fixation probability is increased for advantageous mutants and decreased for disadvantageous mutants. A few examples of such amplifiers have been developed, but so far it is unclear how many such structures exist and how to construct them. Here, we show that almost any undirected random graph is an amplifier of selection for Birth-death updating, where an individual is selected to reproduce with probability proportional to its fitness and one of its neighbors is replaced by that offspring at random. If we instead focus on death-Birth updating, in which a random individual is removed and its neighbors compete for the empty spot, then the same ensemble of graphs consists of almost only suppressors of selection for which the fixation probability is decreased for advantageous mutants and increased for disadvantageous mutants. Thus, the impact of population structure on evolutionary dynamics is a subtle issue that will depend on seemingly minor details of the underlying evolutionary process.
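The Birth-death updating rule described above is straightforward to simulate. The sketch below (an illustration, not the authors' code) estimates the fixation probability of a single fitness-r mutant on a graph by Monte Carlo, and checks it on a complete graph, where Birth-death updating reduces to the classical well-mixed Moran process with fixation probability (1 - 1/r) / (1 - 1/r^N).

```python
import random

random.seed(1)

def fixation_probability(adj, r=2.0, trials=2000):
    """Monte Carlo fixation probability of a single mutant of fitness r
    (residents have fitness 1) under Birth-death updating on graph adj."""
    n = len(adj)
    fixed = 0
    for _ in range(trials):
        mutant = {random.randrange(n)}        # one random initial mutant
        while 0 < len(mutant) < n:
            # Birth: the reproducer is chosen proportional to fitness.
            weights = [r if v in mutant else 1.0 for v in range(n)]
            parent = random.choices(range(n), weights=weights)[0]
            # death: the offspring replaces a random neighbour.
            child = random.choice(adj[parent])
            if parent in mutant:
                mutant.add(child)
            else:
                mutant.discard(child)
        fixed += len(mutant) == n
    return fixed / trials

# Sanity check: the complete graph on 8 nodes behaves like a well-mixed
# Moran process.
n, r = 8, 2.0
adj = [[v for v in range(n) if v != u] for u in range(n)]
moran = (1 - 1 / r) / (1 - 1 / r ** n)
est = fixation_probability(adj, r)
```

Replacing `adj` with a random graph's adjacency lists and swapping the order of the birth and death steps is all that is needed to explore the amplifier/suppressor contrast the abstract describes.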
Busi, Roberto; Powles, Stephen B
2016-09-01
Weeds can be a greater constraint to crop production than animal pests and pathogens. Pre-emergence herbicides are crucial in many cropping systems to control weeds that have evolved resistance to selective post-emergence herbicides. In this study we assessed the potential to evolve resistance to the pre-emergence herbicides prosulfocarb + S-metolachlor or pyroxasulfone in 50 individual field Lolium rigidum populations collected in a random survey in Western Australia prior to commercialisation of these pre-emergence herbicides. This study shows for the first time that in randomly collected L. rigidum field populations the selection with either prosulfocarb + S-metolachlor or pyroxasulfone can result in concomitant evolution of resistance to both prosulfocarb + S-metolachlor and pyroxasulfone after three generations. In the major weed L. rigidum, traits conferring resistance to new herbicides can be present before herbicide commercialisation. Proactive and multidisciplinary research (evolutionary ecology, modelling and molecular biology) is required to detect and analyse resistant populations before they can appear in the field. Several studies show that evolved cross-resistance in weeds is complex and often unpredictable. Thus, long-term management of cross-resistant weeds must be achieved through heterogeneity of selection by effective chemical, cultural and physical weed control strategies that can delay herbicide resistance evolution. © 2016 Society of Chemical Industry.
Portella, Maria J; de Diego-Adeliño, Javier; Ballesteros, Javier; Puigdemont, Dolors; Oller, Sílvia; Santos, Borja; Álvarez, Enric; Artigas, Francesc; Pérez, Víctor
2011-07-01
Since depression entails not only dramatic personal disruption but also a huge medical and socioeconomic burden, the slowness of antidepressant action and the difficulty of attaining remission are intertwined problems to be solved. Given the controversial previous findings with enhancing strategies such as pindolol, we examined whether the speed of selective serotonin reuptake inhibitor (SSRI) action can be truly accelerated with an optimized pindolol dosage. Additionally, we aimed to elucidate whether pindolol benefits emerge particularly in a population with nonresistant depression. Thirty outpatients with major depressive disorder (DSM-IV criteria) recruited between December 2002 and November 2005 were randomly assigned to receive citalopram + pindolol (5 mg tid) or citalopram + placebo for 6 weeks in a double-blind randomized clinical trial. A meta-analysis of randomized controlled trials of pindolol augmentation in patients with nonresistant depression was also performed. Outcome criteria were based on the 17-item Hamilton Depression Rating Scale. For the meta-analysis, efficacy was assessed by the number of treatment responders at 2 weeks and 4-6 weeks. Clinical trial outcomes: repeated-measures analysis of variance showed a significant group-by-time interaction (P = .01). Cumulative percentages showed a trend for sustained response (odds ratio [OR] = 2.09; 95% CI, 0.914-4.780; P = .08) and a well-defined increased likelihood of sustaining remission (OR = 5.00; 95% CI, 1.191-20.989; P = .03) in pindolol receivers. Median survival time until first response was 65% less in the pindolol group (22 days vs 30 days; P = .03). The negative binomial regression model yielded different rates of response per person-day for the pindolol and placebo groups (7.6% vs 4.7%, respectively; P = .03). Meta-analysis: outcome favored pindolol at 2 weeks (relative risk [RR] = 1.68; 95% CI, 1.18-2.39; P = .004) and also at 4-6 weeks (RR = 1.11; 95% CI, 1.02-1.20; P = .02).
Present findings represent further evidence of the acceleration and enhancement of efficacy with pindolol administered together with SSRIs, displaying a quicker and more pronounced decrease of symptoms in patients with nonresistant major depressive disorder. clinicaltrials.gov Identifier: NCT00931775. © Copyright 2011 Physicians Postgraduate Press, Inc.
Signal processor for processing ultrasonic receiver signals
Fasching, George E.
1980-01-01
A signal processor is provided which uses an analog integrating circuit in conjunction with a set of digital counters controlled by a precision clock for sampling timing to provide an improved presentation of an ultrasonic transmitter/receiver signal. The signal is sampled relative to the transmitter trigger signal timing at precise times, the selected number of samples are integrated and the integrated samples are transferred and held for recording on a strip chart recorder or converted to digital form for storage. By integrating multiple samples taken at precisely the same time with respect to the trigger for the ultrasonic transmitter, random noise, which is contained in the ultrasonic receiver signal, is reduced relative to the desired useful signal.
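The noise-reduction principle the processor exploits (integrating many samples taken at the same delay relative to the transmit trigger) can be sketched in a few lines: the repeatable echo survives averaging while zero-mean random noise shrinks roughly as 1/sqrt(N). The amplitudes and sample count below are illustrative assumptions.

```python
import math
import random

random.seed(7)

def sample_at_delay(echo, noise_sigma):
    """One receiver sample at a fixed delay after the transmit trigger:
    the repeatable echo amplitude plus zero-mean random noise."""
    return echo + random.gauss(0.0, noise_sigma)

true_echo, sigma, n_avg = 1.0, 0.5, 64

# Integrate (average) n_avg samples taken at the same delay; the echo is
# identical in every sample while the noise averages toward zero.
integrated = sum(sample_at_delay(true_echo, sigma) for _ in range(n_avg)) / n_avg

# The residual noise on the average shrinks roughly as sigma / sqrt(N).
expected_err = sigma / math.sqrt(n_avg)
```

With 64 samples the residual noise is about one eighth of the single-sample noise, which is why the processor's precisely clocked, repeated sampling improves the recorded trace.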
PyGlobal: A toolkit for automated compilation of DFT-based descriptors.
Nath, Shilpa R; Kurup, Sudheer S; Joshi, Kaustubh A
2016-06-15
Density Functional Theory (DFT)-based global reactivity descriptor calculations have emerged as powerful tools for studying the reactivity, selectivity, and stability of chemical and biological systems. A Python-based module, PyGlobal, has been developed for systematically parsing a typical Gaussian output file and extracting the relevant energies of the HOMO and LUMO. The corresponding global reactivity descriptors are then calculated, and the data are saved into a spreadsheet compatible with applications such as Microsoft Excel and LibreOffice. The efficiency of the module was assessed by measuring the processing time for randomly selected Gaussian output files for 1000 molecules. © 2016 Wiley Periodicals, Inc.
Genetic algorithms applied to the scheduling of the Hubble Space Telescope
NASA Technical Reports Server (NTRS)
Sponsler, Jeffrey L.
1989-01-01
A prototype system employing a genetic algorithm (GA) has been developed to support the scheduling of the Hubble Space Telescope. A non-standard knowledge structure is used and appropriate genetic operators have been created. Several different crossover styles (random point selection, evolving points, and smart point selection) are tested and the best GA is compared with a neural network (NN) based optimizer. The smart crossover operator produces the best results and the GA system is able to evolve complete schedules using it. The GA is not as time-efficient as the NN system and the NN solutions tend to be better.
Brügemann, K; Gernand, E; von Borstel, U U; König, S
2011-08-01
Data used in the present study included 1,095,980 first-lactation test-day records for protein yield of 154,880 Holstein cows housed on 196 large-scale dairy farms in Germany. Data were recorded between 2002 and 2009 and merged with meteorological data from public weather stations. The maximum distance between each farm and its corresponding weather station was 50 km. Hourly temperature-humidity indexes (THI) were calculated using the mean of hourly measurements of dry bulb temperature and relative humidity. On the phenotypic scale, an increase in THI was generally associated with a decrease in daily protein yield. For genetic analyses, a random regression model was applied using time-dependent (d in milk, DIM) and THI-dependent covariates. Additive genetic and permanent environmental effects were fitted with this random regression model and Legendre polynomials of order 3 for DIM and THI. In addition, the fixed curve was modeled with Legendre polynomials of order 3. Heterogeneous residuals were fitted by dividing DIM into 5 classes, and by dividing THI into 4 classes, resulting in 20 different classes. Additive genetic variances for daily protein yield decreased with increasing degrees of heat stress and were lowest at the beginning of lactation and at extreme THI. Due to higher additive genetic variances, slightly higher permanent environment variances, and similar residual variances, heritabilities were highest for low THI in combination with DIM at the end of lactation. Genetic correlations among individual values for THI were generally >0.90. These trends from the complex random regression model were verified by applying relatively simple bivariate animal models for protein yield measured in 2 THI environments; that is, defining a THI value of 60 as a threshold. These high correlations indicate the absence of any substantial genotype × environment interaction for protein yield. 
However, heritabilities and additive genetic variances from the random regression model tended to be slightly higher in the THI range corresponding to cows' comfort zone. Selecting such superior environments for progeny testing can contribute to an accurate genetic differentiation among selection candidates. Copyright © 2011 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.
The role of color and attention-to-color in mirror-symmetry perception.
Gheorghiu, Elena; Kingdom, Frederick A A; Remkes, Aaron; Li, Hyung-Chul O; Rainville, Stéphane
2016-07-11
The role of color in the visual perception of mirror-symmetry is controversial. Some reports support the existence of color-selective mirror-symmetry channels, others that mirror-symmetry perception is merely sensitive to color-correlations across the symmetry axis. Here we test between the two ideas. Stimuli consisted of colored Gaussian-blobs arranged either mirror-symmetrically or quasi-randomly. We used four arrangements: (1) 'segregated' - symmetric blobs were of one color, random blobs of the other color(s); (2) 'random-segregated' - as above but with the symmetric color randomly selected on each trial; (3) 'non-segregated' - symmetric blobs were of all colors in equal proportions, as were the random blobs; (4) 'anti-symmetric' - symmetric blobs were of opposite-color across the symmetry axis. We found: (a) near-chance levels for the anti-symmetric condition, suggesting that symmetry perception is sensitive to color-correlations across the symmetry axis; (b) similar performance for random-segregated and non-segregated conditions, giving no support to the idea that mirror-symmetry is color selective; (c) highest performance for the color-segregated condition, but only when the observer knew beforehand the symmetry color, suggesting that symmetry detection benefits from color-based attention. We conclude that mirror-symmetry detection mechanisms, while sensitive to color-correlations across the symmetry axis and subject to the benefits of attention-to-color, are not color selective.
Ensemble Feature Learning of Genomic Data Using Support Vector Machine
Anaissi, Ali; Goyal, Madhu; Catchpoole, Daniel R.; Braytee, Ali; Kennedy, Paul J.
2016-01-01
The identification of a subset of genes having the ability to capture the necessary information to distinguish classes of patients is crucial in bioinformatics applications. Ensemble and bagging methods have been shown to work effectively in the process of gene selection and classification. Testament to that is random forest, which combines random decision trees with bagging to improve overall feature selection and classification accuracy. Surprisingly, the adoption of these methods in support vector machines has only recently received attention, and mostly for classification rather than gene selection. This paper introduces an ensemble SVM-Recursive Feature Elimination (ESVM-RFE) method for gene selection that follows the concepts of ensembling and bagging used in random forest but adopts the backward elimination strategy that is the rationale of the RFE algorithm. The rationale is that building ensemble SVM models on randomly drawn bootstrap samples from the training set produces different feature rankings, which are subsequently aggregated into one feature ranking. As a result, the decision to eliminate a feature is based upon the rankings of multiple SVM models instead of one particular model. Moreover, this approach addresses the problem of imbalanced datasets by constructing nearly balanced bootstrap samples. Our experiments show that ESVM-RFE for gene selection substantially increased the classification performance on five microarray datasets compared to state-of-the-art methods. Experiments on the childhood leukaemia dataset show that an average 9% better accuracy is achieved by ESVM-RFE over SVM-RFE, and 5% over the random forest based approach. The genes selected by the ESVM-RFE algorithm were further explored with Singular Value Decomposition (SVD), which reveals significant clusters within the selected data. PMID:27304923
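The ESVM-RFE idea (aggregate feature rankings from models fit on bootstrap resamples, then eliminate the worst-ranked feature and repeat) can be sketched as follows. To keep the sketch dependency-free, a simple covariance-based score stands in for the |w| weights of a linear SVM; the data, scorer, and function names are illustrative assumptions, not the paper's implementation.

```python
import random

random.seed(3)

def feature_weights(X, y):
    """Stand-in for the |w| weights of a linear SVM: absolute covariance
    of each feature with the class label (illustrative scorer only)."""
    n, ybar = len(X), sum(y) / len(X)
    weights = []
    for j in range(len(X[0])):
        xbar = sum(row[j] for row in X) / n
        cov = sum((row[j] - xbar) * (yi - ybar) for row, yi in zip(X, y)) / n
        weights.append(abs(cov))
    return weights

def ensemble_rfe(X, y, n_models=25, keep=2):
    """Backward elimination in which each step aggregates the feature
    rankings of models fit on bootstrap resamples (the ESVM-RFE idea)."""
    active = list(range(len(X[0])))
    while len(active) > keep:
        rank_sum = {j: 0 for j in active}
        for _ in range(n_models):
            idx = [random.randrange(len(X)) for _ in range(len(X))]
            Xb = [[X[i][j] for j in active] for i in idx]
            yb = [y[i] for i in idx]
            w = feature_weights(Xb, yb)
            order = sorted(range(len(active)), key=lambda k: w[k])
            for rank, k in enumerate(order):
                rank_sum[active[k]] += rank     # low rank = weak feature
        # Eliminate the feature with the worst aggregated ranking.
        active.remove(min(active, key=lambda j: rank_sum[j]))
    return active

# Toy data: features 0 and 1 carry the class signal, 2-4 are pure noise.
X, y = [], []
for _ in range(120):
    label = random.randint(0, 1)
    row = [label + random.gauss(0, 0.3), 1 - label + random.gauss(0, 0.3)]
    row += [random.gauss(0, 1) for _ in range(3)]
    X.append(row)
    y.append(label)

selected = ensemble_rfe(X, y)
```

Because the elimination decision pools rankings from many bootstrap models, no single unlucky fit determines which feature is dropped, which is the robustness argument the abstract makes.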
Randomization Methods in Emergency Setting Trials: A Descriptive Review
ERIC Educational Resources Information Center
Corbett, Mark Stephen; Moe-Byrne, Thirimon; Oddie, Sam; McGuire, William
2016-01-01
Background: Quasi-randomization might expedite recruitment into trials in emergency care settings but may also introduce selection bias. Methods: We searched the Cochrane Library and other databases for systematic reviews of interventions in emergency medicine or urgent care settings. We assessed selection bias (baseline imbalances) in prognostic…
Middle Level Practices in European International and Department of Defense Schools.
ERIC Educational Resources Information Center
Waggoner, V. Christine; McEwin, C. Kenneth
1993-01-01
Discusses results of a 1989-90 survey of 70 randomly selected international schools and 70 randomly selected Department of Defense Schools in Europe. Programs and practices surveyed included enrollments, grade organization, curriculum and instructional plans, core subjects, grouping patterns, exploratory courses, advisory programs, and scheduling.…
Random forest (RF) is popular in ecological and environmental modeling, in part, because of its insensitivity to correlated predictors and resistance to overfitting. Although variable selection has been proposed to improve both performance and interpretation of RF models, it is u...
Potential of houseflies to contaminate ready-to-eat food with antibiotic-resistant enterococci.
Macovei, Lilia; Miles, Brett; Zurek, Ludek
2008-02-01
It was shown previously that houseflies in fast-food restaurants commonly carry antibiotic-resistant and potentially virulent enterococci. In this study, the potential of field-collected houseflies to contaminate ready-to-eat (RTE) food with enterococci was assessed by laboratory bioassays. Houseflies were collected with a sweep net in a cattle feedlot and exposed in groups of 5, 10, 20, and 40 to a beef patty (from an RTE hamburger) for 0.5, 1.0, 3.0, and 24 h. The exposure of RTE food to flies resulted in 100% contamination with enterococci in all bioassays, regardless of the number of houseflies and the length of exposure time. In addition, the concentration of enterococci in RTE food increased with both the number of houseflies and the length of exposure. Even a short exposure (0.5 h) resulted in food contamination, ranging from 3.1 x 10(3) CFU/g (5 houseflies) to 8.4 x 10(4) CFU/g (40 houseflies). The analysis of 23 randomly selected enterococcal isolates from RTE food after the fly exposure revealed a single species, Enterococcus faecalis. In contrast, four Enterococcus species, including E. faecalis (57.1%), E. gallinarum (19.1%), E. hirae (14.3%), and E. faecium (9.5%), represented the 21 randomly selected and identified isolates from houseflies. Phenotypic screening showed that E. faecalis isolates from RTE food were resistant to ciprofloxacin (17.4%), tetracycline (13.0%), erythromycin (13.0%), and chloramphenicol (4.3%). This study demonstrates the great potential of houseflies from a cattle feedlot to contaminate RTE food with enterococci within a short time.
Experimental rugged fitness landscape in protein sequence space.
Hayashi, Yuuki; Aita, Takuyo; Toyota, Hitoshi; Husimi, Yuzuru; Urabe, Itaru; Yomo, Tetsuya
2006-12-20
The fitness landscape in sequence space determines the process of biomolecular evolution. To plot the fitness landscape of protein function, we carried out in vitro molecular evolution beginning with a defective fd phage carrying a random polypeptide of 139 amino acids in place of the g3p minor coat protein D2 domain, which is essential for phage infection. After 20 cycles of random substitution at sites 12-130 of the initial random polypeptide and selection for infectivity, the selected phage showed a 1.7x10(4)-fold increase in infectivity, defined as the number of infected cells per ml of phage suspension. Fitness was defined as the logarithm of infectivity, and we analyzed (1) the dependence of stationary fitness on library size, which increased gradually, and (2) the time course of changes in fitness in transitional phases, based on an original theory regarding the evolutionary dynamics in Kauffman's n-k fitness landscape model. In the landscape model, single mutations at single sites among n sites affect the contribution of k other sites to fitness. Based on the results of these analyses, k was estimated to be 18-24. According to the estimated parameters, the landscape was plotted as a smooth surface up to a relative fitness of 0.4 of the global peak, whereas the landscape had a highly rugged surface with many local peaks above this relative fitness value. Based on the landscapes of these two different surfaces, it appears possible for adaptive walks with only random substitutions to climb with relative ease up to the middle region of the fitness landscape from any primordial or random sequence, whereas an enormous range of sequence diversity is required to climb further up the rugged surface above the middle region.
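Kauffman's n-k model referenced above is compact enough to sketch: each of n binary sites contributes a tabulated random fitness that depends on its own state and its k following neighbours, and an adaptive walk accepting non-deleterious single-site flips climbs until it stalls on one of the local peaks whose abundance k controls. Parameter values below are illustrative, not those estimated in the paper.

```python
import random

random.seed(5)

def make_tables(n, k):
    """One random lookup table per site: a fitness contribution for every
    configuration of the site and its k following neighbours."""
    tables = []
    for _ in range(n):
        table = {}
        for state in range(2 ** (k + 1)):
            key = tuple((state >> b) & 1 for b in range(k + 1))
            table[key] = random.random()
        tables.append(table)
    return tables

def nk_fitness(seq, k, tables):
    """Mean of per-site contributions (circular neighbourhoods)."""
    n = len(seq)
    return sum(tables[i][tuple(seq[(i + d) % n] for d in range(k + 1))]
               for i in range(n)) / n

def adaptive_walk(n=20, k=4, steps=300):
    """Hill climb by random single-site flips, accepting any flip that
    does not decrease fitness; larger k makes the landscape more rugged
    and the walk more likely to stall on a local peak."""
    tables = make_tables(n, k)
    seq = [random.randint(0, 1) for _ in range(n)]
    fit = nk_fitness(seq, k, tables)
    for _ in range(steps):
        i = random.randrange(n)
        seq[i] ^= 1
        new_fit = nk_fitness(seq, k, tables)
        if new_fit >= fit:
            fit = new_fit
        else:
            seq[i] ^= 1   # revert the deleterious flip
    return fit

final = adaptive_walk()
```

Starting from a random sequence (expected fitness 0.5), such a walk climbs easily through the smooth lower region but is eventually trapped, mirroring the smooth-then-rugged surface the abstract describes.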
Ha, Tam Cam; Yong, Sook Kwin; Yeoh, Kheng-Wei; Kamberakis, Kay; Yeo, Richard Ming Chert; Koh, Gerald Choon-Huat
2014-11-01
The purpose of the study was to investigate whether home delivery of fecal occult blood test (FOBT) kits with individual education, alone or combined with family education, increases FOBT uptake rates in Singapore. This was a randomized controlled intervention study of Singaporean residents aged 50 years and above, conducted from May 2012 to May 2013. Eligible individuals in randomly selected households were screened, and one member was randomly selected and allocated to one of four arms: Group A (individual and family education, FOBT kits provided), Group B (individual education only, FOBT kits provided), Group C (no education, FOBT kits provided) and Group D (no education or FOBT kits provided). The overall response rate was 74.7 %. The FOBT return rates for groups A, B, C and D were 24.5 % [CI 16.2-34.4 %], 25.3 % [CI 16.4-36.0 %], 10.7 % [CI 4.7-19.9 %] and 2.2 % [CI 0.3-7.7 %], respectively. Respondents who were provided education and home-delivered FOBT kits were 15 times more likely to return FOBT kits [Group A: OR 15.0 (3.4-66.2); Group B: OR 15.5 (3.5-68.8)], and those provided with home-delivered FOBT kits without education were five times more likely to return them [Group C: OR 5.8 (1.2-28.3)], than those without education or FOBT kits (Group D). There was no significant difference in the return of FOBT kits whether education was provided to the subject with or without a family member. Home delivery of FOBT kits increased FOBT return rates, and individual education combined with home-delivered FOBT kits increased return rates even further. However, the additional combination with family education did not increase FOBT return rates further.
Hebbian Learning in a Random Network Captures Selectivity Properties of the Prefrontal Cortex.
Lindsay, Grace W; Rigotti, Mattia; Warden, Melissa R; Miller, Earl K; Fusi, Stefano
2017-11-08
Complex cognitive behaviors, such as context-switching and rule-following, are thought to be supported by the prefrontal cortex (PFC). Neural activity in the PFC must thus be specialized to specific tasks while retaining flexibility. Nonlinear "mixed" selectivity is an important neurophysiological trait for enabling complex and context-dependent behaviors. Here we investigate (1) the extent to which the PFC exhibits computationally relevant properties, such as mixed selectivity, and (2) how such properties could arise via circuit mechanisms. We show that PFC cells recorded from male and female rhesus macaques during a complex task show a moderate level of specialization and structure that is not replicated by a model wherein cells receive random feedforward inputs. While random connectivity can be effective at generating mixed selectivity, the data show significantly more mixed selectivity than predicted by a model with otherwise matched parameters. A simple Hebbian learning rule applied to the random connectivity, however, increases mixed selectivity and enables the model to match the data more accurately. To explain how learning achieves this, we provide analysis along with a clear geometric interpretation of the impact of learning on selectivity. After learning, the model also matches the data on measures of noise, response density, clustering, and the distribution of selectivities. Of two styles of Hebbian learning tested, the simpler and more biologically plausible option better matches the data. These modeling results provide clues about how neural properties important for cognition can arise in a circuit and make clear experimental predictions regarding how various measures of selectivity would evolve during animal training. SIGNIFICANCE STATEMENT The prefrontal cortex is a brain region believed to support the ability of animals to engage in complex behavior. 
How neurons in this area respond to stimuli, and in particular to combinations of stimuli ("mixed selectivity"), is a topic of interest. Even though models with random feedforward connectivity are capable of creating computationally relevant mixed selectivity, such a model does not match the levels of mixed selectivity seen in the data analyzed in this study. Adding simple Hebbian learning to the model increases mixed selectivity to the correct level and makes the model match the data on several other relevant measures. This study thus offers predictions on how mixed selectivity and other properties evolve with training. Copyright © 2017 the authors 0270-6474/17/3711021-16$15.00/0.
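The core mechanism described above, a simple Hebbian rule applied on top of random feedforward weights, can be illustrated with a toy sketch. All sizes, rates, the linear response, and the normalization step here are illustrative assumptions, not the authors' model: repeated noisy presentations of one input pattern pull normalized random weights toward the input's dominant correlation direction.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 20, 5
W = rng.normal(scale=0.1, size=(n_out, n_in))      # random feedforward weights
pattern = rng.normal(size=n_in)
pattern /= np.linalg.norm(pattern)                 # one recurring input direction
eta = 0.05                                         # learning rate (hypothetical)

for _ in range(300):
    x = pattern + 0.1 * rng.normal(size=n_in)      # noisy presentation of the pattern
    y = W @ x                                      # linear post-synaptic responses
    W += eta * np.outer(y, x)                      # Hebbian update: delta-w ∝ post × pre
    W /= np.linalg.norm(W, axis=1, keepdims=True)  # normalization keeps weights bounded

alignment = np.abs(W @ pattern)                    # |cosine| of each row with the pattern
```

With the normalization step the update acts like a power iteration on the input correlation matrix, so every row converges toward (plus or minus) the presented pattern; this is only the alignment aspect of Hebbian learning, not the selectivity analysis of the paper.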
Random deposition of particles of different sizes.
Forgerini, F L; Figueiredo, W
2009-04-01
We study the surface growth generated by the random deposition of particles of different sizes. A model is proposed where the particles are aggregated on an initially flat surface, giving rise to a rough interface and a porous bulk. Using Monte Carlo simulations, we grow a surface in (1+1) dimensions by adding particles of different sizes, as well as identical particles, on the substrate. In the case of deposition of particles of different sizes, the sizes are drawn from a Poisson distribution and may vary by one order of magnitude. For the deposition of identical particles, only particles larger than one lattice parameter of the substrate are considered. We calculate the usual scaling exponents (the roughness, growth, and dynamic exponents alpha, beta, and z, respectively), as well as the porosity in the bulk, determining the porosity as a function of the particle size. The results of our simulations show that the roughness evolves in time following three different behaviors. At initial times, the roughness behaves as in the random deposition model. At intermediate times, the surface roughness grows slowly, and finally, at long times, it enters the saturation regime. The bulk formed by depositing large particles reveals a porosity that increases very fast at initial times and also reaches a saturation value. Except in the case where particles have the size of one lattice spacing, we always find that the surface roughness and porosity reach limiting values at long times. Surprisingly, we find that the scaling exponents are the same as those predicted by the Villain-Lai-Das Sarma equation.
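The baseline against which the early-time behavior is compared, pure random deposition of unit particles, is easy to sketch. This is a minimal illustration (unit particles only, with hypothetical lattice size; it omits the Poisson-distributed sizes and porosity bookkeeping of the paper): column heights are independent Poisson counts, so the roughness grows as t^(1/2) with no saturation.

```python
import numpy as np

def random_deposition(L=200, monolayers=1000, seed=0):
    """Drop unit particles on random columns of a 1-D substrate and record the
    surface roughness (std of column heights) once per deposited monolayer."""
    rng = np.random.default_rng(seed)
    heights = np.zeros(L)
    roughness = np.empty(monolayers)
    for t in range(monolayers):
        cols = rng.integers(0, L, size=L)          # one monolayer = L particles
        heights += np.bincount(cols, minlength=L)  # each particle raises one column
        roughness[t] = heights.std()
    return roughness

w = random_deposition()
t = np.arange(1, len(w) + 1)
beta = np.polyfit(np.log(t), np.log(w), 1)[0]      # growth exponent, ~1/2 for pure RD
```

The fitted exponent beta near 1/2 is the random-deposition signature the abstract refers to at initial times; the crossover and saturation regimes require the lateral correlations introduced by larger particles.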
NASA Astrophysics Data System (ADS)
Sutrisno; Widowati; Solikhin
2016-06-01
In this paper, we propose a mathematical model in stochastic dynamic optimization form to determine the optimal strategy for an integrated single-product inventory control and supplier selection problem in which the demand and purchasing cost parameters are random. For each time period, by using the proposed model, we decide the optimal supplier and calculate the optimal product volume purchased from that supplier so that the inventory level is located as close as possible to the reference point with minimal cost. We use stochastic dynamic programming to solve this problem and give several numerical experiments to evaluate the model. The results show that, for each time period, the proposed model generated the optimal supplier, and the inventory level tracked the reference point well.
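The backward-induction structure of such a model can be sketched with a tiny discretized example. Everything numeric here is hypothetical (horizon, two suppliers distinguished only by expected unit price, a three-point demand distribution, and a quadratic penalty for deviating from the reference level); the paper's actual formulation may differ.

```python
import numpy as np
from itertools import product

ref_level = 10                       # inventory reference point (hypothetical)
T = 4                                # planning horizon
levels = np.arange(0, 21)            # discretized inventory states (index == value)
orders = np.arange(0, 11)            # feasible order quantities
prices = [1.0, 1.4]                  # expected unit price of supplier 0 and supplier 1
demands = np.array([2, 3, 4])        # random demand support ...
probs = np.array([0.3, 0.4, 0.3])    # ... and probabilities

V = np.zeros((T + 1, len(levels)))   # terminal value is zero
policy = {}
for t in range(T - 1, -1, -1):       # backward induction over periods
    for si, s in enumerate(levels):
        best_cost, best_act = np.inf, None
        for sup, q in product(range(len(prices)), orders):
            cost = prices[sup] * q   # purchasing cost from the chosen supplier
            for d, p in zip(demands, probs):
                nxt = int(np.clip(s + q - d, levels[0], levels[-1]))
                # quadratic tracking penalty plus expected cost-to-go
                cost += p * ((nxt - ref_level) ** 2 + V[t + 1, nxt])
            if cost < best_cost:
                best_cost, best_act = cost, (sup, q)
        V[t, si] = best_cost
        policy[(t, s)] = best_act    # (optimal supplier, optimal order volume)
```

With suppliers differing only in expected price, the DP always picks the cheaper one; richer supplier attributes (capacity, reliability) would make the selection state-dependent.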
Ling, Jia-Yan; Shen, Lin; Liu, Qing; Wang, Ling-Yun
2013-12-01
To verify the clinical efficacy on chronic fatigue syndrome of qi deficiency syndrome treated with acupuncture at selective time and explore the effect mechanism. Eighty patients were randomized into a selective-time-acupuncture group and an acupuncture group, 40 cases in each one. Qihai (CV 6), Guanyuan (CV 4), Hegu (LI 4), Taichong (LR 3), Sanyinjiao (SP 6) and Zusanli (ST 36) were selected in the two groups. In the selective-time-acupuncture group, acupuncture was applied from 9:00am to 11:00am. In the acupuncture group, acupuncture was applied at any time outside the range from 9:00am to 11:00am. No manipulation was applied after the arrival of needling sensation. The treatment was given once every day; 10 days of treatment made one session, and two sessions were required. The fatigue scale was adopted to evaluate the efficacy before and after treatment in the patients of the two groups. The ratios among CD3+, CD4+ and CD8+ T cells in the peripheral blood were detected before and after treatment. In the acupuncture group, the total score of fatigue and the score of physical fatigue were reduced after treatment as compared with those before treatment (all P<0.05). In the selective-time-acupuncture group, the total score of fatigue, the score of physical fatigue and the score of mental fatigue after treatment were reduced obviously as compared with those before treatment (all P<0.01). The improvements in the scores of the selective-time-acupuncture group were superior to those of the acupuncture group (all P<0.05). The ratio of CD3+ and CD8+ T cells was increased obviously after treatment in the two groups (all P<0.05), and the ratio of CD4+ and CD8+ T cells was reduced obviously in the selective-time-acupuncture group (P<0.05), which was better than that in the acupuncture group (all P<0.05). The total effective rate was 95.0% (38/40) in the selective-time-acupuncture group, which was better than 80.0% (32/40) in the acupuncture group (P<0.05).
The acupuncture therapy at selective time is effective in the treatment of chronic fatigue syndrome of qi deficiency syndrome, which is especially better at relieving mental fatigue. The effect of this therapy is achieved probably by improving the immune function via the regulation of the ratios among CD3+, CD4+ and CD8+ T cells.
Sandrick, Janice; Tracy, Doreen; Eliasson, Arn; Roth, Ashley; Bartel, Jeffrey; Simko, Melanie; Bowman, Tracy; Harouse-Bell, Karen; Kashani, Mariam; Vernalis, Marina
2017-05-17
The college experience is often the first time when young adults live independently and make their own lifestyle choices. These choices affect dietary behaviors, exercise habits, techniques to deal with stress, and decisions on sleep time, all of which direct the trajectory of future health. There is a need for effective strategies that will encourage healthy lifestyle choices in young adults attending college. This preliminary randomized controlled trial tested the effect of coaching and text messages (short message service, SMS) on self-selected health behaviors in the domains of diet, exercise, stress, and sleep. A second analysis measured the ripple effect of the intervention on health behaviors not specifically selected as a goal by participants. Full-time students aged 18-30 years were recruited by word of mouth and campuswide advertisements (flyers, posters, mailings, university website) at a small university in western Pennsylvania from January to May 2015. Exclusions included pregnancy, eating disorders, chronic medical diagnoses, and prescription medications other than birth control. Of 60 participants, 30 were randomized to receive a single face-to-face meeting with a health coach to review results of behavioral questionnaires and to set a health behavior goal for the 8-week study period. The face-to-face meeting was followed by SMS text messages designed to encourage achievement of the behavioral goal. A total of 30 control subjects underwent the same health and behavioral assessments at intake and program end but did not receive coaching or SMS text messages. The texting app showed that 87.31% (2187/2505) of messages were viewed by intervention participants. Furthermore, 28 of the 30 intervention participants and all 30 control participants provided outcome data. Among intervention participants, 22 of 30 (73%) showed improvement in health behavior goal attainment, with the whole group (n=30) showing a mean improvement of 88% (95% CI 39-136). 
No mean improvement in any behavioral domain was seen in the control group. Intervention participants also increased their exercise significantly compared with controls, regardless of their self-selected goal category. The increased exercise was paralleled by significantly lower fasting glucose levels. The health coaching session plus tailored SMS text messages improved self-selected health behaviors, with a modest ripple effect extending to unselected health behaviors. Clinicaltrials.gov NCT02476604; https://clinicaltrials.gov/ct2/show/NCT02476604 (Archived by WebCite at http://www.webcitation.org/6qAAryS5t). ©Janice Sandrick, Doreen Tracy, Arn Eliasson, Ashley Roth, Jeffrey Bartel, Melanie Simko, Tracy Bowman, Karen Harouse-Bell, Mariam Kashani, Marina Vernalis. Originally published in JMIR Mhealth and Uhealth (http://mhealth.jmir.org), 17.05.2017.
Delayed School Start Times and Adolescent Sleep: A Systematic Review of the Experimental Evidence
Minges, Karl E.; Redeker, Nancy S.
2016-01-01
Many schools have instituted later morning start times to improve sleep, academic, and other outcomes in response to the mismatch between youth circadian rhythms and early morning start times. However, there has been no systematic synthesis of the evidence on the effects of this practice. To examine the impact of delayed school start time on students' sleep, health, and academic outcomes, electronic databases were systematically searched and data were extracted using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines. Six studies satisfied selection criteria and used pre-post, no control (n=3), randomized controlled trial (n=2), and quasi-experimental (n=1) designs. School start times were delayed 25 to 60 minutes, and correspondingly, total sleep time increased from 25 to 77 minutes per weeknight. Some studies revealed reduced daytime sleepiness, depression, caffeine use, tardiness to class, and trouble staying awake. Overall, the evidence supports recent non-experimental study findings and calls for policy that advocates for delayed school start time to improve sleep. This presents a potential long-term solution to chronic sleep restriction during adolescence. However, there is a need for rigorous randomized study designs and reporting of consistent outcomes, including objective sleep measures and consistent measures of health and academic performance. PMID:26545246
The Mechanism for Processing Random-Dot Motion at Various Speeds in Early Visual Cortices
An, Xu; Gong, Hongliang; McLoughlin, Niall; Yang, Yupeng; Wang, Wei
2014-01-01
All moving objects generate sequential retinotopic activations representing a series of discrete locations in space and time (motion trajectory). How direction-selective neurons in mammalian early visual cortices process motion trajectory remains to be clarified. Using single-cell recording and optical imaging of intrinsic signals along with mathematical simulation, we studied response properties of cat visual areas 17 and 18 to random dots moving at various speeds. We found that, the motion trajectory at low speed was encoded primarily as a direction signal by groups of neurons preferring that motion direction. Above certain transition speeds, the motion trajectory is perceived as a spatial orientation representing the motion axis of the moving dots. In both areas studied, above these speeds, other groups of direction-selective neurons with perpendicular direction preferences were activated to encode the motion trajectory as motion-axis information. This applied to both simple and complex neurons. The average transition speed for switching between encoding motion direction and axis was about 31°/s in area 18 and 15°/s in area 17. A spatio-temporal energy model predicted the transition speeds accurately in both areas, but not the direction-selective indexes to random-dot stimuli in area 18. In addition, above transition speeds, the change of direction preferences of population responses recorded by optical imaging can be revealed using vector maximum but not vector summation method. Together, this combined processing of motion direction and axis by neurons with orthogonal direction preferences associated with speed may serve as a common principle of early visual motion processing. PMID:24682033
May, Larissa S.; Rothman, Richard E.; Miller, Loren G.; Brooks, Gillian; Zocchi, Mark; Zatorski, Catherine; Dugas, Andrea F.; Ware, Chelsea E.; Jordan, Jeanne A.
2017-01-01
OBJECTIVE To determine whether real-time availability of rapid molecular results of Staphylococcus aureus would impact emergency department clinician antimicrobial selection for adults with cutaneous abscesses. DESIGN We performed a prospective, randomized controlled trial comparing a rapid molecular test with standard of care culture-based testing. Follow-up telephone calls were made at between 2 and 7 days, 1 month, and 3 months after discharge. SETTING Two urban, academic emergency departments. PATIENTS Patients at least 18 years old presenting with a chief complaint of abscess, cellulitis, or insect bite and receiving incision and drainage were eligible. Seven hundred seventy-eight people were assessed for eligibility and 252 met eligibility criteria. METHODS Clinician antibiotic selection and clinical outcomes were evaluated. An ad hoc outcome of test performance was performed. RESULTS We enrolled 252 patients and 126 were randomized to receive the rapid test. Methicillin-susceptible S. aureus–positive patients receiving rapid test results were prescribed beta-lactams more often than controls (absolute difference, 14.5% [95% CI, 1.1%–30.1%]) whereas methicillin-resistant S. aureus–positive patients receiving rapid test results were more often prescribed anti–methicillin-resistant S. aureus antibiotics (absolute difference, 21.5% [95% CI, 10.1%–33.0%]). There were no significant differences between the 2 groups in 1-week or 3-month clinical outcomes. CONCLUSION Availability of rapid molecular test results after incision and drainage was associated with more-targeted antibiotic selection. TRIAL REGISTRATION clinicaltrials.gov Identifier: NCT01523899 PMID:26306996
Adaptive Electronic Camouflage Using Texture Synthesis
2012-04-01
algorithm begins by computing the GLCMs, G_IN and G_OUT, of the input image (e.g., image of the local environment) and output image (randomly generated)... respectively. The algorithm randomly selects a pixel from the output image and cycles its gray-level through all values. For each value, G_OUT is updated... The value of the selected pixel is permanently changed to the gray-level value that minimizes the error between G_IN and G_OUT. Without selecting a
Brambila, Carlos; Lopez, Felipe; Garcia-Colindres, Julio; Donis, Marco Vinicio
2005-04-01
To develop and test a distance-learning programme to improve the quality and efficiency of family planning services in Guatemala. The setting was rural family planning services in Guatemala. The study design was quasi-experimental with one intervention and one control group and with pre- and post-intervention measures. Two staff members from each of 20 randomly selected health districts were trained as leaders of the training programme. In turn, the 40 trainers trained a total of 240 service providers, under the supervision of four health area facilitators. The results were compared with 20 randomly selected control health districts. The intervention was a distance-learning programme including 40 in-class hours followed by 120 inservice practice hours spread over a 4-month period. Distinctively, the programme used a cascade approach to training, intensive supervision, and close monitoring and evaluation. Patient flow analysis was used to determine number of contacts, waiting times, and the interaction time between service providers and clients. Consultation observations were used to assess the quality and completeness of reproductive health information and services received by clients. The intervention showed a positive impact on reducing the number of contacts before the consultation and client waiting times. More complete services and better quality services were provided at intervention clinics. Some, but not all, of the study objectives were attained. The long-term impact of the intervention is as yet unknown. Distance-learning programmes are an effective methodology for training health professionals in rural areas.
Seismic random noise attenuation method based on empirical mode decomposition of Hausdorff dimension
NASA Astrophysics Data System (ADS)
Yan, Z.; Luan, X.
2017-12-01
Introduction: Empirical mode decomposition (EMD) is a noise suppression algorithm using wave-field separation, based on the scale differences between the effective signal and noise. However, since the complexity of the real seismic wave field results in serious mode aliasing, denoising with this method alone is neither ideal nor effective. Based on the multi-scale decomposition characteristics of the EMD algorithm, combined with Hausdorff dimension constraints, we propose a new method for seismic random noise attenuation. First, we apply the EMD algorithm to adaptively decompose the seismic data and obtain a series of intrinsic mode functions (IMFs) with different scales. Based on the difference in Hausdorff dimension between effective signals and random noise, we identify the IMF components mixed with random noise. Then we use a threshold correlation filtering process to separate the valid signal and random noise effectively. The results show that, compared with the traditional EMD method, the new method achieves better suppression of seismic random noise. The implementation process: The EMD algorithm is used to decompose seismic signals into IMF sets and analyze their spectra. Since most of the random noise is high-frequency noise, the IMF sets can be divided into three categories: the first category is the effective-wave components of larger scale; the second category is the noise components of smaller scale; the third category is the IMF components containing random noise. Then, the third kind of IMF component is processed by the Hausdorff dimension algorithm, and an appropriate time-window size, initial step, and increment are selected to calculate the Hausdorff instantaneous dimension of each component. The dimension of the random noise is between 1.0 and 1.05, while the dimension of the effective wave is between 1.05 and 2.0.
On the basis of the previous steps, and according to the dimension difference between the random noise and the effective signal, we extract the sample points whose fractal dimension is less than or equal to 1.05 from each IMF component to separate the residual noise. Using the IMF components after this dimension filtering, together with the effective-wave IMF components from the first selection, for reconstruction, we obtain the de-noised result.
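The key discriminator in the workflow above is a fractal-dimension estimate that separates noise-like from signal-like components. As a self-contained stand-in (the EMD step itself needs a library such as PyEMD, and the paper's Hausdorff estimator and its 1.05 threshold are specific to that method), here is a Higuchi fractal-dimension sketch showing the contrast the thresholding exploits: white noise scores near 2 while a smooth signal scores near 1.

```python
import numpy as np

def higuchi_fd(x, kmax=8):
    """Higuchi estimate of a 1-D signal's fractal dimension:
    ~1 for a smooth curve, ~2 for white noise."""
    x = np.asarray(x, dtype=float)
    N = len(x)
    curve_len = []
    for k in range(1, kmax + 1):
        lengths = []
        for m in range(k):                      # k interleaved subseries
            idx = np.arange(m, N, k)
            n = len(idx) - 1
            if n < 1:
                continue
            # normalized curve length of the subsampled series at scale k
            lengths.append(np.abs(np.diff(x[idx])).sum() * (N - 1) / (n * k * k))
        curve_len.append(np.mean(lengths))
    # L(k) ~ k^(-D): slope of log L against log(1/k) estimates the dimension D
    slope, _ = np.polyfit(np.log(1.0 / np.arange(1, kmax + 1)), np.log(curve_len), 1)
    return slope

rng = np.random.default_rng(3)
fd_noise = higuchi_fd(rng.standard_normal(1024))
fd_smooth = higuchi_fd(np.sin(np.linspace(0.0, 8 * np.pi, 1024)))
```

A per-component dimension computed this way, compared against a threshold, plays the role of the "keep or filter" decision applied to each IMF in the abstract.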
Buccellato, C A; Stika, C S; Frederiksen, M C
2000-05-01
Our purpose was to compare the efficacy and safety of misoprostol and extra-amniotic sodium chloride infusion with oxytocin for induction of labor. This randomized trial compared two methods of labor induction in women requiring cervical ripening. One hundred twenty-three women undergoing labor induction with a Bishop score of 5 or less were randomly assigned to receive either misoprostol, 50 microg intravaginally every 4 hours, or extra-amniotic sodium chloride infusion. The primary outcome variable was the time interval from induction to vaginal delivery. Sixty-one women received extra-amniotic sodium chloride infusion and 62 women received misoprostol. The mean time interval from the start of induction to vaginal delivery was 15.0 +/- 5.0 hours and 16.5 +/- 7.2 hours for the extra-amniotic infusion and misoprostol groups, respectively (P, not significant). The cesarean delivery rate was not significantly different between the 2 groups (32.8% for the extra-amniotic infusion group; 19.4% for the misoprostol group). Maternal and neonatal outcomes were similar between the 2 groups. Both methods of induction are equally efficacious and result in similar maternal and neonatal outcomes.
Sariyar, Murat; Hoffmann, Isabell; Binder, Harald
2014-02-26
Molecular data, e.g. arising from microarray technology, is often used for predicting survival probabilities of patients. For multivariate risk prediction models on such high-dimensional data, there are established techniques that combine parameter estimation and variable selection. One big challenge is to incorporate interactions into such prediction models. In this feasibility study, we present building blocks for evaluating and incorporating interaction terms in high-dimensional time-to-event settings, especially for settings in which it is computationally too expensive to check all possible interactions. We use a boosting technique for estimation of effects and the following building blocks for pre-selecting interactions: (1) resampling, (2) random forests, and (3) orthogonalization as a data pre-processing step. In a simulation study, the strategy that uses all building blocks is able to detect true main effects and interactions with high sensitivity in different kinds of scenarios. The main challenge is interactions composed of variables that do not represent main effects, but our findings are also promising in this regard. Results on real world data illustrate that effect sizes of interactions frequently may not be large enough to improve prediction performance, even though the interactions are potentially of biological relevance. Screening interactions through random forests is feasible and useful when one is interested in finding relevant two-way interactions. The other building blocks also contribute considerably to an enhanced pre-selection of interactions. We determined the limits of interaction detection in terms of necessary effect sizes. Our study emphasizes the importance of making full use of existing methods in addition to establishing new ones.
Assessing the quality of reproductive health services in Egypt via exit interviews.
Zaky, Hassan H M; Khattab, Hind A S; Galal, Dina
2007-05-01
This study assesses the quality of reproductive health services using client satisfaction exit interviews among three groups of primary health care units run by the Ministry of Health and Population of Egypt. Each group applied a different model of intervention. The Ministry will use the results in assessing the reproductive health component of its health sector reform program and in benefiting from the strengths of other models of intervention. The sample was selected in two stages. First, a stratified random sampling procedure was used to select the health units. Then the sample of female clients in each health unit was selected using a systematic random approach, whereby one in every two women visiting the unit was approached. All women in the sample coming for reproductive health services were included in the analysis. The results showed that reproductive health beneficiaries at the units implementing the new health sector reform program were more satisfied with the quality of services. Still, there were various areas where clients showed significant dissatisfaction, such as waiting time, interior furnishings, cleanliness of the units, and consultation time. The study showed that the staff of these units did not provide as conducive a social environment as other interventions did. A significant proportion of women expressed their intention to go to private physicians owing to their flexible working hours and variety of specializations. Beneficiaries were generally more satisfied with the quality of health services after attending the reformed units than the other types of units, but the generalization did not fully apply. Areas of weakness are identified.
NASA Astrophysics Data System (ADS)
Azimzade, Youness; Mashaghi, Alireza
2017-12-01
Efficient search acts as a strong selective force in biological systems ranging from cellular populations to predator-prey systems. The search processes commonly involve finding a stationary or mobile target within a heterogeneously structured environment where obstacles limit migration. An open generic question is whether random or directionally biased motions, or a combination of both, provide an optimal search efficiency, and how that depends on the motility and density of targets and obstacles. To address this question, we develop a simple model that involves a random walker searching for its targets in a heterogeneous medium of a bond-percolation square lattice, and we use the mean first passage time (⟨T⟩) as an indication of average search time. Our analysis reveals a dual effect of directional bias on the minimum value of ⟨T⟩. For a homogeneous medium, directionality always decreases ⟨T⟩, and a pure directional migration (a ballistic motion) serves as the optimized strategy, while for a heterogeneous environment, we find that the optimized strategy involves a combination of directed and random migrations. The relative contribution of these modes is determined by the density of obstacles and the motility of targets. The existence of randomness and the motility of targets add to the efficiency of search. Our study reveals generic and simple rules that govern search efficiency. Our findings might find application in a number of areas including immunology, cell biology, ecology, and robotics.
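The biased-walker setup can be sketched with a small simulation. This is a simplified stand-in under stated assumptions (site obstacles rather than the paper's bond percolation, a fixed target column rather than mobile targets, and hypothetical parameters): a `bias` parameter mixes a directed move toward the target with a uniformly random move, the two regimes whose combination the paper optimizes.

```python
import numpy as np

def first_passage_time(L=30, obstacle_density=0.2, bias=0.5, seed=1, max_steps=100_000):
    """Steps for a biased random walker to reach the rightmost column of an
    L x L lattice with randomly placed site obstacles."""
    rng = np.random.default_rng(seed)
    blocked = rng.random((L, L)) < obstacle_density
    blocked[:, 0] = blocked[:, -1] = False          # keep start/target columns open
    x, y = 0, L // 2                                # start at the left edge
    blocked[y, x] = False
    for step in range(1, max_steps + 1):
        if rng.random() < bias:
            dx, dy = 1, 0                           # directed move toward the target
        else:
            dx, dy = [(1, 0), (-1, 0), (0, 1), (0, -1)][rng.integers(4)]
        nx, ny = x + dx, y + dy
        if 0 <= nx < L and 0 <= ny < L and not blocked[ny, nx]:
            x, y = nx, ny                           # blocked moves are simply rejected
        if x == L - 1:
            return step
    return max_steps                                # did not reach the target
```

Averaging `first_passage_time` over many seeds for a grid of `bias` values would reproduce the kind of ⟨T⟩-versus-bias curve the abstract describes: in an empty lattice pure bias is optimal, while obstacles favor an intermediate mix.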
Mate choice theory and the mode of selection in sexual populations.
Carson, Hampton L
2003-05-27
Indirect new data imply that mate and/or gamete choice are major selective forces driving genetic change in sexual populations. The system dictates nonrandom mating, an evolutionary process requiring both revised genetic theory and new data on heritability of characters underlying Darwinian fitness. Successfully reproducing individuals represent rare selections from among vigorous, competing survivors of preadult natural selection. Nonrandom mating has correlated demographic effects: reduced effective population size, inbreeding, low gene flow, and emphasis on deme structure. Characters involved in choice behavior at reproduction appear based on quantitative trait loci. This variability serves selection for fitness within the population, having only an incidental relationship to the origin of genetically based reproductive isolation between populations. The claim that extensive hybridization experiments with Drosophila indicate that selection favors a gradual progression of "isolating mechanisms" is flawed, because intra-group random mating is assumed. Over deep time, local sexual populations are strong, independent genetic systems that use rich fields of variable polygenic components of fitness. The sexual reproduction system thus particularizes, in small subspecific populations, the genetic basis of the grand adaptive sweep of selective evolutionary change, much as Darwin proposed.
Zeng, Xueqiang; Luo, Gang
2017-12-01
Machine learning is broadly used for clinical data analysis. Before training a model, a machine learning algorithm must be selected. Also, the values of one or more model parameters, termed hyper-parameters, must be set. Selecting algorithms and hyper-parameter values requires advanced machine learning knowledge and many labor-intensive manual iterations. To lower the bar to machine learning, various automatic selection methods for algorithms and/or hyper-parameter values have been proposed. Existing automatic selection methods are inefficient on large data sets. This poses a challenge for using machine learning in the clinical big data era. To address the challenge, this paper presents progressive sampling-based Bayesian optimization, an efficient and automatic selection method for both algorithms and hyper-parameter values. We report an implementation of the method. We show that compared to a state-of-the-art automatic selection method, our method can significantly reduce search time, classification error rate, and standard deviation of error rate due to randomization. This is major progress towards enabling fast turnaround in identifying high-quality solutions required by many machine learning-based clinical data analysis tasks.
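The progressive-sampling idea, evaluating candidates cheaply on small data samples and spending full-data evaluations only on survivors, can be sketched without the Bayesian-optimization machinery. This is a simpler successive-halving scheme, not the paper's method, and the error surface is a toy stand-in with a hypothetical best hyper-parameter value of 0.7.

```python
import random

def successive_halving(configs, error_on_sample, sizes):
    """Progressive-sampling sketch: score every surviving configuration on a
    growing sample size and keep the better half after each round."""
    survivors = list(configs)
    for n in sizes:
        survivors.sort(key=lambda c: error_on_sample(c, n))
        survivors = survivors[:max(1, len(survivors) // 2)]
    return survivors[0]

rng = random.Random(42)

def toy_error(c, n):
    """Toy error surface: truth is minimized at c = 0.7; evaluation noise
    shrinks as the sample size n grows."""
    return (c - 0.7) ** 2 + rng.gauss(0, 1.0 / n)

best = successive_halving([i / 10 for i in range(11)], toy_error, sizes=[50, 200, 800])
```

The efficiency gain comes from the schedule: most of the eleven candidates are discarded after cheap small-sample rounds, and only finalists pay for the large-sample evaluation.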
The Effect of CAI on Reading Achievement.
ERIC Educational Resources Information Center
Hardman, Regina
A study determined whether computer assisted instruction (CAI) had an effect on students' reading achievement. Subjects were 21 randomly selected fourth-grade students at D. S. Wentworth Elementary School on the south side of Chicago in a low-income neighborhood who received a year's exposure to a CAI program, and 21 randomly selected students at…
78 FR 57033 - United States Standards for Condition of Food Containers
Federal Register 2010, 2011, 2012, 2013, 2014
2013-09-17
... containers during production. Stationary lot sampling is the process of randomly selecting sample units from.... * * * * * Stationary lot sampling. The process of randomly selecting sample units from a lot whose production has been... less than 1/16-inch. Stringy seal (excessive plastic threads showing at edge of seal area)...
Access to Higher Education by the Luck of the Draw
ERIC Educational Resources Information Center
Stone, Peter
2013-01-01
Random selection is a fair way to break ties between applicants of equal merit seeking admission to institutions of higher education (with "merit" defined here in terms of the intrinsic contribution higher education would make to the applicant's life). Opponents of random selection commonly argue that differences in strength between…
ERIC Educational Resources Information Center
Beretvas, S. Natasha; Murphy, Daniel L.
2013-01-01
The authors assessed correct model identification rates of Akaike's information criterion (AIC), corrected AIC (AICC), consistent AIC (CAIC), Hannan and Quinn's information criterion (HQIC), and the Bayesian information criterion (BIC) for selecting among cross-classified random effects models. Performance of default values for the 5…
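For reference, the five criteria compared above have standard closed forms in the maximized log-likelihood, the parameter count k, and the sample size n (smaller is better). These are the textbook formulas, not tied to the cross-classified models of the study:

```python
import math

def info_criteria(log_lik, k, n):
    """Common information criteria from a fitted model's log-likelihood."""
    aic  = 2 * k - 2 * log_lik
    aicc = aic + (2 * k * (k + 1)) / (n - k - 1)     # small-sample correction
    bic  = k * math.log(n) - 2 * log_lik
    caic = k * (math.log(n) + 1) - 2 * log_lik       # BIC plus one more k
    hqic = 2 * k * math.log(math.log(n)) - 2 * log_lik
    return {"AIC": aic, "AICC": aicc, "BIC": bic, "CAIC": caic, "HQIC": hqic}

ic = info_criteria(log_lik=-120.0, k=5, n=50)
```

All five share the −2 log-likelihood fit term and differ only in how strongly the penalty grows with k and n, which is exactly what drives their different identification rates.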
1977 Survey of the American Professoriate. Technical Report.
ERIC Educational Resources Information Center
Ladd, Everett Carll, Jr.; And Others
The development and data validation of the 1977 Ladd-Lipset national survey of the American professoriate are described. The respondents were selected from a random sample of colleges and universities and from a random sample of individual faculty members from the universities. The 158 institutions in the 1977 survey were selected from 2,406…
Site Selection in Experiments: A Follow-Up Evaluation of Site Recruitment in Two Scale-Up Studies
ERIC Educational Resources Information Center
Tipton, Elizabeth; Fellers, Lauren; Caverly, Sarah; Vaden-Kiernan, Michael; Borman, Geoffrey; Sullivan, Kate; Ruiz de Castillo, Veronica
2015-01-01
Randomized experiments are commonly used to evaluate if particular interventions improve student achievement. While these experiments can establish that a treatment actually "causes" changes, typically the participants are not randomly selected from a well-defined population and therefore the results do not readily generalize. Three…
Self-Injurious Behavior: An Animal Model of an Autism Endophenotype
2012-01-01
time there was a visible release of the pasta (not a drop) or a reformation of the digits holding the pasta via motor patterns of flexion/extension... review of 18 pasta trials, nine trials randomly selected from each experimental group. Behaviors included on the code sheet were number of drops... failure to contact reaches, angling with head tilt, abnormal posture, use of a unilateral paw technique, and twirling of the pasta. Specific descriptions
2013-03-01
framework of orientation distribution functions and crack-induced texture; quantify effects of temperature on damage behavior and damage monitoring... measurement model was obtained from hidden Markov modeling (HMM) of joint time-frequency (TF) features extracted from the PZT sensor signals using the... considered PZT sensor signals recorded from a bolted aluminum plate. Only about 20% of the samples of a signal were first randomly selected as
2007-06-01
images,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 13, no. 2, pp. 99–113, 1991. [15] C. Bouman and M. Shapiro, “A multiscale random... this project was on developing new statistical algorithms for analysis of electromagnetic induction (EMI) and magnetometer data measured at actual
ERIC Educational Resources Information Center
LUECKE, FRITZ; SPROESSER, GERRY
A randomly selected sample of 163 ninth- and tenth-grade students was asked, in a series of questions, to express their attitude toward the school library. The responses to each statement were tallied and presented as a number and percentage. In general it was found that the majority of the students spend some of their independent study time in the…
Comparison of closed and open methods of pneumoperitoneum in laparoscopic cholecystectomy.
Akbar, Mohammad; Khan, Ishtiaq Ali; Naveed, Danish; Khattak, Irfanuddin; Zafar, Arshad; Wazir, Muhammad Salim; Khan, Asif Nawaz; Zia-ur-Rehman
2008-01-01
Pneumoperitoneum is the first step in laparoscopic surgery, including cholecystectomy. The two commonly used methods to create pneumoperitoneum are the closed and open techniques. Both have advantages and disadvantages. The current study was designed to compare these two techniques in terms of safety and time required to complete the procedure. This was a randomized controlled prospective study conducted at the Department of Surgery, Ayub Hospital Complex Abbottabad, from 1st June 2007 to 31st May 2008. Patients were randomized into two groups using sealed envelopes containing the questionnaire. Seventy envelopes were kept in a cupboard, containing 35 proformas for group A and 35 for group B. An envelope was randomly fetched and opened upon selection of the patient after taking informed consent. Pneumoperitoneum was created by the closed technique in group A and by the open technique in group B. The time required for successful pneumoperitoneum was calculated in each group. Failure to induce pneumoperitoneum was determined for each technique. The time required to close the wounds at completion, total operating time and injuries sustained during induction of pneumoperitoneum were compared between the two techniques. Of the 70 patients included in the study, 35 were in group A and 35 in group B. The mean time required for successful pneumoperitoneum was 9.17 minutes in group A and 8.11 minutes in group B. Total operating time ranged from 55 to 130 minutes in group A and from 45 to 110 minutes in group B. Mean total operating time was 78.34 minutes in group A and 67 minutes in group B. The mean time needed to close the wound was 9.88 minutes in group A and 4.97 minutes in group B. Failure of the technique was noted in three patients in group A, while no failure was experienced in group B. In two cases in group A, minor complications occurred during creation of pneumoperitoneum, while in group B no complication occurred. No patient died in the study.
We concluded from this study that the open technique of pneumoperitoneum was less time-consuming and safer than the closed technique.
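The sealed-envelope scheme above amounts to drawing, without replacement, from a pool pre-filled with equal numbers of group labels. A minimal sketch, with an illustrative function name:

```python
import random

def sealed_envelope_allocation(n_per_group, groups=("A", "B"), seed=None):
    """Pre-fill 'envelopes' with equal numbers of each group label,
    shuffle them, then draw one per enrolled patient in order."""
    rng = random.Random(seed)
    envelopes = [g for g in groups for _ in range(n_per_group)]
    rng.shuffle(envelopes)
    return envelopes

# 70 envelopes, 35 per group, as in the study design.
allocation = sealed_envelope_allocation(35, seed=42)
```

Because the pool is fixed in advance, this guarantees exactly balanced groups, unlike a per-patient coin flip, which only balances in expectation.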
Selection dynamic of Escherichia coli host in M13 combinatorial peptide phage display libraries.
Zanconato, Stefano; Minervini, Giovanni; Poli, Irene; De Lucrezia, Davide
2011-01-01
Phage display relies on an iterative cycle of selection and amplification of random combinatorial libraries to enrich the initial population for those peptides that satisfy a priori chosen criteria. The effectiveness of any phage display protocol depends directly on library amino acid sequence diversity and the strength of the selection procedure. In this study we monitored the dynamics of the selective pressure exerted by the host organism on a random peptide library in the absence of any additional selection pressure. The results indicate that sequence censorship exerted by Escherichia coli dramatically reduces library diversity and can significantly impair phage display effectiveness.
The development of selective attention and inhibition in NICU graduates during the preschool years.
Kittler, Phyllis M; Gardner, Judith M; Lennon, Elizabeth M; Flory, Michael J; Mayes, Linda C; Karmel, Bernard Z
2011-01-01
Neonatal intensive care unit (NICU) graduates have a higher incidence of attention problems, including attention deficit hyperactivity disorder (ADHD). Thus, we examined the effect of risk factors (birth weight (BW), central nervous system (CNS) injury, gender, maternal education) on attention/inhibition during reaction time, continuous performance and Go/No-Go tasks at 42, 51, and 60 months (n = 271). Very low BW NICU graduates (<1,500 g) performed worse than typical BW ones (>2,500 g), displaying poorer target/non-target discrimination. Males responded faster than females, but made more false alarms and random responses. Despite the short duration of the tasks, attention waned. Performance improved with age, but even at 60 months children had difficulty inhibiting random responding.
Valenzuela, Carlos Y
2013-01-01
The Neutral Theory of Evolution (NTE) proposes mutation and random genetic drift as the most important evolutionary factors. The most conspicuous feature of evolution is the genomic stability during paleontological eras and lack of variation among taxa; 98% or more of nucleotide sites are monomorphic within a species. NTE explains this homology by random fixation of neutral bases and negative selection (purifying selection) that does not contribute either to evolution or polymorphisms. Purifying selection is insufficient to account for this evolutionary feature and the Nearly-Neutral Theory of Evolution (N-NTE) included negative selection with coefficients as low as mutation rate. These NTE and N-NTE propositions are thermodynamically (tendency to random distributions, second law), biotically (recurrent mutation), logically and mathematically (resilient equilibria instead of fixation by drift) untenable. Recurrent forward and backward mutation and random fluctuations of base frequencies alone in a site make life organization and fixations impossible. Drift is not a directional evolutionary factor, but a directional tendency of matter-energy processes (second law) which threatens the biotic organization. Drift cannot drive evolution. In a site, the mutation rates among bases and selection coefficients determine the resilient equilibrium frequency of bases that genetic drift cannot change. The expected neutral random interaction among nucleotides is zero; however, huge interactions and periodicities were found between bases of dinucleotides separated by 1, 2... and more than 1,000 sites. Every base is co-adapted with the whole genome. Neutralists found that neutral evolution is independent of population size (N); thus neutral evolution should be independent of drift, because drift effect is dependent upon N. Also, chromosome size and shape as well as protein size are far from random.
How preview space/time translates into preview cost/benefit for fixation durations during reading.
Kliegl, Reinhold; Hohenstein, Sven; Yan, Ming; McDonald, Scott A
2013-01-01
Eye-movement control during reading depends on foveal and parafoveal information. If the parafoveal preview of the next word is suppressed, reading is less efficient. A linear mixed model (LMM) reanalysis of McDonald (2006) confirmed his observation that preview benefit may be limited to parafoveal words that have been selected as the saccade target. Going beyond the original analyses, in the same LMM, we examined how the preview effect (i.e., the difference in single-fixation duration, SFD, between random-letter and identical preview) depends on the gaze duration on the pretarget word and on the amplitude of the saccade moving the eye onto the target word. There were two key results: (a) The shorter the saccade amplitude (i.e., the larger preview space), the shorter a subsequent SFD with an identical preview; this association was not observed with a random-letter preview. (b) However, the longer the gaze duration on the pretarget word, the longer the subsequent SFD on the target, with the difference between random-letter string and identical previews increasing with preview time. A third pattern-increasing cost of a random-letter string in the parafovea associated with shorter saccade amplitudes-was observed for target gaze durations. Thus, LMMs revealed that preview effects, which are typically summarized under "preview benefit", are a complex mixture of preview cost and preview benefit and vary with preview space and preview time. The consequence for reading is that parafoveal preview may not only facilitate, but also interfere with lexical access.
Kloeckner, Roman; Ruckes, Christian; Kronfeld, Kai; Wörns, Marcus Alexander; Weinmann, Arndt; Galle, Peter Robert; Lang, Hauke; Otto, Gerd; Eichhorn, Waltraud; Schreckenberger, Mathias; Dueber, Christoph; Pitton, Michael Bernhard
2014-08-06
Cholangiocellular carcinoma is the second most common primary liver cancer after hepatocellular carcinoma. Over the last 30 years, the incidence of intrahepatic cholangiocellular carcinoma has risen continuously worldwide. Meanwhile, the intrahepatic cholangiocellular carcinoma has become more common than the extrahepatic growth type and currently accounts for 10-15% of all primary hepatic malignancies. Intrahepatic cholangiocellular carcinoma is typically diagnosed in advanced stages due to late clinical symptoms and an absence of classic risk factors. A late diagnosis precludes curative surgical resection. There is evidence that transarterial chemoembolization leads to better local tumor control and prolongs survival compared to systemic chemotherapy. New data indicates that selective internal radiotherapy, also referred to as radioembolization, provides promising results for treating intrahepatic cholangiocellular carcinoma. This pilot study is a randomized, controlled, single center, phase II trial. Twenty-four patients with intrahepatic cholangiocellular carcinoma will be randomized in a 1:1 ratio to receive either chemoembolization or radioembolization. Randomization will be stratified according to tumor load. Progression-free survival is the primary endpoint; overall survival and time to progression are secondary endpoints. To evaluate treatment success, patients will receive contrast enhanced magnetic resonance imaging every 3 months. Currently, chemoembolization is routinely performed in many centers instead of systemic chemotherapy for treating intrahepatic cholangiocellular carcinoma confined to the liver. Recently, radioembolization has been increasingly applied to cholangiocellular carcinoma as second line therapy after TACE failure or even as an alternative first line therapy. Nonetheless, no randomized studies have compared radioembolization and chemoembolization. 
Considering all this background information, we recognized a strong need for a randomized controlled trial (RCT) to compare the two treatments. Therefore, the present protocol describes the design of a RCT that compares SIRT and TACE as the first line therapy for inoperable CCC confined to the liver. ClinicalTrials.gov, Identifier: NCT01798147, registered 16th of February 2013.
Nergiz, Humeyra; Tabur, Mehmet Ali; Ayvaz, Yusuf
2013-08-01
Diurnal time-activity budgets of White-headed Ducks were investigated with respect to sex and temporal environmental variables to document behavioral responses to winter conditions and nutritional requirements at Burdur Lake, where the largest winter concentrations occur. Behaviors of males and females were recorded separately in randomly selected focal flocks during 1140 sessions. For the entire population, a large proportion of time was spent resting. During the day the ducks spent 61% of their time resting, 22% feeding, 12% in comfort behavior, and 5% in locomotion. Resting peaked in the middle of the day, while feeding was observed most frequently in the evening and morning. Time use did not differ significantly between sexes; however, more time was spent feeding on windy days as wave height increased.
Cooperation and charity in spatial public goods game under different strategy update rules
NASA Astrophysics Data System (ADS)
Li, Yixiao; Jin, Xiaogang; Su, Xianchuang; Kong, Fansheng; Peng, Chengbin
2010-03-01
Human cooperation can be influenced by other human behaviors, and recent years have witnessed flourishing study of the coevolution of cooperation and punishment, yet the common behavior of charity is seldom considered in game-theoretical models. In this article, we investigate the coevolution of altruistic cooperation and egalitarian charity in the spatial public goods game, treating charity as the behavior of reducing inter-individual payoff differences. In our model, in each generation of the evolution, individuals first play games and accumulate payoff benefits, and then each egalitarian makes a charity donation by payoff transfer in its neighborhood. To study the individual-level evolutionary dynamics, we adopt different strategy update rules and investigate their effects on charity and cooperation. These rules fall into two global classes: a random selection rule, in which individuals randomly update strategies, and a threshold selection rule, where only those with payoffs below a threshold update strategies. Simulation results show that random selection enhances the cooperation level, while threshold selection lowers the threshold of the multiplication factor needed to maintain cooperation. When charity is considered, it is incapable of promoting cooperation under random selection, whereas it promotes cooperation under threshold selection. Interestingly, the evolution of charity strongly depends on the dispersion of payoff acquisitions in the population, which agrees with previous results. Our work may shed light on understanding human egalitarianism.
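A minimal sketch of the two update rules on a ring-shaped public goods game follows. This illustrates the rules' logic only; the group structure, parameters, and function names are simplifications, not the authors' full spatial model:

```python
import random

def pgg_payoffs(strategies, r=3.0, cost=1.0):
    """One public-goods round on a ring: each player centers a group with
    its two neighbors; cooperators (strategy 1) pay `cost` into a pot
    that is multiplied by r and shared equally within the group."""
    n = len(strategies)
    payoffs = [0.0] * n
    for i in range(n):
        group = [i, (i - 1) % n, (i + 1) % n]
        pot = r * cost * sum(strategies[j] for j in group)
        share = pot / len(group)
        for j in group:
            payoffs[j] += share - (cost if strategies[j] else 0)
    return payoffs

def update(strategies, payoffs, rule, threshold=0.0, rng=random):
    """Random selection: every player imitates a random neighbor.
    Threshold selection: only players with payoff below `threshold` do."""
    n = len(strategies)
    new = list(strategies)
    for i in range(n):
        if rule == "threshold" and payoffs[i] >= threshold:
            continue
        j = rng.choice([(i - 1) % n, (i + 1) % n])
        new[i] = strategies[j]
    return new

strategies = [1, 0] * 10                      # alternating cooperators/defectors
payoffs = pgg_payoffs(strategies)
# With the threshold set at the minimum payoff, nobody is below it,
# so threshold selection leaves every strategy unchanged.
frozen = update(strategies, payoffs, "threshold", threshold=min(payoffs))
```

The contrast the abstract draws falls out of `update`: under the random rule every site churns each generation, while under the threshold rule only the worst-off sites revise, which is what makes the dynamics sensitive to payoff dispersion.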
Adam, Asrul; Shapiai, Mohd Ibrahim; Tumari, Mohd Zaidi Mohd; Mohamad, Mohd Saberi; Mubin, Marizan
2014-01-01
Electroencephalogram (EEG) signal peak detection is widely used in clinical applications. The peak point can be detected using several approaches, including time, frequency, time-frequency, and nonlinear domains, depending on various peak features from several models. However, no study has established the importance of each peak feature in contributing to a good and generalized model. In this study, feature selection and classifier parameter estimation based on particle swarm optimization (PSO) are proposed as a framework for peak detection on EEG signals in time-domain analysis. Two versions of PSO are used in the study: (1) standard PSO and (2) random asynchronous particle swarm optimization (RA-PSO). The proposed framework tries to find the best combination of all the available features that offers good peak detection and a high classification rate in the conducted experiments. The evaluation results indicate that the accuracy of the peak detection can be improved up to 99.90% and 98.59% for training and testing, respectively, as compared to the framework without feature selection adaptation. Additionally, the proposed framework based on RA-PSO offers a better and more reliable classification rate as compared to standard PSO, as it produces a low-variance model.
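Standard PSO, one of the two variants in the framework, can be sketched minimally as below. The sphere objective stands in for the real feature-selection fitness, and all coefficient values are illustrative defaults, not those of the study:

```python
import random

def pso(f, dim=2, n_particles=20, iters=200, seed=1):
    """Minimal standard PSO minimizing f over [-5, 5]^dim."""
    rng = random.Random(seed)
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia, cognitive, social weights
    xs = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [list(x) for x in xs]           # personal best positions
    pbest_f = [f(x) for x in xs]
    g = list(pbest[pbest_f.index(min(pbest_f))])  # global best position
    for _ in range(iters):
        for i, x in enumerate(xs):
            for d in range(dim):
                vs[i][d] = (w * vs[i][d]
                            + c1 * rng.random() * (pbest[i][d] - x[d])
                            + c2 * rng.random() * (g[d] - x[d]))
                x[d] += vs[i][d]
            fx = f(x)
            if fx < pbest_f[i]:
                pbest[i], pbest_f[i] = list(x), fx
                if fx < f(g):
                    g = list(x)
    return g, f(g)

best_x, best_f = pso(lambda x: sum(t * t for t in x))
```

In the asynchronous (RA-PSO) variant, the global best is updated in a random particle order within each iteration rather than after a full synchronous sweep; the velocity and position updates are the same.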
Advanced reliability methods for structural evaluation
NASA Technical Reports Server (NTRS)
Wirsching, P. H.; Wu, Y.-T.
1985-01-01
Fast probability integration (FPI) methods, which can yield approximate solutions to such general structural reliability problems as the computation of the probabilities of complicated functions of random variables, are known to require one-tenth the computer time of Monte Carlo methods for a probability level of 0.001; lower probabilities yield even more dramatic differences. A strategy is presented in which a computer routine is run k times with selected perturbed values of the variables to obtain k solutions for a response variable Y. An approximating polynomial is fit to the k 'data' sets, and FPI methods are employed for this explicit form.
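For contrast, the Monte Carlo baseline that FPI is compared against can be sketched as a crude estimator of P(g(X) < 0). The normal strength/load limit state g = R − S below is an illustrative choice, not from the paper:

```python
import random

def mc_failure_prob(g, sample, n=100_000, seed=0):
    """Crude Monte Carlo estimate of the failure probability P(g(X) < 0)."""
    rng = random.Random(seed)
    fails = sum(1 for _ in range(n) if g(sample(rng)) < 0)
    return fails / n

# Limit state g = R - S with strength R ~ N(10, 1) and load S ~ N(7, 1);
# failure occurs when the load exceeds the strength.
p = mc_failure_prob(
    g=lambda x: x[0] - x[1],
    sample=lambda rng: (rng.gauss(10, 1), rng.gauss(7, 1)),
)
```

The cost scaling the abstract mentions is visible here: resolving a probability near 0.001 needs on the order of 10^5 to 10^6 samples, and each factor-of-ten smaller probability needs roughly ten times more, which is what makes approximate FPI methods attractive.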
Communication methods, systems, apparatus, and devices involving RF tag registration
Burghard, Brion J [W. Richland, WA; Skorpik, James R [Kennewick, WA
2008-04-22
One technique of the present invention includes a number of Radio Frequency (RF) tags that each have a different identifier. Information is broadcast to the tags from an RF tag interrogator. This information corresponds to a maximum quantity of tag response time slots that are available. This maximum quantity may be less than the total number of tags. The tags each select one of the time slots as a function of the information and a random number provided by each respective tag. The different identifiers are transmitted to the interrogator from at least a subset of the RF tags.
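The slot-selection step can be sketched as below. The names and the collision bookkeeping are illustrative, not from the patent; the point is only that each tag independently maps its random number into one of the advertised slots:

```python
import random

def choose_slots(tag_ids, n_slots, seed=0):
    """Each tag independently picks one of the advertised response slots."""
    rng = random.Random(seed)
    return {tag: rng.randrange(n_slots) for tag in tag_ids}

def singleton_slots(assignment):
    """Slots chosen by exactly one tag: those tags respond collision-free."""
    counts = {}
    for slot in assignment.values():
        counts[slot] = counts.get(slot, 0) + 1
    return {s for s, c in counts.items() if c == 1}

# More tags than slots, as the patent allows: some collisions are expected,
# and colliding tags would retry in a later interrogation round.
tags = [f"tag-{i}" for i in range(12)]
assignment = choose_slots(tags, n_slots=8)
clear = singleton_slots(assignment)
```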
Campbell, Rebecca; Pierce, Steven J; Sharma, Dhruv B; Shaw, Jessica; Feeney, Hannah; Nye, Jeffrey; Schelling, Kristin; Fehler-Cabral, Giannina
2017-01-01
A growing number of U.S. cities have large numbers of untested sexual assault kits (SAKs) in police property facilities. Testing older kits and maintaining current case work will be challenging for forensic laboratories, creating a need for more efficient testing methods. We evaluated selective degradation methods for DNA extraction using actual case work from a sample of previously unsubmitted SAKs in Detroit, Michigan. We randomly assigned 350 kits to either standard or selective degradation testing methods and then compared DNA testing rates and CODIS entry rates between the two groups. Continuation-ratio modeling showed no significant differences, indicating that the selective degradation method had no decrement in performance relative to customary methods. Follow-up equivalence tests indicated that CODIS entry rates for the two methods could differ by more than ±5%. Selective degradation methods required less personnel time for testing and scientific review than standard testing.
Accuracy of genomic selection in European maize elite breeding populations.
Zhao, Yusheng; Gowda, Manje; Liu, Wenxin; Würschum, Tobias; Maurer, Hans P; Longin, Friedrich H; Ranc, Nicolas; Reif, Jochen C
2012-03-01
Genomic selection is a promising breeding strategy for rapid improvement of complex traits. The objective of our study was to investigate the prediction accuracy of genomic breeding values through cross validation. The study was based on experimental data of six segregating populations from a half-diallel mating design with 788 testcross progenies from an elite maize breeding program. The plants were intensively phenotyped in multi-location field trials and fingerprinted with 960 SNP markers. We used random regression best linear unbiased prediction in combination with fivefold cross validation. The prediction accuracy across populations was higher for grain moisture (0.90) than for grain yield (0.58). The accuracy of genomic selection realized for grain yield corresponds to the precision of phenotyping in unreplicated field trials at 3-4 locations. As up to three generations per year are feasible in maize, selection gain per unit time is high and, consequently, genomic selection holds great promise for maize breeding programs.
Bondalapati, Somasekhar; Ruvinov, Emil; Kryukov, Olga; Cohen, Smadar; Brik, Ashraf
2014-09-15
Polysaccharides have emerged as important functional materials because of their unique properties such as biocompatibility, biodegradability, and availability of reactive sites for chemical modifications to optimize their properties. The overwhelming majority of methods to modify polysaccharides employ random chemical modifications, which often improve certain properties while compromising others. On the other hand, the employed methods for selective modification often require an excess of coupling partners and long reaction times, and are limited in their scope and wide applicability. To circumvent these drawbacks, aniline-catalyzed oxime formation is developed for selective modification of a variety of polysaccharides through their reducing end. Notably, it is found that efficient oxime formation requires different conditions depending on the composition of the specific polysaccharide. It is also shown how our strategy can be applied to improve the physical and functional properties of alginate hydrogels, which are widely used in tissue engineering and regenerative medicine applications. While the randomly and selectively modified alginates exhibit similar viscoelastic properties, the latter forms a significantly more stable hydrogel with superior cell-adhesive and functional properties. Our results show that the developed conjugation reaction is robust and should open new opportunities for preparing polysaccharide-based functional materials with unique properties.
Mazurowski, Maciej A; Zurada, Jacek M; Tourassi, Georgia D
2009-07-01
Ensemble classifiers have been shown to be efficient in multiple applications. In this article, the authors explore the effectiveness of ensemble classifiers in a case-based computer-aided diagnosis system for detection of masses in mammograms. They evaluate two general ways of constructing subclassifiers by resampling the available development dataset: random division and random selection. Furthermore, they discuss the problem of selecting the ensemble size and propose two adaptive incremental techniques that automatically select the size for the problem at hand. All the techniques are evaluated with respect to a previously proposed information-theoretic CAD system (IT-CAD). The experimental results show that the examined ensemble techniques provide a statistically significant improvement (AUC = 0.905 +/- 0.024) in performance as compared to the original IT-CAD system (AUC = 0.865 +/- 0.029). Some of the techniques allow for a notable reduction in the total number of examples stored in the case base (to 1.3% of the original size), which, in turn, results in lower storage requirements and a shorter response time of the system. Among the methods examined in this article, the two proposed adaptive techniques are by far the most effective for this purpose. Furthermore, the authors provide some discussion and guidance for choosing the ensemble parameters.
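The two resampling schemes, random division (a disjoint partition of the case base) and random selection (overlapping fixed-size draws), can be sketched as follows. Function names are illustrative, and a real subclassifier would then be trained on each subset:

```python
import random

def random_division(cases, n_sub, seed=0):
    """Partition the case base into n_sub disjoint subsets."""
    rng = random.Random(seed)
    shuffled = cases[:]
    rng.shuffle(shuffled)
    return [shuffled[i::n_sub] for i in range(n_sub)]

def random_selection(cases, n_sub, size, seed=0):
    """Draw n_sub possibly overlapping subsets of a fixed size
    (sampling without replacement within each subset)."""
    rng = random.Random(seed)
    return [rng.sample(cases, size) for _ in range(n_sub)]

cases = list(range(100))
parts = random_division(cases, 5)        # 5 disjoint subsets covering all cases
picks = random_selection(cases, 5, 30)   # 5 overlapping subsets of 30 cases
```

The trade-off is visible in the shapes: division gives each subclassifier less data but guarantees coverage of the whole case base, while selection lets the subset size be tuned independently of the ensemble size at the cost of overlap.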
Speier, P L; Mélèse-D'Hospital, I A; Tschann, J M; Moore, P J; Adler, N E
1997-01-01
To test the hypothesis that ego development would predict contraceptive use. Problems in ego development were defined in terms of three factors: (1) realism, (2) complexity, and (3) discontinuity. Forty-one respondents aged 14-17 years were selected from a group of 233 adolescents who were administered a projective pregnancy scenario and participated in a 12-month follow-up. Twenty of these adolescents were randomly selected from the group determined to be effective contraceptive users, while 21 were randomly selected from the group of poor contraceptors. A chi-square test revealed a significant association (p < .0005) between the composite ego maturity (EM) measure and contraceptive outcome (χ² = 13.82, df = 1). Low scores on the ego maturity measure predicted poor contraceptive use. EM was unrelated to age but was associated with race (χ² = 7.535, .025 < p < .05). However, EM predicted contraceptive use when controlling for the effects of race. A simple, time-efficient projective pregnancy scenario is an effective way of identifying adolescent females at risk for poor contraceptive effectiveness and, therefore, untimely pregnancy. These stories are analyzed using factors related to the ego development of the adolescent. Subjects who scored lower on this measure had poor contraceptive effectiveness, while subjects with higher levels demonstrated effective contraceptive use at 1-year follow-up.
Assessment of pharmaceutical waste management at selected hospitals and homes in Ghana.
Sasu, Samuel; Kümmerer, Klaus; Kranert, Martin
2012-06-01
The use and disposal practices for pharmaceutical waste compromise the safety of the environment and represent a serious health risk, as pharmaceuticals may accumulate and stay active for a long time in the aquatic environment. This article therefore presents the outcome of a study on pharmaceutical waste management practices at homes and hospitals in Ghana. The study was conducted at five randomly selected healthcare institutions in Ghana, namely two teaching hospitals (hospital A, hospital B), one regional hospital (hospital C), one district hospital (hospital D) and one quasi-governmental hospital (hospital E). Apart from hospital E, which currently has a pharmaceutical waste separation programme as well as a drug return programme called DUMP (Disposal of Unused Medicines Program), none of the hospitals visited has any separate collection and disposal programme for pharmaceutical waste. A survey was also carried out among the general public, involving the questioning of randomly selected participants in order to investigate the household disposal of unused and expired pharmaceuticals. The results from the survey showed that more than half of the respondents confirmed having unused, left-over or expired medicines at home and over 75% disposed of pharmaceutical waste through the normal waste bins, which end up in landfills or dump sites.
Le, Trang T; Simmons, W Kyle; Misaki, Masaya; Bodurka, Jerzy; White, Bill C; Savitz, Jonathan; McKinney, Brett A
2017-09-15
Classification of individuals into disease or clinical categories from high-dimensional biological data with low prediction error is an important challenge of statistical learning in bioinformatics. Feature selection can improve classification accuracy but must be incorporated carefully into cross-validation to avoid overfitting. Recently, feature selection methods based on differential privacy, such as differentially private random forests and reusable holdout sets, have been proposed. However, for domains such as bioinformatics, where the number of features is much larger than the number of observations (p ≫ n), these differential privacy methods are susceptible to overfitting. We introduce private Evaporative Cooling, a stochastic privacy-preserving machine learning algorithm that uses Relief-F for feature selection and random forest for privacy-preserving classification while also preventing overfitting. We relate the privacy-preserving threshold mechanism to a thermodynamic Maxwell-Boltzmann distribution, where the temperature represents the privacy threshold. We use the thermal statistical physics concept of evaporative cooling of atomic gases to perform backward stepwise privacy-preserving feature selection. On simulated data with main effects and statistical interactions, we compare accuracies on holdout and validation sets for three privacy-preserving methods: the reusable holdout, reusable holdout with random forest, and private Evaporative Cooling, which uses Relief-F feature selection and random forest classification. In simulations where interactions exist between attributes, private Evaporative Cooling provides higher classification accuracy without overfitting based on an independent validation set. In simulations without interactions, thresholdout with random forest and private Evaporative Cooling give comparable accuracies. We also apply these privacy methods to human brain resting-state fMRI data from a study of major depressive disorder.
Code available at http://insilico.utulsa.edu/software/privateEC. brett-mckinney@utulsa.edu. Supplementary data are available at Bioinformatics online.
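Relief-F, the feature selector inside private Evaporative Cooling, scores a feature by how well it separates each sample from its nearest opposite-class neighbor versus its nearest same-class neighbor. A minimal binary-Relief sketch on toy data where only feature 0 carries signal; this illustrates the scoring idea only, not the authors' privacy-preserving pipeline:

```python
import random

def relief_weights(X, y, n_iter=100, seed=0):
    """Binary Relief: reward features that differ toward the nearest miss
    (opposite class) and penalize those that differ toward the nearest hit."""
    rng = random.Random(seed)
    n_feat = len(X[0])
    w = [0.0] * n_feat

    def dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

    for _ in range(n_iter):
        i = rng.randrange(len(X))
        hits = [j for j in range(len(X)) if j != i and y[j] == y[i]]
        misses = [j for j in range(len(X)) if y[j] != y[i]]
        h = min(hits, key=lambda j: dist(X[i], X[j]))
        m = min(misses, key=lambda j: dist(X[i], X[j]))
        for d in range(n_feat):
            w[d] += abs(X[i][d] - X[m][d]) - abs(X[i][d] - X[h][d])
    return w

# Feature 0 determines the class; feature 1 is pure noise.
rng = random.Random(1)
X = [[rng.random(), rng.random()] for _ in range(60)]
y = [int(x[0] > 0.5) for x in X]
w = relief_weights(X, y)
```

Because the comparison is against nearest neighbors rather than marginal statistics, Relief-style scores can detect features that only matter through interactions, which is why the abstract stresses the interaction simulations.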
Engelen, Lina; Bundy, Anita C; Bauman, Adrian; Naughton, Geraldine; Wyver, Shirley; Baur, Louise
2015-01-01
Children can spend substantial amounts of leisure time in sedentary activities, dominated by TV/screen time. However, objective real-time measurement of activities after school among young school children is seldom described. School children (n = 246, 5-7 years old, mean 6.0) and their parents were recruited by random selection from 14 schools across Sydney, Australia. Parents used a real-time objective measure (Experience Sampling Method, ESM) to record children's activities and whether they were indoors or outdoors at 3 random times each day after school. Data were collected across 4 weekdays in 1 week and then, 13 weeks later, another 4 weekdays in 1 week. Results, based on 2940 responses from 214 child-parent dyads, showed that 25% of behavior involved physical activity, 51% was spent in sedentary activities, and 22% was TV/screen time. Most instances (81%) occurred indoors. Despite a high proportion of TV/screen time, children were also engaged in a range of other sedentary and physically active pursuits after school. Hence TV/screen time is not a suitable proxy for all sedentary behavior, and it is important to gather information on other non-screen-based sedentary and physically active behaviors. Future research is warranted to further investigate after-school activities in young primary school children.
Shen, Chong; Li, Jie; Zhang, Xiaoming; Shi, Yunbo; Tang, Jun; Cao, Huiliang; Liu, Jun
2016-01-01
The different noise components in a dual-mass micro-electromechanical system (MEMS) gyroscope structure are analyzed in this paper, including mechanical-thermal noise (MTN), electronic-thermal noise (ETN), flicker noise (FN) and Coriolis signal in-phase noise (IPN). The structure's equivalent electronic model is established, and an improved white Gaussian noise reduction method for dual-mass MEMS gyroscopes is proposed, based on sample entropy empirical mode decomposition (SEEMD) and time-frequency peak filtering (TFPF). There is an inherent trade-off in TFPF: selecting a short window length preserves the signal amplitude well but reduces random noise poorly, whereas selecting a long window length reduces random noise effectively but seriously attenuates the signal amplitude. To achieve a good tradeoff between valid signal amplitude preservation and random noise reduction, SEEMD is adopted to improve TFPF. First, the original signal is decomposed into intrinsic mode functions (IMFs) by EMD, and the sample entropy (SE) of each IMF is calculated in order to classify the numerous IMFs into three different components; then short-window TFPF is employed for the low-frequency IMF component, long-window TFPF is employed for the high-frequency IMF component, and the noise IMF component is discarded directly; finally, the de-noised signal is obtained after reconstruction. Rotation and temperature experiments were carried out to verify the proposed SEEMD-TFPF algorithm; the verification and comparison results show that the de-noising performance of SEEMD-TFPF is better than that achievable with traditional wavelet, Kalman filter and fixed-window-length TFPF methods. PMID:27258276
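The sample entropy statistic used here to sort IMFs into signal and noise components measures the regularity of a sequence: it is -ln(A/B), where B counts template pairs of length m that match within tolerance r and A counts the same for length m+1. A simplified sketch (template counts differ slightly from the canonical definition, and parameters m and r are the common defaults, not values from the paper):

```python
import math

def sample_entropy(x, m=2, r=0.2):
    """Sample entropy of sequence x: -ln(A/B). Lower values indicate a
    more regular (more predictable) signal, which is how SEEMD decides
    which IMFs carry structure and which are mostly noise."""
    n = len(x)

    def count_matches(length):
        # Count pairs of templates of the given length whose elementwise
        # difference stays within tolerance r.
        count = 0
        for i in range(n - length):
            for j in range(i + 1, n - length):
                if max(abs(x[i + k] - x[j + k]) for k in range(length)) <= r:
                    count += 1
        return count

    b = count_matches(m)
    a = count_matches(m + 1)
    return -math.log(a / b) if a > 0 and b > 0 else float("inf")
```

A perfectly periodic sequence scores near zero, while an irregular one scores much higher (or infinity when no longer templates match).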
Martinez, J C; Caprio, M A
2016-03-27
Recent detection of western corn rootworm resistance to Bt (Bacillus thuringiensis) corn prompted recommendations for the use of integrated pest management (IPM) with planting refuges to prolong the durability of Bt technologies. We conducted a simulation experiment exploring the effectiveness of various IPM tools at extending durability of pyramided Bt traits. Results indicate that some IPM practices have greater merits than others. Crop rotation was the most effective strategy, followed by increasing the non-Bt refuge size from 5 to 20%. Soil-applied insecticide use for Bt corn did not increase the durability compared with planting Bt with refuges alone, and both projected lower durabilities. When IPM participation with randomly selected management tools was increased at the time of Bt commercialization, durability of pyramided traits increased as well. When non-corn rootworm expressing corn was incorporated as an IPM option, the durability further increased. For corn rootworm, a local resistance phenomenon appeared immediately surrounding the resistant field (hotspot) and spread throughout the local neighborhood in six generations in the absence of mitigation. Hotspot mitigation with random selection of strategies was ineffective at slowing resistance, unless crop rotation occurred immediately; regional mitigation was superior to random mitigation in the hotspot and reduced observed resistance allele frequencies in the neighborhood. As resistance alleles of mobile pests can escape hotspots, the scope of mitigation should extend beyond resistant sites. In the case of widespread resistance, regional mitigation was less effective at prolonging the life of the pyramid than IPM with Bt deployment at the time of commercialization. Published by Oxford University Press on behalf of Entomological Society of America 2016. This work is written by US Government employees and is in the public domain in the United States.
Kushner, Pamela R; Snoddy, Andrew M; Gilderman, Larry; Peura, David A
2009-07-01
To investigate the efficacy and safety of a 14-day treatment period with lansoprazole 15 mg for frequent heartburn in patients who are likely to select a nonprescription medication before consulting a prescriber. Adults with untreated frequent heartburn ≥2 days a week over the past month were recruited for 2 identical multicenter, double-blind studies conducted with a 1-week screening and heartburn medication washout, a 1-week placebo run-in, a 2-week placebo-controlled treatment, and a 1-week placebo follow-up. After the washout and placebo run-in, subjects were randomly assigned to receive lansoprazole 15 mg or placebo once daily for 14 days in a double-blind fashion. Antacid tablets were permitted as rescue medication. Endpoints included percentage of 24-hour days without heartburn (primary), percentage of night-times without heartburn, and percentage of subjects without heartburn during day 1 of treatment (secondary endpoints). Data were collected daily via an interactive voice response system. In studies 1 and 2, 282 and 288 subjects, respectively, were randomly assigned to lansoprazole, and 282 in each study received placebo. The mean percentage of days without heartburn was greater among lansoprazole recipients compared with placebo recipients (P < 0.0001). Significantly more subjects treated with lansoprazole also reported no night-time heartburn and no heartburn during day 1 of the 14-day treatment. Adverse events were infrequent and were similar for lansoprazole and placebo groups. During the 14-day treatment period in a population with frequent heartburn who were likely to select a medication without consulting a prescriber, lansoprazole 15 mg once daily showed rapid and sustained effectiveness throughout a 24-hour period and was well tolerated.
Comparisons of population subgroups performance on a keyboard psychomotor task
NASA Technical Reports Server (NTRS)
Stapleford, R. L.
1973-01-01
Response time and pass/fail data were obtained from 163 subjects performing a psychomotor task. The basic task comprised a random five-digit number briefly displayed to the subject at the start of each trial, and a keyboard on which the subject was to enter the number as fast as he could accurately do so after the display was extinguished. Some tests were run with the addition of a secondary task which required the subject to respond to a displayed light appearing at a random time. Matched pairs of subjects were selected from the group to analyze the effects of age, sex, intelligence, prior keyboard skill, and drinking habits. There was little or no effect due to age or drinking habits. Differences in response time were: average IQ subjects faster than low IQ subjects by 0.5 to 0.6 sec; subjects with prior keyboard skill faster by 0.4 to 0.5 sec; and female subjects faster by 0.2 to 0.3 sec. These effects were generally insensitive to the presence of the secondary task.
Lin, Pei-Jung; Concannon, Thomas W; Greenberg, Dan; Cohen, Joshua T; Rossi, Gregory; Hille, Jeffrey; Auerbach, Hannah R; Fang, Chi-Hui; Nadler, Eric S; Neumann, Peter J
2013-08-01
To investigate the relationship between the framing of survival gains and the perceived value of cancer care. Through a population-based survey of 2040 US adults, respondents were randomized to one of the two sets of hypothetical scenarios, each of which described the survival benefit for a new treatment as either an increase in median survival time (median survival), or an increase in the probability of survival for a given length of time (landmark survival). Each respondent was presented with two randomly selected scenarios with different prognosis and survival improvements, and asked about their willingness to pay (WTP) for the new treatments. Predicted WTP increased with survival benefits and respondents' income, regardless of how survival benefits were described. Framing therapeutic benefits as improvements in landmark rather than median time survival increased the proportion of the population willing to pay for that gain by 11-35%, and the mean WTP amount by 42-72% in the scenarios we compared. How survival benefits are described may influence the value people place on cancer care.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Johnson, Traci L.; Sharon, Keren, E-mail: tljohn@umich.edu
Until now, systematic errors in strong gravitational lens modeling have been acknowledged but have never been fully quantified. Here, we launch an investigation into the systematics induced by constraint selection. We model the simulated cluster Ares 362 times using random selections of image systems with and without spectroscopic redshifts and quantify the systematics using several diagnostics: image predictability, accuracy of model-predicted redshifts, enclosed mass, and magnification. We find that for models with >15 image systems, the image plane rms does not decrease significantly when more systems are added; however, the rms values quoted in the literature may be misleading as to the ability of a model to predict new multiple images. The mass is well constrained near the Einstein radius in all cases, and systematic error drops to <2% for models using >10 image systems. Magnification errors are smallest along the straight portions of the critical curve, and the value of the magnification is systematically lower near curved portions. For >15 systems, the systematic error on magnification is ∼2%. We report no trend in magnification error with the fraction of spectroscopic image systems when selecting constraints at random; however, when using the same selection of constraints, increasing this fraction up to ∼0.5 will increase model accuracy. The results suggest that the selection of constraints, rather than quantity alone, determines the accuracy of the magnification. We note that spectroscopic follow-up of at least a few image systems is crucial because models without any spectroscopic redshifts are inaccurate across all of our diagnostics.
Selecting at-risk populations for sexually transmitted disease/HIV intervention studies.
Wu, Zunyou; Rotheram-Borus, Mary Jane; Detels, Roger; Li, Li; Guan, Jihui; Liang, Guojun; Yap, Lorraine
2007-12-01
This paper describes one option to select populations for randomized, controlled trials (RCT). We used a popular opinion leader intervention in Fuzhou, China, to: (1) identify population selection criteria; (2) systematically examine the suitability of potential target populations and settings; (3) briefly evaluate risk and stability in the population; and (4) evaluate regional and organizational support among administrators and government officials. After comparing migrant villagers, truck drivers, factory workers, construction workers, and market employees in five regions of China, market employees in Fuzhou were identified as the optimal target population. Markets were the optimal sites for several reasons: (1) the population demonstrated a sufficient base rate of sexually transmitted diseases; (2) the population was stable over time; (3) a sufficient number of sites of manageable sizes were available; (4) stable networks existed; (5) local gatekeepers/stakeholders supported the intervention; (6) there was organizational capacity in the local health department to mount the intervention; (7) the demographic profile was similar across potential sites; and (8) the sites were sufficiently distanced to minimize contamination. Evaluating intervention efficacy in an RCT requires a time-consuming and rigorous process that systematically and routinely documents selection criteria, evaluates multiple populations, sites, and organizations for their appropriateness.
Skrivanek, Zachary; Berry, Scott; Berry, Don; Chien, Jenny; Geiger, Mary Jane; Anderson, James H.; Gaydos, Brenda
2012-01-01
Background: Dulaglutide (dula, LY2189265), a long-acting glucagon-like peptide-1 analog, is being developed to treat type 2 diabetes mellitus. Methods: To foster the development of dula, we designed a two-stage adaptive, dose-finding, inferentially seamless phase 2/3 study. The Bayesian theoretical framework is used to adaptively randomize patients in stage 1 to 7 dula doses and, at the decision point, to either stop for futility or to select up to 2 dula doses for stage 2. After dose selection, patients continue to be randomized to the selected dula doses or comparator arms. Data from patients assigned the selected doses will be pooled across both stages and analyzed with an analysis of covariance model, using baseline hemoglobin A1c and country as covariates. The operating characteristics of the trial were assessed by extensive simulation studies. Results: Simulations demonstrated that the adaptive design would identify the correct doses 88% of the time, compared to as low as 6% for a fixed-dose design (the latter value based on frequentist decision rules analogous to the Bayesian decision rules for adaptive design). Conclusions: This article discusses the decision rules used to select the dula dose(s); the mathematical details of the adaptive algorithm, including a description of the clinical utility index used to mathematically quantify the desirability of a dose based on safety and efficacy measurements; and a description of the simulation process and results that quantify the operating characteristics of the design. PMID:23294775
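The Bayesian adaptive randomization principle behind such designs can be illustrated with a Thompson-sampling sketch on a hypothetical binary response. The actual trial used dose-response models and a clinical utility index, so this shows only the allocation idea, not the trial's algorithm:

```python
import random

def adaptive_randomize(successes, failures):
    """Thompson sampling across dose arms: draw one sample from each
    arm's Beta(1 + s, 1 + f) posterior and assign the next patient to
    the arm with the highest draw. Arms with better observed responses
    are randomized to more often as evidence accumulates."""
    draws = [random.betavariate(1 + s, 1 + f)
             for s, f in zip(successes, failures)]
    return max(range(len(draws)), key=draws.__getitem__)
```

With strongly separated arms the allocation concentrates almost entirely on the better dose, which is the behavior that lets a design drop futile doses early.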
Optimization Of Mean-Semivariance-Skewness Portfolio Selection Model In Fuzzy Random Environment
NASA Astrophysics Data System (ADS)
Chatterjee, Amitava; Bhattacharyya, Rupak; Mukherjee, Supratim; Kar, Samarjit
2010-10-01
The purpose of the paper is to construct a mean-semivariance-skewness portfolio selection model in a fuzzy random environment. The objective is to maximize the skewness with a predefined maximum risk tolerance and minimum expected return. Here the security returns in the objectives and constraints are assumed to be fuzzy random variables in nature, and the vagueness of the fuzzy random variables in the objectives and constraints is transformed into fuzzy variables similar to trapezoidal numbers. The newly formed fuzzy model is then converted into a deterministic optimization model. The feasibility and effectiveness of the proposed method are verified by a numerical example extracted from the Bombay Stock Exchange (BSE). The exact parameters of the fuzzy membership function and probability density function are obtained through fuzzy random simulation of historical data.
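The deterministic core of such a model, maximize skewness subject to return and risk constraints, can be sketched with a crisp (non-fuzzy) random search. This is a stand-in for the paper's fuzzy-random formulation: the return matrix, constraint values, and search procedure below are all illustrative assumptions.

```python
import random
import statistics

def skewness(xs):
    """Sample skewness: third standardized central moment."""
    mu = statistics.mean(xs)
    sd = statistics.pstdev(xs)
    return sum((v - mu) ** 3 for v in xs) / (len(xs) * sd ** 3)

def search_portfolio(returns, min_ret, max_var, n_trials=5000, seed=1):
    """Random search for long-only weights maximizing portfolio skewness
    subject to a minimum expected return and a maximum variance."""
    random.seed(seed)
    best_w, best_skew = None, float("-inf")
    for _ in range(n_trials):
        raw = [random.random() for _ in range(len(returns[0]))]
        total = sum(raw)
        w = [v / total for v in raw]          # weights sum to 1
        # historical portfolio returns under candidate weights
        port = [sum(wi * ri for wi, ri in zip(w, row)) for row in returns]
        if statistics.mean(port) >= min_ret and statistics.pvariance(port) <= max_var:
            s = skewness(port)
            if s > best_skew:
                best_w, best_skew = w, s
    return best_w, best_skew
```

A real implementation would replace the random search with a proper optimizer and the crisp returns with the fuzzified trapezoidal representation described in the abstract.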
Mindfulness meditation for insomnia: A meta-analysis of randomized controlled trials.
Gong, Hong; Ni, Chen-Xu; Liu, Yun-Zi; Zhang, Yi; Su, Wen-Jun; Lian, Yong-Jie; Peng, Wei; Jiang, Chun-Lei
2016-10-01
Insomnia is a widespread and debilitating condition that affects sleep quality and daily productivity. Although mindfulness meditation (MM) has been suggested as a potentially effective supplement to medical treatment for insomnia, no comprehensively quantitative research has been conducted in this field. Therefore, we performed a meta-analysis on the findings of related randomized controlled trials (RCTs) to evaluate the effects of MM on insomnia. Related publications in PubMed, EMBASE, the Cochrane Library and PsycINFO were searched up to July 2015. To calculate the standardized mean differences (SMDs) and 95% confidence intervals (CIs), we used a fixed effect model when heterogeneity was negligible and a random effect model when heterogeneity was significant. A total of 330 participants in 6 RCTs that met the selection criteria were included in this meta-analysis. Analysis of overall effect revealed that MM significantly improved total wake time and sleep quality, but had no significant effects on sleep onset latency, total sleep time, wake after sleep onset, sleep efficiency, ISI, PSQI and DBAS. Subgroup analyses showed that although there were no significant differences between MM and control groups in terms of total sleep time, significant effects were found in total wake time, sleep onset latency, sleep quality, sleep efficiency, and PSQI global score (absolute value of SMD range: 0.44-1.09, all p<0.05). The results suggest that MM may mildly improve some sleep parameters in patients with insomnia. MM can serve as an auxiliary treatment to medication for sleep complaints. Copyright © 2016 Elsevier Inc. All rights reserved.
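The fixed-effect pooling step used in such meta-analyses is inverse-variance weighting: each study's SMD is weighted by the reciprocal of its squared standard error. A minimal sketch (the SMD and SE values in the test are placeholders, not data from these trials):

```python
import math

def pool_fixed_effect(smds, ses, z=1.96):
    """Inverse-variance fixed-effect pooling of standardized mean
    differences: weight each study by 1/SE^2, then return the pooled
    SMD and its 95% confidence interval."""
    weights = [1.0 / se ** 2 for se in ses]
    pooled = sum(w * d for w, d in zip(weights, smds)) / sum(weights)
    se_pooled = math.sqrt(1.0 / sum(weights))
    return pooled, (pooled - z * se_pooled, pooled + z * se_pooled)
```

Under a random-effects model the weights would additionally incorporate a between-study variance component (e.g., a DerSimonian-Laird tau-squared), which widens the interval when heterogeneity is significant.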
Vrijheid, Martine; Deltour, Isabelle; Krewski, Daniel; Sanchez, Marie; Cardis, Elisabeth
2006-07-01
This paper examines the effects of systematic and random errors in recall and of selection bias in case-control studies of mobile phone use and cancer. These sensitivity analyses are based on Monte-Carlo computer simulations and were carried out within the INTERPHONE Study, an international collaborative case-control study in 13 countries. Recall error scenarios simulated plausible values of random and systematic, non-differential and differential recall errors in amount of mobile phone use reported by study subjects. Plausible values for the recall error were obtained from validation studies. Selection bias scenarios assumed varying selection probabilities for cases and controls, mobile phone users, and non-users. Where possible these selection probabilities were based on existing information from non-respondents in INTERPHONE. Simulations used exposure distributions based on existing INTERPHONE data and assumed varying levels of the true risk of brain cancer related to mobile phone use. Results suggest that random recall errors of plausible levels can lead to a large underestimation in the risk of brain cancer associated with mobile phone use. Random errors were found to have larger impact than plausible systematic errors. Differential errors in recall had very little additional impact in the presence of large random errors. Selection bias resulting from underselection of unexposed controls led to J-shaped exposure-response patterns, with risk apparently decreasing at low to moderate exposure levels. The present results, in conjunction with those of the validation studies conducted within the INTERPHONE study, will play an important role in the interpretation of existing and future case-control studies of mobile phone use and cancer risk, including the INTERPHONE study.
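The core Monte Carlo idea, that non-differential random recall error biases an observed association toward the null, can be demonstrated with a simplified binary-exposure simulation. All parameter values here are illustrative, not the INTERPHONE scenarios:

```python
import random

def observed_or(true_or=2.0, p0=0.05, flip=0.2, n=200000, seed=7):
    """Simulate a 2x2 exposure-disease table in which each subject's
    reported exposure flips with probability `flip` (non-differential
    random recall error) and return the observed odds ratio. The
    misclassification attenuates the observed OR toward 1."""
    random.seed(seed)
    odds0 = p0 / (1 - p0)                          # disease odds, unexposed
    p1 = true_or * odds0 / (1 + true_or * odds0)   # disease risk, exposed
    table = {(e, d): 0 for e in (0, 1) for d in (0, 1)}
    for _ in range(n):
        exposed = random.random() < 0.5
        diseased = random.random() < (p1 if exposed else p0)
        reported = (not exposed) if random.random() < flip else exposed
        table[(int(reported), int(diseased))] += 1
    return (table[(1, 1)] * table[(0, 0)]) / (table[(1, 0)] * table[(0, 1)])
```

Running with `flip=0.0` recovers the true odds ratio (up to sampling noise), while a 20% flip probability pulls the estimate noticeably toward 1, mirroring the underestimation the sensitivity analyses report.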
Lipman, Grant S; Sharp, Louis J; Christensen, Mark; Phillips, Caleb; DiTullio, Alexandra; Dalton, Andrew; Ng, Pearlly; Shangkuan, Jennifer; Shea, Katherine; Krabak, Brian J
2016-09-01
To determine whether paper tape prevents foot blisters in multistage ultramarathon runners. Multisite prospective randomized trial. The 2014 250-km (155-mile) 6-stage RacingThePlanet ultramarathons in Jordan, Gobi, Madagascar, and Atacama Deserts. One hundred twenty-eight participants were enrolled: 19 (15%) from the Jordan, 35 (27%) from Gobi, 21 (16%) from Madagascar, and 53 (41%) from the Atacama Desert. The mean age was 39.3 years (22-63) and body mass index was 24.2 kg/m² (17.4-35.1), with 31 (22.5%) females. Paper tape was applied to a randomly selected foot before the race, either to participants' blister-prone areas or to a randomly selected location if there was no blister history, with untaped areas of the same foot used as the control. Development of a blister anywhere on the study foot. One hundred six (83%) participants developed 117 blisters, with treatment success in 98 (77%) runners. Paper tape reduced blisters by 40% (P < 0.01, 95% confidence interval, 28-52) with a number needed to treat of 1.31. Most of the study participants had 1 blister (78%), with the most common locations on the toes (n = 58, 50%) and heel (n = 27, 23%), with 94 (80%) blisters occurring by the end of stage 2. Treatment success was associated with earlier stages [odds ratio (OR), 74.9, P < 0.01] and time spent running (OR, 0.66, P = 0.01). Paper tape was found to reduce both the incidence and frequency of foot blisters in runners.
Kassaian, Nazila; Aminorroaya, Ashraf; Feizi, Awat; Jafari, Parvaneh; Amini, Masoud
2017-03-29
The incidence of type 2 diabetes, cardiovascular diseases, and obesity has been rising dramatically; however, their pathogenesis is particularly intriguing. Recently, dysbiosis of the intestinal microbiota has emerged as a new candidate that may be linked to metabolic diseases. We hypothesize that selective modulation of the intestinal microbiota by probiotic or synbiotic supplementation may improve metabolic dysfunction and prevent diabetes in prediabetics. In this study, a synthesis and study of synbiotics will be carried out for the first time in Iran. In a randomized triple-blind controlled clinical trial, 120 adults with impaired glucose tolerance based on the inclusion criteria will be selected by a simple random sampling method and will be randomly allocated to 6 months of 6 g/d probiotic, synbiotic or placebo. The fecal abundance of bacteria, blood pressure, height, weight, and waist and hip circumferences will be measured at baseline and following treatment. Also, plasma lipid profiles, HbA1C, fasting plasma glucose, and insulin levels, will be measured and insulin resistance (HOMA-IR) and beta-cell function (HOMA-B) will be calculated at baseline and will be repeated at months 3, 6, 12, and 18. The data will be compared within and between groups using statistical methods. The results of this trial could contribute to the evidence-based clinical guidelines that address gut microbiota manipulation to maximize health benefits in prevention and management of metabolic syndrome in prediabetes. Iranian Registry of Clinical Trials: IRCT201511032321N2 . Registered on 27 February 2016.
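The HOMA-IR and HOMA-B indices that this protocol plans to calculate follow the standard homeostasis-model formulas (glucose in mmol/L, insulin in microU/mL); a small sketch:

```python
def homa_indices(fasting_glucose_mmol, fasting_insulin_mu):
    """Standard HOMA approximations:
    HOMA-IR = (glucose x insulin) / 22.5   (insulin resistance)
    HOMA-B  = (20 x insulin) / (glucose - 3.5)   (beta-cell function, %)
    Glucose in mmol/L, insulin in microU/mL."""
    homa_ir = fasting_glucose_mmol * fasting_insulin_mu / 22.5
    homa_b = 20.0 * fasting_insulin_mu / (fasting_glucose_mmol - 3.5)
    return homa_ir, homa_b
```

Note that HOMA-B is undefined for fasting glucose at or below 3.5 mmol/L; trial software would need to guard that edge case.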
NASA Astrophysics Data System (ADS)
Tehrany, M. Sh.; Jones, S.
2017-10-01
This paper explores the influence of the extent and density of the inventory data on the final outcomes. This study aimed to examine the impact of different formats and extents of the flood inventory data on the final susceptibility map. The extreme 2011 Brisbane flood event was used as the case study. A logistic regression (LR) model was applied using polygon and point formats of the inventory data. Random points of 1000, 700, 500, 300, 100 and 50 were selected and susceptibility mapping was undertaken using each group of random points. LR was selected to perform the modelling as it is a well-known algorithm in natural hazard modelling, owing to its interpretability, rapid processing time and accurate measurement approach. The resultant maps were assessed visually and statistically using the Area Under Curve (AUC) method. The prediction rates measured for susceptibility maps produced by the polygon format and by 1000, 700, 500, 300, 100 and 50 random points were 63 %, 76 %, 88 %, 80 %, 74 %, 71 % and 65 % respectively. Evidently, using the polygon format of the inventory data did not lead to reasonable outcomes. In the case of random points, raising the number of points increased the prediction rates, except for 1000 points. Hence, the minimum and maximum thresholds for the extent of the inventory must be set prior to the analysis. It is concluded that the extent and format of the inventory data are two of the influential components in the precision of the modelling.
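The workflow of fitting a logistic regression on inventory points and scoring the result with AUC can be sketched without GIS machinery; the conditioning factor, data, and gradient-descent fit below are synthetic stand-ins for the paper's RapidEye-era inputs:

```python
import math
import random

def fit_logistic(X, y, lr=0.1, epochs=500):
    """Plain stochastic-gradient logistic regression, a minimal stand-in
    for the LR model used for susceptibility mapping."""
    w = [0.0] * (len(X[0]) + 1)              # last entry is the intercept
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + w[-1]
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - yi                        # gradient of log-loss
            for j, xj in enumerate(xi):
                w[j] -= lr * g * xj
            w[-1] -= lr * g
    return w

def auc(scores, labels):
    """Area under the ROC curve via the Mann-Whitney rank identity,
    mirroring the AUC assessment applied to the susceptibility maps."""
    pos = [s for s, l in zip(scores, labels) if l]
    neg = [s for s, l in zip(scores, labels) if not l]
    wins = sum((p > q) + 0.5 * (p == q) for p in pos for q in neg)
    return wins / (len(pos) * len(neg))

# Synthetic stand-in for inventory points: one conditioning factor.
random.seed(1)
X = [[random.gauss(0, 1)] for _ in range(200)]
y = [1 if x[0] + random.gauss(0, 0.5) > 0 else 0 for x in X]
w = fit_logistic(X, y)
scores = [1.0 / (1.0 + math.exp(-(w[0] * x[0] + w[1]))) for x in X]
```

Repeating this with differently sized random subsets of the inventory points would reproduce the study's comparison of prediction rates across inventory extents.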
Q-nexus: a comprehensive and efficient analysis pipeline designed for ChIP-nexus.
Hansen, Peter; Hecht, Jochen; Ibn-Salem, Jonas; Menkuec, Benjamin S; Roskosch, Sebastian; Truss, Matthias; Robinson, Peter N
2016-11-04
ChIP-nexus, an extension of the ChIP-exo protocol, can be used to map the borders of protein-bound DNA sequences at nucleotide resolution, requires less input DNA and enables selective PCR duplicate removal using random barcodes. However, the use of random barcodes requires additional preprocessing of the mapping data, which complicates the computational analysis. To date, only a very limited number of software packages are available for the analysis of ChIP-exo data, and these have not yet been systematically tested and compared on ChIP-nexus data. Here, we present a comprehensive software package for ChIP-nexus data that exploits the random barcodes for selective removal of PCR duplicates and for quality control. Furthermore, we developed bespoke methods to estimate the width of the protected region resulting from protein-DNA binding and to infer binding positions from ChIP-nexus data. Finally, we applied our peak calling method as well as the two other methods MACE and MACS2 to the available ChIP-nexus data. The Q-nexus software is efficient and easy to use. Novel statistics on duplication rates that take the random barcodes into account are calculated. Our method for the estimation of the width of the protected region yields unbiased signatures that are highly reproducible for biological replicates and at the same time very specific for the respective factors analyzed. As judged by the irreproducible discovery rate (IDR), our peak calling algorithm shows substantially better reproducibility. An implementation of Q-nexus is available at http://charite.github.io/Q/ .
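The selective duplicate-removal idea is simple to state in code: two reads count as PCR duplicates only if they share both mapping coordinates and the random barcode, so identical positions carrying different barcodes survive as independent fragments. A minimal sketch (not Q-nexus's implementation):

```python
def dedupe_reads(reads):
    """Selective PCR-duplicate removal for barcode-tagged reads.
    Each read is a (chrom, pos, strand, barcode) tuple; only exact
    repeats of the full tuple are discarded, so reads at the same
    position with distinct random barcodes are retained."""
    seen, kept = set(), []
    for read in reads:
        if read not in seen:
            seen.add(read)
            kept.append(read)
    return kept
```

Position-only deduplication (the non-barcode approach) would collapse all three reads in the example below to one, discarding a genuine independent fragment.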
Garvin-Doxas, Kathy
2008-01-01
While researching student assumptions for the development of the Biology Concept Inventory (BCI; http://bioliteracy.net), we found that a wide class of student difficulties in molecular and evolutionary biology appears to be based on deep-seated, and often unaddressed, misconceptions about random processes. Data were based on more than 500 open-ended (primarily) college student responses, submitted online and analyzed through our Ed's Tools system, together with 28 thematic and think-aloud interviews with students, and the responses of students in introductory and advanced courses to questions on the BCI. Students believe that random processes are inefficient, whereas biological systems are very efficient. They are therefore quick to propose their own rational explanations for various processes, from diffusion to evolution. These rational explanations almost always make recourse to a driver, e.g., natural selection in evolution or concentration gradients in molecular biology, with the process taking place only when the driver is present, and ceasing when the driver is absent. For example, most students believe that diffusion only takes place when there is a concentration gradient, and that the mutational processes that change organisms occur only in response to natural selection pressures. An understanding that random processes take place all the time and can give rise to complex and often counterintuitive behaviors is almost totally absent. Even students who have had advanced or college physics, and can discuss diffusion correctly in that context, cannot make the transfer to biological processes, and passing through multiple conventional biology courses appears to have little effect on their underlying beliefs. PMID:18519614
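The misconception the study targets, that random processes need a "driver", can be countered with a minimal simulation: an unbiased random walk has no gradient pushing it anywhere, yet the particle cloud still spreads, with mean squared displacement growing roughly linearly in time. This is diffusion arising from randomness alone.

```python
import random

def mean_squared_displacement(n_particles=2000, n_steps=100, seed=3):
    """Unbiased 1-D random walk: each particle steps +1 or -1 with equal
    probability. No concentration gradient or 'driver' exists, yet the
    mean squared displacement grows ~linearly with time (MSD ~ t)."""
    random.seed(seed)
    positions = [0] * n_particles
    msd = []
    for _ in range(n_steps):
        positions = [p + random.choice((-1, 1)) for p in positions]
        msd.append(sum(p * p for p in positions) / n_particles)
    return msd
```

After 100 steps the theoretical MSD is 100; the empirical value lands close to that, showing spreading with no driving force present.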
Tian, Ting; McLachlan, Geoffrey J.; Dieters, Mark J.; Basford, Kaye E.
2015-01-01
It is a common occurrence in plant breeding programs to observe missing values in three-way three-mode multi-environment trial (MET) data. We proposed modifications of models for estimating missing observations for these data arrays, and developed a novel approach in terms of hierarchical clustering. Multiple imputation (MI) was used in four ways: multiple agglomerative hierarchical clustering, a normal distribution model, a normal regression model, and predictive mean matching. The latter three models used both Bayesian and non-Bayesian analysis, while the first approach used a clustering procedure with randomly selected attributes and assigned real values from the nearest neighbour to the entry with missing observations. Different proportions of data entries in six complete datasets were randomly selected to be missing, and the MI methods were compared based on the efficiency and accuracy of estimating those values. The results indicated that the models using Bayesian analysis had slightly higher estimation accuracy than those using non-Bayesian analysis, but they were more time-consuming. However, the novel approach of multiple agglomerative hierarchical clustering demonstrated the overall best performance. PMID:26689369
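The random-attribute nearest-neighbour idea behind the clustering approach can be sketched as follows. This is an illustration of the principle (choose a random subset of observed attributes, find the nearest complete row, copy its value), not the authors' implementation:

```python
import math
import random

def impute_nearest(data, seed=5):
    """For each row containing a missing entry (None), measure distance
    to complete rows on a randomly selected subset of the observed
    attributes, and fill the gap with the nearest neighbour's value.
    Repeating with different seeds yields multiple imputations."""
    random.seed(seed)
    complete = [row for row in data if None not in row]
    filled = []
    for row in data:
        if None not in row:
            filled.append(list(row))
            continue
        obs = [j for j, v in enumerate(row) if v is not None]
        subset = random.sample(obs, max(1, len(obs) // 2))
        nearest = min(complete, key=lambda c: math.dist(
            [row[j] for j in subset], [c[j] for j in subset]))
        filled.append([v if v is not None else nearest[j]
                       for j, v in enumerate(row)])
    return filled
```

Running the procedure several times with different random attribute subsets and pooling the results is what turns this into multiple, rather than single, imputation.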
NASA Astrophysics Data System (ADS)
Löw, Fabian; Schorcht, Gunther; Michel, Ulrich; Dech, Stefan; Conrad, Christopher
2012-10-01
Accurate crop identification and crop area estimation are important for studies on irrigated agricultural systems, yield and water demand modeling, and agrarian policy development. In this study a novel combination of Random Forest (RF) and Support Vector Machine (SVM) classifiers is presented that (i) enhances crop classification accuracy and (ii) provides spatial information on map uncertainty. The methodology was implemented over four distinct irrigated sites in Middle Asia using RapidEye time series data. The RF feature importance statistic was used as a feature-selection strategy for the SVM to assess possible negative effects on classification accuracy caused by an oversized feature space. The results of the individual RF and SVM classifications were combined with rules based on posterior classification probability and estimates of classification probability entropy. SVM classification performance was increased by feature selection through RF. Further experimental results indicate that the hybrid classifier improves overall classification accuracy in comparison to the single classifiers, as well as user's and producer's accuracy.
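The RF-then-SVM pipeline can be sketched with scikit-learn on synthetic data (not the RapidEye features): rank features by Random Forest importance, keep the top few, then train the SVM on the reduced feature space. Feature counts and thresholds here are illustrative assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 20))
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # only 2 of 20 features matter

# Step 1: RF importances as the feature-selection strategy.
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
top = np.argsort(rf.feature_importances_)[::-1][:5]   # keep 5 best

# Step 2: SVM trained on the reduced feature space; predicted class
# probabilities could then feed the uncertainty rules from the paper.
svm = SVC(probability=True, random_state=0).fit(X[:, top], y)
```

The posterior probabilities from `svm.predict_proba` are the kind of quantity the study's combination rules and entropy-based uncertainty maps are built from.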
Hultén, C; Frössling, J; Chenais, E; Sternberg Lewerin, S
2013-10-01
Sweden experienced its first outbreak of bluetongue virus (BTV) infection beginning in September 2008. Mandatory vaccination with an inactivated vaccine (BTVPUR Alsap8; Merial, Lyon, France) began 2 days after bluetongue was confirmed in the country. The aim of this study was to investigate whether the goal of 80% seroconversion by the susceptible population within the vaccination area was met during the initial phase of the Swedish vaccination campaign and whether there were discrepancies between subpopulations. Milk or blood samples were collected from 274 cattle randomly selected from the vaccinated population. Blood samples were also collected from ten ewes on each of 28 randomly selected vaccinated herds. The vaccination campaign in Sweden may be regarded as successful, as measured by apparent seroprevalence in the vaccinated population. The overall apparent seroprevalence was 77%, and in cattle, which constituted the majority of the susceptible population, the apparent seroprevalence was 82%. Factors that influenced the titres after vaccination were as follows: (i) the time span between vaccination and sampling and (ii) the age of the animals. © 2012 Blackwell Verlag GmbH.
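The apparent-seroprevalence estimate reported here is a simple binomial proportion; attaching a confidence interval is one line more. A minimal sketch using a Wald interval (an exact or Wilson interval would be preferable for proportions near 0 or 1, and the test numbers are illustrative, not the study's raw counts):

```python
import math

def seroprevalence_ci(positives, n, z=1.96):
    """Apparent seroprevalence with a Wald 95% confidence interval:
    p-hat +/- z * sqrt(p-hat * (1 - p-hat) / n)."""
    p = positives / n
    se = math.sqrt(p * (1 - p) / n)
    return p, (p - z * se, p + z * se)
```

True seroprevalence would additionally require adjusting the apparent value for the test's sensitivity and specificity.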
Tian, Ting; McLachlan, Geoffrey J; Dieters, Mark J; Basford, Kaye E
2015-01-01
It is a common occurrence in plant breeding programs to observe missing values in three-way three-mode multi-environment trial (MET) data. We proposed modifications of models for estimating missing observations in these data arrays and developed a novel approach based on hierarchical clustering. Multiple imputation (MI) was used in four ways: multiple agglomerative hierarchical clustering, a normal distribution model, a normal regression model, and predictive mean matching. The latter three models used both Bayesian and non-Bayesian analysis, while the first approach used a clustering procedure with randomly selected attributes and assigned real values from the nearest neighbour to the entity with missing observations. Different proportions of data entries in six complete datasets were randomly selected to be missing, and the MI methods were compared on the efficiency and accuracy of estimating those values. The results indicated that the models using Bayesian analysis had slightly higher estimation accuracy than those using non-Bayesian analysis, but they were more time-consuming. However, the novel approach of multiple agglomerative hierarchical clustering demonstrated the best overall performance.
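The nearest-neighbour assignment step at the heart of the clustering-based imputation can be sketched as follows. This is a simplified single-imputation analogue on a random matrix; the distance measure, data shapes, and missingness rate are assumptions, not the paper's procedure:

```python
import numpy as np

rng = np.random.default_rng(3)

# Complete matrix standing in for a flattened genotype x environment (MET)
# array, with roughly 10% of entries knocked out at random.
X = rng.normal(size=(30, 8))
X_miss = X.copy()
mask = rng.random(X.shape) < 0.1
X_miss[mask] = np.nan

def nn_impute(M):
    """Fill each row's missing entries from its nearest neighbour,
    with distance measured on the columns both rows have observed."""
    out = M.copy()
    for i in range(M.shape[0]):
        miss = np.isnan(M[i])
        if not miss.any():
            continue
        obs = ~miss
        d = np.full(M.shape[0], np.inf)
        for j in range(M.shape[0]):
            if j == i:
                continue
            both = obs & ~np.isnan(M[j])
            if both.any():
                # Mean squared distance over jointly observed columns.
                d[j] = ((M[i, both] - M[j, both]) ** 2).mean()
        nn = int(np.argmin(d))
        fill = miss & ~np.isnan(M[nn])
        out[i, fill] = M[nn, fill]
    return out

X_hat = nn_impute(X_miss)
```

Repeating this with randomly selected attribute subsets, as the abstract describes, would turn the single imputation into a multiple-imputation scheme whose between-draw variability reflects imputation uncertainty.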
Tehran Air Pollutants Prediction Based on Random Forest Feature Selection Method
NASA Astrophysics Data System (ADS)
Shamsoddini, A.; Aboodi, M. R.; Karami, J.
2017-09-01
Air pollution, as one of the most serious forms of environmental pollution, poses a huge threat to human life. Air pollution leads to environmental instability and has harmful and undesirable effects on the environment. Modern methods for predicting pollutant concentrations can improve decision making and provide appropriate solutions. This study examines the performance of Random Forest feature selection in combination with multiple linear regression and Multilayer Perceptron Artificial Neural Network methods, in order to achieve an efficient model to estimate carbon monoxide, nitrogen dioxide, sulfur dioxide, and PM2.5 contents in the air. The results indicated that Artificial Neural Networks fed by the attributes selected by the Random Forest feature selection method performed more accurately than the other models for all pollutants. The estimation accuracy for sulfur dioxide was lower than for the other air contaminants, whereas nitrogen dioxide was predicted more accurately than the other pollutants.
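The pipeline this abstract describes, Random Forest importances filtering the attributes fed to a neural network regressor, can be sketched as below. Synthetic data stand in for the meteorological and traffic predictors; the median importance threshold and network size are assumptions:

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.feature_selection import SelectFromModel
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for predictors of one pollutant's concentration.
X, y = make_regression(n_samples=300, n_features=25, n_informative=6,
                       noise=5.0, random_state=1)
y = (y - y.mean()) / y.std()  # standardize the target for the MLP

model = make_pipeline(
    StandardScaler(),
    # Keep only attributes the random forest ranks above median importance.
    SelectFromModel(RandomForestRegressor(n_estimators=100, random_state=1),
                    threshold="median"),
    MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=1),
)
model.fit(X, y)
r2 = model.score(X, y)  # training R^2 of the RF-filtered neural network
```

Swapping the final step for a linear regressor gives the multiple-linear-regression baseline the study compares against, with the same RF-selected attribute set.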
Good, Andrew C; Hermsmeier, Mark A
2007-01-01
Research into the advancement of computer-aided molecular design (CAMD) has a tendency to focus on the discipline of algorithm development. Such efforts are often wrought to the detriment of the data set selection and analysis used in said algorithm validation. Here we highlight the potential problems this can cause in the context of druglikeness classification. More rigorous efforts are applied to the selection of decoy (nondruglike) molecules from the ACD. Comparisons are made between model performance using the standard technique of random test set creation with test sets derived from explicit ontological separation by drug class. The dangers of viewing druglike space as sufficiently coherent to permit simple classification are highlighted. In addition the issues inherent in applying unfiltered data and random test set selection to (Q)SAR models utilizing large and supposedly heterogeneous databases are discussed.
Random graph models for dynamic networks
NASA Astrophysics Data System (ADS)
Zhang, Xiao; Moore, Cristopher; Newman, Mark E. J.
2017-10-01
Recent theoretical work on the modeling of network structure has focused primarily on networks that are static and unchanging, but many real-world networks change their structure over time. There exist natural generalizations to the dynamic case of many static network models, including the classic random graph, the configuration model, and the stochastic block model, where one assumes that the appearance and disappearance of edges are governed by continuous-time Markov processes with rate parameters that can depend on properties of the nodes. Here we give an introduction to this class of models, showing for instance how one can compute their equilibrium properties. We also demonstrate their use in data analysis and statistical inference, giving efficient algorithms for fitting them to observed network data using the method of maximum likelihood. This allows us, for example, to estimate the time constants of network evolution or infer community structure from temporal network data using cues embedded both in the probabilities over time that node pairs are connected by edges and in the characteristic dynamics of edge appearance and disappearance. We illustrate these methods with a selection of applications, both to computer-generated test networks and real-world examples.
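The edge dynamics described above, with appearance and disappearance governed by continuous-time Markov processes, can be illustrated by a small discrete-time simulation. Uniform rates across node pairs are an assumption made for brevity; the models in the paper allow rates to depend on node properties:

```python
import numpy as np

rng = np.random.default_rng(0)
n, lam, mu, dt, steps = 50, 0.2, 0.8, 0.1, 2000

# Each node pair is an independent two-state Markov process:
# absent edges appear at rate lam, present edges vanish at rate mu.
A = np.zeros((n, n), dtype=bool)
iu = np.triu_indices(n, k=1)

for _ in range(steps):
    e = A[iu]
    appear = (~e) & (rng.random(e.size) < lam * dt)
    vanish = e & (rng.random(e.size) < mu * dt)
    A[iu] = (e | appear) & ~vanish
A.T[iu] = A[iu]  # keep the adjacency matrix symmetric

# Equilibrium edge probability of the two-state chain: lam / (lam + mu).
empirical = A[iu].mean()
expected = lam / (lam + mu)
```

After a burn-in much longer than the mixing time 1/(lam + mu), the empirical edge density settles near lam/(lam + mu), which is the equilibrium property computed analytically for this class of models.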
The Effects of Social Capital Levels in Elementary Schools on Organizational Information Sharing
ERIC Educational Resources Information Center
Ekinci, Abdurrahman
2012-01-01
This study aims to assess the effects of social capital levels at elementary schools on organizational information sharing as reported by teachers. Participants were 267 teachers selected randomly from 16 elementary schools; the schools were themselves selected randomly from among 42 elementary schools located in the city center of Batman. The data were analyzed by…
ERIC Educational Resources Information Center
Rafferty, Karen; Watson, Patrice; Lappe, Joan M.
2011-01-01
Objective: To assess the impact of calcium-fortified food and dairy food on selected nutrient intakes in the diets of adolescent girls. Design: Randomized controlled trial, secondary analysis. Setting and Participants: Adolescent girls (n = 149) from a midwestern metropolitan area participated in randomized controlled trials of bone physiology…
ERIC Educational Resources Information Center
Thomas, Henry B.; Kaplan, E. Joseph
A national survey was conducted of randomly selected chief student personnel officers as listed in the 1979 "Education Directory of Colleges and Universities." The survey addressed specific institutional demographics, policy-making authority, reporting structure, and areas of responsibility of the administrators. Over 93 percent of the respondents…
Nonmanufacturing Businesses. U.S. Metric Study Interim Report.
ERIC Educational Resources Information Center
Cornog, June R.; Bunten, Elaine D.
This fifth interim report on the feasibility of a United States changeover to a metric system stems from the U.S. Metric Study. A primary stratified sample of 2,828 nonmanufacturing firms was randomly selected from 28,184 businesses taken from Social Security files, and a secondary sample of 2,258 firms was randomly selected for replacement…
ERIC Educational Resources Information Center
Juhasz, Stephen; And Others
Table of contents (TOC) practices of some 120 primary journals were analyzed. The journals were randomly selected, and the method of randomization is described. The samples were selected from a university library with holdings of approximately 12,000 titles published worldwide. A questionnaire was designed; its purpose was to find uniformity and…
Molecular selection in a unified evolutionary sequence
NASA Technical Reports Server (NTRS)
Fox, S. W.
1986-01-01
With guidance from experiments and observations that indicate internally limited phenomena, an outline of a unified evolutionary sequence is inferred. Such unification is not visible in a context of random matrix and random mutation. The sequence proceeds from the Big Bang through prebiotic matter and protocells, through the evolving cell via molecular and natural selection, to mind, behavior, and society.
Selection of Variables in Cluster Analysis: An Empirical Comparison of Eight Procedures
ERIC Educational Resources Information Center
Steinley, Douglas; Brusco, Michael J.
2008-01-01
Eight different variable selection techniques for model-based and non-model-based clustering are evaluated across a wide range of cluster structures. It is shown that several methods have difficulties when non-informative variables (i.e., random noise) are included in the model. Furthermore, the distribution of the random noise greatly impacts the…
ERIC Educational Resources Information Center
Bibiso, Abyot; Olango, Menna; Bibiso, Mesfin
2017-01-01
The purpose of this study was to investigate the relationship between teacher's commitment and female students academic achievement in selected secondary school of Wolaita zone, Southern Ethiopia. The research method employed was survey study and the sampling techniques were purposive, simple random and stratified random sampling. Questionnaire…
ERIC Educational Resources Information Center
Martinez, John; Fraker, Thomas; Manno, Michelle; Baird, Peter; Mamun, Arif; O'Day, Bonnie; Rangarajan, Anu; Wittenburg, David
2010-01-01
This report focuses on the seven original Youth Transition Demonstration (YTD) projects selected for funding in 2003. Three of the original seven projects were selected for a national random assignment evaluation in 2005; however, this report only focuses on program operations prior to joining the random assignment evaluation for the three…
Waterbird nest-site selection is influenced by neighboring nests and island topography
Hartman, Christopher; Ackerman, Joshua T.; Takekawa, John Y.; Herzog, Mark
2016-01-01
Avian nest-site selection is influenced by factors operating across multiple spatial scales. Identifying preferred physical characteristics (e.g., topography, vegetation structure) can inform managers to improve nesting habitat suitability. However, social factors (e.g., attraction, territoriality, competition) can complicate understanding physical characteristics preferred by nesting birds. We simultaneously evaluated the physical characteristics and social factors influencing selection of island nest sites by colonial-nesting American avocets (Recurvirostra americana) and Forster's terns (Sterna forsteri) at 2 spatial scales in San Francisco Bay, 2011–2012. At the larger island plot (1 m2) scale, we used real-time kinematics to produce detailed topographies of nesting islands and map the distribution of nests. Nesting probability was greatest in island plots between 0.5 m and 1.5 m above the water surface, at distances <10 m from the water's edge, and of moderately steep (avocets) or flat (terns) slopes. Further, avocet and tern nesting probability increased as the number of nests initiated in adjacent plots increased up to a peak of 11–12 tern nests, and then decreased thereafter. Yet, avocets were less likely to nest in plots adjacent to plots with nesting avocets, suggesting an influence of intra-specific territoriality. At the smaller microhabitat scale, or the area immediately surrounding the nest, we compared topography, vegetation, and distance to nearest nest between nest sites and paired random sites. Topography had little influence on selection of the nest microhabitat. Instead, nest sites were more likely to have vegetation present, and greater cover, than random sites. Finally, avocet, and to a lesser extent tern, nest sites were closer to other active conspecific or heterospecific nests than random sites, indicating that social attraction played a role in selection of nest microhabitat. 
Our results demonstrate key differences in nest-site selection between co-occurring avocets and terns, and indicate the effects of physical characteristics and social factors on selection of nesting habitat are dependent on the spatial scale examined. Moreover, these results indicate that islands with abundant area between 0.5 m and 1.5 m above the water surface, within 10 m of the water's edge, and containing a mosaic of slopes ranging from flat to moderately steep would provide preferred nesting habitat for avocets and terns. © 2016 The Wildlife Society.
Töllner, Thomas; Conci, Markus; Müller, Hermann J
2015-03-01
It is well established that we can focally attend to a specific region in visual space without shifting our eyes, so as to extract action-relevant sensory information from covertly attended locations. The underlying mechanisms that determine how fast we engage our attentional spotlight in visual-search scenarios, however, remain controversial. One dominant view advocated by perceptual decision-making models holds that the times taken for focal-attentional selection are mediated by an internal template that biases perceptual coding and selection decisions exclusively through target-defining feature coding. This notion directly predicts that search times remain unaffected whether or not participants can anticipate the upcoming distractor context. Here we tested this hypothesis by employing an illusory-figure localization task that required participants to search for an invariant target amongst a variable distractor context, which gradually changed--either randomly or predictably--as a function of distractor-target similarity. We observed a graded decrease in internal focal-attentional selection times--correlated with external behavioral latencies--for distractor contexts of higher relative to lower similarity to the target. Critically, for low but not intermediate and high distractor-target similarity, these context-driven effects were cortically and behaviorally amplified when participants could reliably predict the type of distractors. This interactive pattern demonstrates that search guidance signals can integrate information about distractor, in addition to target, identities to optimize distractor-target competition for focal-attentional selection. © 2014 Wiley Periodicals, Inc.
Induced mood and selective attention.
Brand, N; Verspui, L; Oving, A
1997-04-01
Subjects (N = 60) were randomly assigned to an elated, depressed, or neutral mood-induction condition to assess the effect of mood state on cognitive functioning. In the elated condition, film fragments expressing happiness and euphoria were shown. In the depressed condition, frightening and distressing film fragments were presented. The neutral group watched no film. Mood states were measured using the Profile of Mood States, and a Stroop task assessed selective attention; both were presented by computer. The induction groups differed significantly in the expected direction on the mood subscales Anger, Tension, Depression, Vigour, and Fatigue, and also in mean scale response times, i.e., slower responses in the depressed condition and faster in the elated one. Differences between conditions were found in the errors on the Stroop: the depressed condition showed the fewest errors and significantly longer error reaction times. Speed of error was associated with self-reported fatigue.
Arezi, Bahram; McKinney, Nancy; Hansen, Connie; Cayouette, Michelle; Fox, Jeffrey; Chen, Keith; Lapira, Jennifer; Hamilton, Sarah; Hogrefe, Holly
2014-01-01
Faster-cycling PCR formulations, protocols, and instruments have been developed to address the need for increased throughput and shorter turn-around times for PCR-based assays. Although run times can be cut by up to 50%, shorter cycle times have been correlated with lower detection sensitivity and increased variability. To address these concerns, we applied Compartmentalized Self Replication (CSR) to evolve faster-cycling mutants of Taq DNA polymerase. After five rounds of selection using progressively shorter PCR extension times, individual mutations identified in the fastest-cycling clones were randomly combined using ligation-based multi-site mutagenesis. The best-performing combinatorial mutants exhibit 35- to 90-fold higher affinity (lower Kd ) for primed template and a moderate (2-fold) increase in extension rate compared to wild-type Taq. Further characterization revealed that CSR-selected mutations provide increased resistance to inhibitors, and most notably, enable direct amplification from up to 65% whole blood. We discuss the contribution of individual mutations to fast-cycling and blood-resistant phenotypes.
Vallée, Julie; Souris, Marc; Fournet, Florence; Bochaton, Audrey; Mobillion, Virginie; Peyronnie, Karine; Salem, Gérard
2007-01-01
Background Geographical objectives and probabilistic methods are difficult to reconcile in a unique health survey. Probabilistic methods focus on individuals to provide estimates of a variable's prevalence with a certain precision, while geographical approaches emphasise the selection of specific areas to study interactions between spatial characteristics and health outcomes. A sample selected from a small number of specific areas creates statistical challenges: the observations are not independent at the local level, and this results in poor statistical validity at the global level. Therefore, it is difficult to construct a sample that is appropriate for both geographical and probability methods. Methods We used a two-stage selection procedure with a first non-random stage of selection of clusters. Instead of randomly selecting clusters, we deliberately chose a group of clusters, which as a whole would contain all the variation in health measures in the population. As there was no health information available before the survey, we selected a priori determinants that can influence the spatial homogeneity of the health characteristics. This method yields a distribution of variables in the sample that closely resembles that in the overall population, something that cannot be guaranteed with randomly-selected clusters, especially if the number of selected clusters is small. In this way, we were able to survey specific areas while minimising design effects and maximising statistical precision. Application We applied this strategy in a health survey carried out in Vientiane, Lao People's Democratic Republic. We selected well-known health determinants with unequal spatial distribution within the city: nationality and literacy. We deliberately selected a combination of clusters whose distribution of nationality and literacy is similar to the distribution in the general population. 
Conclusion This paper describes the conceptual reasoning behind the construction of the survey sample and shows that it can be advantageous to choose clusters using reasoned hypotheses, based on both probability and geographical approaches, in contrast to a conventional, random cluster selection strategy. PMID:17543100
Vallée, Julie; Souris, Marc; Fournet, Florence; Bochaton, Audrey; Mobillion, Virginie; Peyronnie, Karine; Salem, Gérard
2007-06-01
Geographical objectives and probabilistic methods are difficult to reconcile in a unique health survey. Probabilistic methods focus on individuals to provide estimates of a variable's prevalence with a certain precision, while geographical approaches emphasise the selection of specific areas to study interactions between spatial characteristics and health outcomes. A sample selected from a small number of specific areas creates statistical challenges: the observations are not independent at the local level, and this results in poor statistical validity at the global level. Therefore, it is difficult to construct a sample that is appropriate for both geographical and probability methods. We used a two-stage selection procedure with a first non-random stage of selection of clusters. Instead of randomly selecting clusters, we deliberately chose a group of clusters, which as a whole would contain all the variation in health measures in the population. As there was no health information available before the survey, we selected a priori determinants that can influence the spatial homogeneity of the health characteristics. This method yields a distribution of variables in the sample that closely resembles that in the overall population, something that cannot be guaranteed with randomly-selected clusters, especially if the number of selected clusters is small. In this way, we were able to survey specific areas while minimising design effects and maximising statistical precision. We applied this strategy in a health survey carried out in Vientiane, Lao People's Democratic Republic. We selected well-known health determinants with unequal spatial distribution within the city: nationality and literacy. We deliberately selected a combination of clusters whose distribution of nationality and literacy is similar to the distribution in the general population. 
This paper describes the conceptual reasoning behind the construction of the survey sample and shows that it can be advantageous to choose clusters using reasoned hypotheses, based on both probability and geographical approaches, in contrast to a conventional, random cluster selection strategy.
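The deliberate, non-random first-stage cluster choice described in this abstract can be sketched as a small search for the cluster combination whose determinant distribution best mirrors the population. All cluster names and proportions below are hypothetical:

```python
from itertools import combinations

# Hypothetical cluster-level proportions of two spatially uneven
# determinants (e.g., share literate, share of a given nationality).
clusters = {
    "A": (0.90, 0.10), "B": (0.60, 0.40), "C": (0.75, 0.25),
    "D": (0.50, 0.55), "E": (0.85, 0.30), "F": (0.65, 0.15),
}
population = (0.72, 0.28)  # city-wide proportions of the two determinants

def distance(combo):
    # How far the selected clusters' mean proportions sit from the
    # city-wide distribution, summed over the determinants.
    means = [sum(clusters[c][i] for c in combo) / len(combo) for i in (0, 1)]
    return sum(abs(m - p) for m, p in zip(means, population))

# Deliberately pick the 3-cluster combination that best resembles the
# population, instead of drawing clusters at random.
best = min(combinations(clusters, 3), key=distance)
```

Individuals would then be sampled randomly within the chosen clusters (the second, probabilistic stage), which is what lets the survey keep statistical validity while still concentrating fieldwork in specific areas.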
Grady, S.J.; Casey, G.D.
2001-01-01
Data on volatile organic compounds (VOCs) in drinking water supplied by 2,110 randomly selected community water systems (CWSs) in 12 Northeast and Mid-Atlantic States indicate 64 VOC analytes were detected at least once during 1993-98. Selection of the 2,110 CWSs inventoried for this study targeted 20 percent of the 10,479 active CWSs in the region and represented a random subset of the total distribution by State, source of water, and size of system. The data include 21,635 analyses of drinking water collected for compliance monitoring under the Safe Drinking Water Act; the data mostly represent finished drinking water collected at the point of entry to, or at more distal locations within, each CWS's distribution system following any water-treatment processes. VOC detections were more common in drinking water supplied by large systems (serving more than 3,300 people) that tap surface-water sources or both surface- and ground-water sources than in small systems supplied exclusively by ground-water sources. Trihalomethane (THM) compounds, which are potentially formed during the process of disinfecting drinking water with chlorine, were detected in 45 percent of the randomly selected CWSs. Chloroform was the most frequently detected THM, reported in 39 percent of the CWSs. The gasoline additive methyl tert-butyl ether (MTBE) was the most frequently detected VOC in drinking water after the THMs. MTBE was detected in 8.9 percent of the 1,194 randomly selected CWSs that analyzed samples for MTBE at any reporting level, and it was detected in 7.8 percent of the 1,074 CWSs that provided MTBE data at the 1.0-µg/L (microgram per liter) reporting level. As with other VOCs reported in drinking water, most MTBE concentrations were less than 5.0 µg/L, and less than 1 percent of CWSs reported MTBE concentrations at or above the 20.0-µg/L lower limit recommended by the U.S. Environmental Protection Agency's Drinking-Water Advisory.
The frequency of MTBE detections in drinking water is significantly related to high-MTBE-use patterns. Detections are five times more likely in areas where MTBE is or has been used in gasoline at greater than 5 percent by volume as part of the oxygenated or reformulated (OXY/RFG) fuels program. Detection frequencies of the individual gasoline compounds (benzene, toluene, ethylbenzene, and xylenes (BTEX)) were mostly less than 3 percent of the randomly selected CWSs, but collectively, BTEX compounds were detected in 8.4 percent of CWSs. BTEX concentrations also were low, and just three drinking-water samples contained BTEX at concentrations exceeding 20 µg/L. Co-occurrence of MTBE and BTEX was rare, and only 0.8 percent of CWSs reported simultaneous detections of MTBE and BTEX compounds. Low concentrations and co-occurrence of MTBE and BTEX indicate most gasoline contaminants in drinking water probably represent nonpoint sources. Solvents were frequently detected in drinking water in the 12-State area. One or more of 27 individual solvent VOCs were detected at any reporting level in 3,080 drinking-water samples from 304 randomly selected CWSs (14 percent) and in 206 CWSs (9.8 percent) at concentrations at or above 1.0 µg/L. High co-occurrence among solvents probably reflects common sources and the presence of transformation by-products. Other VOCs were detected relatively rarely in drinking water in the 12-State area. Six percent (127) of the 2,110 randomly selected CWSs reported concentrations of 16 VOCs at or above drinking-water criteria. The 127 CWSs collectively serve 2.6 million people. The occurrence of VOCs in drinking water was significantly associated (p<0.0001) with high-population-density urban areas. New Jersey, Massachusetts, and Rhode Island, States with substantial urbanization and high population density, had the highest frequency of VOC detections among the 12 States.
More than two-thirds of the randomly selected CWSs in New Jersey reported detecting VOC concentrations in drinking water at or above 1
Su, Meng; Tan, Ya-Yun; Liu, Qing-Min; Ren, Yan-Jun; Kawachi, Ichiro; Li, Li-Ming; Lv, Jun
2014-09-01
Neighborhood built environment may influence residents' physical activity, which in turn affects their health. This study aimed to determine the associations between perceived built environment and leisure-time physical activity in Hangzhou, China. 1440 participants aged 25-59 were randomly selected from 30 neighborhoods in three types of administrative planning units in Hangzhou. The International Physical Activity Questionnaire long form and the NEWS-A were used to obtain individual-level data. The China Urban Built Environment Scan Tool was used to objectively assess the neighborhood-level built environment. Multi-level regression was used to explore the relationship between perceived built environment variables and leisure-time physical activity. Data were collected in Hangzhou from June to December 2012 and were analyzed in May 2013. Significant neighborhood-level random variation in physical activity was identified (P=0.0134); neighborhood-level differences accounted for 3.0% of the variability in leisure-time physical activity. Male residents who perceived higher scores on access to physical activity destinations reported more involvement in leisure-time physical activity. In women, higher scores on perception of esthetic quality, and lower scores on residential density, were associated with more time spent in leisure-time walking. The present study demonstrated that perceived urban built environment attributes significantly correlate with leisure-time physical activity in Hangzhou, China. Copyright © 2014. Published by Elsevier Inc.
Platz, T; Eickhof, C; van Kaick, S; Engel, U; Pinkowski, C; Kalok, S; Pause, M
2005-10-01
To study the effects of augmented exercise therapy time for arm rehabilitation, given as either Bobath therapy or impairment-oriented training (Arm BASIS training), in stroke patients with severe arm paresis. Single-blind, multicentre randomized controlled trial. Three inpatient neurorehabilitation centres. Sixty-two anterior circulation ischaemic stroke patients. Random assignment to three groups: (A) no augmented exercise therapy time, (B) augmented exercise therapy time as Bobath therapy, and (C) augmented exercise therapy time as Arm BASIS training. Main measure: Fugl-Meyer arm motor score. Secondary measure: Action Research Arm Test (ARA). Ancillary measures: Fugl-Meyer arm sensation and joint motion/pain scores and the Ashworth Scale (elbow flexors). An overall effect of augmented exercise therapy time on Fugl-Meyer scores after four weeks was not corroborated (mean and 95% confidence interval (CI) of change scores: no augmented exercise therapy time (n=20) 8.8, 5.2-12.3; augmented exercise therapy time (n=40) 9.9, 6.8-13.9; p=0.2657). The group who received the augmented exercise therapy time as Arm BASIS training (n=20), however, had higher gains than the group receiving it as Bobath therapy (n=20) (mean and 95% CI of change scores: Bobath 7.2, 2.6-11.8; BASIS 12.6, 8.4-16.8; p=0.0432). Passive joint motion/pain deteriorated less in the group who received BASIS training (mean and 95% CI of change scores: Bobath -3.2, -5.2 to -1.1; BASIS 0.1, -1.8-2.0; p=0.0090). ARA, Fugl-Meyer arm sensation, and Ashworth Scale scores were not differentially affected. Augmented exercise therapy time as Arm BASIS training enhanced selective motor control. Type of training was more relevant for recovery of motor control than the amount of therapeutic time spent.
Take care! The evaluation of a team-based burnout intervention program for oncology care providers.
Le Blanc, Pascale M; Hox, Joop J; Schaufeli, Wilmar B; Taris, Toon W; Peeters, Maria C W
2007-01-01
In this quasi-experimental study among staff of 29 oncology wards, the authors evaluated the effects of a team-based burnout intervention program combining a staff support group with a participatory action research approach. Nine wards were randomly selected to participate in the program. Before the program started (Time 1), directly after the program ended (Time 2), and 6 months later (Time 3), study participants filled out a questionnaire on their work situation and well-being. Results of multilevel analyses showed that staff in the experimental wards experienced significantly less emotional exhaustion at both Time 2 and Time 3 and less depersonalization at Time 2, compared with the control wards. Moreover, changes in burnout levels were significantly related to changes in the perception of job characteristics over time. 2007 APA, all rights reserved
Brunetti, Natale Daniele; De Gennaro, Luisa; Correale, Michele; Santoro, Francesco; Caldarola, Pasquale; Gaglione, Antonio; Di Biase, Matteo
2017-04-01
A shorter time to treatment has been shown to be associated with lower mortality rates in acute myocardial infarction (AMI). Several strategies have been adopted with the aim to reduce any delay in diagnosis of AMI: pre-hospital triage with telemedicine is one of such strategies. We therefore aimed to measure the real effect of pre-hospital triage with telemedicine in case of AMI in a meta-analysis study. We performed a meta-analysis of non-randomized studies with the aim to quantify the exact reduction of time to treatment achieved by pre-hospital triage with telemedicine. Data were pooled and compared by relative time reduction and 95% C.I.s. A meta-regression analysis was performed in order to find possible predictors of shorter time to treatment. Eleven studies were selected and finally evaluated in the study. The overall relative reduction of time to treatment with pre-hospital triage and telemedicine was -38/-40% (p<0.001). Absolute time reduction was significantly correlated to time to treatment in the control groups (p<0.001), while relative time reduction was independent. A non-significant trend toward shorter relative time reductions was observed over years. Pre-hospital triage with telemedicine is associated with a near halved time to treatment in AMI. The benefit is larger in terms of absolute time to treatment reduction in populations with larger delays to treatment. Copyright © 2017 Elsevier B.V. All rights reserved.
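The pooled relative reduction reported in this abstract can be illustrated with a toy calculation. The study sizes and times below are invented, not the eleven studies analyzed, and simple sample-size weighting stands in for the meta-analytic weighting:

```python
# Per-study mean time-to-treatment (minutes) without and with
# pre-hospital telemedicine triage (illustrative numbers only).
studies = [
    # (n, minutes_control, minutes_telemedicine)
    (120, 110.0, 68.0),
    (80, 95.0, 57.0),
    (200, 130.0, 76.0),
]

# Relative change per study, (tele - control) / control, pooled with
# weights proportional to study size; negative values are reductions.
num = sum(n * (tele - ctrl) / ctrl for n, ctrl, tele in studies)
pooled_relative_change = num / sum(n for n, _, _ in studies)
```

With these made-up inputs the pooled relative change lands near -40%, the same order as the -38/-40% reduction the meta-analysis reports; the abstract's further point is that the *absolute* minutes saved grow with how delayed the control population was to begin with.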
Dong, Qi; Elliott, Michael R; Raghunathan, Trivellore E
2014-06-01
Outside of the survey sampling literature, samples are often assumed to be generated by a simple random sampling process that produces independent and identically distributed (IID) samples. Many statistical methods are developed largely in this IID world. Application of these methods to data from complex sample surveys without making allowance for the survey design features can lead to erroneous inferences. Hence, much time and effort have been devoted to develop the statistical methods to analyze complex survey data and account for the sample design. This issue is particularly important when generating synthetic populations using finite population Bayesian inference, as is often done in missing data or disclosure risk settings, or when combining data from multiple surveys. By extending previous work in the finite population Bayesian bootstrap literature, we propose a method to generate synthetic populations from a posterior predictive distribution in a fashion that inverts the complex sampling design features and generates simple random samples from a superpopulation point of view, adjusting the complex data so that they can be analyzed as simple random samples. We consider a simulation study with a stratified, clustered unequal-probability of selection sample design, and use the proposed nonparametric method to generate synthetic populations for the 2006 National Health Interview Survey (NHIS), and the Medical Expenditure Panel Survey (MEPS), which are stratified, clustered unequal-probability of selection sample designs.
Dong, Qi; Elliott, Michael R.; Raghunathan, Trivellore E.
2017-01-01
Outside of the survey sampling literature, samples are often assumed to be generated by a simple random sampling process that produces independent and identically distributed (IID) samples. Many statistical methods are developed largely in this IID world. Application of these methods to data from complex sample surveys without making allowance for the survey design features can lead to erroneous inferences. Hence, much time and effort have been devoted to developing statistical methods to analyze complex survey data and account for the sample design. This issue is particularly important when generating synthetic populations using finite population Bayesian inference, as is often done in missing data or disclosure risk settings, or when combining data from multiple surveys. By extending previous work in the finite population Bayesian bootstrap literature, we propose a method to generate synthetic populations from a posterior predictive distribution in a fashion that inverts the complex sampling design features and generates simple random samples from a superpopulation point of view, adjusting the complex data so that they can be analyzed as simple random samples. We consider a simulation study with a stratified, clustered unequal-probability-of-selection sample design, and use the proposed nonparametric method to generate synthetic populations for the 2006 National Health Interview Survey (NHIS) and the Medical Expenditure Panel Survey (MEPS), which are stratified, clustered unequal-probability-of-selection sample designs. PMID:29200608
Li, Lingling; Kulldorff, Martin; Russek-Cohen, Estelle; Kawai, Alison Tse; Hua, Wei
2015-12-01
The self-controlled risk interval design is commonly used to assess the association between an acute exposure and an adverse event of interest, implicitly adjusting for fixed, non-time-varying covariates. Explicit adjustment needs to be made for time-varying covariates, for example, age in young children. It can be performed via either a fixed or random adjustment. The random-adjustment approach can provide valid point and interval estimates but requires access to individual-level data for an unexposed baseline sample. The fixed-adjustment approach does not have this requirement and will provide a valid point estimate but may underestimate the variance. We conducted a comprehensive simulation study to evaluate their performance. We designed the simulation study using empirical data from the Food and Drug Administration-sponsored Mini-Sentinel Post-licensure Rapid Immunization Safety Monitoring Rotavirus Vaccines and Intussusception study in children 5-36.9 weeks of age. The time-varying confounder is age. We considered a variety of design parameters including sample size, relative risk, time-varying baseline risks, and risk interval length. The random-adjustment approach has very good performance in almost all considered settings. The fixed-adjustment approach can be used as a good alternative when the number of events used to estimate the time-varying baseline risks is at least the number of events used to estimate the relative risk, which is almost always the case. We successfully identified settings in which the fixed-adjustment approach can be used as a good alternative and provided guidelines on the selection and implementation of appropriate analyses for the self-controlled risk interval design. Copyright © 2015 John Wiley & Sons, Ltd.
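The crude estimate underlying the self-controlled risk interval design can be sketched in a few lines: each child contributes a post-exposure risk interval and an unexposed control interval, and the relative risk is the ratio of event rates in the two intervals (before any age adjustment, which is the paper's focus). Counts and interval lengths below are invented.

```python
# Hedged sketch of the unadjusted self-controlled risk interval estimate.
# Numbers are illustrative, not from the Mini-Sentinel rotavirus study.

def relative_incidence(events_risk, days_risk, events_control, days_control):
    """Event rate in the risk interval divided by the rate in the
    control interval."""
    return (events_risk / days_risk) / (events_control / days_control)

rr = relative_incidence(events_risk=20, days_risk=7,
                        events_control=30, days_control=21)  # crude RR
```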
Austin, Peter C
2014-03-30
Propensity score methods are increasingly being used to estimate causal treatment effects in observational studies. In medical and epidemiological studies, outcomes are frequently time-to-event in nature. Propensity-score methods are often applied incorrectly when estimating the effect of treatment on time-to-event outcomes. This article describes how two different propensity score methods (matching and inverse probability of treatment weighting) can be used to estimate the measures of effect that are frequently reported in randomized controlled trials: (i) marginal survival curves, which describe survival in the population if all subjects were treated or if all subjects were untreated; and (ii) marginal hazard ratios. The use of these propensity score methods allows one to replicate the measures of effect that are commonly reported in randomized controlled trials with time-to-event outcomes: both absolute and relative reductions in the probability of an event occurring can be determined. We also provide guidance on variable selection for the propensity score model, highlight methods for assessing the balance of baseline covariates between treated and untreated subjects, and describe the implementation of a sensitivity analysis to assess the effect of unmeasured confounding variables on the estimated treatment effect when outcomes are time-to-event in nature. The methods in the paper are illustrated by estimating the effect of discharge statin prescribing on the risk of death in a sample of patients hospitalized with acute myocardial infarction. In this tutorial article, we describe and illustrate all the steps necessary to conduct a comprehensive analysis of the effect of treatment on time-to-event outcomes. © 2013 The authors. Statistics in Medicine published by John Wiley & Sons, Ltd.
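The weighting arm of this tutorial can be illustrated with a toy calculation. The propensity values and outcomes below are invented; in a real analysis the propensity score is estimated from baseline covariates (e.g., by logistic regression), and the weighted survival curve replaces the simple event rate shown here.

```python
# Hedged sketch of inverse probability of treatment weighting (IPTW),
# one of the two propensity-score methods the tutorial covers.

def iptw_weights(treated, propensity):
    """Weight 1/e for treated subjects and 1/(1-e) for untreated."""
    return [1.0 / e if t else 1.0 / (1.0 - e)
            for t, e in zip(treated, propensity)]

def weighted_event_rate(event, weights):
    return sum(e * w for e, w in zip(event, weights)) / sum(weights)

treated    = [1, 1, 0, 0]       # invented treatment indicators
propensity = [0.8, 0.4, 0.8, 0.4]  # invented propensity scores
event      = [1, 0, 1, 0]       # invented outcome indicators

w = iptw_weights(treated, propensity)
# Marginal event rate in the weighted pseudo-population of treated subjects:
rate_treated = weighted_event_rate(
    [e for e, t in zip(event, treated) if t],
    [wi for wi, t in zip(w, treated) if t])
```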
2004-03-01
definition efficiency is the amount of the time that the processing element is gainfully employed, which is calculated by using the ratio of the... employs an interesting form of tournament selection called Pareto domination tournaments. Two members of the population are chosen at random and they... it has a set of solutions and using a template for each solution is not feasible. So the MOMGA employs a different competitive template during the
America Goes to War: Managing the Force During Times of Stress and Uncertainty
2007-01-01
among our young people. They recognize the draft as an infringement on their liberty, which it is. To them, it represents a government... attested to its unpopularity. In the most perverse way, the draft was effective in the North, not because it brought in large numbers of people, but... When the cause did not enjoy the full support of the people, as in Vietnam, or the selection appeared to be random or biased with
Simulating the Spread of an Outbreak of Foot and Mouth Disease in California
2012-06-01
the disease, but it debilitates them, leading to severely decreased meat and milk production. The economic impact on a country with an FMD... of an infected animal. Most adult animals recover from the disease, but it debilitates them, which leads to severely decreased meat and milk... the model, we include one randomly selected cattle premise from Central California without a time until detection in this file. This premise is a Cow
Bakken, Suzanne; Cimino, James J.; Haskell, Robert; Kukafka, Rita; Matsumoto, Cindi; Chan, Garrett K.; Huff, Stanley M.
2000-01-01
Objective: The purpose of this study was to test the adequacy of the Clinical LOINC (Logical Observation Identifiers, Names, and Codes) semantic structure as a terminology model for standardized assessment measures. Methods: After extension of the definitions, 1,096 items from 35 standardized assessment instruments were dissected into the elements of the Clinical LOINC semantic structure. An additional coder dissected at least one randomly selected item from each instrument. When multiple scale types occurred in a single instrument, a second coder dissected one randomly selected item representative of each scale type. Results: The results support the adequacy of the Clinical LOINC semantic structure as a terminology model for standardized assessments. Using the revised definitions, the coders were able to dissect into the elements of Clinical LOINC all the standardized assessment items in the sample instruments. Percentage agreement for each element was as follows: component, 100 percent; property, 87.8 percent; timing, 82.9 percent; system/sample, 100 percent; scale, 92.6 percent; and method, 97.6 percent. Discussion: This evaluation was an initial step toward the representation of standardized assessment items in a manner that facilitates data sharing and re-use. Further clarification of the definitions, especially those related to time and property, is required to improve inter-rater reliability and to harmonize the representations with similar items already in LOINC. PMID:11062226
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ukwatta, T. N.; Wozniak, P. R.; Gehrels, N.
Studies of high-redshift gamma-ray bursts (GRBs) provide important information about the early Universe such as the rates of stellar collapsars and mergers, the metallicity content, constraints on the re-ionization period, and probes of the Hubble expansion. Rapid selection of high-z candidates from GRB samples reported in real time by dedicated space missions such as Swift is the key to identifying the most distant bursts before the optical afterglow becomes too dim to warrant a good spectrum. Here, we introduce ‘machine-z’, a redshift prediction algorithm and a ‘high-z’ classifier for Swift GRBs based on machine learning. Our method relies exclusively on canonical data commonly available within the first few hours after the GRB trigger. Using a sample of 284 bursts with measured redshifts, we trained a randomized ensemble of decision trees (random forest) to perform both regression and classification. Cross-validated performance studies show that the correlation coefficient between machine-z predictions and the true redshift is nearly 0.6. At the same time, our high-z classifier can achieve 80 per cent recall of true high-redshift bursts, while incurring a false positive rate of 20 per cent. With 40 per cent false positive rate the classifier can achieve ~100 per cent recall. As a result, the most reliable selection of high-redshift GRBs is obtained by combining predictions from both the high-z classifier and the machine-z regressor.
Machine-z: Rapid Machine-Learned Redshift Indicator for Swift Gamma-Ray Bursts
NASA Technical Reports Server (NTRS)
Ukwatta, T. N.; Wozniak, P. R.; Gehrels, N.
2016-01-01
Studies of high-redshift gamma-ray bursts (GRBs) provide important information about the early Universe such as the rates of stellar collapsars and mergers, the metallicity content, constraints on the re-ionization period, and probes of the Hubble expansion. Rapid selection of high-z candidates from GRB samples reported in real time by dedicated space missions such as Swift is the key to identifying the most distant bursts before the optical afterglow becomes too dim to warrant a good spectrum. Here, we introduce 'machine-z', a redshift prediction algorithm and a 'high-z' classifier for Swift GRBs based on machine learning. Our method relies exclusively on canonical data commonly available within the first few hours after the GRB trigger. Using a sample of 284 bursts with measured redshifts, we trained a randomized ensemble of decision trees (random forest) to perform both regression and classification. Cross-validated performance studies show that the correlation coefficient between machine-z predictions and the true redshift is nearly 0.6. At the same time, our high-z classifier can achieve 80 per cent recall of true high-redshift bursts, while incurring a false positive rate of 20 per cent. With 40 per cent false positive rate the classifier can achieve approximately 100 per cent recall. The most reliable selection of high-redshift GRBs is obtained by combining predictions from both the high-z classifier and the machine-z regressor.
Chen, Li-Sheng; Yen, Amy Ming-Fang; Duffy, Stephen W; Tabar, Laszlo; Lin, Wen-Chou; Chen, Hsiu-Hsi
2010-10-01
Population-based routine service screening has gained popularity following an era of randomized controlled trials. The evaluation of these service screening programs depends on study design, data availability, and precise data analysis to adjust for bias. We developed a computer-aided system for evaluating population-based service screening that unifies these aspects and guides the program assessor in performing an efficient evaluation. This system supports two experimental designs, the posttest-only non-equivalent design and the one-group pretest-posttest design, and demonstrates the type of data required at both the population and individual levels. Three major analyses were developed: cumulative mortality analysis, survival analysis with lead-time adjustment, and self-selection bias adjustment. We used SAS AF software to develop a graphic interface system with a pull-down menu style. We demonstrate the application of this system with data obtained from a Swedish population-based service screening program and a population-based randomized controlled trial for the screening of breast, colorectal, and prostate cancer, and one service screening program for cervical cancer with Pap smears. The system provided automated descriptive results based on the various sources of available data and cumulative mortality curves corresponding to the study designs. The comparison of cumulative survival between clinically detected and screen-detected cases without lead-time adjustment is also demonstrated. Intention-to-treat and noncompliance analyses with self-selection bias adjustments are also shown to assess the effectiveness of the population-based service screening program. Model validation consisted of a comparison between our self-selection-bias-adjusted estimates and the empirical results on effectiveness reported in the literature.
We demonstrate a computer-aided system allowing the evaluation of population-based service screening programs with an adjustment for self-selection and lead-time bias. This is achieved by providing a tutorial guide from the study design to the data analysis, with bias adjustment. Copyright © 2010 Elsevier Inc. All rights reserved.
Aziz, Zoriah; Abdul Rasool Hassan, Bassam
2017-02-01
Evidence from animal studies and trials suggests that honey may accelerate wound healing. The objective of this review was to assess the effects of honey compared with silver dressings on the healing of burn wounds. Relevant databases were searched for randomized controlled trials (RCTs) comparing honey with silver sulfadiazine (SSD). The quality of the selected trials was assessed using the Cochrane Risk of Bias Assessment Tool. The primary endpoints considered were wound healing time and the number of infected wounds rendered sterile. Nine RCTs met the inclusion criteria. Based on moderate-quality evidence, there was a statistically significant difference between the two groups favoring honey in healing time (MD -5.76 days, 95% CI -8.14 to -3.39) and in the proportion of infected wounds rendered sterile (RR 2.59; 95% CI 1.58-2.88). The available evidence suggests that honey dressings promote better wound healing than silver sulfadiazine for burns. Copyright © 2016 Elsevier Ltd and ISBI. All rights reserved.
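The risk ratio with its confidence interval, the effect measure reported above for wounds rendered sterile, is conventionally computed on the log scale. The counts below are invented, not taken from the review.

```python
import math

# Hedged sketch of the risk ratio (RR) and a log-normal 95% CI,
# using invented two-arm counts.

def risk_ratio_ci(a, n1, b, n2, z=1.96):
    """RR of (a/n1) vs (b/n2) with a 95% CI on the log scale."""
    rr = (a / n1) / (b / n2)
    se = math.sqrt(1 / a - 1 / n1 + 1 / b - 1 / n2)  # SE of log(RR)
    lo = math.exp(math.log(rr) - z * se)
    hi = math.exp(math.log(rr) + z * se)
    return rr, lo, hi

rr, lo, hi = risk_ratio_ci(30, 40, 15, 40)   # 30/40 vs 15/40 sterile wounds
```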
Developing a national role description for medical directors in long-term care
Rahim-Jamal, Sherin; Quail, Patrick; Bhaloo, Tajudaullah
2010-01-01
OBJECTIVE To develop a national role description for medical directors in long-term care (LTC) based on role functions drawn from the literature and the LTC industry. DESIGN A questionnaire about the role functions identified from the literature was mailed or e-mailed to randomly selected medical directors, directors of care or nursing (DOCs), and administrators in LTC facilities. SETTING Long-term care facilities in Canada randomly selected from regional clusters. PARTICIPANTS Medical directors, DOCs, and administrators in LTC facilities; a national advisory group of medical directors from the Long Term Care Medical Directors Association of Canada; and a volunteer group of medical directors. MAIN OUTCOME MEASURES Respondents were asked to indicate, from the list of identified functions, 1) whether medical directors spent any time on each activity; 2) whether medical directors should spend time on each activity; and 3) if medical directors should spend time on an activity, whether the activity was “essential” or “desirable.” RESULTS An overall response rate of 37% was obtained. At least 80% of the respondents from all 3 groups (medical directors, DOCs, and administrators) highlighted 24 functions they deemed to be “essential” or “desirable,” which were then included in the role description. In addition, the advisory group expanded the role description to include 5 additional responsibilities from the remaining 18 functions originally identified. A volunteer group of medical directors confirmed the resulting role description. CONCLUSION The role description developed as a result of this study brings clarity to the medical director’s role in Canadian LTC facilities; the functions outlined are considered important for medical directors to undertake. The role description could be a useful tool in negotiations pertaining to time commitment and expectations of a medical director and fair compensation for services rendered. PMID:20090058
NASA Astrophysics Data System (ADS)
Tsao, Shih-Ming; Lai, Ji-Ching; Horng, Horng-Er; Liu, Tu-Chen; Hong, Chin-Yih
2017-04-01
Aptamers are oligonucleotides that can bind to specific target molecules. Most aptamers are generated using random libraries in the standard systematic evolution of ligands by exponential enrichment (SELEX). Each random library contains oligonucleotides with a randomized central region and two fixed primer regions at both ends. The fixed primer regions are necessary for amplifying target-bound sequences by PCR. However, these extra sequences may cause non-specific bindings, which potentially interfere with good binding for random sequences. The Magnetic-Assisted Rapid Aptamer Selection (MARAS) is a newly developed protocol for generating single-strand DNA aptamers. No repeated selection cycles are required in the protocol. This study proposes and demonstrates a method to isolate aptamers for C-reactive protein (CRP) from a randomized ssDNA library containing no fixed sequences at the 5′ and 3′ termini using the MARAS platform. Furthermore, the isolated primer-free aptamer was sequenced, and its binding affinity for CRP was analyzed. The specificity of the obtained aptamer was validated using blind serum samples. The result was consistent with monoclonal antibody-based nephelometry analysis, which indicated that the primer-free aptamer has high specificity toward its target. MARAS is a feasible platform for efficiently generating primer-free aptamers for clinical diagnoses.
DeLay, Dawn; Ha, Thao; Van Ryzin, Mark; Winter, Charlotte; Dishion, Thomas J.
2015-01-01
Adolescent friendships that promote problem behavior are often chosen in middle school. The current study examines the unintended impact of a randomized school-based intervention on the selection of friends in middle school, as well as on observations of deviant talk with friends five years later. Participants included 998 middle school students (526 boys and 472 girls) recruited at the onset of middle school (age 11-12 years) from three public middle schools participating in the Family Check-up model intervention. The current study focuses only on the effects of the SHAPe curriculum—one level of the Family Check-up model—on friendship choices. Participants nominated friends and completed measures of deviant peer affiliation. Approximately half of the sample (n=500) was randomly assigned to the intervention and the other half (n=498) comprised the control group within each school. The results indicate that the SHAPe curriculum affected friend selection within School 1, but not within Schools 2 or 3. The effects of friend selection in School 1 translated into reductions in observed deviancy training five years later (age 16-17 years). By coupling longitudinal social network analysis with a randomized intervention study, the current findings provide initial evidence that a randomized public middle school intervention can disrupt the formation of deviant peer groups and diminish levels of adolescent deviance five years later. PMID:26377235
Wyman, Peter A; Henry, David; Knoblauch, Shannon; Brown, C Hendricks
2015-10-01
The dynamic wait-listed design (DWLD) and regression point displacement design (RPDD) address several challenges in evaluating group-based interventions when there is a limited number of groups. Both the DWLD and RPDD utilize efficiencies that increase statistical power and can enhance the balance between community needs and research priorities. The DWLD blocks on more time units than traditional wait-listed designs, thereby increasing the proportion of a study period during which intervention and control conditions can be compared; it can also improve the logistics of implementing an intervention across multiple sites and strengthen fidelity. We discuss DWLDs in the larger context of roll-out randomized designs and compare them with their cousin, the stepped wedge design. The RPDD uses archival data on the population of settings from which intervention unit(s) are selected to create expected posttest scores for units receiving the intervention, to which actual posttest scores are compared. High pretest-posttest correlations give the RPDD statistical power for assessing intervention impact even when only one or a few settings receive the intervention. The RPDD works best when archival data are available for a number of years prior to and following the intervention. If intervention units were not randomly selected, propensity scores can be used to control for non-random selection factors. Examples are provided of the DWLD and RPDD used to evaluate, respectively, suicide prevention training (QPR) in 32 schools and a violence prevention program (CeaseFire) in two Chicago police districts over a 10-year period. We also discuss how the DWLD and RPDD address common threats to internal and external validity, as well as their limitations.
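The regression point displacement idea described above can be sketched in a few lines: fit posttest on pretest over the comparison settings, then measure how far the intervention setting's observed posttest falls from its regression-predicted value. All numbers below are invented for illustration.

```python
# Hedged sketch of the RPDD displacement calculation, with invented
# archival (pretest, posttest) rates for the comparison settings.

def ols_slope_intercept(x, y):
    """Ordinary least squares fit of y on x; returns (slope, intercept)."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
             / sum((xi - mx) ** 2 for xi in x))
    return slope, my - slope * mx

pre  = [10, 12, 14, 16, 18]   # comparison settings, pretest rates
post = [11, 13, 15, 17, 19]   # comparison settings, posttest rates
b, a = ols_slope_intercept(pre, post)

# Intervention setting: pretest 15, observed posttest 13.
expected = a + b * 15
displacement = 13 - expected   # negative = outcome below expectation
```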
Kaye, T.N.; Pyke, David A.
2003-01-01
Population viability analysis is an important tool for conservation biologists, and matrix models that incorporate stochasticity are commonly used for this purpose. However, stochastic simulations may require assumptions about the distribution of matrix parameters, and modelers often select a statistical distribution that seems reasonable without sufficient data to test its fit. We used data from long-term (5-10 year) studies with 27 populations of five perennial plant species to compare seven methods of incorporating environmental stochasticity. We estimated stochastic population growth rate (a measure of viability) using a matrix-selection method, in which whole observed matrices were selected at random at each time step of the model. In addition, we drew matrix elements (transition probabilities) at random using various statistical distributions: beta, truncated-gamma, truncated-normal, triangular, uniform, or discontinuous/observed. Recruitment rates were held constant at their observed mean values. Two methods of constraining stage-specific survival to ≤100% were also compared. Different methods of incorporating stochasticity and constraining matrix column sums interacted in their effects and resulted in different estimates of stochastic growth rate (differing by up to 16%). Modelers should be aware that when constraining stage-specific survival to 100%, different methods may introduce different levels of bias in transition element means, and when this happens, different distributions for generating random transition elements may result in different viability estimates. There was no species effect on the results and the growth rates derived from all methods were highly correlated with one another. We conclude that the absolute value of population viability estimates is sensitive to model assumptions, but the relative ranking of populations (and management treatments) is robust.
Furthermore, these results are applicable to a range of perennial plants and possibly other life histories.
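The matrix-selection method described in this abstract can be sketched as follows: at each time step a whole observed projection matrix is drawn at random, and the stochastic growth rate is the long-run mean of log(N[t+1]/N[t]). The two matrices below are invented stand-ins, not the study's transition data.

```python
import math
import random

# Hedged sketch of the matrix-selection estimate of stochastic growth rate.

def project(matrix, n):
    """One time step: multiply the stage vector n by a projection matrix."""
    return [sum(matrix[i][j] * n[j] for j in range(len(n)))
            for i in range(len(matrix))]

def stochastic_log_growth(matrices, n0, steps, rng):
    """Mean one-step log growth over a random sequence of whole matrices."""
    n, total = list(n0), 0.0
    for _ in range(steps):
        n_next = project(rng.choice(matrices), n)
        total += math.log(sum(n_next) / sum(n))
        n = n_next
    return total / steps

good_year = [[0.5, 1.2], [0.3, 0.5]]   # dominant eigenvalue 1.1
bad_year  = [[0.4, 0.8], [0.2, 0.4]]   # dominant eigenvalue 0.8

r = stochastic_log_growth([good_year, bad_year], [1.0, 1.0],
                          steps=500, rng=random.Random(1))
```

With equal chances of good and bad years, r settles near 0.5*log(1.1) + 0.5*log(0.8), i.e., slightly negative.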
Koteja, P; Swallow, J G; Carter, P A; Garland, T
1999-01-01
Laboratory house mice (Mus domesticus) that had experienced 10 generations of artificial selection for high levels of voluntary wheel running ran about 70% more total revolutions per day than did mice from random-bred control lines. The difference resulted primarily from increased average velocities rather than from increased time spent running. Within all eight lines (four selected, four control), females ran more than males. Average daily running distances ranged from 4.4 km in control males to 11.6 km in selected females. Whole-animal food consumption was statistically indistinguishable in the selected and control lines. However, mice from selected lines averaged approximately 10% smaller in body mass, and mass-adjusted food consumption was 4% higher in selected lines than in controls. The incremental cost of locomotion (grams food/revolution), computed as the partial regression slope of food consumption on revolutions run per day, did not differ between selected and control mice. On a 24-h basis, the total incremental cost of running (covering a distance) amounted to only 4.4% of food consumption in the control lines and 7.5% in the selected ones. However, the daily incremental cost of time active is higher (15.4% and 13.1% of total food consumption in selected and control lines, respectively). If wheel running in the selected lines continues to increase mainly by increases in velocity, then constraints related to energy acquisition are unlikely to be an important factor limiting further selective gain. More generally, our results suggest that, in small mammals, a substantial evolutionary increase in daily movement distances can be achieved by increasing running speed, without remarkable increases in total energy expenditure.
NASA Astrophysics Data System (ADS)
Wang, Hui; Liu, Jin; Li, Yanhong; Zhu, Xiaowen; Liu, Zhigang
2014-03-01
In the present study, the effect of one-generation divergent selection on the growth and survival of the bay scallop (Argopecten irradians concentricus) was examined to evaluate the efficacy of a selection program currently being carried out in Beibu Bay in the South China Sea. A total of 146 adult scallops were randomly selected from the same cultured population of A. i. concentricus and divided into two groups by shell length (anterior-posterior measurement): large (4.91-6.02 cm, n=74) and small (3.31-4.18 cm, n=72). At the same time, a control group was also randomly sampled (4.21-4.88 cm, n=80). Mass-spawned F1 progenies from the three size groups were obtained and reared under identical conditions at all growth phases. The effects of two-way (upward-downward) selection on fertilization rate, hatching rate, survival rate, and daily growth in shell length and body weight were assessed in the three size groups. Results show that significant differences (P<0.01) were found in hatching rate, survival rate, and daily growth of F1 progenies, but not in fertilization rate (P>0.05), among the three groups. The hatching rate, survival rate, and daily growth of the progeny of large-sized parents were greater than those of the control group (P<0.05), which in turn were greater than those of the small-sized group (P<0.05). Responses to selection in shell length and body weight were 0.32 ± 0.04 cm and 2.18 ± 0.05 g, respectively, for the upward selection, and -0.14 ± 0.03 cm and -2.77 ± 0.06 g, respectively, for the downward selection. The realized heritability estimates for shell length and body weight were 0.38 ± 0.06 and 0.22 ± 0.07 for the upward selection, and 0.24 ± 0.06 and 0.37 ± 0.09 for the downward selection, respectively. The change in growth under bidirectional selection suggests that high genetic variation may be present in the cultured bay scallop population in China.
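The realized-heritability estimates reported above come from the breeder's equation, h² = R/S, where R is the response to selection and S the selection differential. The means below are invented to illustrate the arithmetic; they are not the study's measurements.

```python
# Hedged sketch of the realized-heritability calculation (h^2 = R / S).

def realized_heritability(parent_mean, selected_parent_mean,
                          base_offspring_mean, selected_offspring_mean):
    S = selected_parent_mean - parent_mean              # selection differential
    R = selected_offspring_mean - base_offspring_mean   # response to selection
    return R / S

# Invented shell-length means (cm) for an upward-selected line:
h2 = realized_heritability(4.5, 5.5, 4.6, 4.92)
```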
A management-oriented classification of pinyon-juniper woodlands of the Great Basin
Neil E. West; Robin J. Tausch; Paul T. Tueller
1998-01-01
A hierarchical framework for the classification of Great Basin pinyon-juniper woodlands was based on a systematic sample of 426 stands from a random selection of 66 of the 110 mountain ranges in the region. That is, mountain ranges were randomly selected, but stands were systematically located on mountain ranges. The National Hierarchical Framework of Ecological Units...
School Happiness and School Success: An Investigation across Multiple Grade Levels.
ERIC Educational Resources Information Center
Parish, Joycelyn Gay; Parish, Thomas S.; Batt, Steve
A total of 572 randomly selected sixth-grade students and 908 randomly selected ninth-grade students from a large metropolitan school district in the Midwest were asked to complete a series of survey questions designed to measure the extent to which they were happy while at school, as well as questions concerning the extent to which they treated…
40 CFR 761.306 - Sampling 1 meter square surfaces by random selection of halves.
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 31 2014-07-01 2014-07-01 false Sampling 1 meter square surfaces by...(b)(3) § 761.306 Sampling 1 meter square surfaces by random selection of halves. (a) Divide each 1 meter square portion where it is necessary to collect a surface wipe test sample into two equal (or as...
40 CFR 761.308 - Sample selection by random number generation on any two-dimensional square grid.
Code of Federal Regulations, 2013 CFR
2013-07-01
... generation on any two-dimensional square grid. 761.308 Section 761.308 Protection of Environment... § 761.79(b)(3) § 761.308 Sample selection by random number generation on any two-dimensional square grid. (a) Divide the surface area of the non-porous surface into rectangular or square areas having a...
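The grid-based scheme this regulation describes can be sketched as picking sampling cells by random coordinate generation over a square grid. Grid size and sample count below are illustrative, not values from 40 CFR 761.308.

```python
import random

# Hedged sketch: divide a surface into a rows x cols grid of square cells,
# then select k distinct cells at random for wipe sampling.

def select_cells(rows, cols, k, rng):
    """Pick k distinct (row, col) cells from the grid at random."""
    all_cells = [(r, c) for r in range(rows) for c in range(cols)]
    return rng.sample(all_cells, k)

cells = select_cells(10, 10, 3, random.Random(7))  # 3 cells from a 10x10 grid
```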
40 CFR 761.306 - Sampling 1 meter square surfaces by random selection of halves.
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 32 2013-07-01 2013-07-01 false Sampling 1 meter square surfaces by...(b)(3) § 761.306 Sampling 1 meter square surfaces by random selection of halves. (a) Divide each 1 meter square portion where it is necessary to collect a surface wipe test sample into two equal (or as...
40 CFR 761.308 - Sample selection by random number generation on any two-dimensional square grid.
Code of Federal Regulations, 2011 CFR
2011-07-01
... generation on any two-dimensional square grid. 761.308 Section 761.308 Protection of Environment... § 761.79(b)(3) § 761.308 Sample selection by random number generation on any two-dimensional square grid. (a) Divide the surface area of the non-porous surface into rectangular or square areas having a...
40 CFR 761.306 - Sampling 1 meter square surfaces by random selection of halves.
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 32 2012-07-01 2012-07-01 false Sampling 1 meter square surfaces by...(b)(3) § 761.306 Sampling 1 meter square surfaces by random selection of halves. (a) Divide each 1 meter square portion where it is necessary to collect a surface wipe test sample into two equal (or as...
40 CFR 761.308 - Sample selection by random number generation on any two-dimensional square grid.
Code of Federal Regulations, 2010 CFR
2010-07-01
... generation on any two-dimensional square grid. 761.308 Section 761.308 Protection of Environment... § 761.79(b)(3) § 761.308 Sample selection by random number generation on any two-dimensional square grid. (a) Divide the surface area of the non-porous surface into rectangular or square areas having a...
40 CFR 761.306 - Sampling 1 meter square surfaces by random selection of halves.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 40 Protection of Environment 31 2011-07-01 2011-07-01 false Sampling 1 meter square surfaces by...(b)(3) § 761.306 Sampling 1 meter square surfaces by random selection of halves. (a) Divide each 1 meter square portion where it is necessary to collect a surface wipe test sample into two equal (or as...
40 CFR 761.308 - Sample selection by random number generation on any two-dimensional square grid.
Code of Federal Regulations, 2014 CFR
2014-07-01
... generation on any two-dimensional square grid. 761.308 Section 761.308 Protection of Environment... § 761.79(b)(3) § 761.308 Sample selection by random number generation on any two-dimensional square grid. (a) Divide the surface area of the non-porous surface into rectangular or square areas having a...
40 CFR 761.306 - Sampling 1 meter square surfaces by random selection of halves.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 40 Protection of Environment 30 2010-07-01 2010-07-01 false Sampling 1 meter square surfaces by...(b)(3) § 761.306 Sampling 1 meter square surfaces by random selection of halves. (a) Divide each 1 meter square portion where it is necessary to collect a surface wipe test sample into two equal (or as...
40 CFR 761.308 - Sample selection by random number generation on any two-dimensional square grid.
Code of Federal Regulations, 2012 CFR
2012-07-01
... generation on any two-dimensional square grid. 761.308 Section 761.308 Protection of Environment... § 761.79(b)(3) § 761.308 Sample selection by random number generation on any two-dimensional square grid. (a) Divide the surface area of the non-porous surface into rectangular or square areas having a...
Attitude and Motivation as Predictors of Academic Achievement of Students in Clothing and Textiles
ERIC Educational Resources Information Center
Uwameiye, B. E.; Osho, L. E.
2011-01-01
This study investigated attitude and motivation as predictors of academic achievement of students in clothing and textiles. Three colleges of education in Edo and Delta States were randomly selected for use in this study. From each school, 40 students were selected from Year III using a simple random sampling technique, yielding a total of 240 students. The…
Fidelity decay in interacting two-level boson systems: Freezing and revivals
NASA Astrophysics Data System (ADS)
Benet, Luis; Hernández-Quiroz, Saúl; Seligman, Thomas H.
2011-05-01
We study the fidelity decay in the k-body embedded ensembles of random matrices for bosons distributed in two single-particle states, taking the reference (unperturbed) Hamiltonian to be the one-body terms plus the diagonal part of the k-body embedded ensemble of random matrices, and the perturbation to be the residual off-diagonal part of the interaction. We calculate the ensemble-averaged fidelity with respect to an initial random state within linear response theory, to second order in the perturbation strength, and demonstrate that it displays the fidelity freeze. During the freeze, the average fidelity exhibits periodic revivals at integer values of the Heisenberg time tH. By selecting specific k-body terms of the residual interaction, we find that the periodicity of the revivals during the freeze is an integer fraction of tH, thus relating the period of the revivals to the range k of the perturbing interaction. Numerical calculations confirm the analytical results.
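For context, the fidelity amplitude commonly studied in this setting (a standard definition, not spelled out in the abstract) compares evolution under the unperturbed Hamiltonian $H_0$ with evolution under the perturbed one, $H_\lambda = H_0 + \lambda V$, where $V$ is the residual off-diagonal interaction:

```latex
f(t) = \langle \psi_0 \,|\, e^{i H_0 t/\hbar}\, e^{-i H_\lambda t/\hbar} \,|\, \psi_0 \rangle,
\qquad
F(t) = \overline{\lvert f(t) \rvert^{2}},
```

where the bar denotes the average over the ensemble and over initial random states $|\psi_0\rangle$.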