ERIC Educational Resources Information Center
Livingston, Samuel A.; Kim, Sooyeon
2010-01-01
A series of resampling studies investigated the accuracy of equating by four different methods in a random groups equating design with samples of 400, 200, 100, and 50 test takers taking each form. Six pairs of forms were constructed. Each pair was constructed by assigning items from an existing test taken by 9,000 or more test takers. The…
Chemical mixtures in the environment are often the result of a dynamic process. When dose-response data are available on random samples throughout the process, equivalence testing can be used to determine whether the mixtures are sufficiently similar based on a pre-specified biol...
A formal and data-based comparison of measures of motor-equivalent covariation.
Verrel, Julius
2011-09-15
Different analysis methods have been developed for assessing motor-equivalent organization of movement variability. In the uncontrolled manifold (UCM) method, the structure of variability is analyzed by comparing goal-equivalent and non-goal-equivalent variability components at the level of elemental variables (e.g., joint angles). In contrast, in the covariation by randomization (CR) approach, motor-equivalent organization is assessed by comparing variability at the task level between empirical and decorrelated surrogate data. UCM effects can be due to both covariation among elemental variables and selective channeling of variability to elemental variables with low task sensitivity ("individual variation"), suggesting a link between the UCM and CR methods. However, the precise relationship between the notions of covariation in the two approaches has not yet been analyzed in detail. Analysis of empirical and simulated data from a study on manual pointing shows that in general the two approaches are not equivalent, but the respective covariation measures are highly correlated (ρ > 0.7) for two proposed definitions of covariation in the UCM context. For one-dimensional task spaces, a formal comparison is possible, and in fact the two notions of covariation are equivalent. In situations in which individual variation does not contribute to UCM effects, for which necessary and sufficient conditions are derived, this entails the equivalence of the UCM and CR analyses. Implications for the interpretation of UCM effects are discussed. Copyright © 2011 Elsevier B.V. All rights reserved.
ERIC Educational Resources Information Center
Hewitt, Nicole M.
2010-01-01
This study employed a quasi-experimental non-equivalent control group design with pretest and posttest. Two waves of data were collected from a non-random sample of 180 human service professionals in Western and Central Pennsylvania using two research instruments: the Social Work Empowerment Scale and the Conditions of Work Effectiveness-II Scale.…
Code of Federal Regulations, 2010 CFR
2010-07-01
... used to support the life safety equivalency evaluation? Analytical and empirical tools, including fire models and grading schedules such as the Fire Safety Evaluation System (Alternative Approaches to Life... empirical tools should be used to support the life safety equivalency evaluation? 102-80.120 Section 102-80...
Code of Federal Regulations, 2011 CFR
2011-01-01
... used to support the life safety equivalency evaluation? Analytical and empirical tools, including fire models and grading schedules such as the Fire Safety Evaluation System (Alternative Approaches to Life... empirical tools should be used to support the life safety equivalency evaluation? 102-80.120 Section 102-80...
Schmidt, Frank L; Le, Huy; Ilies, Remus
2003-06-01
On the basis of an empirical study of measures of constructs from the cognitive domain, the personality domain, and the domain of affective traits, the authors of this study examine the implications of transient measurement error for the measurement of frequently studied individual differences variables. The authors clarify relevant reliability concepts as they relate to transient error and present a procedure for estimating the coefficient of equivalence and stability (L. J. Cronbach, 1947), the only classical reliability coefficient that assesses all 3 major sources of measurement error (random response, transient, and specific factor errors). The authors conclude that transient error exists in all 3 trait domains and is especially large in the domain of affective traits. Their findings indicate that the nearly universal use of the coefficient of equivalence (Cronbach's alpha; L. J. Cronbach, 1951), which fails to assess transient error, leads to overestimates of reliability and undercorrections for biases due to measurement error.
Damuth, John
2007-05-01
Across a wide array of animal species, mean population densities decline with species body mass such that the rate of energy use of local populations is approximately independent of body size. This "energetic equivalence" is particularly evident when ecological population densities are plotted across several or more orders of magnitude in body mass and is supported by a considerable body of evidence. Nevertheless, interpretation of the data has remained controversial, largely because of the difficulty of explaining the origin and maintenance of such a size-abundance relationship in terms of purely ecological processes. Here I describe results of a simulation model suggesting that an extremely simple mechanism operating over evolutionary time can explain the major features of the empirical data. The model specifies only the size scaling of metabolism and a process where randomly chosen species evolve to take resource energy from other species. This process of energy exchange among particular species is distinct from a random walk of species abundances and creates a situation in which species populations using relatively low amounts of energy at any body size have an elevated extinction risk. Selective extinction of such species rapidly drives size-abundance allometry in faunas toward approximate energetic equivalence and maintains it there.
Code of Federal Regulations, 2014 CFR
2014-01-01
...) FEDERAL MANAGEMENT REGULATION REAL PROPERTY 80-SAFETY AND ENVIRONMENTAL MANAGEMENT Accident and Fire... used to support the life safety equivalency evaluation? Analytical and empirical tools, including fire models and grading schedules such as the Fire Safety Evaluation System (Alternative Approaches to Life...
Code of Federal Regulations, 2013 CFR
2013-07-01
...) FEDERAL MANAGEMENT REGULATION REAL PROPERTY 80-SAFETY AND ENVIRONMENTAL MANAGEMENT Accident and Fire... used to support the life safety equivalency evaluation? Analytical and empirical tools, including fire models and grading schedules such as the Fire Safety Evaluation System (Alternative Approaches to Life...
Code of Federal Regulations, 2012 CFR
2012-01-01
...) FEDERAL MANAGEMENT REGULATION REAL PROPERTY 80-SAFETY AND ENVIRONMENTAL MANAGEMENT Accident and Fire... used to support the life safety equivalency evaluation? Analytical and empirical tools, including fire models and grading schedules such as the Fire Safety Evaluation System (Alternative Approaches to Life...
An empirical assessment of taxic paleobiology.
Adrain, J M; Westrop, S R
2000-07-07
The analysis of major changes in faunal diversity through time is a central theme of analytical paleobiology. The most important sources of data are literature-based compilations of stratigraphic ranges of fossil taxa. The levels of error in these compilations and the possible effects of such error have often been discussed but never directly assessed. We compared our comprehensive database of trilobites to the equivalent portion of J. J. Sepkoski Jr.'s widely used global genus database. More than 70% of entries in the global database are inaccurate; however, as predicted, the error is randomly distributed and does not introduce bias.
Progress in radar snow research. [Brookings, South Dakota
NASA Technical Reports Server (NTRS)
Stiles, W. H.; Ulaby, F. T.; Fung, A. K.; Aslam, A.
1981-01-01
Multifrequency measurements of the radar backscatter from snow-covered terrain were made at several sites in Brookings, South Dakota, during March 1979. The data are used to examine the response of the scattering coefficient to the following parameters: (1) snow surface roughness, (2) snow liquid water content, and (3) snow water equivalent. The results indicate that the scattering coefficient is insensitive to snow surface roughness if the snow is dry. For wet snow, however, surface roughness can have a strong influence on the magnitude of the scattering coefficient. These observations confirm the results predicted by a theoretical model that describes the snow as a volume of Rayleigh scatterers bounded by a Gaussian random surface. In addition, empirical models were developed to relate the scattering coefficient to snow liquid water content, and the dependence of the scattering coefficient on water equivalent was evaluated for both wet and dry snow conditions.
ERIC Educational Resources Information Center
Green, Francis; Vignoles, Anna
2012-01-01
We present a method to compare different qualifications for entry to higher education by studying students' subsequent performance. Using this method for students holding either the International Baccalaureate (IB) or A-levels gaining their degrees in 2010, we estimate an "empirical" equivalence scale between IB grade points and UCAS…
Efficient prediction designs for random fields.
Müller, Werner G; Pronzato, Luc; Rendas, Joao; Waldl, Helmut
2015-03-01
For estimation and prediction of random fields, it is increasingly acknowledged that the kriging variance may be a poor representative of true uncertainty. Experimental designs based on more elaborate criteria that are appropriate for empirical kriging (EK) are then often non-space-filling and very costly to determine. In this paper, we investigate the possibility of using a compound criterion, inspired by an equivalence-theorem type relation, to build designs that are quasi-optimal for the EK variance when space-filling designs become unsuitable. Two algorithms are proposed: one relies on stochastic optimization to explicitly identify the Pareto front, whereas the second uses the surrogate criterion as a local heuristic to choose the points at which the (costly) true EK variance is effectively computed. We illustrate the performance of the algorithms presented on both a simple simulated example and a real oceanographic dataset. © 2014 The Authors. Applied Stochastic Models in Business and Industry published by John Wiley & Sons, Ltd.
Bias-dependent hybrid PKI empirical-neural model of microwave FETs
NASA Astrophysics Data System (ADS)
Marinković, Zlatica; Pronić-Rančić, Olivera; Marković, Vera
2011-10-01
Empirical models of microwave transistors based on an equivalent circuit are valid for only one bias point, so bias-dependent analysis requires repeated extraction of the model parameters for each bias point. To make the model bias-dependent, a new hybrid empirical-neural model of microwave field-effect transistors is proposed in this article. The model is a combination of an equivalent circuit model, including noise, developed for one bias point and two prior knowledge input artificial neural networks (PKI ANNs) aimed at introducing bias dependency of the scattering (S) and noise parameters, respectively. The prior knowledge of the proposed ANNs involves the values of the S- and noise parameters obtained by the empirical model. The proposed hybrid model is valid over the whole range of bias conditions. Moreover, the proposed model provides better accuracy than the empirical model, which is illustrated by an appropriate modelling example of a pseudomorphic high-electron-mobility transistor device.
NASA Technical Reports Server (NTRS)
Regian, J. Wesley; Shebilske, Wayne; Monk, John M.
1993-01-01
We explored the training potential of Virtual Reality (VR) technology. Thirty-one adults were trained and tested on spatial skills in a VR. They learned a sequence of button and knob responses on a VR console and performed flawlessly on the same console. Half were trained with a rote strategy; the rest used a meaningful strategy. Response times were equivalent for both groups and decreased significantly over five test trials indicating that learning continued on VR tests. The same subjects practiced navigating through a VR building, which had three floors with four rooms on each floor. The dependent measure was the number of rooms traversed on routes that differed from training routes. Many subjects completed tests in the fewest rooms possible. All subjects learned configurational knowledge according to the criterion of taking paths that were significantly shorter than those predicted by a random walk as determined by a Monte Carlo analysis. The results were discussed as a departure point for empirically testing the training potential of VR technology.
ERIC Educational Resources Information Center
Liao, Chi-Wen; Livingston, Samuel A.
2008-01-01
Randomly equivalent forms (REF) of tests in listening and reading for nonnative speakers of English were created by stratified random assignment of items to forms, stratifying on item content and predicted difficulty. The study included 50 replications of the procedure for each test. Each replication generated 2 REFs. The equivalence of those 2…
Exploring Equivalent Forms Reliability Using a Key Stage 2 Reading Test
ERIC Educational Resources Information Center
Benton, Tom
2013-01-01
This article outlines an empirical investigation into equivalent forms reliability using a case study of a national curriculum reading test. Within the situation being studied, there has been a genuine attempt to create several equivalent forms and so it is of interest to compare the actual behaviour of the relationship between these forms to the…
Towards dropout training for convolutional neural networks.
Wu, Haibing; Gu, Xiaodong
2015-11-01
Recently, dropout has seen increasing use in deep learning. For deep convolutional neural networks, dropout is known to work well in fully-connected layers. However, its effect in convolutional and pooling layers is still not clear. This paper demonstrates that max-pooling dropout is equivalent to randomly picking an activation based on a multinomial distribution at training time. In light of this insight, we advocate employing our proposed probabilistic weighted pooling, instead of the commonly used max-pooling, to act as model averaging at test time. Empirical evidence validates the superiority of probabilistic weighted pooling. We also empirically show that the effect of convolutional dropout is not trivial, despite the dramatically reduced possibility of over-fitting due to the convolutional architecture. By elaborately designing dropout training simultaneously in max-pooling and fully-connected layers, we achieve state-of-the-art performance on MNIST, and very competitive results on CIFAR-10 and CIFAR-100, relative to other approaches without data augmentation. Finally, we compare max-pooling dropout and stochastic pooling, both of which introduce stochasticity based on multinomial distributions at the pooling stage. Copyright © 2015 Elsevier Ltd. All rights reserved.
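A minimal NumPy sketch of the two pooling rules discussed in this abstract, under assumed names (region holds the activations of one pooling window, p_retain is the dropout retain probability); this illustrates the multinomial view, not the authors' implementation:

```python
import numpy as np

def max_pool_dropout_train(region, p_retain, rng):
    # Training time: drop each unit with probability 1 - p_retain,
    # then max-pool over the survivors (output 0 if all are dropped).
    kept = region[rng.random(region.shape) < p_retain]
    return kept.max() if kept.size else 0.0

def prob_weighted_pool_test(region, p_retain):
    # Test time: the exact expectation of the rule above. For activations
    # sorted ascending, a_i is the output iff it is kept and every larger
    # unit is dropped: P(output = a_i) = p * (1 - p)**(number larger).
    a = np.sort(np.ravel(region))
    q = 1.0 - p_retain
    probs = p_retain * q ** (a.size - 1 - np.arange(a.size))
    return float((probs * a).sum())  # missing mass q**n is the all-dropped, output-0 case

rng = np.random.default_rng(0)
region = np.array([0.1, 0.5, 0.3, 0.9])
mc = np.mean([max_pool_dropout_train(region, 0.7, rng) for _ in range(50000)])
print(mc, prob_weighted_pool_test(region, 0.7))  # the two should agree closely
```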
Sequential time interleaved random equivalent sampling for repetitive signal.
Zhao, Yijiu; Liu, Jingjing
2016-12-01
Compressed sensing (CS) based sampling techniques exhibit many advantages over other existing approaches for sparse signal spectrum sensing; they have also been incorporated into non-uniform sampling signal reconstruction to improve efficiency, as in random equivalent sampling (RES). However, in CS-based RES, only one sample of each acquisition is considered in the signal reconstruction stage, which results in more acquisition runs and longer sampling time. In this paper, a sampling sequence is taken in each RES acquisition run, and the corresponding block measurement matrix is constructed using the Whittaker-Shannon interpolation formula. All the block matrices are combined into an equivalent measurement matrix with respect to all sampling sequences. We implemented the proposed approach with a multi-core analog-to-digital converter (ADC) whose cores are time-interleaved. A prototype realization of this proposed CS-based sequential random equivalent sampling method has been developed. It is able to capture an analog waveform at an equivalent sampling rate of 40 GHz while physically sampling at 1 GHz. Experiments indicate that, for a sparse signal, the proposed CS-based sequential random equivalent sampling exhibits high efficiency.
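A sketch of the block measurement matrix described above, assuming the standard Whittaker-Shannon model in which each physical sample is a sinc-weighted combination of the signal on a uniform equivalent-time grid (function and variable names are illustrative, not taken from the paper):

```python
import numpy as np

def res_block_matrix(sample_times, T, n_grid):
    # Model y_k = x(t_k) = sum_n x[n] * sinc((t_k - n*T) / T), where x[n]
    # lives on the uniform grid n*T (np.sinc(u) is sin(pi*u)/(pi*u)).
    n = np.arange(n_grid)
    return np.sinc((np.asarray(sample_times)[:, None] - n[None, :] * T) / T)
```

Stacking the blocks from all acquisition runs row-wise yields the combined equivalent measurement matrix, after which reconstruction proceeds with any standard sparse-recovery solver.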
Shen, Chong; Li, Jie; Zhang, Xiaoming; Shi, Yunbo; Tang, Jun; Cao, Huiliang; Liu, Jun
2016-05-31
The different noise components in a dual-mass micro-electromechanical system (MEMS) gyroscope structure are analyzed in this paper, including mechanical-thermal noise (MTN), electronic-thermal noise (ETN), flicker noise (FN) and Coriolis signal in-phase noise (IPN). The structure's equivalent electronic model is established, and an improved white Gaussian noise reduction method for dual-mass MEMS gyroscopes is proposed which is based on sample entropy empirical mode decomposition (SEEMD) and time-frequency peak filtering (TFPF). There is a contradiction in TFPF: selecting a short window length may lead to good preservation of signal amplitude but poor random noise reduction, whereas selecting a long window length may lead to serious attenuation of the signal amplitude but effective random noise reduction. In order to achieve a good tradeoff between valid signal amplitude preservation and random noise reduction, SEEMD is adopted to improve TFPF. Firstly, the original signal is decomposed into intrinsic mode functions (IMFs) by EMD, and the SE of each IMF is calculated in order to classify the numerous IMFs into three different components; then short-window TFPF is employed for the low-frequency component of the IMFs, long-window TFPF is employed for the high-frequency component, and the noise component is discarded directly; at last the final signal is obtained after reconstruction. Rotation and temperature experiments were carried out to verify the proposed SEEMD-TFPF algorithm; the verification and comparison results show that the de-noising performance of SEEMD-TFPF is better than that achievable with traditional wavelet, Kalman filter and fixed-window-length TFPF methods.
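A structural sketch of the SEEMD-TFPF routing logic described above. It assumes the PyEMD package for the decomposition; tfpf stands in for any time-frequency peak filtering implementation, and the entropy thresholds are illustrative, not the paper's values:

```python
import numpy as np
from PyEMD import EMD  # assumed dependency for the decomposition step

def sample_entropy(x, m=2, r_frac=0.2):
    # Plain O(N^2)-memory sample entropy; r is a fraction of the signal std.
    x = np.asarray(x); N = len(x); r = r_frac * x.std()
    def matches(mm):
        t = np.array([x[i:i + mm] for i in range(N - mm)])
        d = np.abs(t[:, None, :] - t[None, :, :]).max(axis=2)
        return (d <= r).sum() - len(t)  # count pairs within r, excluding self-matches
    B, A = matches(m), matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

def seemd_tfpf(signal, tfpf, short_win=5, long_win=51, lo=0.1, hi=0.5):
    # tfpf(imf, window) is a placeholder for a TFPF implementation.
    out = np.zeros_like(np.asarray(signal, dtype=float))
    for imf in EMD()(np.asarray(signal, dtype=float)):
        se = sample_entropy(imf)
        if se > hi:
            continue                    # noise component: wiped off directly
        elif se > lo:
            out += tfpf(imf, long_win)  # high-frequency component: long window
        else:
            out += tfpf(imf, short_win) # low-frequency component: short window
    return out
```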
A method for determining the weak statistical stationarity of a random process
NASA Technical Reports Server (NTRS)
Sadeh, W. Z.; Koper, C. A., Jr.
1978-01-01
A method for determining the weak statistical stationarity of a random process is presented. The core of this testing procedure consists of generating an equivalent ensemble which approximates a true ensemble. Formation of an equivalent ensemble is accomplished through segmenting a sufficiently long time history of a random process into equal, finite, and statistically independent sample records. The weak statistical stationarity is ascertained based on the time invariance of the equivalent-ensemble averages. Comparison of these averages with their corresponding time averages over a single sample record leads to a heuristic estimate of the ergodicity of a random process. Specific variance tests are introduced for evaluating the statistical independence of the sample records, the time invariance of the equivalent-ensemble autocorrelations, and the ergodicity. Examination and substantiation of these procedures were conducted utilizing turbulent velocity signals.
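A minimal sketch of the equivalent-ensemble construction for a sampled 1-D time history; the paper's specific variance tests are not reproduced here, and the returned diagnostics are only indicative:

```python
import numpy as np

def equivalent_ensemble(x, n_records):
    # Segment one long time history into equal, finite sample records;
    # the rows then play the role of ensemble members.
    L = len(x) // n_records
    return np.asarray(x)[:L * n_records].reshape(n_records, L)

def stationarity_diagnostics(x, n_records=32):
    ens = equivalent_ensemble(x, n_records)
    mean_t = ens.mean(axis=0)  # equivalent-ensemble average at each instant
    var_t = ens.var(axis=0)    # both should be ~time-invariant if weakly stationary
    # Heuristic ergodicity check: time average of one record vs. ensemble mean.
    return mean_t, var_t, ens[0].mean() - mean_t.mean()
```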
Stem cell identity and template DNA strand segregation.
Tajbakhsh, Shahragim
2008-12-01
The quest for stem cell properties to distinguish their identity from that of committed daughters has led to a re-investigation of the notion that DNA strands are not equivalent, and that 'immortal' DNA strands are retained in stem cells whereas newly replicated DNA strands segregate to the differentiating daughter cell during mitosis. Whether this process occurs only in stem cells, and whether it occurs in all tissues, remains unclear. That individual chromosomes can also be partitioned non-randomly raises the question of whether this phenomenon is related to the immortal DNA hypothesis, and it underscores the need for high-resolution techniques to observe these events empirically. Although initially postulated as a mechanism to avoid DNA replication errors, alternative views including epigenetic regulation and sister chromatid silencing may provide insights into this process.
[On the present situation in psychotherapy and its implications - A critical analysis of the facts].
Tschuschke, Volker; Freyberger, Harald J
2015-01-01
The currently dominant research paradigm in evidence-based medicine is expounded and discussed with regard to the problems arising from so-called empirically supported treatments (ESTs) in psychology and psychotherapy. Prevailing political, economic and ideological backgrounds influence the present dominance of the medical model in psychotherapy by establishing the randomized controlled research design as the standard in the field. It has been demonstrated that randomized controlled trials (RCTs) are inadequate in psychotherapy research, not least because of the high complexity of psychotherapy and the relatively weak role of the treatment concept in the change process itself. All major meta-analyses show that the Dodo bird verdict is still alive, thereby demonstrating that the medical model in psychotherapy with its RCT paradigm cannot explain the equivalence paradox. The medical model is inappropriate, so the contextual model is proposed as an alternative. Extensive process-outcome research is suggested as the only viable and reasonable way to identify the highly complex interactions between the many factors regularly involved in change processes in psychotherapy.
ERIC Educational Resources Information Center
McLay, Laurie Kathleen; Sutherland, Dean; Church, John; Tyler-Merrick, Gaye
2013-01-01
Articles that empirically investigated the emergence of untaught equivalence relations among individuals with autism are presented in this review. Systematic searches of academic databases, journals and ancestry searches identified nine studies that met inclusion criteria. These studies were evaluated according to: (a) participants, (b)…
Cross-language synonyms in the lexicons of bilingual infants: one language or two?
Pearson, B Z; Fernández, S; Oller, D K
1995-06-01
This study tests the widely-cited claim from Volterra & Taeschner (1978), which is reinforced by Clark's PRINCIPLE OF CONTRAST (1987), that young simultaneous bilingual children reject cross-language synonyms in their earliest lexicons. The rejection of translation equivalents is taken by Volterra & Taeschner as support for the idea that the bilingual child possesses a single-language system which includes elements from both languages. We examine first the accuracy of the empirical claim and then its adequacy as support for the argument that bilingual children do not have independent lexical systems in each language. The vocabularies of 27 developing bilinguals were recorded at varying intervals between ages 0;8 and 2;6 using the MacArthur CDI, a standardized parent report form in English and Spanish. The two single-language vocabularies of each bilingual child were compared to determine how many pairs of translation equivalents (TEs) were reported for each child at different stages of development. TEs were observed for all children but one, with an average of 30% of all words coded in the two languages, both at early stages (in vocabularies of 2-12 words) and later (up to 500 words). Thus, Volterra & Taeschner's empirical claim was not upheld. Further, the number of TEs in the bilinguals' two lexicons was shown to be similar to the number of lexical items which co-occurred in the monolingual lexicons of two different children, as observed in 34 random pairings for between-child comparisons. It remains to be shown, therefore, that the bilinguals' lexicons are not composed of two independent systems at a very early age. Furthermore, the results appear to rule out the operation of a strong principle of contrast across languages in early bilingualism.
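A small sketch of the translation-equivalent count at the heart of this analysis, assuming each child's checklist is a set of words and eng_to_spa is a hypothetical English-to-Spanish lookup (the actual CDI data structures are richer than this):

```python
def count_tes(eng_words, spa_words, eng_to_spa):
    # A translation-equivalent pair is a word reported on the English list
    # whose translation is also reported on the Spanish list.
    spa = set(spa_words)
    return sum(1 for w in set(eng_words) if eng_to_spa.get(w) in spa)
```

The between-child baseline then applies the same count to random pairings of lists from two different children.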
NASA Astrophysics Data System (ADS)
Paramonov, L. E.
2012-05-01
Light scattering by isotropic ensembles of ellipsoidal particles is considered in the Rayleigh-Gans-Debye approximation. It is proved that randomly oriented ellipsoidal particles are optically equivalent to polydisperse randomly oriented spheroidal particles and polydisperse spherical particles. Density functions of the shape and size distributions for equivalent ensembles of spheroidal and spherical particles are presented. In the anomalous diffraction approximation, equivalent ensembles of particles are shown to also have equal extinction, scattering, and absorption coefficients. Consequences of optical equivalence are considered. The results are illustrated by numerical calculations of the angular dependence of the scattering phase function using the T-matrix method and the Mie theory.
ERIC Educational Resources Information Center
McNeil, Nicole M.; Chesney, Dana L.; Matthews, Percival G.; Fyfe, Emily R.; Petersen, Lori A.; Dunwiddie, April E.; Wheeler, Mary C.
2012-01-01
This experiment tested the hypothesis that organizing arithmetic fact practice by equivalent values facilitates children's understanding of math equivalence. Children (M age = 8 years 6 months, N = 104) were randomly assigned to 1 of 3 practice conditions: (a) equivalent values, in which problems were grouped by equivalent sums (e.g., 3 + 4 = 7, 2…
Reciprocity in the electronic stopping of slow ions in matter
NASA Astrophysics Data System (ADS)
Sigmund, P.
2008-04-01
The principle of reciprocity, i.e., the invariance of the inelastic excitation in ion-atom collisions under interchange of projectile and target, has been applied to the electronic stopping cross section of low-velocity ions and tested empirically on ion-target combinations supported by a more or less adequate amount of experimental data. Reciprocity is well obeyed (within ~10%) for many systems studied, and deviations exceeding ~20% are exceptional. Systematic deviations such as gas-solid or metal-insulator differences have been looked for but not identified on the present basis. A direct consequence of reciprocity is the equivalence of Z1 and Z2 structure for random slowing down. This feature is reasonably well supported empirically for ion-target combinations involving carbon, nitrogen, aluminium and argon. Reciprocity may be utilized as a criterion to reject questionable experimental data. In cases where a certain stopping cross section has not been or cannot be measured, the stopping cross section for the inverted system may be available and serve as a first estimate. It is suggested that reciprocity be built into empirical interpolation schemes directed at the stopping of low-velocity ions as a fundamental requirement. Examination of the SRIM and MSTAR codes reveals cases where reciprocity is obeyed accurately, but deviations of up to a factor of two are common. In the case of heavy ions such as gold, electronic stopping cross sections predicted by SRIM are asserted to be almost an order of magnitude too high.
Comparison of Nonlinear Random Response Using Equivalent Linearization and Numerical Simulation
NASA Technical Reports Server (NTRS)
Rizzi, Stephen A.; Muravyov, Alexander A.
2000-01-01
A recently developed finite-element-based equivalent linearization approach for the analysis of random vibrations of geometrically nonlinear multiple degree-of-freedom structures is validated. The validation is based on comparisons with results from a finite element based numerical simulation analysis using a numerical integration technique in physical coordinates. In particular, results for the case of a clamped-clamped beam are considered for an extensive load range to establish the limits of validity of the equivalent linearization approach.
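For orientation, a one-degree-of-freedom illustration of equivalent (statistical) linearization in LaTeX; the notation here is assumed, and the paper itself treats multi-degree-of-freedom geometrically nonlinear finite element models:

```latex
% Nonlinear oscillator and its equivalent linear surrogate:
%   m x'' + c x' + k x + g(x) = f(t)  -->  m x'' + c x' + k_eq x = f(t)
\begin{equation}
  k_{\mathrm{eq}}
    = \arg\min_{\kappa}\, E\!\left[\big(kx + g(x) - \kappa x\big)^{2}\right]
    = \frac{E\!\left[x\,\big(kx + g(x)\big)\right]}{E\!\left[x^{2}\right]}
\end{equation}
```

Because the expectations depend on the response statistics, which in turn depend on k_eq, the linearization is iterated to convergence.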
Asynchronous Replication and Autosome-Pair Non-Equivalence in Human Embryonic Stem Cells
Dutta, Devkanya; Ensminger, Alexander W.; Zucker, Jacob P.; Chess, Andrew
2009-01-01
A number of mammalian genes exhibit the unusual properties of random monoallelic expression and random asynchronous replication. Such exceptional genes include genes subject to X inactivation and autosomal genes including odorant receptors, immunoglobulins, interleukins, pheromone receptors, and p120 catenin. In differentiated cells, random asynchronous replication of interspersed autosomal genes is coordinated at the whole-chromosome level, indicative of chromosome-pair non-equivalence. Here we have investigated the replication pattern of randomly asynchronously replicating genes in undifferentiated human embryonic stem cells, using a fluorescence in situ hybridization based assay. We show that allele-specific replication of X-linked genes and random monoallelic autosomal genes occurs in human embryonic stem cells. The direction of replication is coordinated at the whole-chromosome level and can cross the centromere, indicating the existence of autosome-pair non-equivalence in human embryonic stem cells. These results suggest that epigenetic mechanism(s) that randomly distinguish between two parental alleles are emerging in the cells of the inner cell mass, the source of human embryonic stem cells. PMID:19325893
Santa Ana, Elizabeth J.; Carroll, Kathleen M.; Añez, Luis; Paris, Manuel; Ball, Samuel A.; Nich, Charla; Frankforter, Tami L.; Suarez-Morales, Lourdes; Szapocznik, José; Martino, Steve
2009-01-01
Despite the fact that the number of Hispanic individuals in need of treatment for substance use problems is increasing internationally, no studies have investigated the extent to which therapists can provide empirically supported treatments to Spanish-speaking clients with adequate fidelity. Twenty-three bilingual Hispanic therapists from five community outpatient treatment programs in the United States were randomly assigned to deliver either three sessions of motivational enhancement therapy (MET) or an equivalent number of drug counseling-as-usual sessions (CAU) in Spanish to 405 Spanish-speaking clients randomly assigned to these conditions. Independent ratings of 325 sessions indicated the adherence/competence rating system had good to excellent interrater reliability and indicated strong support for an a priori defined fundamental MET skill factor. Support for an advanced MET skill factor was relatively weaker. The rating scale indicated significant differences in therapists’ MET adherence and competence across conditions. These findings indicate that the rating system has promise for assessing the performance of therapists who deliver MET in Spanish and suggest that bilingual Spanish-speaking therapists from the community can be trained to implement MET with adequate fidelity and skill using an intensive multisite training and supervision model. PMID:19394164
No complexity–stability relationship in empirical ecosystems
Jacquet, Claire; Moritz, Charlotte; Morissette, Lyne; Legagneux, Pierre; Massol, François; Archambault, Philippe; Gravel, Dominique
2016-01-01
Understanding the mechanisms responsible for the stability and persistence of ecosystems is one of the greatest challenges in ecology. Robert May showed that, contrary to intuition, complex randomly built ecosystems are less likely to be stable than simpler ones. Few attempts have been made to test May's prediction empirically, and the actual complexity–stability relationship in natural ecosystems remains unknown. Here we perform a stability analysis of 116 quantitative food webs sampled worldwide. We find that classic descriptors of complexity (species richness, connectance and interaction strength) are not associated with stability in empirical food webs. Further analysis reveals that a correlation between the effects of predators on prey and those of prey on predators, combined with a high frequency of weak interactions, stabilizes food web dynamics relative to the random expectation. We conclude that empirical food webs have several non-random properties contributing to the absence of a complexity–stability relationship. PMID:27553393
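A compact sketch of the random-matrix stability check behind May's prediction (parameter names and defaults are illustrative; the paper applies a related analysis to empirical food web matrices):

```python
import numpy as np

def may_stable(S=50, C=0.2, sigma=1.0, rng=None):
    # Random community matrix sensu May: S species, connectance C,
    # off-diagonal strengths ~ N(0, sigma^2), self-regulation -1 on the diagonal.
    rng = rng or np.random.default_rng()
    A = rng.normal(0.0, sigma, (S, S)) * (rng.random((S, S)) < C)
    np.fill_diagonal(A, -1.0)
    # Locally stable iff every eigenvalue has negative real part.
    return np.linalg.eigvals(A).real.max() < 0
```

Repeating the draw many times and recording the stable fraction reproduces the sharp loss of stability May predicted near sigma * sqrt(S * C) = 1.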
Lithium-ion battery models: a comparative study and a model-based powerline communication
NASA Astrophysics Data System (ADS)
Saidani, Fida; Hutter, Franz X.; Scurtu, Rares-George; Braunwarth, Wolfgang; Burghartz, Joachim N.
2017-09-01
In this work, various Lithium-ion (Li-ion) battery models are evaluated according to their accuracy, complexity and physical interpretability. An initial classification into physical, empirical and abstract models is introduced. Also known as white, black and grey boxes, respectively, the nature and characteristics of these model types are compared. Since the Li-ion battery cell is a thermo-electro-chemical system, the models are either in the thermal or in the electrochemical state-space. Physical models attempt to capture key features of the physical process inside the cell. Empirical models describe the system with empirical parameters offering poor analytical insight, whereas abstract models provide an alternative representation. In addition, a model selection guideline is proposed based on applications and design requirements. A complex model with a detailed analytical insight is of use for battery designers but impractical for real-time applications and in situ diagnosis. In automotive applications, an abstract model reproducing the battery behavior in an equivalent but more practical form, mainly as an equivalent circuit diagram, is recommended for the purpose of battery management. As a general rule, a trade-off should be reached between high fidelity and computational feasibility. Especially if the model is embedded in a real-time monitoring unit such as a microprocessor or an FPGA, the calculation time and memory requirements rise dramatically with a higher number of parameters. Moreover, examples of equivalent circuit models of Lithium-ion batteries are covered. Equivalent circuit topologies are introduced and compared according to the previously introduced criteria. An experimental sequence to model a 20 Ah cell is presented and the results are used for the purposes of powerline communication.
Linkage analysis of quantitative refraction and refractive errors in the Beaver Dam Eye Study.
Klein, Alison P; Duggal, Priya; Lee, Kristine E; Cheng, Ching-Yu; Klein, Ronald; Bailey-Wilson, Joan E; Klein, Barbara E K
2011-07-13
Refraction, as measured by spherical equivalent, is the need for an external lens to focus images on the retina. While genetic factors play an important role in the development of refractive errors, few susceptibility genes have been identified. However, several regions of linkage have been reported for myopia (2q, 4q, 7q, 12q, 17q, 18p, 22q, and Xq) and for quantitative refraction (1p, 3q, 4q, 7p, 8p, and 11p). To replicate previously identified linkage peaks and to identify novel loci that influence quantitative refraction and refractive errors, nonparametric, sibling-pair, genome-wide linkage analyses of refraction (spherical equivalent adjusted for age, education, and nuclear sclerosis), myopia, and hyperopia were performed in 834 sibling pairs within 486 extended pedigrees from the Beaver Dam Eye Study. Suggestive evidence of linkage was found for hyperopia on chromosome 3, region q26 (empiric P = 5.34 × 10⁻⁴), a region that had shown significant genome-wide evidence of linkage to refraction and some evidence of linkage to hyperopia. In addition, the analysis replicated previously reported genome-wide significant linkages to 22q11 of adjusted refraction and myopia (empiric P = 4.43 × 10⁻³ and 1.48 × 10⁻³, respectively) and to 7p15 of refraction (empiric P = 9.43 × 10⁻⁴). Evidence was also found of linkage to refraction on 7q36 (empiric P = 2.32 × 10⁻³), a region previously linked to high myopia. The findings provide further evidence that genes controlling refractive errors are located on 3q26, 7p15, 7q36, and 22q11.
Equivalent crystal theory of alloys
NASA Technical Reports Server (NTRS)
Bozzolo, Guillermo; Ferrante, John
1991-01-01
Equivalent Crystal Theory (ECT) is a new, semi-empirical approach to calculating the energetics of a solid with defects. The theory has successfully reproduced surface energies in metals and semiconductors. To date, theories of binary alloys, both first-principles and semi-empirical, have not been very successful in predicting the energetics of alloys. Here the ECT procedure is used to predict the heats of formation, cohesive energies, and lattice parameters of binary alloys of Cu, Ni, Al, Ag, Au, Pd, and Pt as functions of composition. The procedure accurately reproduces the heats of formation versus composition curves for a variety of binary alloys. The results are compared with other approaches such as the embedded atom method, and a scheme that predicts lattice parameters of alloys from pure metal properties more accurately than Vegard's law is presented.
On the shape of martian dust and water ice aerosols
NASA Astrophysics Data System (ADS)
Pitman, K. M.; Wolff, M. J.; Clancy, R. T.; Clayton, G. C.
2000-10-01
Researchers have often calculated radiative properties of Martian aerosols using either Mie theory for homogeneous spheres or semi-empirical theories. Given that these atmospheric particles are randomly oriented, this approach seems fairly reasonable. However, the idea that randomly oriented nonspherical particles have scattering properties equivalent to even a select subset of spheres is demonstrably false (Bohren and Huffman 1983; Bohren and Koh 1985, Appl. Optics, 24, 1023). Fortunately, recent computational developments now enable us to directly compute scattering properties for nonspherical particles. We have combined a numerical approach for axisymmetric particle shapes, i.e., cylinders, disks, spheroids (Waterman's T-Matrix approach as improved by Mishchenko and collaborators; cf. Mishchenko et al. 1997, JGR, 102, D14, 16,831), with a multiple-scattering radiative transfer algorithm to constrain the shape of water ice and dust aerosols. We utilize a two-stage iterative process. First, we empirically derive a scattering phase function for each aerosol component (starting with some "guess") from radiative transfer models of MGS Thermal Emission Spectrometer Emission Phase Function (EPF) sequences (for details on this step, see Clancy et al., DPS 2000). Next, we perform a series of scattering calculations, adjusting our parameters to arrive at a "best-fit" theoretical phase function. In this presentation, we provide details on the second step in our analysis, including the derived phase functions (for several characteristic EPF sequences) as well as the particle properties of the best-fit theoretical models. We provide a sensitivity analysis for the EPF model-data comparisons in terms of perturbations in the particle properties (i.e., range of axial ratios, sizes, refractive indices, etc.). This work is supported through NASA grant NAGS-9820 (MJW) and JPL contract no. 961471 (RTC).
Signs of universality in the structure of culture
NASA Astrophysics Data System (ADS)
Băbeanu, Alexandru-Ionuţ; Talman, Leandros; Garlaschelli, Diego
2017-11-01
Understanding the dynamics of opinions, preferences and of culture as a whole requires more use of empirical data than has been made so far. It is clear that an important role in driving this dynamics is played by social influence, which is the essential ingredient of many quantitative models. Such models require that all traits be fixed when specifying the "initial cultural state". Typically, this initial state is randomly generated from a uniform distribution over the set of possible combinations of traits. However, recent work has shown that the outcome of social influence dynamics strongly depends on the nature of the initial state. If the latter is sampled from empirical data instead of being generated in a uniformly random way, a higher level of cultural diversity is found after long-term dynamics, for the same level of propensity towards collective behavior in the short term. Moreover, if the initial state is randomized by shuffling the empirical traits among people, the level of long-term cultural diversity is in between those obtained for the empirical and uniformly random counterparts. The current study repeats the analysis for multiple empirical data sets, showing that the results are remarkably similar, although the matrix of correlations between cultural variables clearly differs across data sets. This points towards robust structural properties inherent in empirical cultural states, possibly due to universal laws governing the dynamics of culture in the real world. The results also suggest that this dynamics might be characterized by criticality and involve mechanisms beyond social influence.
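A sketch of the trait-shuffling randomization mentioned above, assuming the cultural state is stored as an agents-by-variables matrix (names are illustrative):

```python
import numpy as np

def shuffle_traits(culture, rng=None):
    # Independently permute each cultural variable (column) across agents:
    # marginal trait frequencies are preserved, while the correlations
    # between variables are destroyed.
    rng = rng or np.random.default_rng()
    shuffled = np.array(culture)
    for j in range(shuffled.shape[1]):
        shuffled[:, j] = rng.permutation(shuffled[:, j])
    return shuffled
```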
Continuous-Time Random Walk with multi-step memory: an application to market dynamics
NASA Astrophysics Data System (ADS)
Gubiec, Tomasz; Kutner, Ryszard
2017-11-01
An extended version of the Continuous-Time Random Walk (CTRW) model with memory is developed herein. This memory involves dependence between an arbitrary number of successive jumps of the process, while waiting times between jumps are considered as i.i.d. random variables. The dependence was established by analyzing empirical histograms for the stochastic process of a single share price on a market at the high-frequency time scale, and then justified theoretically by considering the bid-ask bounce mechanism, which contains a delay characteristic of any double-auction market. Our model turns out to be exactly analytically solvable, which enables a direct comparison of its predictions with their empirical counterparts, for instance, with the empirical velocity autocorrelation function. The present research thus significantly extends the capabilities of the CTRW formalism. Contribution to the Topical Issue "Continuous Time Random Walk Still Trendy: Fifty-year History, Current State and Outlook", edited by Ryszard Kutner and Jaume Masoliver.
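A toy simulation of the simplest (one-step) special case of such jump memory, the bid-ask bounce; the paper's model covers dependence over an arbitrary number of successive jumps, and every parameter here is illustrative:

```python
import numpy as np

def ctrw_bounce(n_jumps, p_flip=0.8, wt_scale=1.0, rng=None):
    # i.i.d. exponential waiting times; unit price jumps whose sign reverses
    # with probability p_flip, giving anticorrelated successive jumps.
    rng = rng or np.random.default_rng()
    waits = rng.exponential(wt_scale, n_jumps)
    jumps = np.empty(n_jumps)
    jumps[0] = rng.choice([-1.0, 1.0])
    for i in range(1, n_jumps):
        jumps[i] = -jumps[i - 1] if rng.random() < p_flip else jumps[i - 1]
    return np.cumsum(waits), np.cumsum(jumps)  # event times, price path
```

With p_flip > 0.5 the simulated velocity autocorrelation is negative at the first lag, the qualitative bid-ask-bounce signature discussed above.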
Kayupov, Erdan; Fillingham, Yale A; Okroj, Kamil; Plummer, Darren R; Moric, Mario; Gerlinger, Tad L; Della Valle, Craig J
2017-03-01
Tranexamic acid is an antifibrinolytic that has been shown to reduce blood loss and the need for transfusions when administered intravenously in total hip arthroplasty. Oral formulations of the drug are available at a fraction of the cost of the intravenous preparation. The purpose of this randomized controlled trial was to determine if oral and intravenous formulations of tranexamic acid have equivalent blood-sparing properties. In this double-blinded trial, 89 patients undergoing primary total hip arthroplasty were randomized to receive 1.95 g of tranexamic acid orally 2 hours preoperatively or a 1-g tranexamic acid intravenous bolus in the operating room prior to incision; 6 patients were eventually excluded for protocol deviations, leaving 83 patients available for study. The primary outcome was the reduction of hemoglobin concentration. Power analysis determined that 28 patients were required in each group with a ±1.0 g/dL hemoglobin equivalence margin between groups, an alpha of 5%, and a power of 80%. Equivalence analysis was performed with two one-sided tests (TOST), in which a p value of <0.05 indicated equivalence between treatments. Forty-three patients received intravenous tranexamic acid, and 40 patients received oral tranexamic acid. Patient demographic characteristics were similar between groups, suggesting successful randomization. The mean reduction of hemoglobin was similar between the oral and intravenous groups (3.67 g/dL compared with 3.53 g/dL; p = 0.0008, equivalence). Similarly, the mean total blood loss was equivalent between oral and intravenous administration (1,339 mL compared with 1,301 mL; p = 0.034, equivalence). Three patients (7.5%) in the oral group and one patient (2.3%) in the intravenous group were transfused, but the difference was not significant (p = 0.35). None of the patients in either group experienced a thromboembolic event. Oral tranexamic acid provides equivalent reductions in blood loss in the setting of primary total hip arthroplasty, at a greatly reduced cost, compared with the intravenous formulation. Therapeutic Level I. See Instructions for Authors for a complete description of levels of evidence.
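A sketch of the two one-sided tests (TOST) procedure reported above, run on simulated per-patient hemoglobin reductions; the group means echo the abstract but the spreads, sizes and seed are invented for illustration, and statsmodels is an assumed dependency:

```python
import numpy as np
from statsmodels.stats.weightstats import ttost_ind

rng = np.random.default_rng(0)
oral = rng.normal(3.67, 1.0, 40)  # hypothetical Hb reductions, oral group (g/dL)
iv = rng.normal(3.53, 1.0, 43)    # hypothetical Hb reductions, IV group (g/dL)

# Equivalence margin of +/-1.0 g/dL; TOST p < 0.05 declares equivalence.
p, _, _ = ttost_ind(oral, iv, low=-1.0, upp=1.0)
print(f"TOST p = {p:.4f} -> " + ("equivalent" if p < 0.05 else "equivalence not shown"))
```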
Healthy-years equivalent: wounded but not yet dead.
Hauber, A Brett
2009-06-01
The quality-adjusted life-year (QALY) has become the dominant measure of health value in health technology assessment in recent decades, despite some well-known and fundamental flaws in the preference-elicitation methods used to construct health-state utility weights and the strong assumptions required to construct QALYs as a measure of health value using these utility weights. The healthy-years equivalent (HYE) was proposed as an alternative measure of health value purported to overcome many of the limitations of the QALY. The primary argument against the HYE is that it is difficult to estimate and, therefore, impractical. After much debate in the literature, the QALY appears to have won the battle; however, the HYE is not yet dead. Empirical research and recent advances in methods continue to offer evidence of the feasibility of using the HYE as a measure of health value and also address some of the criticisms surrounding the preference-elicitation methods used to estimate the HYE. This article provides a brief review of empirical applications of the HYE and identifies recent advances in empirical estimation that may breathe new life into a valiant, but wounded, measure.
Equivalent income and fair evaluation of health care.
Fleurbaey, Marc; Luchini, Stéphane; Muller, Christophe; Schokkaert, Erik
2013-06-01
We argue that the economic evaluation of health care (cost-benefit analysis) should respect individual preferences and should incorporate distributional considerations. Relying on individual preferences does not imply subjective welfarism. We propose a particular non-welfarist approach, based on the concept of equivalent income, and show how it helps to define distributional weights. We illustrate the feasibility of our approach with empirical results from a pilot survey. Copyright © 2012 John Wiley & Sons, Ltd.
Classes of Split-Plot Response Surface Designs for Equivalent Estimation
NASA Technical Reports Server (NTRS)
Parker, Peter A.; Kowalski, Scott M.; Vining, G. Geoffrey
2006-01-01
When planning an experimental investigation, we are frequently faced with factors that are difficult or time-consuming to manipulate, thereby making complete randomization impractical. A split-plot structure differentiates between the experimental units associated with these hard-to-change factors and others that are relatively easy to change, and provides an efficient strategy that integrates the restrictions imposed by the experimental apparatus. Several industrial and scientific examples are presented to illustrate design considerations encountered in the restricted-randomization context. In this paper, we propose classes of split-plot response surface designs that provide an intuitive and natural extension from the completely randomized context. For these designs, the ordinary least squares estimates of the model are equivalent to the generalized least squares estimates. This property provides best linear unbiased estimators and simplifies model estimation. The design conditions that allow for equivalent estimation are presented, enabling design construction strategies to transform completely randomized Box-Behnken, equiradial, and small composite designs into a split-plot structure.
NASA Technical Reports Server (NTRS)
Pandey, Apoorva; Chakrabarty, Rajan K.; Liu, Li; Mishchenko, Michael I.
2015-01-01
Soot aggregates (SAs), fractal clusters of small, spherical carbonaceous monomers, modulate the incoming visible solar radiation and contribute significantly to climate forcing. Experimentalists and climate modelers typically assume a spherical morphology for SAs when computing their optical properties, causing significant errors. Here, we calculate the optical properties of freshly-generated (fractal dimension Df = 1.8) and aged (Df = 2.6) SAs at 550 nm wavelength using the numerically exact superposition T-Matrix method. These properties were expressed as functions of equivalent aerosol diameters as measured by contemporary aerosol instruments. This work improves upon previous efforts wherein SA optical properties were computed as a function of monomer number, rendering them unusable in practical applications. Future research will address the sensitivity of the reported empirical relationships to variation in the refractive index, fractal prefactor, and monomer overlap of SAs.
Olsho, Lauren Ew; Klerman, Jacob A; Wilde, Parke E; Bartlett, Susan
2016-08-01
US fruit and vegetable (FV) intake remains below recommendations, particularly for low-income populations. Evidence on effectiveness of rebates in addressing this shortfall is limited. This study evaluated the USDA Healthy Incentives Pilot (HIP), which offered rebates to Supplemental Nutrition Assistance Program (SNAP) participants for purchasing targeted FVs (TFVs). As part of a randomized controlled trial in Hampden County, Massachusetts, 7500 randomly selected SNAP households received a 30% rebate on TFVs purchased with SNAP benefits. The remaining 47,595 SNAP households in the county received usual benefits. Adults in 5076 HIP and non-HIP households were randomly sampled for telephone surveys, including 24-h dietary recall interviews. Surveys were conducted at baseline (1-3 mo before implementation) and in 2 follow-up rounds (4-6 mo and 9-11 mo after implementation). 2784 adults (1388 HIP, 1396 non-HIP) completed baseline interviews; data were analyzed for 2009 adults (72%) who also completed ≥1 follow-up interview. Regression-adjusted mean TFV intake at follow-up was 0.24 cup-equivalents/d (95% CI: 0.13, 0.34 cup-equivalents/d) higher among HIP participants. Across all fruit and vegetables (AFVs), regression-adjusted mean intake was 0.32 cup-equivalents/d (95% CI: 0.17, 0.48 cup-equivalents/d) higher among HIP participants. The AFV-TFV difference was explained by greater intake of 100% fruit juice (0.10 cup-equivalents/d; 95% CI: 0.02, 0.17 cup-equivalents/d); juice purchases did not earn the HIP rebate. Refined grain intake was 0.43 ounce-equivalents/d lower (95% CI: -0.69, -0.16 ounce-equivalents/d) among HIP participants, possibly indicating substitution effects. Increased AFV intake and decreased refined grain intake contributed to higher Healthy Eating Index-2010 scores among HIP participants (4.7 points; 95% CI: 2.4, 7.1 points). The HIP significantly increased FV intake among SNAP participants, closing ∼20% of the gap relative to recommendations and increasing dietary quality. More research on mechanisms of action is warranted. The HIP trial was registered at clinicaltrials.gov as NCT02651064. © 2016 American Society for Nutrition.
Flacco, Maria Elena; Manzoli, Lamberto; Boccia, Stefania; Capasso, Lorenzo; Aleksovska, Katina; Rosso, Annalisa; Scaioli, Giacomo; De Vito, Corrado; Siliquini, Roberta; Villari, Paolo; Ioannidis, John P A
2015-07-01
To map the current status of head-to-head comparative randomized evidence and to assess whether funding may impact trial design and results, we selected, from a 50% random sample of the randomized controlled trials (RCTs) published in journals indexed in PubMed during 2011, the trials with ≥ 100 participants evaluating the efficacy and safety of drugs, biologics, and medical devices through a head-to-head comparison. We analyzed 319 trials. Overall, 238,386 of the 289,718 randomized subjects (82.3%) were included in the 182 trials funded by companies. Of the 182 industry-sponsored trials, only 23 had two industry sponsors and only three involved truly antagonistic comparisons. Industry-sponsored trials were larger, more commonly registered, used noninferiority/equivalence designs more frequently, had higher citation impact, and were more likely to have "favorable" results (superiority or noninferiority/equivalence for the experimental treatment) than nonindustry-sponsored trials. Industry funding [odds ratio (OR) 2.8; 95% confidence interval (CI): 1.6, 4.7] and noninferiority/equivalence designs (OR 3.2; 95% CI: 1.5, 6.6), but not sample size, were strongly associated with "favorable" findings. Fifty-five of the 57 (96.5%) industry-funded noninferiority/equivalence trials obtained desirable "favorable" results. The literature of head-to-head RCTs is dominated by the industry. Industry-sponsored comparative assessments systematically yield results favorable to the sponsors, even more so when noninferiority designs are involved. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
White, Rebecca M. B.; Umaña-Taylor, Adriana J.; Knight, George P.; Zeiders, Katharine H.
2011-01-01
The current study considers methodological challenges in developmental research with linguistically diverse samples of young adolescents. By empirically examining the cross-language measurement equivalence of a measure assessing three components of ethnic identity development (i.e., exploration, resolution, and affirmation) among Mexican American adolescents, the study both assesses the cross-language measurement equivalence of a common measure of ethnic identity and provides an appropriate conceptual and analytical model for researchers needing to evaluate measurement scales translated into multiple languages. Participants are 678 Mexican-origin early adolescents and their mothers. Measures of exploration and resolution achieve the highest levels of equivalence across language versions. The measure of affirmation achieves high levels of equivalence. Results highlight potential ways to correct for any problems of nonequivalence across language versions of the affirmation measure. Suggestions are made for how researchers working with linguistically diverse samples can use the highlighted techniques to evaluate their own translated measures. PMID:22116736
Search Algorithms as a Framework for the Optimization of Drug Combinations
Coquin, Laurence; Schofield, Jennifer; Feala, Jacob D.; Reed, John C.; McCulloch, Andrew D.; Paternostro, Giovanni
2008-01-01
Combination therapies are often needed for effective clinical outcomes in the management of complex diseases, but presently they are generally based on empirical clinical experience. Here we suggest a novel application of search algorithms—originally developed for digital communication—modified to optimize combinations of therapeutic interventions. In biological experiments measuring the restoration of the decline with age in heart function and exercise capacity in Drosophila melanogaster, we found that search algorithms correctly identified optimal combinations of four drugs using only one-third of the tests performed in a fully factorial search. In experiments identifying combinations of three doses of up to six drugs for selective killing of human cancer cells, search algorithms resulted in a highly significant enrichment of selective combinations compared with random searches. In simulations using a network model of cell death, we found that the search algorithms identified the optimal combinations of 6–9 interventions in 80–90% of tests, compared with 15–30% for an equivalent random search. These findings suggest that modified search algorithms from information theory have the potential to enhance the discovery of novel therapeutic drug combinations. This report also helps to frame a biomedical problem that will benefit from an interdisciplinary effort and suggests a general strategy for its solution. PMID:19112483
Experimental Evaluation of Equivalent-Fluid Models for Melamine Foam
NASA Technical Reports Server (NTRS)
Allen, Albert R.; Schiller, Noah H.
2016-01-01
Melamine foam is a soft porous material commonly used in noise control applications. Many models exist to represent porous materials at various levels of fidelity. This work focuses on rigid frame equivalent fluid models, which represent the foam as a fluid with a complex speed of sound and density. There are several empirical models available to determine these frequency dependent parameters based on an estimate of the material flow resistivity. Alternatively, these properties can be experimentally educed using an impedance tube setup. Since vibroacoustic models are generally sensitive to these properties, this paper assesses the accuracy of several empirical models relative to impedance tube measurements collected with melamine foam samples. Diffuse field sound absorption measurements collected using large test articles in a laboratory are also compared with absorption predictions determined using model-based and measured foam properties. Melamine foam slabs of various thicknesses are considered.
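For context, a sketch of one widely used empirical model of the kind assessed here, the Delany-Bazley power-law fits, which map an assumed flow resistivity to complex equivalent-fluid properties and then to rigid-backed normal-incidence absorption. Delany-Bazley was originally fitted to fibrous absorbers, and the resistivity below is a typical order of magnitude for melamine foam, not a value measured in this work:

    import numpy as np

    RHO0, C0 = 1.21, 343.0                 # air density [kg/m^3], sound speed [m/s]

    def delany_bazley(f, sigma):
        # f: frequency [Hz]; sigma: static flow resistivity [N s m^-4]
        X = RHO0 * f / sigma               # dimensionless frequency
        Zc = RHO0 * C0 * (1 + 0.0571 * X**-0.754 - 1j * 0.087 * X**-0.732)
        kc = (2 * np.pi * f / C0) * (1 + 0.0978 * X**-0.700 - 1j * 0.189 * X**-0.595)
        return Zc, kc                      # characteristic impedance, wavenumber

    def absorption(f, sigma, d):
        Zc, kc = delany_bazley(f, sigma)
        Zs = -1j * Zc / np.tan(kc * d)     # surface impedance, rigid backing
        R = (Zs - RHO0 * C0) / (Zs + RHO0 * C0)
        return 1 - np.abs(R) ** 2

    f = np.array([250.0, 500.0, 1000.0, 2000.0])
    print(absorption(f, sigma=10500.0, d=0.0508))   # 2-inch slab, assumed resistivity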
Imperial Senate: American Legislative Debates on Empire, 1898-1917
2013-09-01
[OCR-damaged footnote and bibliography fragments. Recoverable citations: Edmund Morris, Theodore Rex (New York: Random House, 2001), 479-80; 31 Cong. Rec. 3976 (16 April 1898); Ivan Musicant, Empire by ...; a Hawaii annexation digital collection at libweb.hawaii.edu. The surrounding text concerned Senator Mason's request for unanimous consent for a vote on McEnery's resolution and amendments.]
Boddy, Lynne M; Noonan, Robert J; Kim, Youngwon; Rowlands, Alex V; Welk, Greg J; Knowles, Zoe R; Fairclough, Stuart J
2018-03-28
To examine the comparability of children's free-living sedentary time (ST) derived from raw acceleration thresholds for wrist-mounted GENEActiv accelerometer data with ST estimated using the waist-mounted ActiGraph 100 count·min⁻¹ threshold. Secondary data analysis. 108 10-11-year-old children (n=43 boys) from Liverpool, UK, wore one ActiGraph GT3X+ and one GENEActiv accelerometer on their right hip and left wrist, respectively, for seven days. Signal vector magnitude (SVM; mg) was calculated using the ENMO approach for GENEActiv data. ST was estimated from hip-worn ActiGraph data, applying the widely used 100 count·min⁻¹ threshold. ROC analysis using 10-fold hold-out cross-validation was conducted to establish a wrist-worn GENEActiv threshold comparable to the hip ActiGraph 100 count·min⁻¹ threshold. GENEActiv data were also classified using three empirical wrist thresholds, and equivalence testing was completed. Analysis indicated that a GENEActiv SVM value of 51 mg demonstrated fair to moderate agreement (Kappa: 0.32-0.41) with the 100 count·min⁻¹ threshold. However, the generated and empirical thresholds for GENEActiv devices were not significantly equivalent to the ActiGraph 100 count·min⁻¹ threshold. GENEActiv data classified using the 35.6 mg threshold intended for ActiGraph devices generated ST estimates significantly equivalent to the ActiGraph 100 count·min⁻¹ threshold. The newly generated and empirical GENEActiv wrist thresholds do not provide equivalent estimates of ST to the ActiGraph 100 count·min⁻¹ approach. More investigation is required to assess the validity of applying ActiGraph cutpoints to GENEActiv data. Future studies are needed to examine the backward compatibility of ST data and to produce a robust method of classifying SVM-derived ST. Copyright © 2018 Sports Medicine Australia. Published by Elsevier Ltd. All rights reserved.
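A hedged sketch of the threshold-generation step: the study used ROC analysis with 10-fold hold-out cross-validation on real epoch data, whereas the toy below runs a single ROC on simulated epochs and picks the Youden-optimal point:

    import numpy as np
    from sklearn.metrics import roc_curve

    rng = np.random.default_rng(0)
    # Simulated epochs: 1 = sedentary by the hip ActiGraph criterion, 0 = not.
    sedentary = rng.integers(0, 2, 5000)
    # Wrist SVM (mg): lower on average when sedentary (distributions invented).
    svm = np.where(sedentary == 1,
                   rng.gamma(2.0, 15.0, 5000),
                   rng.gamma(4.0, 25.0, 5000))

    # Sedentary epochs have low SVM, so score with -svm.
    fpr, tpr, thresholds = roc_curve(sedentary, -svm)
    best = np.argmax(tpr - fpr)            # Youden's J
    print("candidate wrist threshold: %.1f mg" % -thresholds[best])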
Linkage Analysis of Quantitative Refraction and Refractive Errors in the Beaver Dam Eye Study
Duggal, Priya; Lee, Kristine E.; Cheng, Ching-Yu; Klein, Ronald; Bailey-Wilson, Joan E.; Klein, Barbara E. K.
2011-01-01
Purpose. Refraction, as measured by spherical equivalent, is the need for an external lens to focus images on the retina. While genetic factors play an important role in the development of refractive errors, few susceptibility genes have been identified. However, several regions of linkage have been reported for myopia (2q, 4q, 7q, 12q, 17q, 18p, 22q, and Xq) and for quantitative refraction (1p, 3q, 4q, 7p, 8p, and 11p). To replicate previously identified linkage peaks and to identify novel loci that influence quantitative refraction and refractive errors, linkage analysis of spherical equivalent, myopia, and hyperopia in the Beaver Dam Eye Study was performed. Methods. Nonparametric, sibling-pair, genome-wide linkage analyses of refraction (spherical equivalent adjusted for age, education, and nuclear sclerosis), myopia and hyperopia in 834 sibling pairs within 486 extended pedigrees were performed. Results. Suggestive evidence of linkage was found for hyperopia on chromosome 3, region q26 (empiric P = 5.34 × 10⁻⁴), a region that had shown significant genome-wide evidence of linkage to refraction and some evidence of linkage to hyperopia. In addition, the analysis replicated previously reported genome-wide significant linkages to 22q11 of adjusted refraction and myopia (empiric P = 4.43 × 10⁻³ and 1.48 × 10⁻³, respectively) and to 7p15 of refraction (empiric P = 9.43 × 10⁻⁴). Evidence was also found of linkage to refraction on 7q36 (empiric P = 2.32 × 10⁻³), a region previously linked to high myopia. Conclusions. The findings provide further evidence that genes controlling refractive errors are located on 3q26, 7p15, 7q36, and 22q11. PMID:21571680
Record statistics of financial time series and geometric random walks
NASA Astrophysics Data System (ADS)
Sabir, Behlool; Santhanam, M. S.
2014-09-01
The study of record statistics of correlated series in physics, such as random walks, is gaining momentum, and several analytical results have been obtained in the past few years. In this work, we study the record statistics of correlated empirical data for which random walk models have relevance. We obtain results for the record statistics of select stock market data and the geometric random walk, primarily through simulations. We show that the distribution of the age of records is a power law with the exponent α lying in the range 1.5≤α≤1.8. Further, the longest record ages follow the Fréchet distribution of extreme value theory. The record statistics of geometric random walk series are in good agreement with those obtained from empirical stock data.
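A brief simulation in the paper's spirit: generate geometric random walks, collect the ages of upper records, and crudely estimate the power-law exponent alpha (the log-log histogram fit is for illustration only and is not the authors' estimation procedure):

    import numpy as np

    rng = np.random.default_rng(42)
    # Geometric random walk: exponentiated cumulative Gaussian log-returns.
    S = np.exp(np.cumsum(rng.normal(0.0, 0.01, size=(200, 5000)), axis=1))

    ages = []
    for series in S:
        record, age = series[0], 1
        for x in series[1:]:
            if x > record:                 # a new upper record closes the previous age
                ages.append(age)
                record, age = x, 1
            else:
                age += 1
    ages = np.asarray(ages)

    hist, edges = np.histogram(ages, bins=np.logspace(0, 3, 20), density=True)
    ok = hist > 0
    alpha = -np.polyfit(np.log(edges[:-1][ok]), np.log(hist[ok]), 1)[0]
    print("estimated alpha ~ %.2f" % alpha)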
Investigation of empirical damping laws for the space shuttle
NASA Technical Reports Server (NTRS)
Bernstein, E. L.
1973-01-01
An analysis of dynamic test data from vibration testing of a number of aerospace vehicles was made to develop an empirical structural damping law. A systematic attempt was made to fit dissipated energy/cycle to combinations of all dynamic variables. The best-fit laws for bending, torsion, and longitudinal motion are given, with error bounds. A discussion and estimate are made of error sources. Programs are developed for predicting equivalent linear structural damping coefficients and finding the response of nonlinearly damped structures.
Schwartz, Seth J; Benet-Martínez, Verónica; Knight, George P; Unger, Jennifer B; Zamboanga, Byron L; Des Rosiers, Sabrina E; Stephens, Dionne P; Huang, Shi; Szapocznik, José
2014-03-01
The present study used a randomized design, with fully bilingual Hispanic participants from the Miami area, to investigate 2 sets of research questions. First, we sought to ascertain the extent to which measures of acculturation (Hispanic and U.S. practices, values, and identifications) satisfied criteria for linguistic measurement equivalence. Second, we sought to examine whether cultural frame switching would emerge--that is, whether latent acculturation mean scores for U.S. acculturation would be higher among participants randomized to complete measures in English and whether latent acculturation mean scores for Hispanic acculturation would be higher among participants randomized to complete measures in Spanish. A sample of 722 Hispanic students from a Hispanic-serving university participated in the study. Participants were first asked to complete translation tasks to verify that they were fully bilingual. Based on ratings from 2 independent coders, 574 participants (79.5% of the sample) qualified as fully bilingual and were randomized to complete the acculturation measures in either English or Spanish. Theoretically relevant criterion measures--self-esteem, depressive symptoms, and personal identity--were also administered in the randomized language. Measurement equivalence analyses indicated that all of the acculturation measures--Hispanic and U.S. practices, values, and identifications--met criteria for configural, weak/metric, strong/scalar, and convergent validity equivalence. These findings indicate that data generated using acculturation measures can, at least under some conditions, be combined or compared across languages of administration. Few latent mean differences emerged. These results are discussed in terms of the measurement of acculturation in linguistically diverse populations. (PsycINFO Database Record (c) 2014 APA, all rights reserved)
Schwartz, Seth J.; Benet-Martínez, Verónica; Knight, George P.; Unger, Jennifer B.; Zamboanga, Byron L.; Des Rosiers, Sabrina E.; Stephens, Dionne; Huang, Shi; Szapocznik, José
2014-01-01
The present study used a randomized design, with fully bilingual Hispanic participants from the Miami area, to investigate two sets of research questions. First, we sought to ascertain the extent to which measures of acculturation (heritage and U.S. practices, values, and identifications) satisfied criteria for linguistic measurement equivalence. Second, we sought to examine whether cultural frame switching would emerge – that is, whether latent acculturation mean scores for U.S. acculturation would be higher among participants randomized to complete measures in English, and whether latent acculturation mean scores for Hispanic acculturation would be higher among participants randomized to complete measures in Spanish. A sample of 722 Hispanic students from a Hispanic-serving university participated in the study. Participants were first asked to complete translation tasks to verify that they were fully bilingual. Based on ratings from two independent coders, 574 participants (79.5% of the sample) qualified as fully bilingual and were randomized to complete the acculturation measures in either English or Spanish. Theoretically relevant criterion measures – self-esteem, depressive symptoms, and personal identity – were also administered in the randomized language. Measurement equivalence analyses indicated that all of the acculturation measures – Hispanic and U.S. practices, values, and identifications – met criteria for configural, weak/metric, strong/scalar, and convergent validity equivalence. These findings indicate that data generated using acculturation measures can, at least under some conditions, be combined or compared across languages of administration. Few latent mean differences emerged. These results are discussed in terms of the measurement of acculturation in linguistically diverse populations. PMID:24188146
Empirical synchronized flow in oversaturated city traffic.
Kerner, Boris S; Hemmerle, Peter; Koller, Micha; Hermanns, Gerhard; Klenov, Sergey L; Rehborn, Hubert; Schreckenberg, Michael
2014-09-01
Based on a study of anonymized GPS probe vehicle traces measured by personal navigation devices in vehicles randomly distributed in city traffic, empirical synchronized flow in oversaturated city traffic has been revealed. It turns out that real oversaturated city traffic resulting from speed breakdown in a city can in most cases be considered a random spatiotemporal alternation between sequences of moving queues and synchronized flow patterns in which the moving queues do not occur.
NASA Astrophysics Data System (ADS)
Livan, Giacomo; Alfarano, Simone; Scalas, Enrico
2011-07-01
We study some properties of eigenvalue spectra of financial correlation matrices. In particular, we investigate the nature of the large eigenvalue bulks which are observed empirically, and which have often been regarded as a consequence of the supposedly large amount of noise contained in financial data. We challenge this common knowledge by acting on the empirical correlation matrices of two data sets with a filtering procedure which highlights some of the cluster structure they contain, and we analyze the consequences of such filtering on eigenvalue spectra. We show that empirically observed eigenvalue bulks emerge as superpositions of smaller structures, which in turn emerge as a consequence of cross correlations between stocks. We interpret and corroborate these findings in terms of factor models, and we compare empirical spectra to those predicted by random matrix theory for such models.
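For context, a sketch of the random-matrix baseline such analyses lean on: eigenvalues of a pure-noise correlation matrix compared with the Marchenko-Pastur bulk edges. With empirical returns in place of X, eigenvalues escaping the bulk indicate genuine cross-correlations:

    import numpy as np

    rng = np.random.default_rng(0)
    N, T = 100, 400                        # N assets, T observations
    q = N / T

    X = rng.normal(size=(T, N))            # i.i.d. "returns": no true correlations
    evals = np.linalg.eigvalsh(np.corrcoef(X, rowvar=False))

    # Marchenko-Pastur bulk edges for the correlation matrix of i.i.d. data
    lam_min, lam_max = (1 - q**0.5) ** 2, (1 + q**0.5) ** 2
    outside = np.sum((evals < lam_min) | (evals > lam_max))
    print("bulk [%.3f, %.3f], eigenvalues outside: %d" % (lam_min, lam_max, outside))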
Duggan, A E; Elliott, C A; Miller, P; Hawkey, C J; Logan, R F A
2009-01-01
Early endoscopy, Helicobacter pylori eradication, and empirical acid suppression are commonly used dyspepsia management strategies in primary care but have not been directly compared in a single trial. To compare endoscopy, H. pylori test and refer, H. pylori test and treat, and empirical acid suppression for dyspepsia in primary care. Patients presenting to their general practitioner with dyspepsia were randomized to endoscopy, H. pylori 'test and treat', H. pylori test and endoscope positives, or empirical therapy, with symptoms, patient satisfaction, healthcare costs, and cost effectiveness at 12 months as the outcomes. At 2 months, the proportion of patients reporting no or minimal dyspeptic symptoms ranged from 74% for those having early endoscopy to 55% for those on empirical therapy (P = 0.009), but at 1 year there was little difference among the four strategies. Early endoscopy was associated with fewer subsequent consultations for dyspepsia (P = 0.003). 'Test and treat' resulted in fewer endoscopies overall and was most cost-effective over a range of cost assumptions. Empirical therapy resulted in the lowest initial costs but the highest rate of subsequent endoscopy. Gastro-oesophageal cancers were found in four patients randomized to the H. pylori testing strategies. While early endoscopy offered some advantages, 'test and treat' was the most cost-effective strategy. In older patients, early endoscopy may be an appropriate strategy in view of the greater risk of malignant disease. © 2008 The Authors. Journal compilation © 2008 Blackwell Publishing Ltd.
Rolls, David A.; Wang, Peng; McBryde, Emma; Pattison, Philippa; Robins, Garry
2015-01-01
We compare two broad types of empirically grounded random network models in terms of their abilities to capture both network features and simulated Susceptible-Infected-Recovered (SIR) epidemic dynamics. The types of network models are exponential random graph models (ERGMs) and extensions of the configuration model. We use three kinds of empirical contact networks, chosen to provide both variety and realistic patterns of human contact: a highly clustered network, a bipartite network and a snowball sampled network of a “hidden population”. In the case of the snowball sampled network we present a novel method for fitting an edge-triangle model. In our results, ERGMs consistently capture clustering as well or better than configuration-type models, but the latter models better capture the node degree distribution. Despite the additional computational requirements to fit ERGMs to empirical networks, the use of ERGMs provides only a slight improvement in the ability of the models to recreate epidemic features of the empirical network in simulated SIR epidemics. Generally, SIR epidemic results from using configuration-type models fall between those from a random network model (i.e., an Erdős-Rényi model) and an ERGM. The addition of subgraphs of size four to edge-triangle type models does improve agreement with the empirical network for smaller densities in clustered networks. Additional subgraphs do not make a noticeable difference in our example, although we would expect the ability to model cliques to be helpful for contact networks exhibiting household structure. PMID:26555701
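A minimal sketch of one ingredient of such comparisons, a discrete-time SIR epidemic on a configuration-model graph built with networkx; the degree sequence, rates, and seed count are arbitrary illustration values, and ERGM fitting is not shown:

    import random
    import networkx as nx

    random.seed(0)
    degs = [random.choice([1, 2, 2, 3, 3, 4, 8]) for _ in range(1000)]
    if sum(degs) % 2:                      # degree sum must be even
        degs[0] += 1
    G = nx.Graph(nx.configuration_model(degs, seed=0))   # collapse multi-edges
    G.remove_edges_from(nx.selfloop_edges(G))

    def sir(G, beta=0.1, gamma=0.05, n_seeds=5):
        status = {v: "S" for v in G}
        infected = random.sample(list(G), n_seeds)
        for v in infected:
            status[v] = "I"
        total = n_seeds
        while infected:
            nxt = []
            for v in infected:
                for u in G[v]:             # transmit along edges
                    if status[u] == "S" and random.random() < beta:
                        status[u] = "I"
                        nxt.append(u)
                        total += 1
                if random.random() < gamma:
                    status[v] = "R"        # recover
                else:
                    nxt.append(v)
            infected = nxt
        return total

    print("final epidemic size:", sir(G))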
High-Throughput Physiologically Based Toxicokinetic Models for ToxCast Chemicals
Physiologically based toxicokinetic (PBTK) models aid in predicting exposure doses needed to create tissue concentrations equivalent to those identified as bioactive by ToxCast. We have implemented four empirical and physiologically-based toxicokinetic (TK) models within a new R ...
Is evaluating complementary and alternative medicine equivalent to evaluating the absurd?
Greasley, Pete
2010-06-01
Complementary and alternative therapies such as reflexology and acupuncture have been the subject of numerous evaluations, clinical trials, and systematic reviews, yet the empirical evidence in support of their efficacy remains equivocal. The empirical evaluation of a therapy would normally assume a plausible rationale regarding the mechanism of action. However, examination of the historical background and underlying principles for reflexology, iridology, acupuncture, auricular acupuncture, and some herbal medicines, reveals a rationale founded on the principle of analogical correspondences, which is a common basis for magical thinking and pseudoscientific beliefs such as astrology and chiromancy. Where this is the case, it is suggested that subjecting these therapies to empirical evaluation may be tantamount to evaluating the absurd.
NASA Astrophysics Data System (ADS)
Shore, R. M.; Freeman, M. P.; Gjerloev, J. W.
2018-01-01
We apply the method of data-interpolating empirical orthogonal functions (EOFs) to ground-based magnetic vector data from the SuperMAG archive to produce a series of month length reanalyses of the surface external and induced magnetic field (SEIMF) in 110,000 km² equal-area bins over the entire northern polar region at 5 min cadence over solar cycle 23, from 1997.0 to 2009.0. Each EOF reanalysis also decomposes the measured SEIMF variation into a hierarchy of spatiotemporal patterns which are ordered by their contribution to the monthly magnetic field variance. We find that the leading EOF patterns can each be (subjectively) interpreted as well-known SEIMF systems or their equivalent current systems. The relationship of the equivalent currents to the true current flow is not investigated. We track the leading SEIMF or equivalent current systems of similar type by intermonthly spatial correlation and apply graph theory to (objectively) group their appearance and relative importance throughout a solar cycle, revealing seasonal and solar cycle variation. In this way, we identify the spatiotemporal patterns that maximally contribute to SEIMF variability over a solar cycle. We propose this combination of EOF and graph theory as a powerful method for objectively defining and investigating the structure and variability of the SEIMF or their equivalent ionospheric currents for use in both geomagnetism and space weather applications. It is demonstrated here on solar cycle 23 but is extendable to any epoch with sufficient data coverage.
GPR random noise reduction using BPD and EMD
NASA Astrophysics Data System (ADS)
Ostoori, Roya; Goudarzi, Alireza; Oskooi, Behrooz
2018-04-01
Ground-penetrating radar (GPR) exploration is a new high-frequency technology that explores near-surface objects and structures accurately. The high-frequency antenna of the GPR system makes it a high-resolution method compared to other geophysical methods. The frequency range of recorded GPR is so wide that recording random noise during acquisition is inevitable. This kind of noise comes from unknown sources, and its correlation to the adjacent traces is nearly zero. This characteristic of random noise, along with the higher accuracy of the GPR system, makes denoising very important for interpretable results. The main objective of this paper is to reduce GPR random noise using basis pursuit denoising (BPD) combined with empirical mode decomposition. Our results showed that empirical mode decomposition in combination with BPD provides satisfactory outputs, owing to the sifting process, compared to the time-domain implementation of the BPD method, on both synthetic and real examples. Our results demonstrate that because of the high computational costs, the BPD-empirical mode decomposition technique should only be used for heavily noisy signals.
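A hedged sketch of the pipeline on a synthetic trace, assuming the third-party PyEMD package for the sifting step; plain per-IMF soft thresholding stands in for basis pursuit denoising, which properly solves an l1-regularized inverse problem:

    import numpy as np
    from PyEMD import EMD                  # assumed dependency (pip: EMD-signal)

    rng = np.random.default_rng(3)
    t = np.linspace(0.0, 1.0, 1024)
    clean = np.sin(2 * np.pi * 40 * t) * np.exp(-((t - 0.5) / 0.1) ** 2)  # toy reflection
    noisy = clean + 0.3 * rng.normal(size=t.size)

    imfs = EMD()(noisy)                    # sift into intrinsic mode functions

    def soft(x, lam):
        # Soft thresholding: the stand-in for BPD used in this sketch.
        return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

    def universal(x):
        # MAD-based universal threshold, robust to the embedded signal.
        return np.median(np.abs(x)) / 0.6745 * np.sqrt(2 * np.log(x.size))

    denoised = sum(soft(imf, universal(imf)) for imf in imfs)

    def snr(ref, est):
        return 10 * np.log10(np.var(ref) / np.var(est - ref))

    print("SNR %.1f dB -> %.1f dB" % (snr(clean, noisy), snr(clean, denoised)))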
Statistical analysis of loopy belief propagation in random fields
NASA Astrophysics Data System (ADS)
Yasuda, Muneki; Kataoka, Shun; Tanaka, Kazuyuki
2015-10-01
Loopy belief propagation (LBP), which is equivalent to the Bethe approximation in statistical mechanics, is a message-passing-type inference method that is widely used to analyze systems based on Markov random fields (MRFs). In this paper, we propose a message-passing-type method to analytically evaluate the quenched average of LBP in random fields by using the replica cluster variation method. The proposed analytical method is applicable to general pairwise MRFs with random fields whose distributions differ from each other and can give the quenched averages of the Bethe free energies over random fields, which are consistent with numerical results. The order of its computational cost is equivalent to that of standard LBP. In the latter part of this paper, we describe the application of the proposed method to Bayesian image restoration, in which we observed that our theoretical results are in good agreement with the numerical results for natural images.
Electromagnetic backscattering from a random distribution of lossy dielectric scatterers
NASA Technical Reports Server (NTRS)
Lang, R. H.
1980-01-01
Electromagnetic backscattering from a sparse distribution of discrete lossy dielectric scatterers occupying a region V was studied. The scatterers are assumed to have random position and orientation. Scattered fields are calculated by first finding the mean field and then by using it to define an equivalent medium within the volume V. The scatterers are then viewed as being embedded in the equivalent medium; the distorted Born approximation is then used to find the scattered fields. This technique represents an improvement over the standard Born approximation since it takes into account the attenuation of the incident and scattered waves in the equivalent medium. The method is used to model a leaf canopy when the leaves are modeled by lossy dielectric discs.
Empirical Approaches to the Birthday Problem
ERIC Educational Resources Information Center
Flores, Alfinio; Cauto, Kevin M.
2012-01-01
This article will describe two activities in which students conduct experiments with random numbers so they can see that having at least one repeated birthday in a group of 40 is not unusual. The first empirical approach was conducted by author Cauto in a secondary school methods course. The second empirical approach was used by author Flores with…
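A sketch of the kind of simulation the activities describe, estimating the chance of a shared birthday among 40 people (uniform birthdays, leap years ignored):

    import random

    def shared_birthday(group=40):
        days = [random.randrange(365) for _ in range(group)]
        return len(set(days)) < group      # True if any birthday repeats

    n = 10_000
    p_hat = sum(shared_birthday() for _ in range(n)) / n
    print("empirical P ~ %.3f (theory ~ 0.891 for a group of 40)" % p_hat)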
Matsuda, Shuichi; Ogasawara, Takashi; Sugimoto, Shunsuke; Kato, Shinpei; Umezawa, Hiroki; Yano, Toshiaki; Kasamatsu, Norio
2016-06-01
The nursing- and healthcare-associated pneumonia guideline, proposed by the Japanese Respiratory Society, recommends that patients at risk of exposure to drug-resistant pathogens, classified as treatment category C, be treated with antipseudomonal antibiotics. This study aimed to prove the non-inferiority of empirical therapy in our hospital compared with guideline-concordant therapy. This was a randomized controlled trial conducted from December 2011 to December 2012. Patients were randomized to the Guideline group, receiving guideline-concordant therapy, and the Empiric group, treated with sulbactam/ampicillin or ceftriaxone. The primary endpoint was in-hospital relapse of pneumonia and mortality within 30 days, with a predefined non-inferiority margin of 10%. The secondary endpoints included duration, adverse effects, and cost of antibiotic therapy. One hundred and eleven patients were assigned to the Guideline group (n = 55) and the Empiric group (n = 56; 3 of whom were excluded). The incidence of relapse and death within 30 days was similar in the Guideline and the Empiric groups (31% vs. 26%, risk difference -4.5%, 95% CI -21.5% to 12.5%). While the duration of antibiotic therapy was slightly shorter in the Guideline group than in the Empiric group (7 vs. 8 days), there were no significant differences in adverse effects or cost. The efficacy of empiric therapy was comparable to that of guideline-concordant therapy, although non-inferiority was not proven. The administration of broad-spectrum antibiotics to patients at risk of exposure to drug-resistant pathogens may not necessarily improve the prognosis. UMIN000006792. Copyright © 2016 Japanese Society of Chemotherapy and The Japanese Association for Infectious Diseases. Published by Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Frič, Roman; Papčo, Martin
2017-12-01
Stressing a categorical approach, we continue our study of fuzzified domains of probability, in which classical random events are replaced by measurable fuzzy random events. In operational probability theory (S. Bugajski) classical random variables are replaced by statistical maps (generalized distribution maps induced by random variables) and in fuzzy probability theory (S. Gudder) the central role is played by observables (maps between probability domains). We show that to each of the two generalized probability theories there corresponds a suitable category and the two resulting categories are dually equivalent. Statistical maps and observables become morphisms. A statistical map can send a degenerated (pure) state to a non-degenerated one (a quantum phenomenon) and, dually, an observable can map a crisp random event to a genuine fuzzy random event (a fuzzy phenomenon). The dual equivalence means that the operational probability theory and the fuzzy probability theory coincide and the resulting generalized probability theory has two dual aspects: quantum and fuzzy. We close with some notes on products and coproducts in the dual categories.
NASA Technical Reports Server (NTRS)
Rizzi, Stephen A.; Muravyov, Alexander A.
2002-01-01
Two new equivalent linearization implementations for geometrically nonlinear random vibrations are presented. Both implementations are based upon a novel approach for evaluating the nonlinear stiffness within commercial finite element codes and are suitable for use with any finite element code having geometrically nonlinear static analysis capabilities. The formulation includes a traditional force-error minimization approach and a relatively new version of a potential energy-error minimization approach, which has been generalized for multiple degree-of-freedom systems. Results for a simply supported plate under random acoustic excitation are presented and comparisons of the displacement root-mean-square values and power spectral densities are made with results from a nonlinear time domain numerical simulation.
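The paper's implementations operate inside commercial finite element codes; the hedged sketch below illustrates the force-error-minimization idea on a single-degree-of-freedom Duffing oscillator under white noise, where Gaussian closure gives a fixed-point iteration for the equivalent stiffness (all parameter values are arbitrary):

    import numpy as np

    def equivalent_linearization(omega0=10.0, zeta=0.02, eps=5.0, S0=1.0):
        # x'' + 2*zeta*omega0*x' + omega0^2*(x + eps*x^3) = w(t),
        # w white with two-sided PSD S0. Minimizing the mean-square force
        # error under a Gaussian response gives k_eq = omega0^2*(1 + 3*eps*var).
        keq = omega0 ** 2
        for _ in range(200):
            var = np.pi * S0 / (2 * zeta * omega0 * keq)   # linear-system variance
            keq_new = omega0 ** 2 * (1 + 3 * eps * var)
            if abs(keq_new - keq) < 1e-10:
                break
            keq = keq_new
        return keq, var

    keq, var = equivalent_linearization()
    print("equivalent stiffness %.2f, RMS displacement %.4f" % (keq, var ** 0.5))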
NASA Astrophysics Data System (ADS)
El-Jaby, Samy; Tomi, Leena; Sihver, Lembit; Sato, Tatsuhiko; Richardson, Richard B.; Lewis, Brent J.
2014-03-01
This paper describes a methodology for assessing the pre-mission exposure of space crew aboard the International Space Station (ISS) in terms of an effective dose equivalent. In this approach, the PHITS Monte Carlo code was used to assess the particle transport of galactic cosmic radiation (GCR) and trapped radiation for solar maximum and minimum conditions through an aluminum shield thickness. From these predicted spectra, and using fluence-to-dose conversion factors, a scaling ratio of the effective dose equivalent rate to the ICRU ambient dose equivalent rate at a 10 mm depth was determined. Only contributions from secondary neutrons, protons, and alpha particles were considered in this analysis. Measurements made with a tissue equivalent proportional counter (TEPC) located at Service Module panel 327, as captured through a semi-empirical correlation in the ISSCREM code, were then scaled using this conversion factor for prediction of the effective dose equivalent. This analysis shows that at this location within the service module, the total effective dose equivalent is 10-30% less than the total TEPC dose equivalent. Approximately 75-85% of the effective dose equivalent is derived from the GCR. This methodology provides an opportunity for pre-flight predictions of the effective dose equivalent and therefore offers a means to assess the health risks of radiation exposure of ISS flight crew.
1989-09-01
[OCR-damaged fragment of a survey instrument and its analysis. Recoverable content references professional military education programs (Squadron Officer School, Air Command and Staff College, Air War College, self-directed study), a reported correlation between the number of existing programs and logistician weaknesses (r ≈ .85, sign garbled), and response options including the Industrial College of the Armed Forces, the Defense Systems Management Course, Air War College (or equivalent), and Other (please specify).]
Empirical antibacterial therapy in febrile, granulocytopenic bone marrow transplant patients.
Peterson, P K; McGlave, P; Ramsay, N K; Rhame, F; Goldman, A I; Kersey, J
1984-01-01
Fifty febrile, granulocytopenic allogeneic bone marrow transplant patients receiving prophylactic trimethoprim-sulfamethoxazole were randomized to one of two empirical antibiotic regimens to determine whether a shortened course of empirical therapy was beneficial. Of the 50 patients, 25 received empirical tobramycin and ticarcillin for only 3 days, and 25 were maintained on empirical tobramycin and ticarcillin until they were afebrile and no longer granulocytopenic. Although the incidence of bacterial infections in the two groups was not statistically significantly different, almost twice as many bacterial infections were observed in the group that received the short course of empirical therapy. Furthermore, because of the high incidence of bacterial infection and clinical concerns about occult bacterial sepsis, within 2 weeks of the randomization the overall use of parenteral antibacterial agents was similar in both groups. The incidence of invasive fungal disease and the use of amphotericin B therapy were similar in both groups. The results of this study suggest that little clinical benefit is likely to be seen in bone marrow transplant patients treated with short-course empirical tobramycin and ticarcillin, despite the administration of prophylactic trimethoprim-sulfamethoxazole, and emphasize the need for new strategies to prevent infections with gram-positive and trimethoprim-sulfamethoxazole-resistant gram-negative bacteria in these patients. PMID:6385835
Reynolds, Andy M; Leprêtre, Lisa; Bohan, David A
2013-11-07
Correlated random walks are the dominant conceptual framework for modelling and interpreting organism movement patterns. Recent years have witnessed a stream of high profile publications reporting that many organisms perform Lévy walks; movement patterns that seemingly stand apart from the correlated random walk paradigm because they are discrete and scale-free rather than continuous and scale-finite. Our new study of the movement patterns of Tenebrio molitor beetles in unchanging, featureless arenas provides the first empirical support for a remarkable and deep theoretical synthesis that unites correlated random walks and Lévy walks. It demonstrates that the two models are complementary rather than competing descriptions of movement pattern data and shows that correlated random walks are a part of the Lévy walk family. It follows from this that vast numbers of Lévy walkers could be hiding in plain sight.
Stress intensity factors for long, deep surface flaws in plates under extensional fields
NASA Technical Reports Server (NTRS)
Harms, A. E.; Smith, C. W.
1973-01-01
Using a singular solution for a part circular crack, a Taylor Series Correction Method (TSCM) was verified for extracting stress intensity factors from photoelastic data. Photoelastic experiments were then conducted on plates with part circular and flat bottomed cracks for flaw depth to thickness ratios of 0.25, 0.50, and 0.75 and for equivalent flaw depth to equivalent ellipse length values ranging from 0.066 to 0.319. Experimental results agreed well with the Smith theory but indicated that the use of the "equivalent" semi-elliptical flaw results was not valid for a/2c less than 0.20. Best overall agreement for the moderate (a/t approximately 0.5) to deep flaws (a/t approximately 0.75) and a/2c greater than 0.15 was found with a semi-empirical theory, when compared on the basis of equivalent flaw depth and area.
ERIC Educational Resources Information Center
Hallberg, Kelly
2013-01-01
This dissertation is a collection of three papers that employ empirical within study comparisons (WSCs) to identify conditions that support causal inference in observational studies. WSC studies empirically estimate the extent to which a given observational study reproduces the result of a randomized clinical trial (RCT) when both share the same…
Mikulich-Gilbertson, Susan K; Wagner, Brandie D; Grunwald, Gary K; Riggs, Paula D; Zerbe, Gary O
2018-01-01
Medical research is often designed to investigate changes in a collection of response variables that are measured repeatedly on the same subjects. The multivariate generalized linear mixed model (MGLMM) can be used to evaluate random coefficient associations (e.g. simple correlations, partial regression coefficients) among outcomes that may be non-normal and differently distributed by specifying a multivariate normal distribution for their random effects and then evaluating the latent relationship between them. Empirical Bayes predictors are readily available for each subject from any mixed model and are observable and, hence, plottable. Here, we evaluate whether second-stage association analyses of empirical Bayes predictors from a MGLMM provide a good approximation and visual representation of these latent association analyses using medical examples and simulations. Additionally, we compare these results with association analyses of empirical Bayes predictors generated from separate mixed models for each outcome, a procedure that could circumvent computational problems that arise when the dimension of the joint covariance matrix of random effects is large and prohibits estimation of latent associations. As has been shown in other analytic contexts, the p-values for all second-stage coefficients that were determined by naively assuming normality of empirical Bayes predictors provide a good approximation to p-values determined via permutation analysis. Analyzing outcomes that are interrelated with separate models in the first stage and then associating the resulting empirical Bayes predictors in a second stage results in different mean and covariance parameter estimates from the maximum likelihood estimates generated by a MGLMM. The potential for erroneous inference from using results from these separate models increases as the magnitude of the association among the outcomes increases. Thus, if computable, scatterplots of the conditionally independent empirical Bayes predictors from a MGLMM are always preferable to scatterplots of empirical Bayes predictors generated by separate models, unless the true association between outcomes is zero.
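A hedged sketch of the separate-models variant discussed above, on simulated data with statsmodels: fit each outcome its own random-intercept model, extract per-subject empirical Bayes predictors, and associate them in a second stage (the abstract cautions that this can diverge from joint MGLMM estimates as the true association grows):

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf
    from scipy.stats import pearsonr

    rng = np.random.default_rng(1)
    n_subj, n_obs = 100, 6
    subj = np.repeat(np.arange(n_subj), n_obs)
    t = np.tile(np.arange(n_obs), n_subj)
    u1 = rng.normal(0, 1, n_subj)                      # latent random intercepts,
    u2 = 0.6 * u1 + rng.normal(0, 0.8, n_subj)         # correlated across outcomes
    df = pd.DataFrame({
        "subj": subj, "t": t,
        "y1": u1[subj] + 0.5 * t + rng.normal(0, 1, subj.size),
        "y2": u2[subj] - 0.2 * t + rng.normal(0, 1, subj.size),
    })

    def eb_intercepts(outcome):
        fit = smf.mixedlm(outcome + " ~ t", df, groups=df["subj"]).fit()
        # Per-subject empirical Bayes (BLUP) intercepts
        return np.array([re.iloc[0] for re in fit.random_effects.values()])

    r, p = pearsonr(eb_intercepts("y1"), eb_intercepts("y2"))
    print("second-stage EB correlation r = %.2f (p = %.3g)" % (r, p))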
DUTIR at TREC 2009: Chemical IR Track
2009-11-01
We set the Dirichlet prior empirically at 1,500 as recommended in [2]. For example, Topic 15, "Betaines for peripheral arterial disease", is converted into the following Indri query: #combine(betaines for peripheral arterial disease), which produces results rank-equivalent to a simple query.
Cross-National Invariance of Children's Temperament
ERIC Educational Resources Information Center
Benson, Nicholas; Oakland, Thomas; Shermis, Mark
2009-01-01
Measurement of temperament is an important endeavor with international appeal; however, cross-national invariance (i.e., equivalence of test scores across countries as established by empirical comparisons) of temperament tests has not been established in published research. This study examines the cross-national invariance of school-aged…
SCF-MO computations have been performed on tetra- to octa-chlorinated dibenzo-p-dioxin congeners (PCDD) using an MNDO-PM3 Hamiltonian. Qualitative relationships were developed between empirical, international-toxic equivalence factors for PCDD congeners and their relati...
Experiments in dilution jet mixing - Effects of multiple rows and non-circular orifices
NASA Technical Reports Server (NTRS)
Holdeman, J. D.; Srinivasan, R.; Coleman, E. B.; Meyers, G. D.; White, C. D.
1985-01-01
Experimental and empirical model results are presented that extend previous studies of the mixing of single-sided and opposed rows of jets in a confined duct flow to include effects of non-circular orifices and double rows of jets. Analysis of the mean temperature data obtained in this investigation showed that the effects of orifice shape and double rows are significant only in the region close to the injection plane, provided that the orifices are symmetric with respect to the main flow direction. The penetration and mixing of jets from 45-degree slanted slots is slightly less than that from equivalent-area symmetric orifices. The penetration from 2-dimensional slots is similar to that from equivalent-area closely-spaced rows of holes, but the mixing is slower for the 2-D slots. Calculated mean temperature profiles downstream of jets from non-circular and double rows of orifices, made using an extension developed for a previous empirical model, are shown to be in good agreement with the measured distributions.
Galaxy–galaxy lensing estimators and their covariance properties
Singh, Sukhdeep; Mandelbaum, Rachel; Seljak, Uros; ...
2017-07-21
Here, we study the covariance properties of real space correlation function estimators – primarily galaxy–shear correlations, or galaxy–galaxy lensing – using SDSS data for both shear catalogues and lenses (specifically the BOSS LOWZ sample). Using mock catalogues of lenses and sources, we disentangle the various contributions to the covariance matrix and compare them with a simple analytical model. We show that not subtracting the lensing measurement around random points from the measurement around the lens sample is equivalent to performing the measurement using the lens density field instead of the lens overdensity field. While the measurement using the lens density field is unbiased (in the absence of systematics), its error is significantly larger due to an additional term in the covariance. Therefore, this subtraction should be performed regardless of its beneficial effects on systematics. Comparing the error estimates from data and mocks for estimators that involve the overdensity, we find that the errors are dominated by the shape noise and lens clustering, that empirically estimated covariances (jackknife and standard deviation across mocks) are consistent with theoretical estimates, and that both the connected parts of the four-point function and the supersample covariance can be neglected for the current levels of noise. While the trade-off between different terms in the covariance depends on the survey configuration (area, source number density), the diagnostics that we use in this work should be useful for future works to test their empirically determined covariances.
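A toy flat-sky sketch of the estimator choice discussed above: stack tangential ellipticity around lens positions and around random points, then subtract, so that the measurement uses the lens overdensity field (the catalogues below are pure shape noise, so the subtracted signal should scatter around zero):

    import numpy as np

    def stacked_gt(points, src_xy, e1, e2, bins):
        # Mean tangential ellipticity in radial bins around a set of points.
        gt, n = np.zeros(len(bins) - 1), np.zeros(len(bins) - 1)
        for px, py in points:
            dx, dy = src_xy[:, 0] - px, src_xy[:, 1] - py
            phi = np.arctan2(dy, dx)
            et = -(e1 * np.cos(2 * phi) + e2 * np.sin(2 * phi))
            idx = np.digitize(np.hypot(dx, dy), bins) - 1
            for k in range(len(bins) - 1):
                sel = idx == k
                gt[k] += et[sel].sum()
                n[k] += sel.sum()
        return gt / np.maximum(n, 1)

    rng = np.random.default_rng(0)
    src = rng.uniform(0, 100, size=(20000, 2))
    e1, e2 = rng.normal(0, 0.25, 20000), rng.normal(0, 0.25, 20000)
    lenses = rng.uniform(20, 80, size=(50, 2))
    randoms = rng.uniform(20, 80, size=(50, 2))
    bins = np.linspace(0.5, 10.0, 8)

    delta_gt = (stacked_gt(lenses, src, e1, e2, bins)
                - stacked_gt(randoms, src, e1, e2, bins))
    print(delta_gt)                        # consistent with zero here by construction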
Galaxy-galaxy lensing estimators and their covariance properties
NASA Astrophysics Data System (ADS)
Singh, Sukhdeep; Mandelbaum, Rachel; Seljak, Uroš; Slosar, Anže; Vazquez Gonzalez, Jose
2017-11-01
We study the covariance properties of real space correlation function estimators - primarily galaxy-shear correlations, or galaxy-galaxy lensing - using SDSS data for both shear catalogues and lenses (specifically the BOSS LOWZ sample). Using mock catalogues of lenses and sources, we disentangle the various contributions to the covariance matrix and compare them with a simple analytical model. We show that not subtracting the lensing measurement around random points from the measurement around the lens sample is equivalent to performing the measurement using the lens density field instead of the lens overdensity field. While the measurement using the lens density field is unbiased (in the absence of systematics), its error is significantly larger due to an additional term in the covariance. Therefore, this subtraction should be performed regardless of its beneficial effects on systematics. Comparing the error estimates from data and mocks for estimators that involve the overdensity, we find that the errors are dominated by the shape noise and lens clustering, that empirically estimated covariances (jackknife and standard deviation across mocks) are consistent with theoretical estimates, and that both the connected parts of the four-point function and the supersample covariance can be neglected for the current levels of noise. While the trade-off between different terms in the covariance depends on the survey configuration (area, source number density), the diagnostics that we use in this work should be useful for future works to test their empirically determined covariances.
Feasibility of quasi-random band model in evaluating atmospheric radiance
NASA Technical Reports Server (NTRS)
Tiwari, S. N.; Mirakhur, N.
1980-01-01
The use of the quasi-random band model in evaluating upwelling atmospheric radiation is investigated. The spectral transmittance and total band absorptance are evaluated for selected molecular bands by using the line-by-line model, quasi-random band model, exponential sum fit method, and empirical correlations, and these are compared with the available experimental results. The atmospheric transmittance and upwelling radiance were calculated by using the line-by-line and quasi-random band models and were compared with the results of an existing program called LOWTRAN. The results obtained by the exponential sum fit and empirical relations were not in good agreement with experimental results, and their use cannot be justified for atmospheric studies. The line-by-line model was found to be the best model for atmospheric applications, but it is not practical because of high computational costs. The results of the quasi-random band model compare well with the line-by-line and experimental results. The use of the quasi-random band model is recommended for evaluation of the atmospheric radiation.
Asymptotic Equivalence of Probability Measures and Stochastic Processes
NASA Astrophysics Data System (ADS)
Touchette, Hugo
2018-03-01
Let P_n and Q_n be two probability measures representing two different probabilistic models of some system (e.g., an n-particle equilibrium system, a set of random graphs with n vertices, or a stochastic process evolving over a time n) and let M_n be a random variable representing a "macrostate" or "global observable" of that system. We provide sufficient conditions, based on the Radon-Nikodym derivative of P_n and Q_n, for the set of typical values of M_n obtained relative to P_n to be the same as the set of typical values obtained relative to Q_n in the limit n→ ∞. This extends to general probability measures and stochastic processes the well-known thermodynamic-limit equivalence of the microcanonical and canonical ensembles, related mathematically to the asymptotic equivalence of conditional and exponentially-tilted measures. In this more general sense, two probability measures that are asymptotically equivalent predict the same typical or macroscopic properties of the system they are meant to model.
Geometry of complex networks and topological centrality
NASA Astrophysics Data System (ADS)
Ranjan, Gyan; Zhang, Zhi-Li
2013-09-01
We explore the geometry of complex networks in terms of an n-dimensional Euclidean embedding represented by the Moore-Penrose pseudo-inverse of the graph Laplacian (L⁺). The squared distance of a node i to the origin in this n-dimensional space (l⁺ᵢᵢ) yields a topological centrality index, defined as C*(i) = 1/l⁺ᵢᵢ. In turn, the sum of reciprocals of individual node centralities, Σᵢ 1/C*(i) = Σᵢ l⁺ᵢᵢ, or the trace of L⁺, yields the well-known Kirchhoff index (K), an overall structural descriptor for the network. To put into context this geometric definition of centrality, we provide alternative interpretations of the proposed indices that connect them to meaningful topological characteristics - first, as forced detour overheads and frequency of recurrences in random walks, which has an interesting analogy to voltage distributions in the equivalent electrical network; and then as the average connectedness of i in all the bi-partitions of the graph. These interpretations respectively help establish the topological centrality C*(i) of node i as a measure of its overall position as well as its overall connectedness in the network, thus reflecting the robustness of i to random multiple edge failures. Through empirical evaluations using synthetic and real world networks, we demonstrate how the topological centrality is better able to distinguish nodes in terms of their structural roles in the network and, along with the Kirchhoff index, is appropriately sensitive to perturbations/re-wirings in the network.
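A minimal sketch of the computation on a standard test graph, assuming a dense pseudo-inverse is affordable at this size:

    import numpy as np
    import networkx as nx

    G = nx.karate_club_graph()
    L = nx.laplacian_matrix(G).toarray().astype(float)
    Lp = np.linalg.pinv(L)                 # Moore-Penrose pseudo-inverse L+

    C_star = 1.0 / np.diag(Lp)             # topological centrality C*(i) = 1 / l+_ii
    trace_Lp = np.trace(Lp)                # sum_i l+_ii; the Kirchhoff index is
                                           # n * trace(L+) under the usual normalization
    print("most central node:", int(np.argmax(C_star)))
    print("trace of L+: %.3f" % trace_Lp)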
Empirical likelihood inference in randomized clinical trials.
Zhang, Biao
2017-01-01
In individually randomized controlled trials, in addition to the primary outcome, information is often available on a number of covariates prior to randomization. This information is frequently utilized to undertake adjustment for baseline characteristics in order to increase the precision of the estimation of average treatment effects; such adjustment is usually performed via covariate adjustment in outcome regression models. Although the use of covariate adjustment is widely seen as desirable for making treatment effect estimates more precise and the corresponding hypothesis tests more powerful, there are considerable concerns that objective inference in randomized clinical trials can potentially be compromised. In this paper, we study an empirical likelihood approach to covariate adjustment and propose two unbiased estimating functions that automatically decouple evaluation of average treatment effects from regression modeling of covariate-outcome relationships. The resulting empirical likelihood estimator of the average treatment effect is as efficient as the existing efficient adjusted estimators [1] when separate treatment-specific working regression models are correctly specified, yet is at least as efficient as the existing efficient adjusted estimators [1] for any given treatment-specific working regression models, whether or not they coincide with the true treatment-specific covariate-outcome relationships. We present a simulation study to compare the finite sample performance of various methods along with some results on analysis of a data set from an HIV clinical trial. The simulation results indicate that the proposed empirical likelihood approach is more efficient and powerful than its competitors when the working covariate-outcome relationships by treatment status are misspecified.
DOT National Transportation Integrated Search
2012-12-01
Traffic is one of the primary inputs in pavement design. Traditional pavement design procedures account for traffic using the equivalent single axle loads (ESALs) accumulated during the life of the pavement structure. This procedure is based on co...
10 CFR 431.383 - Enforcement process for electric motors.
Code of Federal Regulations, 2014 CFR
2014-01-01
... general purpose electric motor of equivalent electrical design and enclosure rather than replacing the... equivalent electrical design and enclosure rather than machining and attaching an endshield. ... sample of up to 20 units will then be randomly selected from one or more subdivided groups within the...
Random Variables: Simulations and Surprising Connections.
ERIC Educational Resources Information Center
Quinn, Robert J.; Tomlinson, Stephen
1999-01-01
Features activities for advanced second-year algebra students in grades 11 and 12. Introduces three random variables and considers an empirical and theoretical probability for each. Uses coins, regular dice, decahedral dice, and calculators. (ASK)
People's Intuitions about Randomness and Probability: An Empirical Study
ERIC Educational Resources Information Center
Lecoutre, Marie-Paule; Rovira, Katia; Lecoutre, Bruno; Poitevineau, Jacques
2006-01-01
What people mean by randomness should be taken into account when teaching statistical inference. This experiment explored subjective beliefs about randomness and probability through two successive tasks. Subjects were asked to categorize 16 familiar items: 8 real items from everyday life experiences, and 8 stochastic items involving a repeatable…
Generalization of one-dimensional solute transport: A stochastic-convective flow conceptualization
NASA Astrophysics Data System (ADS)
Simmons, C. S.
1986-04-01
A stochastic-convective representation of one-dimensional solute transport is derived. It is shown to conceptually encompass solutions of the conventional convection-dispersion equation. This stochastic approach, however, does not rely on the assumption that dispersive flux satisfies Fick's diffusion law. Observable values of solute concentration and flux, which together satisfy a conservation equation, are expressed as expectations over a flow velocity ensemble, representing the inherent random processes that govern dispersion. Solute concentration is determined by a Lagrangian pdf for random spatial displacements, while flux is determined by an equivalent Eulerian pdf for random travel times. A condition for such equivalence is derived for steady nonuniform flow, and it is proven that both Lagrangian and Eulerian pdfs are required to account for specified initial and boundary conditions on a global scale. Furthermore, simplified modeling of transport is justified by proving that an ensemble of effectively constant velocities always exists that constitutes an equivalent representation. An example of how a two-dimensional transport problem can be reduced to a single-dimensional stochastic viewpoint is also presented to further clarify concepts.
Jing, Yu; Li, Jian; Yuan, Lei; Zhao, Xiaoli; Wang, Quanshun; Yu, Li; Zhou, Daobin; Huang, Wenrong
2016-03-01
This randomized, dual-center study compared the efficacy and safety of piperacillin-tazobactam (PTZ) and imipenem-cilastatin (IMP) in hematopoietic stem cell transplantation (HSCT) recipients with febrile neutropenia. HSCT recipients with febrile neutropenia were randomized into two groups receiving either PTZ or IMP as initial empiric antibiotic. Endpoints were defervescence rate after empiric antibiotic for 48 h, success at end of therapy, and side effects. Defervescence within 48 h after empiric antibiotic was observed in 46 patients with PTZ (75.4%) and 59 patients with IMP (95.2%) (p = 0.002). Ten patients (10/46) in the PTZ group and two patients (2/59) in the IMP group switched empiric antibiotics due to recurrent fever (p = 0.005). Success of initial antibiotic with modification was achieved in 34 patients with PTZ (55.7%) and 53 patients with IMP (85.5%) at the end of therapy (p = 0.001). To treat the bacteremia, seven of 10 patients in the PTZ group and one of eight patients in the IMP group needed to switch the empiric antibiotic (p = 0.025). Compared with PTZ, IMP had more gastrointestinal adverse events (p = 0.045). This study demonstrates that IMP had better efficacy than PTZ as an empiric antibiotic for febrile neutropenia in the HSCT setting, but with more gastrointestinal side reactions. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Using expert knowledge for test linking.
Bolsinova, Maria; Hoijtink, Herbert; Vermeulen, Jorine Adinda; Béguin, Anton
2017-12-01
Linking and equating procedures are used to make the results of different test forms comparable. In cases where no assumption of randomly equivalent groups can be made, some form of linking design is used. In practice, the amount of data available to link the two tests is often very limited due to logistic and security reasons, which affects the precision of linking procedures. This study proposes to enhance the quality of linking procedures based on sparse data by using Bayesian methods that combine the information in the linking data with background information captured in informative prior distributions. We propose two methods for the elicitation of prior knowledge about the difference in difficulty of two tests from subject-matter experts and explain how these results can be used in the specification of priors. To illustrate the proposed methods and evaluate the quality of linking with and without informative priors, an empirical example of linking primary school mathematics tests is presented. The results suggest that informative priors can increase the precision of linking without decreasing the accuracy. (PsycINFO Database Record (c) 2017 APA, all rights reserved).
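A hedged toy version of the combination step, assuming the difficulty difference between forms is summarized by a normal expert prior and a normal estimate from the sparse linking data (a deliberate simplification of the paper's elicitation and equating machinery; all numbers are hypothetical):

    def posterior_difficulty_shift(m0, s0, d_hat, se):
        # Normal-normal update: precision-weighted blend of prior and data.
        w0, w1 = 1.0 / s0**2, 1.0 / se**2
        mean = (w0 * m0 + w1 * d_hat) / (w0 + w1)
        sd = (w0 + w1) ** -0.5
        return mean, sd

    # Experts judge form B ~0.3 logits harder (sd 0.2); the sparse linking
    # data alone give 0.55 with standard error 0.35.
    print(posterior_difficulty_shift(0.3, 0.2, 0.55, 0.35))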
Solithromycin for the treatment of community-acquired bacterial pneumonia.
Viasus, Diego; Ramos, Oscar; Ramos, Leidy; Simonetti, Antonella F; Carratalà, Jordi
2017-01-01
Community-acquired pneumonia is a major public health problem worldwide. In recent years, there has been an increase in the frequency of resistance to the antimicrobials, such as β-lactams or macrolides, that have habitually been used against the causative pathogens. Solithromycin, a next-generation macrolide, is the first fluoroketolide with activity against most of the frequently isolated bacteria in community-acquired pneumonia, including typical and atypical bacteria as well as macrolide-resistant Streptococcus pneumoniae. Areas covered: A detailed assessment of the literature relating to the antimicrobial activity, pharmacokinetic/pharmacodynamic properties, efficacy, tolerability and safety of solithromycin for the treatment of community-acquired bacterial pneumonia. Expert commentary: Recent randomized controlled phase II/III trials have demonstrated the equivalent efficacy of oral and intravenous solithromycin compared with fluoroquinolones in patients with mild-to-moderate lower respiratory infections, and have shown that systemic adverse events are comparable between solithromycin and alternative treatments. However, studies of larger populations which are able to identify infrequent adverse events are now needed to confirm these findings. On balance, current data support solithromycin as a promising therapy for empirical treatment in adults with community-acquired bacterial pneumonia.
Facilitating Spatial Thinking in World Geography Using Web-Based GIS
ERIC Educational Resources Information Center
Jo, Injeong; Hong, Jung Eun; Verma, Kanika
2016-01-01
Advocates for geographic information system (GIS) education contend that learning about GIS promotes students' spatial thinking. Empirical studies are still needed to elucidate the potential of GIS as an instructional tool to support spatial thinking in other geography courses. Using a non-equivalent control group research design, this study…
Structure and Content in Social Cognition: Conceptual and Empirical Analyses.
ERIC Educational Resources Information Center
Edelstein, Wolfgang; And Others
1984-01-01
Conceptual analysis of two perspective-taking tasks identified a number of subtasks calling for equivalent operations of social reasoning within tasks of different content. Subtasks hypothetically formed a logical and developmental sequence of abilities required for decentering. The developmental significance of the hierarchy was tested among 121…
Using statistical equivalence testing logic and mixed model theory an approach has been developed, that extends the work of Stork et al (JABES,2008), to define sufficient similarity in dose-response for chemical mixtures containing the same chemicals with different ratios ...
True Randomness from Big Data.
Papakonstantinou, Periklis A; Woodruff, David P; Yang, Guang
2016-09-26
Generating random bits is a difficult task that is important for physical systems simulation, cryptography, and many applications that rely on high-quality random bits. Our contribution is to show how to generate provably random bits from uncertain events whose outcomes are routinely recorded in the form of massive data sets. These include scientific data sets, such as in astronomy and genomics, as well as data produced by individuals, such as internet search logs, sensor networks, and social network feeds. We view the generation of such data as a sampling process from a big source, which is a random variable of size at least a few gigabytes. Our view initiates the study of big sources in the randomness extraction literature. Previous approaches for big sources rely on statistical assumptions about the samples. We introduce a general method that provably extracts almost-uniform random bits from big sources and extensively validate it empirically on real data sets. The experimental findings indicate that our method is efficient enough to handle such large sources, while previous extractor constructions are not efficient enough to be practical. Quality-wise, our method at least matches quantum randomness expanders and classical-world empirical extractors as measured by standardized tests.
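The authors' construction is not given in the abstract, but the general shape of a seeded extractor can be shown in a toy: hash a long, weakly random bit string down to a short, nearly uniform output with a 2-universal hash. The dense random matrix below stands in for the short structured seeds (e.g. Toeplitz matrices) that practical constructions use; everything here is an illustration, not the paper's method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "big source": biased bits standing in for a large recorded data set
# (logs, sensor feeds, ...). Bias p(1) = 0.3 per bit.
x = (rng.random(32768) < 0.3).astype(np.int64)   # n weakly random bits

m = 256                                          # output length, m << n
# Seeded 2-universal hash: a uniformly random 0/1 matrix over GF(2).
# Real extractors use structured seeds so the seed stays short; the dense
# matrix here is only for illustration.
H = rng.integers(0, 2, size=(m, x.size)).astype(np.int64)

out = (H @ x) % 2                                # GF(2) matrix-vector product
print("first bits:", out[:16], "mean:", out.mean())  # mean close to 0.5
```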
Empirical Modeling Of Single-Event Upset
NASA Technical Reports Server (NTRS)
Zoutendyk, John A.; Smith, Lawrence S.; Soli, George A.; Thieberger, Peter; Smith, Stephen L.; Atwood, Gregory E.
1988-01-01
Experimental study presents examples of empirical modeling of single-event upset (SEU) in negatively doped source/drain metal-oxide-semiconductor static random-access memory cells. Data support adoption of simplified worst-case model in which the SEU cross section for an ion above the threshold energy equals the area of the memory cell.
Westen, Drew; Shedler, Jonathan; Bradley, Bekh; DeFife, Jared A.
2013-01-01
Objective The authors describe a system for diagnosing personality pathology that is empirically derived, clinically relevant, and practical for day-to-day use. Method A random national sample of psychiatrists and clinical psychologists (N=1,201) described a randomly selected current patient with any degree of personality dysfunction (from minimal to severe) using the descriptors in the Shedler-Westen Assessment Procedure–II and completed additional research forms. Results The authors applied factor analysis to identify naturally occurring diagnostic groupings within the patient sample. The analysis yielded 10 clinically coherent personality diagnoses organized into three higher-order clusters: internalizing, externalizing, and borderline-dysregulated. The authors selected the most highly rated descriptors to construct a diagnostic prototype for each personality syndrome. In a second, independent sample, research interviewers and patients’ treating clinicians were able to diagnose the personality syndromes with high agreement and minimal comorbidity among diagnoses. Conclusions The empirically derived personality prototypes described here provide a framework for personality diagnosis that is both empirically based and clinically relevant. PMID:22193534
Nair, Rajni L.; White, Rebecca M.B.; Knight, George P.; Roosa, Mark W.
2009-01-01
Increasing diversity among families in the United States often necessitates the translation of common measures into various languages. However, even when great care is taken during translations, empirical evaluations of measurement equivalence are necessary. The current study demonstrates the analytic techniques researchers should use to evaluate the measurement equivalence of translated measures. To this end we investigated the cross-language measurement equivalence of several common parenting measures in a sample of 749 Mexican American families. The item invariance results indicated similarity of factor structures across language groups for each of the parenting measures for both mothers and children. Construct validity tests indicated similar slope relations between each of the four parenting measures and the outcomes across the two language groups for both mothers and children. Equivalence in intercepts, however, was only achieved for some outcomes. These findings indicate that the use of these measures in both within group and between group analyses based on correlation/covariance structure is defensible, but researchers are cautioned against interpretations of mean level differences across these language groups. PMID:19803604
Rouzé, Anahita; Loridant, Séverine; Poissy, Julien; Dervaux, Benoit; Sendid, Boualem; Cornu, Marjorie; Nseir, Saad
2017-11-01
The aim of this study was to determine the impact of a biomarker-based strategy on early discontinuation of empirical antifungal treatment. Prospective randomized controlled single-center unblinded study, performed in a mixed ICU. A total of 110 patients were randomly assigned to a strategy in which empirical antifungal treatment duration was determined by (1,3)-β-D-glucan, mannan, and anti-mannan serum assays, performed on day 0 and day 4, or to a routine care strategy, based on international guidelines, which recommend 14 days of treatment. In the biomarker group, the early stop recommendation was determined using an algorithm based on the results of the biomarkers. The primary outcome was the percentage of survivors discontinuing empirical antifungal treatment early, defined as discontinuation strictly before day 7. A total of 109 patients were analyzed (one patient withdrew consent). Empirical antifungal treatment was discontinued early in 29 of 54 patients in the biomarker strategy group, compared with one of 55 patients in the routine strategy group [54% vs 2%, p < 0.001, OR (95% CI) 62.6 (8.1-486)]. Total duration of antifungal treatment was significantly shorter in the biomarker strategy than in the routine strategy [median (IQR) 6 (4-13) vs 13 (12-14) days, p < 0.0001]. No significant difference was found in the percentage of patients with subsequent proven invasive Candida infection, mechanical ventilation-free days, length of ICU stay, cost, or ICU mortality between the two study groups. The use of a biomarker-based strategy increased the percentage of early discontinuation of empirical antifungal treatment among critically ill patients with suspected invasive Candida infection. These results confirm previous findings suggesting that early discontinuation of empirical antifungal treatment has no negative impact on outcome. However, further studies are needed to confirm the safety of this strategy. This trial was registered at ClinicalTrials.gov, NCT02154178.
'Equivalence' and the translation and adaptation of health-related quality of life questionnaires.
Herdman, M; Fox-Rushby, J; Badia, X
1997-04-01
The increasing use of health-related quality of life (HRQOL) questionnaires in multinational studies has resulted in the translation of many existing measures. Guidelines for translation have been published, and there has been some discussion of how to achieve and assess equivalence between source and target questionnaires. Our reading in this area had led us, however, to the conclusion that different types of equivalence were not clearly defined, and that a theoretical framework for equivalence was lacking. To confirm this we reviewed definitions of equivalence in the HRQOL literature on the use of generic questionnaires in multicultural settings. The literature review revealed: definitions of 19 different types of equivalence; vague or conflicting definitions, particularly in the case of conceptual equivalence; and the use of many redundant terms. We discuss these findings in the light of a framework adapted from cross-cultural psychology for describing three different orientations to cross-cultural research: absolutism, universalism and relativism. We suggest that the HRQOL field has generally adopted an absolutist approach and that this may account for some of the confusion in this area. We conclude by suggesting that there is an urgent need for a standardized terminology within the HRQOL field, by offering a standard definition of conceptual equivalence, and by suggesting that the adoption of a universalist orientation would require substantial changes to guidelines and more empirical work on the conceptualization of HRQOL in different cultures.
Empiric Auto-Titrating CPAP in People with Suspected Obstructive Sleep Apnea
Drummond, Fitzgerald; Doelken, Peter; Ahmed, Qanta A.; Gilbert, Gregory E.; Strange, Charlie; Herpel, Laura; Frye, Michael D.
2010-01-01
Objective: Efficient diagnosis and treatment of obstructive sleep apnea (OSA) can be difficult because of time delays imposed by clinic visits and serial overnight polysomnography. In some cases, it may be desirable to initiate treatment for suspected OSA prior to polysomnography. Our objective was to compare the improvement of daytime sleepiness and sleep-related quality of life of patients with high clinical likelihood of having OSA who were randomly assigned to receive empiric auto-titrating continuous positive airway pressure (CPAP) while awaiting polysomnogram versus current usual care. Methods: Serial patients referred for overnight polysomnography who had high clinical likelihood of having OSA were randomly assigned to usual care or immediate initiation of auto-titrating CPAP. Epworth Sleepiness Scale (ESS) scores and the Functional Outcomes of Sleep Questionnaire (FOSQ) scores were obtained at baseline, 1 month after randomization, and again after initiation of fixed CPAP in control subjects and after the sleep study in auto-CPAP patients. Results: One hundred nine patients were randomized. Baseline demographics, daytime sleepiness, and sleep-related quality of life scores were similar between groups. One-month ESS and FOSQ scores were improved in the group empirically treated with auto-titrating CPAP. ESS scores improved in the first month by a mean of −3.2 (confidence interval −1.6 to −4.8, p < 0.001) and FOSQ scores improved by a mean of 1.5, (confidence interval 0.5 to 2.7, p = 0.02), whereas scores in the usual-care group did not change (p = NS). Following therapy directed by overnight polysomnography in the control group, there were no differences in ESS or FOSQ between the groups. No adverse events were observed. Conclusion: Empiric auto-CPAP resulted in symptomatic improvement of daytime sleepiness and sleep-related quality of life in a cohort of patients awaiting polysomnography who had a high pretest probability of having OSA. Additional studies are needed to evaluate the applicability of empiric treatment to other populations. Citation: Drummond F; Doelken P; Ahmed QA; Gilbert GE; Strange C; Herpel L; Frye MD. Empiric auto-titrating CPAP in people with suspected obstructive sleep apnea. J Clin Sleep Med 2010;6(2):140-145. PMID:20411690
Neither fixed nor random: weighted least squares meta-regression.
Stanley, T D; Doucouliagos, Hristos
2017-03-01
Our study revisits and challenges two core conventional meta-regression estimators: the prevalent use of 'mixed-effects' or random-effects meta-regression analysis, and the correction of standard errors that defines fixed-effects meta-regression analysis (FE-MRA). We show how and explain why an unrestricted weighted least squares MRA (WLS-MRA) estimator is superior to conventional random-effects (or mixed-effects) meta-regression when there is publication (or small-sample) bias, is as good as FE-MRA in all cases, and is better than fixed effects in most practical applications. Simulations and statistical theory show that WLS-MRA provides satisfactory estimates of meta-regression coefficients that are practically equivalent to mixed effects or random effects when there is no publication bias. When there is publication selection bias, WLS-MRA always has smaller bias than mixed effects or random effects. In practical applications, an unrestricted WLS meta-regression is likely to give practically equivalent or superior estimates to fixed-effects, random-effects, and mixed-effects meta-regression approaches. However, random-effects meta-regression remains viable, and perhaps somewhat preferable, if selection for statistical significance (publication bias) can be ruled out and when random, additive normal heterogeneity is known to directly affect the 'true' regression coefficient. Copyright © 2016 John Wiley & Sons, Ltd.
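The estimator itself is easy to reproduce in outline: an unrestricted WLS-MRA is a weighted regression of effect estimates on moderators with inverse-variance weights and an unrestricted multiplicative dispersion term. A minimal sketch on synthetic data (all values illustrative, not from the paper):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)

# Synthetic meta-analysis: k estimated effects with known standard errors
# and one moderator.
k = 50
se = rng.uniform(0.05, 0.5, k)
x = rng.normal(size=k)
effects = 0.3 + 0.1 * x + rng.normal(scale=se)

X = sm.add_constant(x)

# Unrestricted WLS-MRA: inverse-variance weights; statsmodels leaves the
# multiplicative dispersion free (it rescales the covariance by the
# residual variance), which distinguishes this from a conventional
# fixed-effect MRA with dispersion pinned to 1.
fit = sm.WLS(effects, X, weights=1.0 / se**2).fit()
print(fit.params)   # coefficients, roughly [0.3, 0.1]
print(fit.bse)      # their standard errors
```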
NASA Astrophysics Data System (ADS)
Park, Joonam; Appiah, Williams Agyei; Byun, Seoungwoo; Jin, Dahee; Ryou, Myung-Hyun; Lee, Yong Min
2017-10-01
To overcome the limitations of simple empirical cycle-life models based only on equivalent circuits, we attempt to couple a conventional empirical capacity-loss model with Newman's porous composite electrode model, which contains both electrochemical reaction kinetics and material/charge balances. In addition, an electrolyte depletion function is newly introduced to simulate the sudden capacity drop at the end of cycling that is frequently observed in real lithium-ion batteries (LIBs). When simulated electrochemical properties are compared with experimental data obtained with 20 Ah-level graphite/LiFePO4 LIB cells, our semi-empirical model is sufficiently accurate to predict a voltage profile with a low standard deviation of 0.0035 V, even at 5C. Additionally, our model can provide broad cycle-life color maps under different C-rate and depth-of-discharge operating conditions. Thus, this semi-empirical model with an electrolyte depletion function is a promising platform for predicting the long-term cycle lives of large-format LIB cells under various operating conditions.
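The abstract does not reproduce the model's equations, but the structure it describes, a gradual empirical fade term plus an electrolyte-depletion term that produces a sudden end-of-life capacity drop, can be sketched as follows. Every functional form and constant here is an assumption for illustration, not the authors' fit:

```python
import numpy as np

def capacity_fraction(n_cycles, c_rate, T_kelvin):
    """Illustrative semi-empirical fade: an Arrhenius-type power law in
    charge throughput plus a logistic electrolyte-depletion knee. All
    functional forms and constants are assumptions, not the paper's fit."""
    R, Ea = 8.314, 31500.0                      # J/(mol K); assumed activation energy
    B = 0.0035 * np.exp(0.5 * c_rate)           # assumed C-rate dependence
    ah = n_cycles * 20.0                        # throughput for a 20 Ah cell at full DOD
    gradual = 2e4 * B * np.exp(-Ea / (R * T_kelvin)) * ah**0.55
    # Electrolyte depletion: negligible early, then a sudden capacity drop.
    n_onset = 3000.0                            # assumed depletion onset (cycles)
    depletion = 0.25 / (1.0 + np.exp(-(n_cycles - n_onset) / 150.0))
    return np.clip(1.0 - gradual - depletion, 0.0, 1.0)

cycles = np.arange(0, 4001, 500)
print(np.round(capacity_fraction(cycles, c_rate=1.0, T_kelvin=298.15), 3))
```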
Panzacchi, Manuela; Van Moorter, Bram; Strand, Olav; Saerens, Marco; Kivimäki, Ilkka; St Clair, Colleen C; Herfindal, Ivar; Boitani, Luigi
2016-01-01
The loss, fragmentation and degradation of habitat everywhere on Earth prompts increasing attention to identifying landscape features that support animal movement (corridors) or impede it (barriers). Most algorithms used to predict corridors assume that animals move through preferred habitat either optimally (e.g. least cost path) or as random walkers (e.g. current models), but neither extreme is realistic. We propose that corridors and barriers are two sides of the same coin and that animals experience landscapes as spatiotemporally dynamic corridor-barrier continua connecting (separating) functional areas where individuals fulfil specific ecological processes. Based on this conceptual framework, we propose a novel methodological approach that uses high-resolution individual-based movement data to predict corridor-barrier continua with increased realism. Our approach consists of two innovations. First, we use step selection functions (SSF) to predict friction maps quantifying corridor-barrier continua for tactical steps between consecutive locations. Secondly, we introduce to movement ecology the randomized shortest path algorithm (RSP) which operates on friction maps to predict the corridor-barrier continuum for strategic movements between functional areas. By modulating the parameter θ, which controls the trade-off between exploration and optimal exploitation of the environment, RSP bridges the gap between algorithms assuming optimal movement (as θ approaches infinity, RSP converges to the least cost path) and those assuming a random walk (as θ → 0, RSP converges to current models). Using this approach, we identify migration corridors for GPS-monitored wild reindeer (Rangifer t. tarandus) in Norway. We demonstrate that reindeer movement is best predicted by an intermediate value of θ, indicative of a movement trade-off between optimization and exploration. Model calibration allows identification of a corridor-barrier continuum that closely fits empirical data and demonstrates that RSP outperforms models that assume either optimality or random walk. The proposed approach models the multiscale cognitive maps by which animals likely navigate real landscapes and generalizes the most common algorithms for identifying corridors. Because suboptimal, but non-random, movement strategies are likely widespread, our approach has the potential to predict more realistic corridor-barrier continua for a wide range of species. © 2015 The Authors. Journal of Animal Ecology © 2015 British Ecological Society.
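The θ interpolation can be demonstrated on a toy graph. This sketch follows one common formulation of the RSP free-energy distance, with W = P_ref * exp(-θC) elementwise and Z = (I - W)^{-1}; it illustrates the framework only, not the paper's reindeer analysis:

```python
import numpy as np

# Toy weighted graph: edge costs (np.inf = no edge).
C = np.array([[0,      1,      4,      np.inf],
              [1,      0,      1,      4],
              [4,      1,      0,      1],
              [np.inf, 4,      1,      0]], dtype=float)
A = np.isfinite(C) & (C > 0)                    # edge indicator
P_ref = A / A.sum(axis=1, keepdims=True)        # natural random-walk reference

def rsp_free_energy(C, P_ref, theta):
    """Directed free-energy distances in the randomized-shortest-path
    framework (one common formulation; see Kivimaki et al. for variants)."""
    W = np.where(np.isfinite(C), P_ref * np.exp(-theta * C), 0.0)
    Z = np.linalg.inv(np.eye(len(C)) - W)
    return -(1.0 / theta) * np.log(Z / np.diag(Z))   # phi[s, t]

for theta in (0.05, 1.0, 20.0):
    print(theta, round(rsp_free_energy(C, P_ref, theta)[0, 3], 3))
# Large theta approaches the least-cost path (cost 3 via 0-1-2-3);
# small theta reflects the exploratory random-walk regime.
```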
Arithmetic Practice Can Be Modified to Promote Understanding of Mathematical Equivalence
ERIC Educational Resources Information Center
McNeil, Nicole M.; Fyfe, Emily R.; Dunwiddie, April E.
2015-01-01
This experiment tested if a modified version of arithmetic practice facilitates understanding of math equivalence. Children within 2nd-grade classrooms (N = 166) were randomly assigned to practice single-digit addition facts using 1 of 2 workbooks. In the control workbook, problems were presented in the traditional "operations = answer"…
Yoo, Song Jae; Jang, Han-Ki; Lee, Jai-Ki; Noh, Siwan; Cho, Gyuseong
2013-01-01
For the assessment of external doses due to contaminated environment, the dose-rate conversion factors (DCFs) prescribed in Federal Guidance Report 12 (FGR 12) and FGR 13 have been widely used. Recently, there were significant changes in dosimetric models and parameters, which include the use of the Reference Male and Female Phantoms and the revised tissue weighting factors, as well as the updated decay data of radionuclides. In this study, the DCFs for effective and equivalent doses were calculated for three exposure settings: skyshine, groundshine and water immersion. Doses to the Reference Phantoms were calculated by Monte Carlo simulations with the MCNPX 2.7.0 radiation transport code for 26 mono-energy photons between 0.01 and 10 MeV. The transport calculations were performed for the source volume within the cut-off distances practically contributing to the dose rates, which were determined by a simplified calculation model. For small tissues for which the reduction of variances are difficult, the equivalent dose ratios to a larger tissue (with lower statistical errors) nearby were employed to make the calculation efficient. Empirical response functions relating photon energies, and the organ equivalent doses or the effective doses were then derived by the use of cubic-spline fitting of the resulting doses for 26 energy points. The DCFs for all radionuclides considered important were evaluated by combining the photon emission data of the radionuclide and the empirical response functions. Finally, contributions of accompanied beta particles to the skin equivalent doses and the effective doses were calculated separately and added to the DCFs. For radionuclides considered in this study, the new DCFs for the three exposure settings were within ±10 % when compared with DCFs in FGR 13. PMID:23542764
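The final folding step described above, fitting empirical response functions through the mono-energy Monte Carlo results and combining them with a nuclide's photon emission data, can be sketched as below. The grid values and emission lines are placeholders, not the study's data:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Monte Carlo results: dose response at mono-energy points (hypothetical
# values; the study used 26 energies between 0.01 and 10 MeV).
E_grid = np.array([0.01, 0.05, 0.1, 0.5, 1.0, 2.0, 5.0, 10.0])   # MeV
response = np.array([0.02, 0.10, 0.25, 1.4, 2.6, 4.8, 10.5, 19.0])

# Empirical response function: cubic spline in log-energy, a common choice
# for smoothly varying photon dose responses.
resp = CubicSpline(np.log(E_grid), response)

# Photon emission data for a nuclide (illustrative Co-60-like lines).
lines_E = np.array([1.173, 1.332])      # MeV
lines_y = np.array([0.999, 0.9998])     # photons per decay

# Dose-rate conversion factor ~ sum over lines of yield x response.
dcf = np.sum(lines_y * resp(np.log(lines_E)))
print(f"illustrative DCF: {dcf:.3f} (arbitrary units)")
```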
Isacsson, Göran; Nohlert, Eva; Fransson, Anette M C; Bornefalk-Hermansson, Anna; Wiman Eriksson, Eva; Ortlieb, Eva; Trepp, Livia; Avdelius, Anna; Sturebrand, Magnus; Fodor, Clara; List, Thomas; Schumann, Mohamad; Tegelberg, Åke
2018-05-16
The clinical benefit of bibloc over monobloc appliances in treating obstructive sleep apnoea (OSA) has not been evaluated in randomized trials. We hypothesized that the two types of appliances are equally effective in treating OSA, and aimed to compare the efficacy of monobloc versus bibloc appliances in a short-term perspective. In this multicentre, randomized, blinded, controlled, parallel-group equivalence trial, patients with OSA were randomly assigned to use either a bibloc or a monobloc appliance. One-night respiratory polygraphy without respiratory support was performed at baseline, and participants were re-examined with the appliance in place at short-term follow-up. The primary outcome was the change in the apnoea-hypopnoea index (AHI). An independent person prepared a randomization list and sealed envelopes. The evaluating dentist and the biomedical analysts who evaluated the polygraphy were blinded to the choice of therapy. Of 302 patients, 146 were randomly assigned to use the bibloc and 156 the monobloc device; 123 and 139 patients, respectively, were analysed per protocol. The mean changes in AHI were -13.8 (95% confidence interval -16.1 to -11.5) in the bibloc group and -12.5 (-14.8 to -10.3) in the monobloc group. The difference of -1.3 (-4.5 to 1.9) was significant within the equivalence interval (P = 0.011; the greater of the two P values) and was confirmed by the intention-to-treat analysis (P = 0.001). Adverse events were of mild character and were experienced by similar percentages of patients in both groups (39 and 40 per cent for the bibloc and monobloc groups, respectively). The study reports short-term results, with a median time from commencing treatment to the evaluation visit of 56 days; long-term data on efficacy and harm are needed to be fully conclusive. In a short-term perspective, both appliances were equivalent in terms of their positive effects for treating OSA and caused adverse events of similar magnitude. Registered with ClinicalTrials.gov (#NCT02148510).
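The equivalence conclusion rests on two one-sided tests (TOST) of the AHI difference against a pre-specified margin. The abstract does not state the margin, so the sketch below assumes ±5 events/h; with that assumption and a normal approximation recovered from the reported confidence interval, the larger one-sided p-value lands near the reported 0.011:

```python
from scipy.stats import norm

# Summary figures from the abstract: difference in AHI change (bibloc
# minus monobloc) and its 95% confidence interval.
diff, lo, hi = -1.3, -4.5, 1.9
se = (hi - lo) / (2 * norm.ppf(0.975))        # normal approximation

margin = 5.0  # assumed equivalence margin (events/h); not reported in the abstract

# Two one-sided tests: equivalence is declared if both nulls are rejected.
p_lower = 1 - norm.cdf((diff + margin) / se)  # H0: true diff <= -margin
p_upper = norm.cdf((diff - margin) / se)      # H0: true diff >= +margin
print(f"TOST p = {max(p_lower, p_upper):.4f}")
# ~0.012 with this margin -- close to the reported P = 0.011 ("the greater
# of the two P values"), supporting the equivalence conclusion.
```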
Empirically Exploring Higher Education Cultures of Assessment
ERIC Educational Resources Information Center
Fuller, Matthew B.; Skidmore, Susan T.; Bustamante, Rebecca M.; Holzweiss, Peggy C.
2016-01-01
Although touted as beneficial to student learning, cultures of assessment have not been examined adequately using validated instruments. Using data collected from a stratified, random sample (N = 370) of U.S. institutional research and assessment directors, the models tested in this study provide empirical support for the value of using the…
Smooth empirical Bayes estimation of observation error variances in linear systems
NASA Technical Reports Server (NTRS)
Martz, H. F., Jr.; Lian, M. W.
1972-01-01
A smooth empirical Bayes estimator was developed for estimating the unknown random scale component of each of a set of observation error variances. It is shown that the estimator possesses a smaller average squared error loss than other estimators for a discrete time linear system.
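The paper's smooth EB estimator is not spelled out in the abstract; the following sketch illustrates the general idea with a simple moment-based parametric EB shrinkage of per-sensor sample variances toward their grand mean. All modeling choices here are assumptions standing in for the linear-system setting:

```python
import numpy as np

rng = np.random.default_rng(2)

# Each sensor's true error variance is a random draw; only a few residuals
# are observed per sensor.
true_var = rng.gamma(shape=3.0, scale=0.5, size=40)
n_obs = 6
samples = rng.normal(scale=np.sqrt(true_var)[:, None], size=(40, n_obs))
s2 = samples.var(axis=1, ddof=1)                  # per-sensor sample variance

# Parametric empirical Bayes: shrink each s2 toward the grand mean with a
# weight from a crude method-of-moments estimate of the prior strength
# (within-sensor noise of s2 approximated by 2*sigma^4/(n-1)).
grand = s2.mean()
within = 2.0 * grand**2 / (n_obs - 1)
between = max(s2.var(ddof=1) - within, 1e-9)
w = between / (between + within)
eb = w * s2 + (1 - w) * grand

mse = lambda est: np.mean((est - true_var) ** 2)
print(f"MSE raw: {mse(s2):.3f}  MSE EB: {mse(eb):.3f}")   # EB is usually smaller
```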
High School Students and Online Commemoration of the Group's Cultural Trauma
ERIC Educational Resources Information Center
Lazar, Alon; Hirsch, Tal Litvak
2014-01-01
This paper addresses the interaction of three equivalent issues: education, cultural trauma, and the Internet. Theory suggests that the educational system plays an important role in the transmission and maintenance of the memory of a group's defining cultural trauma. However, little is empirically known about the ways education influences the attitudes…
ERIC Educational Resources Information Center
Ardoin, Scott P.; Williams, Jessica C.; Christ, Theodore J.; Klubnik, Cynthia; Wellborn, Claire
2010-01-01
Beyond reliability and validity, measures used to model student growth must consist of multiple probes that are equivalent in level of difficulty to establish consistent measurement conditions across time. Although existing evidence supports the reliability of curriculum-based measurement in reading (CBMR), few studies have empirically evaluated…
Developing Student, Family, and School Constructs from NLTS2 Data
ERIC Educational Resources Information Center
Shogren, Karrie A.; Garnier Villarreal, Mauricio
2015-01-01
The purpose of this study was to use data from the National Longitudinal Transition Study-2 (NLTS2) to (a) conceptually identify and empirically establish student, family, and school constructs; (b) explore the degree to which the constructs can be measured equivalently across disability groups; and (c) examine latent differences (means,…
Essential Properties of Language, or, Why Language Is Not a Code
ERIC Educational Resources Information Center
Kravchenko, Alexander V.
2007-01-01
Despite a strong tradition of viewing "coded equivalence" as the underlying principle of linguistic semiotics, it lacks the power needed to understand and explain language as an empirical phenomenon characterized by complex dynamics. Applying the biology of cognition to the nature of the human cognitive/linguistic capacity as rooted in the…
Data gap filling techniques are commonly used to predict hazard in the absence of empirical data. The most established techniques are read-across, trend analysis and quantitative structure-activity relationships (QSARs). Toxic equivalency factors (TEFs) are less frequently used d...
Examining Adult Basic Education in Indiana
ERIC Educational Resources Information Center
Hawkins, Alishea
2017-01-01
While it is known that over 500,000 individuals in the State of Indiana have not obtained a High School Diploma or Equivalency (StatsIndiana, 2015), limited empirical information exists on Indiana students pursuing adult basic education along with implications for a state that has changed its adult basic education high stakes high school…
NASA Astrophysics Data System (ADS)
Veyette, Mark J.; Muirhead, Philip S.; Mann, Andrew W.; Brewer, John M.; Allard, France; Homeier, Derek
2017-12-01
The ability to perform detailed chemical analysis of Sun-like F-, G-, and K-type stars is a powerful tool with many applications, including studying the chemical evolution of the Galaxy and constraining planet formation theories. Unfortunately, complications in modeling cooler stellar atmospheres hinder similar analyses of M dwarf stars. Empirically calibrated methods to measure M dwarf metallicity from moderate-resolution spectra are currently limited to measuring overall metallicity and rely on astrophysical abundance correlations in stellar populations. We present a new, empirical calibration of synthetic M dwarf spectra that can be used to infer effective temperature, Fe abundance, and Ti abundance. We obtained high-resolution (R ˜ 25,000), Y-band (˜1 μm) spectra of 29 M dwarfs with NIRSPEC on Keck II. Using the PHOENIX stellar atmosphere modeling code (version 15.5), we generated a grid of synthetic spectra covering a range of temperatures, metallicities, and alpha-enhancements. From our observed and synthetic spectra, we measured the equivalent widths of multiple Fe I and Ti I lines and a temperature-sensitive index based on the FeH band head. We used abundances measured from widely separated solar-type companions to empirically calibrate transformations to the observed indices and equivalent widths that force agreement with the models. Our calibration achieves precisions in Teff, [Fe/H], and [Ti/Fe] of 60 K, 0.1 dex, and 0.05 dex, respectively, and is calibrated for 3200 K < Teff < 4100 K, -0.7 < [Fe/H] < +0.3, and -0.05 < [Ti/Fe] < +0.3. This work is a step toward detailed chemical analysis of M dwarfs at a precision similar to what has been achieved for FGK stars.
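The equivalent-width measurement at the core of this calibration is straightforward to illustrate: integrate the fractional flux deficit below the normalized continuum across the line. A toy example with an assumed Gaussian line, not one of the paper's Fe I or Ti I lines:

```python
import numpy as np

# Toy normalized spectrum around a single absorption line (assumed Gaussian).
wav = np.linspace(10700.0, 10720.0, 400)            # angstroms, Y band
depth, center, sigma = 0.4, 10710.0, 0.8
flux = 1.0 - depth * np.exp(-0.5 * ((wav - center) / sigma) ** 2)

# Equivalent width: EW = integral of (1 - F/F_cont) d(lambda);
# the continuum is already normalized to 1 here.
ew = np.trapz(1.0 - flux, wav)
print(f"EW = {ew:.3f} A")   # analytic value: depth*sigma*sqrt(2*pi) ~ 0.802
```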
An Empirical Non-TNT Approach to Launch Vehicle Explosion Modeling
NASA Technical Reports Server (NTRS)
Blackwood, James M.; Skinner, Troy; Richardson, Erin H.; Bangham, Michal E.
2015-01-01
In an effort to increase crew survivability in catastrophic explosions of launch vehicles (LVs), a study was conducted to determine the best method for predicting LV explosion environments in the near field. After reviewing methods such as TNT equivalence, Vapor Cloud Explosion (VCE) theory, and Computational Fluid Dynamics (CFD), it was determined that the best approach for this study was to assemble all available empirical data from full-scale launch vehicle explosion tests and accidents. Approximately 25 accidents or full-scale tests were found that included some measured blast-wave, thermal, or fragment environment characteristics. Blast-wave overpressure was found to be much lower in the near field than predicted by most TNT equivalence methods. Additionally, fragments tended to be larger, fewer, and slower than expected if the driving force were a high-explosive-type event. In light of these findings, a simple model for cryogenic rocket explosions is presented. Predictions from this model encompass all known applicable full-scale launch vehicle explosion data. Finally, a brief description of ongoing analysis and testing to further refine the launch vehicle explosion environment is discussed.
A simple model for pollen-parent fecundity distributions in bee-pollinated forage legume polycrosses
USDA-ARS?s Scientific Manuscript database
Random mating or panmixis is a fundamental assumption in quantitative genetic theory. Random mating is sometimes thought to occur in actual fact although a large body of empirical work shows that this is often not the case in nature. Models have been developed to model many non-random mating phenome...
Chaos Modeling: Increasing Educational Researchers' Awareness of a New Tool.
ERIC Educational Resources Information Center
Bobner, Ronald F.; And Others
Chaos theory is being used as a tool to study a wide variety of phenomena. It is a philosophical and empirical approach that attempts to explain relationships previously thought to be totally random. Although some relationships are truly random, many data appear to be random but reveal repeatable patterns of behavior under further investigation.…
ERIC Educational Resources Information Center
Hunt, Jessica H.
2014-01-01
The purpose of this study was to examine the effects of a Tier 2 supplemental intervention focused on rational number equivalency concepts and applications on the mathematics performance of third-grade students with and without mathematics difficulties. The researcher used a pretest-posttest control group design and random assignment of 19…
ERIC Educational Resources Information Center
Kariuki, Patrick; Gentry, Christi
2010-01-01
The purpose of this study was to examine the effects of Accelerated Math utilization on students' grade equivalency scores. Twelve students for both experimental and control groups were randomly selected from 37 students enrolled in math in grades four through six. The experimental group consisted of the students who actively participated in…
Section Preequating under the Equivalent Groups Design without IRT
ERIC Educational Resources Information Center
Guo, Hongwen; Puhan, Gautam
2014-01-01
In this article, we introduce a section preequating (SPE) method (linear and nonlinear) under the randomly equivalent groups design. In this equating design, sections of Test X (a future new form) and another existing Test Y (an old form already on scale) are administered. The sections of Test X are equated to Test Y, after adjusting for the…
Kerschbamer, Rudolf
2015-05-01
This paper proposes a geometric delineation of distributional preference types and a non-parametric approach for their identification in a two-person context. It starts with a small set of assumptions on preferences and shows that this set (i) naturally results in a taxonomy of distributional archetypes that nests all empirically relevant types considered in previous work; and (ii) gives rise to a clean experimental identification procedure - the Equality Equivalence Test - that discriminates between archetypes according to core features of preferences rather than properties of specific modeling variants. As a by-product the test yields a two-dimensional index of preference intensity.
Discriminative components of data.
Peltonen, Jaakko; Kaski, Samuel
2005-01-01
A simple probabilistic model is introduced to generalize classical linear discriminant analysis (LDA) in finding components that are informative of or relevant for data classes. The components maximize the predictability of the class distribution which is asymptotically equivalent to 1) maximizing mutual information with the classes, and 2) finding principal components in the so-called learning or Fisher metrics. The Fisher metric measures only distances that are relevant to the classes, that is, distances that cause changes in the class distribution. The components have applications in data exploration, visualization, and dimensionality reduction. In empirical experiments, the method outperformed, in addition to more classical methods, a Renyi entropy-based alternative while having essentially equivalent computational cost.
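The criterion described here, components that maximize information about the classes rather than overall variance, can be approximated in a few lines with a plug-in mutual information estimate. This is a crude sketch of the idea, not the authors' estimator; the binning and optimizer choices are assumptions:

```python
import numpy as np
from scipy.optimize import minimize
from sklearn.metrics import mutual_info_score

rng = np.random.default_rng(5)

# Toy data: class signal along dimension 0, large nuisance variance along
# dimension 1 (PCA would pick the nuisance direction; an informative
# component should not).
n = 500
y = rng.integers(0, 2, n)
X = rng.normal(size=(n, 2)) * [1.0, 3.0]
X[:, 0] += 1.5 * y

def neg_mi(w):
    """Negative mutual information between a binned 1-D projection and the
    class labels -- a plug-in stand-in for the paper's criterion."""
    z = X @ (w / np.linalg.norm(w))
    bins = np.digitize(z, np.quantile(z, np.linspace(0.1, 0.9, 9)))
    return -mutual_info_score(y, bins)

res = minimize(neg_mi, x0=[1.0, 1.0], method="Nelder-Mead")
w = res.x / np.linalg.norm(res.x)
print("informative direction:", w.round(2))   # ~[1, 0]: ignores the nuisance axis
```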
A simple calculation method for determination of equivalent square field.
Shafiei, Seyed Ali; Hasanzadeh, Hadi; Shafiei, Seyed Ahmad
2012-04-01
Determination of equivalent square fields for rectangular and shielded fields is of great importance in radiotherapy centers and treatment planning software, and is usually accomplished using standard tables and empirical formulas. The goal of this paper is to present a formula for obtaining the equivalent field based on an analysis of scatter reduction governed by the inverse square law. Tables published by different agencies, such as the ICRU (International Commission on Radiation Units and Measurements), are based on experimental data, while the mathematical formulas that yield the equivalent square of an irregular rectangular field, used extensively in computational dose determination, tend to be complicated and time-consuming; the current study was designed to address this. In this work, considering the portion of scattered radiation in the absorbed dose at a point of measurement, a numerical formula was obtained, from which a simple formula for calculating the equivalent square field was developed. Using polar coordinates and the inverse square law leads to a simple formula for calculation of the equivalent field. The presented method is an analytical approach by which one can estimate the equivalent square of a rectangular field, and it may be used for a shielded field or an off-axis point. Moreover, one can calculate the equivalent field of a rectangular field from the concept of scatter reduction with the inverse square law to a good approximation. This method may be useful in computing Percentage Depth Dose and Tissue-Phantom Ratio, which are extensively used in treatment planning.
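The paper's own formula is not reproduced in the abstract. As a point of reference, the widely used area-to-perimeter baseline that such formulas are compared against is a one-liner:

```python
def equivalent_square(a, b):
    """Classic area-to-perimeter (4A/P) rule for an a x b rectangular field:
    side = 2ab / (a + b). A standard baseline, not the formula derived in
    the paper, which the abstract does not reproduce."""
    return 2.0 * a * b / (a + b)

print(equivalent_square(10, 20))   # a 10 x 20 cm field ~ a 13.33 cm square
```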
Music Preferences and Their Relationship to Behaviors, Beliefs, and Attitudes toward Aggression
ERIC Educational Resources Information Center
Devlin, James M.; Seidel, Steven
2009-01-01
The amount of violence in media content has increased significantly over the years and has been studied through a diverse range of empirical approaches. Within these empirical attempts, the area of music violence has only been approached through randomized experiments, pressing the need to explore the alternative…
An Empirical Study on the Job Satisfaction of College Graduates in China
ERIC Educational Resources Information Center
Changjun, Yue
2014-01-01
This study used nationwide, randomly sampled data from the Peking University Institute of Economics of Education 2011 survey of college graduates to conduct an empirical analysis of their job satisfaction. The results indicate that work-related factors have a significant effect on the job satisfaction of college graduates, while nonwork factors…
Empirical Histograms in Item Response Theory with Ordinal Data
ERIC Educational Resources Information Center
Woods, Carol M.
2007-01-01
The purpose of this research is to describe, test, and illustrate a new implementation of the empirical histogram (EH) method for ordinal items. The EH method involves the estimation of item response model parameters simultaneously with the approximation of the distribution of the random latent variable (theta) as a histogram. Software for the EH…
Self-Published Books: An Empirical "Snapshot"
ERIC Educational Resources Information Center
Bradley, Jana; Fulton, Bruce; Helm, Marlene
2012-01-01
The number of books published by authors using fee-based publication services, such as Lulu and AuthorHouse, is overtaking the number of books published by mainstream publishers, according to Bowker's 2009 annual data. Little empirical research exists on self-published books. This article presents the results of an investigation of a random sample…
ERIC Educational Resources Information Center
Simons, Lori; Giorgio, Tina; Houston, Hank; Jacobucci, Ray
2007-01-01
A secondary analysis of a quasi-experimental study was conducted to evaluate differences in students' perceptions of empirically supported treatments (ESTs) randomized to experimental (n = 10) and attention-control (n = 10) manual-based therapy interventions. The results indicated that attitudinal changes took place for both groups. The results…
From Discovery to Justification: Outline of an Ideal Research Program in Empirical Psychology
Witte, Erich H.; Zenker, Frank
2017-01-01
The gold standard for an empirical science is the replicability of its research results. But the estimated average replicability rate of key-effects that top-tier psychology journals report falls between 36 and 39% (objective vs. subjective rate; Open Science Collaboration, 2015). So the standard mode of applying null-hypothesis significance testing (NHST) fails to adequately separate stable from random effects. Therefore, NHST does not fully convince as a statistical inference strategy. We argue that the replicability crisis is “home-made” because more sophisticated strategies can deliver results the successful replication of which is sufficiently probable. Thus, we can overcome the replicability crisis by integrating empirical results into genuine research programs. Instead of continuing to narrowly evaluate only the stability of data against random fluctuations (discovery context), such programs evaluate rival hypotheses against stable data (justification context). PMID:29163256
Jarbol, Dorte Ejg; Bech, Mickael; Kragstrup, Jakob; Havelund, Troels; Schaffalitzky de Muckadell, Ove B
2006-01-01
An economic evaluation was performed of empirical antisecretory therapy versus testing for Helicobacter pylori in the management of dyspepsia patients presenting in primary care. A randomized trial in 106 general practices in the County of Funen, Denmark, was designed to include prospective collection of clinical outcome measures and resource utilization data. Dyspepsia patients (n = 722) presenting in general practice with more than 2 weeks of epigastric pain or discomfort were managed according to one of three initial management strategies: (i) empirical antisecretory therapy, (ii) testing for Helicobacter pylori, or (iii) empirical antisecretory therapy, followed by Helicobacter pylori testing if symptoms improved. Cost-effectiveness and incremental cost-effectiveness ratios of the strategies were determined. The mean proportion of days without dyspeptic symptoms during the 1-year follow-up was 0.59 in the group treated with empirical antisecretory therapy, 0.57 in the H. pylori test-and-eradicate group, and 0.53 in the combination group. After 1 year, 23 percent, 26 percent, and 22 percent, respectively, were symptom-free. Applying the proportion of days without dyspeptic symptoms, the cost-effectiveness ratios for empirical treatment, H. pylori testing, and the combination were 12,131 Danish kroner (DKK), 9,576 DKK, and 7,301 DKK, respectively. The incremental cost-effectiveness of going from the combination strategy to empirical antisecretory treatment or H. pylori testing alone was 54,783 DKK and 39,700 DKK, respectively, per additional proportion of days without dyspeptic symptoms. Empirical antisecretory therapy confers a small, insignificant benefit but costs more than strategies based on testing for H. pylori and is probably not a cost-effective strategy for the management of dyspepsia in primary care.
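The incremental ratios quoted above can be reconstructed from the reported figures if mean costs are taken as the cost-effectiveness ratio times effectiveness. That inference is ours, from the abstract alone, not a statement of the trial's accounting:

```python
# Strategies: (proportion of symptom-free days, DKK per unit effectiveness).
strategies = {
    "empirical":   (0.59, 12131),
    "h_pylori":    (0.57, 9576),
    "combination": (0.53, 7301),
}
# Inferred mean cost per patient = CE ratio x effectiveness.
cost = {k: e * ce for k, (e, ce) in strategies.items()}

def icer(a, b):
    """Incremental cost per additional proportion of symptom-free days,
    moving from strategy b to strategy a."""
    (ea, _), (eb, _) = strategies[a], strategies[b]
    return (cost[a] - cost[b]) / (ea - eb)

print(round(icer("empirical", "combination")))   # ~54,800 (abstract: 54,783 DKK)
print(round(icer("h_pylori", "combination")))    # ~39,700 (abstract: 39,700 DKK)
```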
Galactic and solar radiation exposure to aircrew during a solar cycle.
Lewis, B J; Bennett, L G I; Green, A R; McCall, M J; Ellaschuk, B; Butler, A; Pierre, M
2002-01-01
An ongoing investigation using a tissue-equivalent proportional counter (TEPC) has been carried out to measure the ambient dose equivalent rate of cosmic radiation exposure of aircrew during a solar cycle. A semi-empirical model has been derived from these data to allow interpolation of the dose rate for any global position. The model has been extended to altitudes of up to 32 km with further measurements made on board aircraft and several balloon flights. The effects of changing solar modulation during the solar cycle are characterised by correlating the dose-rate data to different solar potential models. Through integration of the dose-rate function over a great circle flight path or between given waypoints, a Predictive Code for Aircrew Radiation Exposure (PCAIRE) has been further developed for estimation of the route dose from galactic cosmic radiation exposure. This estimate is provided in units of ambient dose equivalent as well as effective dose, based on E/H*(10) scaling functions determined from transport code calculations with LUIN and FLUKA. This experimentally based treatment has also been compared with the CARI-6 and EPCARD codes, which are derived solely from theoretical transport calculations. Using TEPC measurements taken aboard the International Space Station, ground-based neutron monitoring, GOES satellite data and transport code analysis, an empirical model has been further proposed for estimation of aircrew exposure during solar particle events. This model has been compared with results obtained during recent solar flare events.
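The route-dose step, integrating a fitted dose-rate function along a great-circle path, can be sketched as follows. The dose-rate function here is a crude placeholder (PCAIRE's fitted semi-empirical model is not given in the abstract), so the output is purely illustrative:

```python
import numpy as np

def dose_rate(lat_deg, alt_km, solar_mod=1.0):
    """Placeholder ambient dose equivalent rate (uSv/h), increasing with
    altitude and geomagnetic latitude. Shape and constants are assumptions."""
    return 2.0 * np.exp(0.35 * (alt_km - 11.0)) \
               * (1.0 + 2.0 * np.sin(np.radians(lat_deg)) ** 2) / solar_mod

def great_circle_lats(lat1, lon1, lat2, lon2, n=200):
    """Latitudes at n evenly spaced points on the great circle between two
    (lat, lon) positions in degrees (slerp on the unit sphere)."""
    to_vec = lambda lat, lon: np.array([
        np.cos(np.radians(lat)) * np.cos(np.radians(lon)),
        np.cos(np.radians(lat)) * np.sin(np.radians(lon)),
        np.sin(np.radians(lat))])
    v1, v2 = to_vec(lat1, lon1), to_vec(lat2, lon2)
    ang = np.arccos(np.clip(v1 @ v2, -1.0, 1.0))
    t = np.linspace(0.0, 1.0, n)[:, None]
    v = (np.sin((1 - t) * ang) * v1 + np.sin(t * ang) * v2) / np.sin(ang)
    return np.degrees(np.arcsin(v[:, 2]))

lats = great_circle_lats(51.5, -0.5, 43.7, -79.6)         # roughly London -> Toronto
route_dose = np.mean(dose_rate(lats, alt_km=11.0)) * 7.5  # 7.5 h at cruise
print(f"illustrative route dose: {route_dose:.0f} uSv")
```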
Asymptotic Standard Errors for Item Response Theory True Score Equating of Polytomous Items
ERIC Educational Resources Information Center
Cher Wong, Cheow
2015-01-01
Building on previous works by Lord and Ogasawara for dichotomous items, this article proposes an approach to derive the asymptotic standard errors of item response theory true score equating involving polytomous items, for equivalent and nonequivalent groups of examinees. This analytical approach could be used in place of empirical methods like…
ERIC Educational Resources Information Center
Smith, Kenneth H.
2011-01-01
The Inviting School Survey-Revised (ISS-R) was adapted and translated into Traditional Chinese (ISS-RC), using a five-step process, based on international test administration guidelines, involving judgmental, logical, and empirical methods. Both versions were administered to a convenience sample of Chinese-English fluent Hong Kong school community…
A Comparison of Uniform DIF Effect Size Estimators under the MIMIC and Rasch Models
ERIC Educational Resources Information Center
Jin, Ying; Myers, Nicholas D.; Ahn, Soyeon; Penfield, Randall D.
2013-01-01
The Rasch model, a member of a larger group of models within item response theory, is widely used in empirical studies. Detection of uniform differential item functioning (DIF) within the Rasch model typically employs null hypothesis testing with a concomitant consideration of effect size (e.g., signed area [SA]). Parametric equivalence between…
Climate specific thermomechanical fatigue of flat plate photovoltaic module solder joints
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bosco, Nick; Silverman, Timothy J.; Kurtz, Sarah
FEM simulations of PbSn solder fatigue damage are used to evaluate seven cities that represent a variety of climatic zones. It is shown that the rate of solder fatigue damage is not ranked with the cities' climate designations. For an accurate ranking, the mean maximum daily temperature, the daily temperature change and a characteristic of clouding events are all required. A physics-based empirical equation is presented that accurately calculates solder fatigue damage according to these three factors. An FEM comparison of solder damage accumulated through service and thermal cycling demonstrates the number of cycles required for an equivalent exposure. For an equivalent 25-year exposure, the number of thermal cycles (-40 degrees C to 85 degrees C) required ranged from roughly 100 to 630 for the cities examined. It is demonstrated that increasing the maximum cycle temperature may significantly reduce the number of thermal cycles required for an equivalent exposure.
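The paper's calibrated damage equation is not reproduced in the abstract. A generic way to relate field exposure to chamber cycles in the same spirit is a Norris-Landzberg-style acceleration factor; the constants and field conditions below are literature-typical values for SnPb solder, not the authors' fit:

```python
import numpy as np

def norris_landzberg_af(dT_test, dT_field, f_test, f_field,
                        Tmax_test, Tmax_field,
                        n=1.9, m=1.0 / 3.0, Ea_k=1414.0):
    """Generic Norris-Landzberg acceleration factor for SnPb solder
    (literature-typical constants; temperatures in kelvin)."""
    return ((dT_test / dT_field) ** n
            * (f_field / f_test) ** m
            * np.exp(Ea_k * (1.0 / Tmax_field - 1.0 / Tmax_test)))

# Field: ~25 C daily swings peaking near 60 C, one cycle per day;
# chamber: -40 C to +85 C at 24 cycles per day (all values illustrative).
af = norris_landzberg_af(dT_test=125, dT_field=25, f_test=24, f_field=1,
                         Tmax_test=358.15, Tmax_field=333.15)
cycles_eq = 25 * 365 / af       # chamber cycles ~ one 25-year field exposure
print(f"AF = {af:.1f}; ~{cycles_eq:.0f} chamber cycles for 25 years")
```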
ERIC Educational Resources Information Center
Menold, Natalja; Tausch, Anja
2016-01-01
Effects of rating scale forms on cross-sectional reliability and measurement equivalence were investigated. A randomized experimental design was implemented, varying category labels and number of categories. The participants were 800 students at two German universities. In contrast to previous research, reliability assessment method was used,…
NASA Astrophysics Data System (ADS)
Grayver, Alexander V.; Kuvshinov, Alexey V.
2016-05-01
This paper presents a methodology to sample the equivalence domain (ED) in nonlinear partial differential equation (PDE)-constrained inverse problems. For this purpose, we first applied a state-of-the-art stochastic optimization algorithm called the Covariance Matrix Adaptation Evolution Strategy (CMAES) to identify low-misfit regions of the model space. These regions were then randomly sampled to create an ensemble of equivalent models and quantify uncertainty. CMAES is aimed at exploring model space globally and is robust on very ill-conditioned problems. We show that the number of iterations required to converge grows at a moderate rate with respect to the number of unknowns and that the algorithm is embarrassingly parallel. We formulated the problem using the generalized Gaussian distribution. This enabled us to seamlessly use arbitrary norms for the residual and regularization terms. We show that various regularization norms facilitate studying different classes of equivalent solutions. We further show how the performance of the standard Metropolis-Hastings Markov chain Monte Carlo algorithm can be substantially improved by using the information CMAES provides. This methodology was tested using individual and joint inversions of magnetotelluric, controlled-source electromagnetic (EM) and global EM induction data.
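The two-stage recipe, CMAES to find the low-misfit region and then random sampling filtered by a misfit tolerance, can be sketched with the pycma package on a toy problem whose two parameters are constrained only through their sum, giving an obvious equivalence domain. The acceptance rule is a simplification of the paper's procedure:

```python
import numpy as np
import cma   # pycma: pip install cma

rng = np.random.default_rng(3)
data = rng.normal(size=20)

def misfit(m):
    """Toy least-squares misfit: only m[0] + m[1] is constrained by the
    data, standing in for a PDE-constrained forward problem."""
    return float(np.sum((data - (m[0] + m[1])) ** 2))

# Stage 1: CMA-ES locates the low-misfit region of model space.
es = cma.CMAEvolutionStrategy([5.0, -5.0], 2.0, {"verbose": -9, "seed": 4})
es.optimize(misfit)
best, best_f = es.result.xbest, es.result.fbest

# Stage 2: sample around the optimum and keep models whose misfit stays
# within a tolerance -- a crude stand-in for the paper's ED sampling.
cand = best + rng.normal(scale=1.0, size=(20000, 2))
keep = np.array([misfit(m) for m in cand]) < best_f * 1.05
ensemble = cand[keep]
print(f"{keep.sum()} equivalent models kept")
print("spread of m0 alone:", np.ptp(ensemble[:, 0]).round(2),
      "| spread of m0+m1:", np.ptp(ensemble.sum(axis=1)).round(3))
# m0 varies widely while m0+m1 stays pinned: the ensemble traces out the
# equivalence domain rather than a single best-fit model.
```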
ERIC Educational Resources Information Center
Khandekar, Aradhana; Sharma, Anuradha
2005-01-01
Purpose: The purpose of this article is to examine the role of human resource capability (HRC) in organisational performance and sustainable competitive advantage (SCA) in Indian global organisations. Design/Methodology/Approach: To carry out the present study, an empirical research on a random sample of 300 line or human resource managers from…
Sharma, Namrata; Goel, Manik; Bansal, Shubha; Agarwal, Prakashchand; Titiyal, Jeewan S; Upadhyaya, Ashish D; Vajpayee, Rasik B
2013-06-01
To compare the equivalence of moxifloxacin 0.5% with a combination of fortified cefazolin sodium 5% and tobramycin sulfate 1.3% eye drops in the treatment of moderate bacterial corneal ulcers. Randomized, controlled, equivalence clinical trial. Microbiologically proven cases of bacterial corneal ulcers were enrolled in the study and allocated randomly to 1 of the 2 treatment groups. Group A was given combination therapy (fortified cefazolin sodium 5% and tobramycin sulfate 1.3%) and group B was given monotherapy (moxifloxacin 0.5%). The primary outcome variable for the study was the percentage of ulcers healed at 3 months. The secondary outcome variables were best-corrected visual acuity and resolution of infiltrates. Of a total of 224 patients with bacterial keratitis, 114 were randomized to group A and 110 to group B. The mean ± standard deviation ulcer size in groups A and B was 4.2 ± 2 and 4.41 ± 1.5 mm, respectively. The prevalence of coagulase-negative Staphylococcus (40.9% in group A and 48.2% in group B) was similar in both study groups. Complete resolution of keratitis and healing of ulcers occurred in 90 patients (81.8%) in group A and 88 patients (81.4%) in group B at 3 months. The observed difference in healing percentages at 3 months was less than the equivalence margin of 20%. Worsening of the ulcer was seen in 18.2% of cases in group A and in 18.5% of cases in group B. Mean time to epithelialization was similar, with no significant difference between the 2 groups (P = 0.065). No serious events attributable to therapy were reported. Corneal healing using 0.5% moxifloxacin monotherapy is equivalent to that of combination therapy using fortified cefazolin and tobramycin in the treatment of moderate bacterial corneal ulcers. The author(s) have no proprietary or commercial interest in any materials discussed in this article. Copyright © 2013 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.
Ophthalmic randomized controlled trials reports: the statement of the hypothesis.
Lee, Chun Fan; Cheng, Andy Chi On; Fong, Daniel Yee Tak
2014-01-01
To evaluate whether ophthalmic randomized controlled trials (RCTs) were designed properly, their hypotheses stated clearly, and their conclusions drawn correctly. A systematic review of 206 ophthalmic RCTs. The objective statement, methods, and results sections and the conclusions of RCTs published in 4 major general clinical ophthalmology journals from 2009 through 2011 were assessed. The clinical objective and specific hypothesis were the main outcome measures. The clinical objective of the trial was presented in 199 (96.6%) studies and the hypothesis was specified explicitly in 56 (27.2%) studies. One hundred ninety (92.2%) studies tested superiority. Among them, 17 (8.3%) studies comparing 2 or more active treatments concluded equal or similar effectiveness between the arms after obtaining nonsignificant results. There were 5 noninferiority studies and 4 equivalence studies. How the treatments were compared was not mentioned in 1 of the noninferiority studies. Two of the equivalence studies did not specify the equivalence margin and used tests for detecting difference rather than for confirming equivalence. The clinical objective was commonly stated, but the prospectively defined hypothesis tended to be understated in ophthalmic RCTs. Superiority was the most common type of comparison. Conclusions drawn in some studies with negative results were not consistent with the hypothesis, indicating that noninferiority or equivalence may have been a more appropriate design. Flaws were common in the noninferiority and equivalence studies. Future ophthalmic researchers should choose the type of comparison carefully, specify the hypothesis clearly, and draw conclusions that are consistent with the hypothesis. Copyright © 2014 Elsevier Inc. All rights reserved.
Key-Generation Algorithms for Linear Piece In Hand Matrix Method
NASA Astrophysics Data System (ADS)
Tadaki, Kohtaro; Tsujii, Shigeo
The linear Piece In Hand (PH, for short) matrix method with random variables was proposed in our former work. It is a general prescription which is applicable to any type of multivariate public-key cryptosystem (MPKC) for the purpose of enhancing its security. We showed, in an experimental manner, that the linear PH matrix method with random variables can certainly enhance the security of HFE against the Gröbner basis attack, where HFE is one of the major variants of multivariate public-key cryptosystems. In 1998 Patarin, Goubin, and Courtois introduced the plus method as a general prescription which aims to enhance the security of any given MPKC, just like the linear PH matrix method with random variables. In this paper we prove the equivalence between the plus method and the primitive linear PH matrix method, which was introduced in our previous work to explain the notion of the PH matrix method in general in an illustrative manner, and not for practical use in enhancing the security of any given MPKC. Based on this equivalence, we show that the linear PH matrix method with random variables has a substantial advantage over the plus method with respect to security enhancement. In the linear PH matrix method with random variables, three matrices, including the PH matrix, play a central role in the secret key and public key. In this paper, we clarify how to generate these matrices and present two probabilistic polynomial-time algorithms for doing so. In particular, the second one has a concise form and is obtained as a byproduct of the proof of the equivalence between the plus method and the primitive linear PH matrix method.
A simple calculation method for determination of equivalent square field
Shafiei, Seyed Ali; Hasanzadeh, Hadi; Shafiei, Seyed Ahmad
2012-01-01
Determination of the equivalent square fields for rectangular and shielded fields is of great importance in radiotherapy centers and treatment planning software. This is accomplished using standard tables and empirical formulas. The goal of this paper is to present a formula, based on analysis of the reduction in scatter due to the inverse square law, for obtaining the equivalent field. Tables based on experimental data are published by agencies such as the ICRU (International Commission on Radiation Units and Measurements), and there also exist mathematical formulas that yield the equivalent square field of a rectangular field, which are used extensively in computational techniques for dose determination. These approaches involve complicated and time-consuming formulas, which motivated the current study. In this work, considering the contribution of scattered radiation to the absorbed dose at the point of measurement, a numerical formula was obtained, from which a simple formula for calculating the equivalent square field was developed. Using polar coordinates and the inverse square law leads to a simple formula for calculation of the equivalent field. The presented method is an analytical approach with which one can estimate the equivalent square field of a rectangular field, and it may be used for a shielded field or an off-axis point. Moreover, the equivalent field of a rectangular field can be calculated to a good approximation from the reduction of scattered radiation with the inverse square law. This method may be useful in computing the Percentage Depth Dose and Tissue-Phantom Ratio, which are used extensively in treatment planning. PMID:22557801
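For orientation, the most common empirical baseline in this literature is the 4 × area/perimeter (Sterling) approximation for the equivalent square; a minimal sketch follows. This is the standard baseline rule, not the inverse-square-law formula the paper derives:

# Classical 4*A/P ("area/perimeter") rule for the equivalent square of a
# rectangular a x b field: the square with the same scatter contribution.
def equivalent_square_side(a_cm: float, b_cm: float) -> float:
    """Side of the square field dosimetrically equivalent to an a x b field."""
    return 4.0 * (a_cm * b_cm) / (2.0 * (a_cm + b_cm))

print(equivalent_square_side(10.0, 10.0))  # 10.0 -- a square maps to itself
print(equivalent_square_side(5.0, 20.0))   # 8.0  -- elongated field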
Hooten, W Michael; Qu, Wenchun; Townsend, Cynthia O; Judd, Jeffrey W
2012-04-01
Strength training and aerobic exercise have beneficial effects on pain in adults with fibromyalgia. However, the equivalence of strengthening and aerobic exercise has not been reported. The primary aim of this randomized equivalence trial involving patients with fibromyalgia admitted to an interdisciplinary pain treatment program was to test the hypothesis that strengthening (n=36) and aerobic (n=36) exercise have equivalent effects (95% confidence interval within an equivalence margin of ±8) on pain, as measured by the pain severity subscale of the Multidimensional Pain Inventory. Secondary aims included determining the effects of strengthening and aerobic exercise on peak VO2 uptake, leg strength, and pressure pain thresholds. In an intent-to-treat analysis, the mean (± standard deviation) pain severity scores for the strength and aerobic groups at study completion were 34.4 ± 11.5 and 37.6 ± 11.9, respectively. The group difference was -3.2 (95% confidence interval, -8.7 to 2.3), which was within the equivalence margin of ±8. Significant improvements in pain severity (P<.001), peak VO2 (P<.001), strength (P<.001), and pain thresholds (P<.001) were observed from baseline to week 3 in the intent-to-treat analysis; however, patients in the aerobic group (mean change 2.0 ± 2.6 mL/kg/min) experienced greater gains (P<.013) in peak VO2 compared with the strength group (mean change 0.4 ± 2.6 mL/kg/min). Knowledge of the equivalence and physiological effects of exercise has important clinical implications that could allow practitioners to target exercise recommendations on the basis of comorbid medical conditions or patient preference for a particular type of exercise. This study found that strength and aerobic exercise had equivalent effects on reducing pain severity among patients with fibromyalgia. Copyright © 2012 International Association for the Study of Pain. Published by Elsevier B.V. All rights reserved.
ERIC Educational Resources Information Center
Luque, David; Moris, Joaquin; Orgaz, Cristina; Cobos, Pedro L.; Matute, Helena
2011-01-01
Backward blocking (BB) and interference between cues (IbC) are cue competition effects produced by very similar manipulations. In a standard BB design, both effects might occur simultaneously, which implies a potential problem for studying BB. In the present study with humans, the magnitude of both effects was compared using a non-causal scenario…
Comparing Web, Group and Telehealth Formats of a Military Parenting Program
2017-06-01
…directed approaches. Comparative effectiveness will be tested by specifying a non-equivalence hypothesis for group-based and web-facilitated approaches, and for group-based and individualized facilitated approaches, relative to self-directed approaches. …documents for review and approval. 1a. Finalize human subjects protocol and consent documents for pilot group (N=5 families), and randomized controlled…
ERIC Educational Resources Information Center
Wong, Vivian C.; Steiner, Peter M.
2015-01-01
Across the disciplines of economics, political science, public policy, and now, education, the randomized controlled trial (RCT) is the preferred methodology for establishing causal inference about program impacts. But randomized experiments are not always feasible because of ethical, political, and/or practical considerations, so non-experimental…
A Semantic Differential Evaluation of Attitudinal Outcomes of Introductory Physical Science.
ERIC Educational Resources Information Center
Hecht, Alfred Roland
This study was designed to assess the attitudinal outcomes of Introductory Physical Science (IPS) curriculum materials used in schools. Random samples of 240 students receiving IPS instruction and 240 non-science students were assigned to separate Solomon four-group designs with non-equivalent control groups. Random samples of 60 traditional…
Chazot, Charles; Terrat, Jean Claude; Dumoulin, Alexandre; Ang, Kim-Seng; Gassia, Jean Paul; Chedid, Khalil; Maurice, Francois; Canaud, Bernard
2009-02-01
Darbepoetin alfa is an erythropoiesis-stimulating agent (ESA) used either intravenously or subcutaneously with no dose penalty; however, the direct switch from subcutaneous recombinant human erythropoietin (rHuEPO) to intravenous darbepoetin has barely been studied. To establish the equivalence of a direct switch from subcutaneous rHuEPO to intravenous darbepoetin versus an indirect switch from subcutaneous rHuEPO to intravenous darbepoetin after 2 months of subcutaneous darbepoetin in patients undergoing hemodialysis. In this open, randomized, 6-month, prospective study, patients with end-stage kidney disease who were on hemodialysis were randomized into 2 groups: direct switch from subcutaneous rHuEPO to intravenous darbepoetin (group 1) and indirect switch from subcutaneous rHuEPO to intravenous darbepoetin after 2 months of subcutaneous darbepoetin (group 2). A third, nonrandomized group (control), consisting of patients treated with intravenous rHuEPO who were switched to intravenous darbepoetin, was also studied to reflect possible variations of hemoglobin (Hb) levels due to change from one type of ESA to the other. The primary outcome was the proportion of patients with stable Hb levels at month 6. Secondary endpoints included Hb stability at month 3, dosage requirements for darbepoetin, and safety of the administration route. Among 154 randomized patients, the percentages with stable Hb levels were equivalent in groups 1 and 2, respectively, at month 3 (86.0% vs 91.3%) and month 6 (82.1% vs 81.6%; difference -0.5 [90% CI -12.8 to 11.8]). Mean Hb levels between baseline and month 6 remained stable in both groups, with no variation in mean darbepoetin dose. Mean ferritin levels remained above 100 microg/L in the 3 groups during the whole study, and darbepoetin was well tolerated. This study has shown equivalent efficacy on Hb stability without the need for dosage increase in patients switched directly from subcutaneous rHuEPO to intravenous darbepoetin.
NASA Astrophysics Data System (ADS)
Krawiecki, A.
A multi-agent spin model for price changes in the stock market, based on an Ising-like cellular automaton with interactions between traders randomly varying in time, is investigated by means of Monte Carlo simulations. The structure of interactions has the topology of a small-world network obtained from regular two-dimensional square lattices with various coordination numbers by randomly cutting and rewiring edges. Simulations of the model on regular lattices do not yield time series of logarithmic price returns with statistical properties comparable to the empirical ones. In contrast, for networks with a certain degree of randomness, the time series of logarithmic price returns exhibit, for a wide range of parameters, intermittent bursting typical of volatility clustering. The tails of the return distributions also obey a power scaling law with exponents comparable to those obtained from empirical data.
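An illustrative sketch of this class of models follows, with a Watts-Strogatz graph (via networkx) standing in for the randomly rewired square lattice. The heat-bath update rule, the zero-mean time-varying couplings, the parameters, and the return definition (change in magnetization) are simplified assumptions, not the paper's exact specification:

import numpy as np
import networkx as nx

rng = np.random.default_rng(1)
N, steps, beta = 400, 500, 0.8
g = nx.watts_strogatz_graph(N, k=4, p=0.1, seed=1)   # small-world topology
nbrs = [list(g.neighbors(i)) for i in range(N)]
spin = rng.choice(np.array([-1, 1]), size=N)         # +1 buy, -1 sell

returns = []
for _ in range(steps):
    J = rng.normal(0.0, 1.0, size=N)                 # couplings vary randomly in time
    m_old = spin.mean()
    for i in rng.integers(0, N, size=N):             # random sequential updates
        h = J[i] * spin[nbrs[i]].sum()               # local field on trader i
        # heat-bath update: P(spin = +1) = 1 / (1 + exp(-2*beta*h))
        spin[i] = 1 if rng.random() < 1.0 / (1.0 + np.exp(-2.0 * beta * h)) else -1
    returns.append(spin.mean() - m_old)              # return ~ change in magnetization

r = np.asarray(returns)
print("excess-kurtosis proxy:", np.mean(r**4) / np.mean(r**2)**2)  # > 3 suggests heavy tails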
An Empirical Study about China: Gender Equity in Science Education.
ERIC Educational Resources Information Center
Wang, Jianjun; Staver, John R.
A database representing a random sample of more than 10,000 grade 9 students in the SISS (Second IEA Science Study) Extended Study (SES), a key project supported by the China State Commission of Education in the late 1980s, was employed in this study to investigate gender equity in student science achievement in China. This empirical data analysis…
On the meaning of the weighted alternative free-response operating characteristic figure of merit.
Chakraborty, Dev P; Zhai, Xuetong
2016-05-01
The free-response receiver operating characteristic (FROC) method is being increasingly used to evaluate observer performance in search tasks. Data analysis requires definition of a figure of merit (FOM) quantifying performance. While a number of FOMs have been proposed, the recommended one, namely the weighted alternative FROC (wAFROC) FOM, is not well understood. The aim of this work is to clarify the meaning of this FOM by relating it to the empirical area under a proposed wAFROC curve. The wAFROC FOM is defined in terms of a quasi-Wilcoxon statistic that involves weights, coding clinical importance, assigned to each lesion. A new wAFROC curve is proposed, the y-axis of which incorporates the weights, giving more credit for marking clinically important lesions, while the x-axis is identical to that of the AFROC curve. An expression is derived relating the area under the empirical wAFROC curve to the wAFROC FOM. Examples are presented with small numbers of cases showing how AFROC and wAFROC curves are affected by correct and incorrect decisions and how the corresponding FOMs credit or penalize these decisions. The wAFROC, AFROC, and inferred ROC FOMs were applied to three clinical data sets involving multiple-reader FROC interpretations in different modalities. It is shown analytically that the area under the empirical wAFROC curve equals the wAFROC FOM. This theorem is the FROC analog of a well-known theorem developed in 1975 for ROC analysis, which gave meaning to a Wilcoxon-statistic-based ROC FOM. A similar equivalence applies between the area under the empirical AFROC curve and the AFROC FOM. The examples show explicitly that the wAFROC FOM gives equal importance to all diseased cases, regardless of the number of lesions, a desirable statistical property not shared by the AFROC FOM. Applications to the clinical data sets show that the wAFROC FOM yields results comparable to those using the AFROC FOM. The equivalence theorem gives meaning to the weighted AFROC FOM: it is identical to the empirical area under the weighted AFROC curve.
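The 1975 ROC result the abstract invokes can be checked numerically in a few lines: the area under the empirical ROC curve equals the Wilcoxon statistic P(X_diseased > X_normal) + 0.5 P(tie). The sketch below verifies that classical unweighted identity on synthetic ratings; the wAFROC machinery with lesion weights is not reproduced here:

import numpy as np

rng = np.random.default_rng(2)
normal = rng.normal(0.0, 1.0, 300)       # ratings on non-diseased cases
diseased = rng.normal(1.0, 1.0, 200)     # ratings on diseased cases

# Wilcoxon figure of merit: P(diseased rating > normal rating) + 0.5 * P(tie).
d, n = diseased[:, None], normal[None, :]
fom = np.mean((d > n) + 0.5 * (d == n))

# Area under the empirical ROC curve, by the trapezoid rule over all thresholds.
thresholds = np.sort(np.unique(np.concatenate([normal, diseased])))[::-1]
tpf = np.array([0.0] + [(diseased >= t).mean() for t in thresholds])
fpf = np.array([0.0] + [(normal >= t).mean() for t in thresholds])
auc = np.sum(np.diff(fpf) * (tpf[1:] + tpf[:-1]) / 2.0)

print(fom, auc)   # the two numbers agree to numerical precision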
Empirical likelihood-based tests for stochastic ordering
BARMI, HAMMOU EL; MCKEAGUE, IAN W.
2013-01-01
This paper develops an empirical likelihood approach to testing for the presence of stochastic ordering among univariate distributions based on independent random samples from each distribution. The proposed test statistic is formed by integrating a localized empirical likelihood statistic with respect to the empirical distribution of the pooled sample. The asymptotic null distribution of this test statistic is found to have a simple distribution-free representation in terms of standard Brownian bridge processes. The approach is used to compare the lengths of rule of Roman Emperors over various historical periods, including the “decline and fall” phase of the empire. In a simulation study, the power of the proposed test is found to improve substantially upon that of a competing test due to El Barmi and Mukerjee. PMID:23874142
Staggered chiral random matrix theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Osborn, James C.
2011-02-01
We present a random matrix theory for the staggered lattice QCD Dirac operator. The staggered random matrix theory is equivalent to the zero-momentum limit of the staggered chiral Lagrangian and includes all taste breaking terms at their leading order. This is an extension of previous work which only included some of the taste breaking terms. We will also present some results for the taste breaking contributions to the partition function and the Dirac eigenvalues.
Weitlauf, Julie C; Ruzek, Josef I; Westrup, Darrah A; Lee, Tina; Keller, Jennifer
2007-06-01
A growing body of empirical literature has systematically documented reactions to research participation among participants in trauma-focused research. To date, the available data have generally presented an optimistic picture regarding participants' ability to tolerate and even find benefit from their participation. However, this literature has been largely limited to cross-sectional designs. No extant literature has yet examined the perceptions of participants with psychiatric illness who are participating in randomized clinical trials (RCTs) designed to evaluate the efficacy or effectiveness of novel trauma treatments. The authors posit that negative experiences of, or poor reactions to, the research experience in the context of a trauma-focused RCT may elevate the risk of participation. Indeed, negative reactions may threaten to undermine participants' potential therapeutic gains and promote early dropout from the trial. Empirically assessing reactions to research participation at the pilot-study phase of a clinical trial can provide investigators and IRB members alike with empirical evidence of some likely risks of participation. In turn, this information can be used to help shape the design and recruitment methodology of the full-scale trial. Using data from the pilot study of the Women's Self-Defense Project as a case illustration, we provide readers with concrete suggestions for empirically assessing participants' perceptions of risk involved in their participation in behaviorally oriented clinical trials.
Role of Statistical Random-Effects Linear Models in Personalized Medicine.
Diaz, Francisco J; Yeh, Hung-Wen; de Leon, Jose
2012-03-01
Some empirical studies and recent developments in pharmacokinetic theory suggest that statistical random-effects linear models are valuable tools that allow describing simultaneously patient populations as a whole and patients as individuals. This remarkable characteristic indicates that these models may be useful in the development of personalized medicine, which aims at finding treatment regimes that are appropriate for particular patients, not just for the average patient. In fact, published developments show that random-effects linear models may provide a solid theoretical framework for drug dosage individualization in chronic diseases. In particular, individualized dosages computed with these models by means of an empirical Bayesian approach may produce better results than dosages computed with some methods routinely used in therapeutic drug monitoring. This is further supported by published empirical and theoretical findings showing that random-effects linear models may provide accurate representations of phase III and IV steady-state pharmacokinetic data and may be useful for dosage computations. These models have applications in the design of clinical algorithms for drug dosage individualization in chronic diseases; in the computation of dose correction factors; in the computation of the minimum number of blood samples from a patient needed to calculate an optimal individualized drug dosage in therapeutic drug monitoring; in measuring the clinical importance of clinical, demographic, environmental, or genetic covariates; in the study of drug-drug interactions in clinical settings; in the implementation of computational tools for web-site-based evidence farming; in the design of pharmacogenomic studies; and in the development of a pharmacological theory of dosage individualization.
Pseudo-random tool paths for CNC sub-aperture polishing and other applications.
Dunn, Christina R; Walker, David D
2008-11-10
In this paper we first contrast classical and CNC polishing techniques with regard to the repetitiveness of the machine motions. We then present a pseudo-random tool path for use with CNC sub-aperture polishing techniques and report polishing results from equivalent random and raster tool paths. The random tool path used, the unicursal random tool path, employs a random seed to generate a pattern which never crosses itself. Because of this property, this tool path is directly compatible with dwell-time maps for corrective polishing. The tool path can be used to polish any continuous area of any boundary shape, including surfaces with interior perforations.
NASA Astrophysics Data System (ADS)
Rys, Dawid; Jaskula, Piotr
2018-05-01
Oversized heavy-duty vehicles occur in traffic very rarely, but they reach extremely high weights, even up to 800 tonnes. The detrimental impact of these vehicles on the pavement structure is much higher than in the case of the commercial vehicles that comprise typical traffic, so it is necessary to assess the sensitivity of pavement structures to the passage of oversized vehicles. The paper presents results of sample calculations of the load equivalency factor of a heavy-duty oversized vehicle using a mechanistic-empirical approach. The effects of pavement thickness, type of distress (cracking or rutting), and pavement condition (new, or old with structural damage) were considered. The analysis revealed that a single pass of an 800 tonne oversized vehicle is equivalent to up to 377 passes of standard 100 kN axles. The load equivalency factor calculated for thin structures is almost 3 times lower than for thick structures; however, the damage caused by one pass of an oversized vehicle is higher in the case of a thin structure. The bearing capacity of a pavement structure may be qualified as sufficient for the passage of an oversized heavy-duty vehicle when the measured deflection, for example in an FWD test, does not exceed the maximum deflection derived from mechanistic-empirical analysis. The paper presents sample calculations of the maximum deflections that allow the passage of an oversized vehicle over different pavement structures to be considered safe. The paper thus provides road administrations with a practical tool that helps decide whether to issue a permit for a given oversized vehicle.
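As a rough point of comparison for the load-equivalency numbers above, the classical AASHTO "fourth power" approximation is often used; a minimal sketch follows. The paper's own mechanistic-empirical analysis, which accounts for structure thickness and distress type, is not reproduced by this simple rule, and the 40-axle, 200 kN configuration below is purely illustrative:

# Classical "fourth power" load equivalency approximation: one pass of an
# axle of load P counts as (P / P_standard)^4 standard-axle passes.
def lef_fourth_power(axle_load_kn: float, standard_axle_kn: float = 100.0) -> float:
    """Approximate number of standard-axle passes equivalent to one axle pass."""
    return (axle_load_kn / standard_axle_kn) ** 4

# A hypothetical oversized vehicle modeled as 40 axles of 200 kN each:
axles = [200.0] * 40
print(sum(lef_fourth_power(p) for p in axles))  # 640 standard 100 kN axles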
Relationship of the actual thick intraocular lens optic to the thin lens equivalent.
Holladay, J T; Maverick, K J
1998-09-01
To theoretically derive and empirically validate the relationship between the actual thick intraocular lens and the thin lens equivalent. Included in the study were 12 consecutive adult patients ranging in age from 54 to 84 years (mean +/- SD, 73.5 +/- 9.4 years) with best-corrected visual acuity better than 20/40 in each eye. Each patient had bilateral intraocular lens implants of the same style, placed in the same location (bag or sulcus) by the same surgeon. Preoperatively, axial length, keratometry, refraction, and vertex distance were measured. Postoperatively, keratometry, refraction, vertex distance, and the distance from the vertex of the cornea to the anterior vertex of the intraocular lens (AV(PC1)) were measured. The distance AV(PC1) was also back-calculated from the vergence formula used for intraocular lens power calculations. The average (+/-SD) absolute difference between the two methods was 0.23 +/- 0.18 mm, which translates to approximately 0.46 diopters. There was no statistical difference between the measured and calculated values; the Pearson product-moment correlation coefficient from linear regression was 0.85 (r2 = .72, F = 56). The average intereye difference was -0.030 mm (SD, 0.141 mm; SEM, 0.043 mm) using the measurement method and +0.124 mm (SD, 0.412 mm; SEM, 0.124 mm) using the calculation method. The relationship between the actual thick intraocular lens and the thin lens equivalent has been determined theoretically and demonstrated empirically. This validation provides the manufacturer and surgeon additional confidence in, and utility of, the lens constants used in intraocular lens power calculations.
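For context, a minimal sketch of the thin-lens vergence relation such power calculations rest on (emmetropia case), assuming refractive index n = 1.336 for aqueous and vitreous. The numeric inputs are illustrative, and the paper's thick-lens-to-thin-lens mapping itself is not reproduced:

# Thin-lens vergence relation for IOL power (emmetropia):
#   P = 1336/(AL - ELP) - 1336/(1336/K - ELP), distances in mm, powers in D.
def iol_power_emmetropia(axial_length_mm: float, k_diopters: float,
                         elp_mm: float, n: float = 1.336) -> float:
    """Thin-lens IOL power for a lens placed elp_mm behind the corneal vertex."""
    v = 1000.0 * n  # vergence terms become 1336/distance when working in mm
    return v / (axial_length_mm - elp_mm) - v / (v / k_diopters - elp_mm)

print(iol_power_emmetropia(23.5, 43.5, 5.25))  # ~ +20.7 D for these typical values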
Accounting for Sampling Error in Genetic Eigenvalues Using Random Matrix Theory.
Sztepanacz, Jacqueline L; Blows, Mark W
2017-07-01
The distribution of genetic variance in multivariate phenotypes is characterized by the empirical spectral distribution of the eigenvalues of the genetic covariance matrix. Empirical estimates of genetic eigenvalues from random effects linear models are known to be overdispersed by sampling error, where large eigenvalues are biased upward and small eigenvalues are biased downward. The overdispersion of the leading eigenvalues of sample covariance matrices has been demonstrated to conform to the Tracy-Widom (TW) distribution. Here we show that genetic eigenvalues estimated using restricted maximum likelihood (REML) in a multivariate random effects model with an unconstrained genetic covariance structure will also conform to the TW distribution after empirical scaling and centering. However, where estimation procedures using either REML or MCMC impose boundary constraints, the resulting genetic eigenvalues tend not to be TW distributed. We show how using confidence intervals from sampling distributions of genetic eigenvalues without reference to the TW distribution is insufficient protection against mistaking sampling error for genetic variance, particularly when eigenvalues are small. By scaling such sampling distributions to the appropriate TW distribution, the critical value of the TW statistic can be used to determine whether the magnitude of a genetic eigenvalue exceeds the sampling error for each eigenvalue in the spectral distribution of a given genetic covariance matrix. Copyright © 2017 by the Genetics Society of America.
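The eigenvalue overdispersion at issue is easy to see numerically: under a true identity covariance, sample eigenvalues spread well away from 1. The sketch below also applies Johnstone's (2001) centering and scaling for the largest eigenvalue of a white Wishart matrix, assumed here as the relevant normalization; the REML/MCMC machinery of the paper is not reproduced:

import numpy as np

rng = np.random.default_rng(3)
n, p = 200, 20                          # individuals, traits
X = rng.normal(size=(n, p))             # true covariance = identity
eig = np.sort(np.linalg.eigvalsh(np.cov(X, rowvar=False)))[::-1]
print("sample eigenvalues:", eig.max(), "...", eig.min())   # spread around 1

# Johnstone (2001) centering/scaling for the largest white-Wishart eigenvalue.
mu = (np.sqrt(n - 1) + np.sqrt(p)) ** 2
sigma = (np.sqrt(n - 1) + np.sqrt(p)) * (1 / np.sqrt(n - 1) + 1 / np.sqrt(p)) ** (1 / 3)
tw = (eig.max() * (n - 1) - mu) / sigma     # approximately Tracy-Widom distributed
print("TW-scaled largest eigenvalue:", tw)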
Obfuscation Framework Based on Functionally Equivalent Combinatorial Logic Families
2008-03-01
AFIT/GCS/ENG/08-12. …United States policy strongly encourages the sale and transfer of some military equipment to foreign governments and makes it easier for…
Sample Size Estimation in Cluster Randomized Educational Trials: An Empirical Bayes Approach
ERIC Educational Resources Information Center
Rotondi, Michael A.; Donner, Allan
2009-01-01
The educational field has now accumulated an extensive literature reporting on values of the intraclass correlation coefficient, a parameter essential to determining the required size of a planned cluster randomized trial. We propose here a simple simulation-based approach including all relevant information that can facilitate this task. An…
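The intraclass correlation enters this planning task through the standard design effect DE = 1 + (m - 1)ρ for clusters of size m; a minimal sketch follows. The article's empirical Bayes summary of plausible ICC values is not reproduced, and the numbers below are illustrative:

import math

# Inflate an individually randomized sample size by the design effect and
# convert to a number of clusters per arm.
def clusters_needed(n_individual: int, m: int, rho: float) -> int:
    """Clusters of size m per arm for a cluster randomized trial."""
    design_effect = 1 + (m - 1) * rho
    return math.ceil(n_individual * design_effect / m)

# E.g., 250 students per arm under individual randomization, classrooms of
# m = 25, ICC rho = 0.02:
print(clusters_needed(250, 25, 0.02))  # 15 classrooms per arm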
Increasing Parent Involvement in Youth HIV Prevention: A Randomized Caribbean Study
ERIC Educational Resources Information Center
Baptiste, Donna R.; Kapungu, Chisina; Miller, Steve; Crown, Laurel; Henry, David; Da Costa Martinez, Dona; Jo-Bennett, Karen
2009-01-01
This article presents preliminary findings of a randomized HIV prevention study in Trinidad and Tobago in the Caribbean. The study centers on a family HIV workshop aimed at strengthening parenting skills that are empirically linked to reducing adolescent HIV exposure and other sexual risks. These skills include parental monitoring; educating youth…
ERIC Educational Resources Information Center
Bloom, Howard S.; Richburg-Hayes, Lashawn; Black, Alison Rebeck
2007-01-01
This article examines how controlling statistically for baseline covariates, especially pretests, improves the precision of studies that randomize schools to measure the impacts of educational interventions on student achievement. Empirical findings from five urban school districts indicate that (1) pretests can reduce the number of randomized…
ERIC Educational Resources Information Center
Goh, David S.
1979-01-01
The advantages of using psychometric theory to design short forms of intelligence tests are demonstrated by comparing such usage to a systematic random procedure that has previously been used. The Wechsler Intelligence Scale for Children Revised (WISC-R) Short Form is presented as an example. (JKS)
Cognitive Behavioral Principles within Group Mentoring: A Randomized Pilot Study
ERIC Educational Resources Information Center
Jent, Jason F.; Niec, Larissa N.
2009-01-01
This study evaluated the effectiveness of a group mentoring program that included components of empirically supported mentoring and cognitive behavioral techniques for children served at a community mental health center. Eighty-six 8- to 12-year-old children were randomly assigned to either group mentoring or a wait-list control group. Group…
A randomized trial of teaching clinical skills using virtual and live standardized patients.
Triola, M; Feldman, H; Kalet, A L; Zabar, S; Kachur, E K; Gillespie, C; Anderson, M; Griesser, C; Lipkin, M
2006-05-01
We developed computer-based virtual patient (VP) cases to complement an interactive continuing medical education (CME) course that emphasizes skills practice using standardized patients (SPs). Virtual patient simulations have the significant advantages of requiring fewer personnel and resources, being accessible at any time, and being highly standardized. Little is known about the educational effectiveness of these new resources. We conducted a randomized trial to assess the educational effectiveness of VPs and SPs in teaching clinical skills, and to determine the effectiveness of VP cases compared with live SP cases in improving clinical skills and knowledge. Randomized trial. Fifty-five health care providers (registered nurses 45%, physicians 15%, other provider types 40%) who attended a CME program. Participants were randomized to receive either 4 live cases (n=32) or 2 live and 2 virtual cases (n=23). Other aspects of the course were identical for both groups. Participants in both groups were equivalent with respect to pre-post workshop improvement in comfort level (P=.66) and preparedness to respond (P=.61), to screen (P=.79), and to care (P=.055) for patients using the skills taught. There was no difference in subjective ratings of the effectiveness of the VPs and SPs by participants who experienced both (P=.79). Improvement in diagnostic ability was equivalent between the groups that experienced cases either live or virtually. Improvements in performance and diagnostic ability were equivalent between the groups, and participants rated VP and SP cases equally. Including well-designed VPs has a potentially powerful and efficient place in clinical skills training for practicing health care workers.
Equivalence of Szegedy's and coined quantum walks
NASA Astrophysics Data System (ADS)
Wong, Thomas G.
2017-09-01
Szegedy's quantum walk is a quantization of a classical random walk or Markov chain, where the walk occurs on the edges of the bipartite double cover of the original graph. To search, one can simply quantize a Markov chain with absorbing vertices. Recently, Santos proposed two alternative search algorithms that instead utilize the sign-flip oracle in Grover's algorithm rather than absorbing vertices. In this paper, we show that these two algorithms are exactly equivalent to two algorithms involving coined quantum walks, which are walks on the vertices of the original graph with an internal degree of freedom. The first scheme is equivalent to a coined quantum walk with one walk step per query of Grover's oracle, and the second is equivalent to a coined quantum walk with two walk steps per query of Grover's oracle. These equivalences lie outside the previously known equivalence of Szegedy's quantum walk with absorbing vertices and the coined quantum walk with the negative identity operator as the coin for marked vertices, whose precise relationships we also investigate.
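To make the "walk on vertices with an internal coin" construction concrete, here is a minimal simulation of a coined quantum walk on an N-cycle with a Hadamard coin. It is illustrative only and does not reproduce Szegedy's bipartite edge walk or the Grover-oracle search variants discussed in the paper:

import numpy as np

N, steps = 128, 50
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard coin on the direction qubit
psi = np.zeros((N, 2), dtype=complex)
psi[0] = [1 / np.sqrt(2), 1j / np.sqrt(2)]     # start at vertex 0, symmetric coin state

for _ in range(steps):
    psi = psi @ H.T                            # coin flip at every vertex
    psi[:, 0] = np.roll(psi[:, 0], +1)         # coin 0 moves clockwise
    psi[:, 1] = np.roll(psi[:, 1], -1)         # coin 1 moves counterclockwise

prob = (np.abs(psi) ** 2).sum(axis=1)
pos = (np.arange(N) + N // 2) % N - N // 2     # recentre positions so the start is 0
print("total probability:", prob.sum())        # unitary evolution keeps this at 1
print("position spread:", np.sqrt((prob * pos ** 2).sum()))  # grows ~linearly in steps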
Ordinal optimization and its application to complex deterministic problems
NASA Astrophysics Data System (ADS)
Yang, Mike Shang-Yu
1998-10-01
We present in this thesis a new perspective for approaching a general class of optimization problems characterized by large deterministic complexities. Many problems of real-world concern today lack analyzable structures and almost always involve a high level of difficulty and complexity in the evaluation process. Advances in computer technology allow us to build computer models to simulate the evaluation process through numerical means, but the burden of high complexity remains, taxing the simulation with an exorbitant computing cost for each evaluation. Such a resource requirement makes local fine-tuning of a known design difficult under most circumstances, let alone global optimization. The Kolmogorov equivalence of complexity and randomness in computation theory is introduced to resolve this difficulty by converting the complex deterministic model to a stochastic pseudo-model composed of a simple deterministic component and a white-noise-like stochastic term. The resulting randomness is then dealt with by a noise-robust approach called Ordinal Optimization, which utilizes Goal Softening and Ordinal Comparison to achieve an efficient and quantifiable selection of designs in the initial search process. The approach is substantiated by a case study of the turbine blade manufacturing process: the optimization of the manufacturing process of the integrally bladed rotor in the turbine engines of U.S. Air Force fighter jets. The intertwining interactions among the material, thermomechanical, and geometrical changes make the current FEM approach prohibitively uneconomical in the optimization process. The generalized OO approach to complex deterministic problems is applied here with great success. Empirical results indicate a saving of nearly 95% in computing cost.
Leuchter, Russia Ha-Vinh; Gui, Laura; Poncet, Antoine; Hagmann, Cornelia; Lodygensky, Gregory Anton; Martin, Ernst; Koller, Brigitte; Darqué, Alexandra; Bucher, Hans Ulrich; Hüppi, Petra Susan
2014-08-27
Premature infants are at risk of developing encephalopathy of prematurity, which is associated with long-term neurodevelopmental delay. Erythropoietin was shown to be neuroprotective in experimental and retrospective clinical studies. To determine if there is an association between early high-dose recombinant human erythropoietin treatment in preterm infants and biomarkers of encephalopathy of prematurity on magnetic resonance imaging (MRI) at term-equivalent age. A total of 495 infants were included in a randomized, double-blind, placebo-controlled study conducted in Switzerland between 2005 and 2012. In a nonrandomized subset of 165 infants (n=77 erythropoietin; n=88 placebo), brain abnormalities were evaluated on MRI acquired at term-equivalent age. Participants were randomly assigned to receive recombinant human erythropoietin (3000 IU/kg; n=256) or placebo (n=239) intravenously before 3 hours, at 12 to 18 hours, and at 36 to 42 hours after birth. The primary outcome of the trial, neurodevelopment at 24 months, has not yet been assessed. The secondary outcome, white matter disease of the preterm infant, was semiquantitatively assessed from MRI at term-equivalent age based on an established scoring method. The resulting white matter injury and gray matter injury scores were categorized as normal or abnormal according to thresholds established in the literature by correlation with neurodevelopmental outcome. At term-equivalent age, compared with untreated controls, fewer infants treated with recombinant human erythropoietin had abnormal scores for white matter injury (22% [17/77] vs 36% [32/88]; adjusted risk ratio [RR], 0.58; 95% CI, 0.35-0.96), white matter signal intensity (3% [2/77] vs 11% [10/88]; adjusted RR, 0.20; 95% CI, 0.05-0.90), periventricular white matter loss (18% [14/77] vs 33% [29/88]; adjusted RR, 0.53; 95% CI, 0.30-0.92), and gray matter injury (7% [5/77] vs 19% [17/88]; adjusted RR, 0.34; 95% CI, 0.13-0.89). In an analysis of secondary outcomes of a randomized clinical trial of preterm infants, high-dose erythropoietin treatment within 42 hours after birth was associated with a reduced risk of brain injury on MRI. These findings require assessment in a randomized trial designed primarily to assess this outcome as well as investigation of the association with neurodevelopmental outcomes. clinicaltrials.gov Identifier: NCT00413946.
Improved Equivalent Linearization Implementations Using Nonlinear Stiffness Evaluation
NASA Technical Reports Server (NTRS)
Rizzi, Stephen A.; Muravyov, Alexander A.
2001-01-01
This report documents two new implementations of equivalent linearization for solving geometrically nonlinear random vibration problems of complicated structures. The implementations are given the acronym ELSTEP, for "Equivalent Linearization using a STiffness Evaluation Procedure." Both implementations of ELSTEP are fundamentally the same in that they use a novel nonlinear stiffness evaluation procedure to numerically compute otherwise inaccessible nonlinear stiffness terms from commercial finite element programs. The commercial finite element program MSC/NASTRAN (NASTRAN) was chosen as the core of ELSTEP. The FORTRAN implementation calculates the nonlinear stiffness terms and performs the equivalent linearization analysis outside of NASTRAN. The Direct Matrix Abstraction Program (DMAP) implementation performs these operations within NASTRAN. Both provide nearly identical results. Within each implementation, two error minimization approaches for the equivalent linearization procedure are available - force and strain energy error minimization. Sample results for a simply supported rectangular plate are included to illustrate the analysis procedure.
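For intuition, the stochastic linearization idea that equivalent linearization implements can be sketched on a single Duffing-type degree of freedom under white noise: minimize the mean-square force error to get k_eq = E[x f(x)]/E[x²] = k(1 + 3εσ²) under a Gaussian closure, then iterate with the linear response variance. The parameters below are illustrative, and the report's finite-element ELSTEP procedure is not shown:

import math

# SDOF m*x'' + c*x' + k*(x + eps*x^3) = w(t), with two-sided white-noise PSD S0.
# Linearized response variance: sigma^2 = pi*S0 / (c * k_eq); force-error
# minimization under a Gaussian closure gives k_eq = k*(1 + 3*eps*sigma^2).
k, c, eps, S0 = 1.0, 0.1, 0.5, 0.01
sigma2 = math.pi * S0 / (c * k)                 # start from the purely linear system
for _ in range(50):
    k_eq = k * (1.0 + 3.0 * eps * sigma2)       # equivalent linear stiffness
    sigma2_new = math.pi * S0 / (c * k_eq)      # response of the linearized system
    if abs(sigma2_new - sigma2) < 1e-12:
        break
    sigma2 = sigma2_new

print("equivalent stiffness:", k_eq, "response variance:", sigma2)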
ERIC Educational Resources Information Center
Vermillion, James E.
The presence of artifactual bias in analysis of covariance (ANCOVA) and in matching nonequivalent control group (NECG) designs was empirically investigated. The data set was obtained from a study of the effects of a television program on children from three day care centers in Mexico in which the subjects had been randomly selected within centers.…
Carey, M E; Mandalia, P K; Daly, H; Gray, L J; Hale, R; Martin Stacey, L; Taub, N; Skinner, T C; Stone, M; Heller, S; Khunti, K; Davies, M J
2014-11-01
To develop and test a format of delivery of diabetes self-management education by paired professional and lay educators. We conducted an equivalence trial with non-randomized participant allocation to a Diabetes Education and Self Management for Ongoing and Newly Diagnosed Type 2 diabetes (DESMOND) course, delivered in the standard format by two trained healthcare professional educators (the control group) or by one trained lay educator and one professional educator (the intervention group). A total of 260 people with Type 2 diabetes diagnosed within the previous 12 months were referred for self-management education as part of routine care and attended either a control or an intervention format DESMOND course. The primary outcome measure was change in illness coherence score (derived from the Diabetes Illness Perception Questionnaire-Revised) between baseline and 4 months after attending education sessions. Secondary outcome measures included change in HbA1c level. The trial was conducted in four primary care organizations across England and Scotland. The 95% CI for the between-group difference in positive change in coherence scores was within the pre-set limits of equivalence (difference = 0.22, 95% CI -1.07 to 1.52). Equivalent changes in secondary outcome measures were also observed, including equivalent reductions in HbA1c levels. Diabetes education delivered jointly by a trained lay person and a healthcare professional educator with the same educator role can provide equivalent patient benefits. This could provide a method that increases capacity, maintains quality and is cost-effective, while increasing access to self-management education. © 2014 The Authors. Diabetic Medicine © 2014 Diabetes UK.
Requirements for Initiation and Sustained Propagation of Fuel-Air Explosives
1983-06-01
…single-head spin gives the limiting composition for stable propagation of a detonation wave. …the effects of blockage ratio… Equivalent chemical times derived from it provide a much more useful parameter as input to the required theories and empirical… …one-dimensional steady-state equilibrium theory (hence static). Experience shows that the dynamic parameters reflect more intimately the detonation properties…
ERIC Educational Resources Information Center
Shapiro, Amy
2009-01-01
Student evaluations of a large General Psychology course indicate that students enjoy the class a great deal, yet attendance is low. An experiment was conducted to evaluate a personal response system as a solution. Attendance rose by 30% as compared to extra credit as an inducement, but was equivalent to offering pop quizzes. Performance on test…
ERIC Educational Resources Information Center
Rice, Mabel L.; Redmond, Sean M.; Hoffman, Lesa
2006-01-01
Purpose: Although mean length of utterance (MLU) is a useful benchmark in studies of children with specific language impairment (SLI), some empirical and interpretive issues are unresolved. The authors report on 2 studies examining, respectively, the concurrent validity and temporal stability of MLU equivalency between children with SLI and…
SU-F-T-408: On the Determination of Equivalent Squares for Rectangular Small MV Photon Fields
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sauer, OA; Wegener, S; Exner, F
Purpose: It is common practice to tabulate dosimetric data like output factors, scatter factors and detector signal correction factors for a set of square fields. In order to get the data for an arbitrary field, it is mapped to an equivalent square having the same scatter as the field of interest. For rectangular fields, both tabulated data and empirical formulas exist. We tested the applicability of such rules for very small fields. Methods: Using the Monte-Carlo method (EGSnrc-doseRZ), the dose to a point at 10cm depth in water was calculated for cylindrical impinging fluence distributions. Radii ranged from 0.5mm to 11.5mm with 1mm thickness of the rings. Different photon energies were investigated. With these data a matrix was constructed assigning to each matrix element the amount of dose contributed to the field center. By summing up the elements belonging to a certain field, the dose for an arbitrary point at 10cm depth could be determined. This was done for rectangles up to 21mm side length. Comparing the dose to square field results, equivalent squares could be assigned. The results were compared to those using the geometrical mean and the 4 × area/perimeter rule. Results: For side length differences less than 2mm, the difference between all methods was in general less than 0.2mm. For more elongated fields, relevant differences of more than 1mm, and up to 3mm for the fields investigated, occurred. The mean square side length calculated from both empirical formulas fitted much better, deviating hardly more than 1mm, and only for the very elongated fields. Conclusion: For small rectangular photon fields deviating only moderately from square, both investigated empirical methods are sufficiently accurate. As the deviations often differ in sign, using the mean improves the accuracy and the usable elongation range. For ratios larger than 2, Monte-Carlo generated data are recommended. SW is funded by Deutsche Forschungsgemeinschaft (SA481/10-1).
Meuldijk, D; Carlier, I V E; van Vliet, I M; van Veen, T; Wolterbeek, R; van Hemert, A M; Zitman, F G
2016-03-01
Depressive and anxiety disorders contribute to a high disease burden. This paper investigates whether concise formats of cognitive behavioral therapy and/or pharmacotherapy are equivalent to longer standard care in the treatment of depressive and/or anxiety disorders in secondary mental health care. A pragmatic randomized controlled equivalence trial was conducted at five Dutch outpatient Mental Healthcare Centers (MHCs) of the Regional Mental Health Provider (RMHP) 'Rivierduinen'. Patients (aged 18-65 years) with a mild to moderate anxiety and/or depressive disorder were randomly allocated to concise or standard care. Data were collected at baseline and at 3, 6 and 12 months by Routine Outcome Monitoring (ROM). Primary outcomes were the Brief Symptom Inventory (BSI) and the Web Screening Questionnaire (WSQ). We used Generalized Estimating Equations (GEE) to assess outcomes. Between March 2010 and December 2012, 182 patients were enrolled (n=89 standard care; n=93 concise care). Both intention-to-treat and per-protocol analyses demonstrated equivalence of concise care and standard care at all time points. Severity of illness was reduced, and both treatments improved patients' general health status and subdomains of quality of life. Moreover, in concise care, the beneficial effects started earlier. Concise care has the potential to be a feasible and promising alternative to longer standard secondary mental health care in the treatment of outpatients with a mild to moderate depressive and/or anxiety disorder. For future research, we recommend adhering more strictly to the concise treatment protocols to further explore the beneficial effects of concise treatment. The study is registered in the Netherlands Trial Register, number NTR2590. Clinicaltrials.gov identifier: NCT01643642. Copyright © 2015 Elsevier Inc. All rights reserved.
Paint-only is equivalent to scrub-and-paint in preoperative preparation of abdominal surgery sites.
Ellenhorn, Joshua D I; Smith, David D; Schwarz, Roderich E; Kawachi, Mark H; Wilson, Timothy G; McGonigle, Kathryn F; Wagman, Lawrence D; Paz, I Benjamin
2005-11-01
Antiseptic preoperative skin site preparation is used to prepare the operative site before making a surgical incision. The goal of this preparation is a reduction in postoperative wound infection. The most straightforward technique needed to achieve this goal remains controversial. A prospective randomized trial was designed to test the equivalence of two commonly used techniques of surgical skin site preparation. Two hundred thirty-four patients undergoing nonlaparoscopic abdominal operations were consented for the trial. Exclusion criteria included presence of active infection at the time of operation, neutropenia, history of skin reaction to iodine, or anticipated insertion of prosthetic material at the time of operation. Patients were randomized to receive either a vigorous 5-minute scrub with povidone-iodine soap, followed by absorption with a sterile towel and a paint with aqueous povidone-iodine, or surgical site preparation with a povidone-iodine paint only. The primary end point of the study was the wound infection rate at 30 days, with infection defined as the presence of clinical signs of infection requiring therapeutic intervention. Patients randomized to the scrub-and-paint arm (n = 115) and the paint-only arm (n = 119) were matched at baseline with respect to age, comorbidity, wound classification, mean operative time, placement of drains, prophylactic antibiotic use, and surgical procedure (all p > 0.09). Wound infection occurred in 12 (10%) scrub-and-paint patients and 12 (10%) paint-only patients. Based on our predefined equivalence parameters, we conclude equivalence of infection rates between the two preparations. Preoperative preparation of the abdomen with a scrub with povidone-iodine soap followed by a paint with aqueous povidone-iodine can be abandoned in favor of a paint with aqueous povidone-iodine alone. This change will result in reductions in operative times and costs.
Low-order black-box models for control system design in large power systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kamwa, I.; Trudel, G.; Gerin-Lajoie, L.
1996-02-01
The paper studies two multi-input multi-output (MIMO) procedures for the identification of low-order state-space models of power systems, by probing the network in open loop with low-energy pulses or random signals. Although such data may result from actual measurements, the development assumes simulated responses from a transient stability program, hence benefiting from the existing large base of stability models. While pulse data is processed using the eigensystem realization algorithm, the analysis of random responses is done by means of subspace identification methods. On a prototype Hydro-Quebec power system, including SVCs, DC lines, series compensation, and more than 1,100 buses, it is verified that the two approaches are equivalent only when strict requirements are imposed on the pulse length and magnitude. The 10th-order equivalent models derived by random-signal probing allow for effective tuning of decentralized power system stabilizers (PSSs) able to damp both local and very slow inter-area modes.
Mostafanezhad, Isar; Boric-Lubecke, Olga; Lubecke, Victor; Mandic, Danilo P
2009-01-01
Empirical Mode Decomposition has been shown to be effective in the analysis of non-stationary and non-linear signals. As an application to wireless life signs monitoring, in this paper we use this method to condition the signals obtained from the Doppler device. Random physical movements (fidgeting) of the human subject during a measurement can fall at the same frequency as the heart or respiration rate and interfere with the measurement. It will be shown how Empirical Mode Decomposition can break the radar signal down into its components and help separate and remove the fidgeting interference.
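A minimal sketch of this decomposition step, assuming the third-party PyEMD package (distributed on PyPI as EMD-signal); the synthetic respiration-plus-fidget signal is an illustrative stand-in for the Doppler baseband output, and which IMF captures which component must be judged per measurement:

import numpy as np
from PyEMD import EMD   # third-party: pip install EMD-signal

fs = 100
t = np.arange(0.0, 30.0, 1.0 / fs)                 # 30 s at 100 Hz
respiration = np.sin(2 * np.pi * 0.3 * t)          # ~0.3 Hz breathing component
fidget = 0.8 * np.exp(-((t - 15.0) ** 2) / 2.0) * np.sin(2 * np.pi * 0.25 * t)
noise = 0.05 * np.random.default_rng(4).normal(size=t.size)
signal = respiration + fidget + noise              # stand-in for radar baseband

imfs = EMD().emd(signal)                           # rows: IMFs (residue last)
print("modes extracted:", imfs.shape[0])
cleaned = imfs[1:].sum(axis=0)                     # e.g., drop the noisiest mode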
Random diffusion and leverage effect in financial markets.
Perelló, Josep; Masoliver, Jaume
2003-03-01
We prove that Brownian market models with random diffusion coefficients provide an exact measure of the leverage effect [J-P. Bouchaud et al., Phys. Rev. Lett. 87, 228701 (2001)]. This empirical fact asserts that past returns are anticorrelated with the future diffusion coefficient. Several models with random diffusion have been suggested, but without a quantitative study of the leverage effect. Our analysis lets us fully estimate all parameters involved and allows a deeper study of correlated random diffusion models that may have practical implications for many aspects of financial markets.
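The leverage effect referenced here is conventionally quantified by the correlation L(τ) = ⟨r(t+τ)² r(t)⟩ / ⟨r(t)²⟩², which is negative for τ > 0 in stock data; a minimal estimator sketch follows, run here on i.i.d. synthetic returns (for which L ≈ 0) since no market data accompanies this note:

import numpy as np

def leverage(returns: np.ndarray, max_lag: int) -> np.ndarray:
    """Estimate L(tau) = <r(t+tau)^2 * r(t)> / <r(t)^2>^2 for tau = 1..max_lag."""
    r = returns - returns.mean()
    norm = (r ** 2).mean() ** 2
    out = np.empty(max_lag)
    for tau in range(1, max_lag + 1):
        out[tau - 1] = np.mean(r[tau:] ** 2 * r[:-tau]) / norm
    return out

rng = np.random.default_rng(5)
r = rng.standard_t(df=4, size=20_000) * 0.01   # heavy-tailed placeholder returns
print(leverage(r, 5))                           # ~0 for i.i.d. synthetic data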
Lowe, Winsor H; McPeek, Mark A
2014-08-01
Dispersal is difficult to quantify and often treated as purely stochastic and extrinsically controlled. Consequently, there remains uncertainty about how individual traits mediate dispersal and its ecological effects. Addressing this uncertainty is crucial for distinguishing neutral versus non-neutral drivers of community assembly. Neutral theory assumes that dispersal is stochastic and equivalent among species. This assumption can be rejected on principle, but common research approaches tacitly support the 'neutral dispersal' assumption. Theory and empirical evidence that dispersal traits are under selection should be broadly integrated in community-level research, stimulating greater scrutiny of this assumption. A tighter empirical connection between the ecological and evolutionary forces that shape dispersal will enable richer understanding of this fundamental process and its role in community assembly. Copyright © 2014 Elsevier Ltd. All rights reserved.
Santos, Josilene C; Tomal, Alessandra; Mariano, Leandro; Costa, Paulo R
2015-06-01
The aim of this study was to estimate barite mortar attenuation curves using X-ray spectra weighted by a workload distribution. A semi-empirical model was used for the evaluation of the transmission properties of this material. Since the ambient dose equivalent, H*(10), is the radiation quantity adopted by the IAEA for dose assessment, the variation of H*(10) as a function of barite mortar thickness was calculated using primary experimental spectra. A CdTe detector was used for the measurement of these spectra. The resulting spectra were adopted for estimating the optimized thickness of the protective barrier needed for shielding an area in an X-ray imaging facility. Copyright © 2015 Elsevier Ltd. All rights reserved.
ERIC Educational Resources Information Center
Rapee, Ronald M.; Abbott, Maree J.; Lyneham, Heidi J.
2006-01-01
The current trial examined the value of modifying empirically validated treatment for childhood anxiety for application via written materials for parents of anxious children. Two hundred sixty-seven clinically anxious children ages 6-12 years and their parents were randomly allocated to standard group treatment, wait list, or a bibliotherapy…
ERIC Educational Resources Information Center
Paton, David
2006-01-01
Rational choice models of teenage sexual behaviour lead to radically different predictions than do models that assume such behaviour is random. Existing empirical evidence has not been able to distinguish conclusively between these competing models. I use regional data from England between 1998 and 2001 to examine the impact of recent increases in…
An Empirical Comparison of Randomized Control Trials and Regression Discontinuity Estimations
ERIC Educational Resources Information Center
Barrera-Osorio, Felipe; Filmer, Deon; McIntyre, Joe
2014-01-01
Randomized controlled trials (RCTs) and regression discontinuity (RD) studies both provide estimates of causal effects. A major difference between the two is that RD only estimates local average treatment effects (LATE) near the cutoff point of the forcing variable. This has been cited as a drawback to RD designs (Cook & Wong, 2008).…
Selection of Variables in Cluster Analysis: An Empirical Comparison of Eight Procedures
ERIC Educational Resources Information Center
Steinley, Douglas; Brusco, Michael J.
2008-01-01
Eight different variable selection techniques for model-based and non-model-based clustering are evaluated across a wide range of cluster structures. It is shown that several methods have difficulties when non-informative variables (i.e., random noise) are included in the model. Furthermore, the distribution of the random noise greatly impacts the…
ERIC Educational Resources Information Center
Rinehart, Nicole J.; Bradshaw, John L.; Moss, Simon A.; Brereton, Avril V.; Tonge, Bruce J.
2006-01-01
The repetitive, stereotyped and obsessive behaviours, which are core diagnostic features of autism, are thought to be underpinned by executive dysfunction. This study examined executive impairment in individuals with autism and Asperger's disorder using a verbal equivalent of an established pseudo-random number generating task. Different patterns…
ERIC Educational Resources Information Center
Gün, Mesut
2016-01-01
The purpose of this empirical study is to determine how and to what extent the use of animations impacts auditory acquisition, one of the key learning fields in 6th grade grammar, as measured by students' academic success and completion rates. By using a pre-test and post-test design, this empirical study randomly divided a group of Turkish 6th…
How to Quantify Deterministic and Random Influences on the Statistics of the Foreign Exchange Market
NASA Astrophysics Data System (ADS)
Friedrich, R.; Peinke, J.; Renner, Ch.
2000-05-01
It is shown that price changes of the U.S. dollar-German mark exchange rate over different delay times can be regarded as a stochastic Markovian process. Furthermore, we show how Kramers-Moyal coefficients can be estimated from the empirical data. Finally, we present an explicit Fokker-Planck equation which models very precisely the empirical probability distributions, in particular, their non-Gaussian heavy tails.
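Kramers-Moyal coefficients are conditional moments of the increments of the process, so they can be estimated from a time series by binning the state variable. The following is a minimal sketch, not the authors' estimator; the bin count, sparsity threshold, and Ornstein-Uhlenbeck test signal are illustrative assumptions.

```python
import numpy as np

def kramers_moyal(x, dt, nbins=30):
    """Estimate the first two Kramers-Moyal coefficients (drift D1 and
    diffusion D2) from binned conditional moments of the increments:
    D_n(x) = <(x(t+dt) - x(t))^n | x(t) = x> / (n! * dt)."""
    dx = np.diff(x)
    xc = x[:-1]
    edges = np.linspace(xc.min(), xc.max(), nbins + 1)
    idx = np.digitize(xc, edges) - 1
    centers, d1, d2 = [], [], []
    for b in range(nbins):
        m = idx == b
        if m.sum() < 10:                  # skip sparsely populated bins
            continue
        centers.append(0.5 * (edges[b] + edges[b + 1]))
        d1.append(dx[m].mean() / dt)
        d2.append((dx[m] ** 2).mean() / (2.0 * dt))
    return np.array(centers), np.array(d1), np.array(d2)

# Sanity check on a simulated Ornstein-Uhlenbeck process,
# dx = -x dt + sqrt(2) dW, whose true coefficients are D1 = -x, D2 = 1.
rng = np.random.default_rng(0)
dt, n = 0.01, 200_000
x = np.zeros(n)
for i in range(1, n):
    x[i] = x[i - 1] - x[i - 1] * dt + np.sqrt(2 * dt) * rng.standard_normal()
xc, d1, d2 = kramers_moyal(x, dt)
```

Recovering the known drift and diffusion of the test signal is a useful sanity check before applying the same recipe to exchange-rate increments.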
Bays, Harold E; Chen, Erluo; Tomassini, Joanne E; McPeters, Gail; Polis, Adam B; Triscari, Joseph
2015-04-01
Co-administration of ezetimibe with atorvastatin is a generally well-tolerated treatment option that reduces LDL-C levels and improves other lipids with greater efficacy than doubling the atorvastatin dose. The objective of the study was to demonstrate the equivalent lipid-modifying efficacy of fixed-dose combination (FDC) ezetimibe/atorvastatin compared with the component agents co-administered individually in support of regulatory filing. Two randomized, 6-week, double-blind cross-over trials compared the lipid-modifying efficacy of ezetimibe/atorvastatin 10/20 mg (n = 353) or 10/40 mg (n = 280) vs. separate co-administration of ezetimibe 10 mg plus atorvastatin 20 mg (n = 346) or 40 mg (n = 280), respectively, in hypercholesterolemic patients. Percent changes from baseline in LDL-C (primary endpoint) and other lipids (secondary endpoints) were assessed by analysis of covariance; triglycerides were evaluated by longitudinal-data analysis. Expected differences between FDC and the corresponding co-administered doses were predicted from a dose-response relationship model; sample size was estimated given the expected difference and equivalence margins (±4%). LDL-C-lowering equivalence was based on 97.5% expanded confidence intervals (CI) for the difference contained within the margins; equivalence margins for other lipids were not prespecified. Ezetimibe/atorvastatin FDC 10/20 mg was equivalent to co-administered ezetimibe+atorvastatin 20 mg in reducing LDL-C levels (54.0% vs. 53.8%) as was FDC 10/40 mg and ezetimibe+atorvastatin 40 mg (58.9% vs. 58.7%), as predicted by the model. Changes in other lipids were consistent with equivalence (97.5% expanded CIs <±3%, included 0); triglyceride changes varied more. All treatments were generally well tolerated. Hypercholesterolemic patients administered ezetimibe/atorvastatin 10/20 and 10/40 mg FDC had equivalent LDL-C lowering. This FDC formulation proved to be an efficacious and generally well-tolerated lipid-lowering therapy. © 2014 Société Française de Pharmacologie et de Thérapeutique.
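The equivalence logic in such trials reduces to checking whether a confidence interval for the between-treatment difference lies entirely within prespecified margins. A simplified sketch follows, assuming a plain normal-approximation interval rather than the trial's exact expanded-CI construction; the inputs are hypothetical.

```python
from scipy import stats

def equivalence_by_ci(diff, se, margin=4.0, level=0.975):
    """Declare equivalence if the two-sided CI for the between-treatment
    difference (in % change from baseline) lies entirely within the
    +/- margin band."""
    z = stats.norm.ppf(1 - (1 - level) / 2)
    lo, hi = diff - z * se, diff + z * se
    return (lo, hi), (-margin < lo) and (hi < margin)

# Hypothetical inputs: FDC minus co-administration difference of 0.2
# percentage points in LDL-C lowering, with standard error 0.9.
ci, equivalent = equivalence_by_ci(0.2, 0.9)
print(ci, equivalent)
```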
ERIC Educational Resources Information Center
Hansen, Kristine; Reeve, Suzanne; Gonzalez, Jennifer; Sudweeks, Richard R.; Hatch, Gary L.; Esplin, Patricia; Bradshaw, William S.
2006-01-01
This study was conducted to obtain empirical data to inform policy decisions about exempting incoming students from a first-year composition (FYC) course on the basis of Advanced Placement (AP) English exam scores. It examined the effect of avoiding first-year writing on the writing abilities of sophomore undergraduates. Two three-page writing…
Butow, Phyllis N; Turner, Jane; Gilchrist, Jemma; Sharpe, Louise; Smith, Allan Ben; Fardell, Joanna E; Tesson, Stephanie; O'Connell, Rachel; Girgis, Afaf; Gebski, Val J; Asher, Rebecca; Mihalopoulos, Cathrine; Bell, Melanie L; Zola, Karina Grunewald; Beith, Jane; Thewes, Belinda
2017-12-20
Purpose: Fear of cancer recurrence (FCR) is prevalent, distressing, and long lasting. This study evaluated the impact of a theoretically/empirically based intervention (ConquerFear) on FCR. Methods: Eligible survivors had curable breast or colorectal cancer or melanoma, had completed treatment (not including endocrine therapy) 2 months to 5 years previously, were age > 18 years, and had scores above the clinical cutoff on the FCR Inventory (FCRI) severity subscale at screening. Participants were randomly assigned at a one-to-one ratio to either five face-to-face sessions of ConquerFear (attention training, metacognitions, acceptance/mindfulness, screening behavior, and values-based goal setting) or an attention control (Taking-it-Easy relaxation therapy). Participants completed questionnaires at baseline (T0), immediately post-therapy (T1), and 3 (T2) and 6 months (T3) later. The primary outcome was FCRI total score. Results: Of 704 potentially eligible survivors from 17 sites and two online databases, 533 were contactable, of whom 222 (42%) consented; 121 were randomly assigned to intervention and 101 to control. Study arms were equivalent at baseline on all measured characteristics. ConquerFear participants had clinically and statistically greater improvements than control participants from T0 to T1 on FCRI total (P < .001) and severity subscale scores (P = .001), which were maintained at T2 (P = .017 and P = .023, respectively) and, for FCRI total only, at T3 (P = .018), and from T0 to T1 on three FCRI subscales (coping, psychological distress, and triggers) as well as in general anxiety, cancer-specific distress (total), and mental quality of life and metacognitions (total). Differences in FCRI psychological distress and cancer-specific distress (total) remained significantly different at T3. Conclusion: This randomized trial demonstrated efficacy of ConquerFear compared with attention control (Taking-it-Easy) in reduction of FCRI total scores immediately post-therapy and 3 and 6 months later and in many secondary outcomes immediately post-therapy. Cancer-specific distress (total) remained more improved at 3- and 6-month follow-up.
Prabhu, Malavika; Clapp, Mark A; McQuaid-Hanson, Emily; Ona, Samsiya; O'Donnell, Taylor; James, Kaitlyn; Bateman, Brian T; Wylie, Blair J; Barth, William H
2018-07-01
To evaluate whether a liposomal bupivacaine incisional block decreases postoperative pain and represents an opioid-minimizing strategy after scheduled cesarean delivery. In a single-blind, randomized controlled trial among opioid-naive women undergoing cesarean delivery, liposomal bupivacaine or placebo was infiltrated into the fascia and skin at the surgical site, before fascial closure. Using an 11-point numeric rating scale, the primary outcome was pain score with movement at 48 hours postoperatively. A sample size of 40 women per group was needed to detect a 1.5-point reduction in pain score in the intervention group. Pain scores and opioid consumption, in oral morphine milligram equivalents, at 48 hours postoperatively were summarized as medians (interquartile range) and compared using the Wilcoxon rank-sum test. Between March and September 2017, 249 women were screened, 103 women enrolled, and 80 women were randomized. One woman in the liposomal bupivacaine group was excluded after randomization as a result of a vertical skin incision, leaving 39 patients in the liposomal bupivacaine group and 40 in the placebo group. Baseline characteristics between groups were similar. The median (interquartile range) pain score with movement at 48 hours postoperatively was 4 (2-5) in the liposomal bupivacaine group and 3.5 (2-5.5) in the placebo group (P=.72). The median (interquartile range) opioid use was 37.5 (7.5-60) morphine milligram equivalents in the liposomal bupivacaine group and 37.5 (15-75) morphine milligram equivalents in the placebo group during the first 48 hours postoperatively (P=.44). Compared with placebo, a liposomal bupivacaine incisional block at the time of cesarean delivery resulted in similar postoperative pain scores in the first 48 hours postoperatively. ClinicalTrials.gov, NCT02959996.
Wik, Lars; Olsen, Jan-Aage; Persse, David; Sterz, Fritz; Lozano, Michael; Brouwer, Marc A; Westfall, Mark; Souders, Chris M; Malzer, Reinhard; van Grunsven, Pierre M; Travis, David T; Whitehead, Anne; Herken, Ulrich R; Lerner, E Brooke
2014-06-01
To compare integrated automated load distributing band CPR (iA-CPR) with high-quality manual CPR (M-CPR) to determine equivalence, superiority, or inferiority in survival to hospital discharge. Between March 5, 2009 and January 11, 2011, a randomized, unblinded, controlled group sequential trial of adult out-of-hospital cardiac arrests of presumed cardiac origin was conducted at three US and two European sites. After EMS providers initiated manual compressions, patients were randomized to receive either iA-CPR or M-CPR. Patients were followed until all were discharged alive or died. The primary outcome, survival to hospital discharge, was analyzed adjusting for covariates (age, witnessed arrest, initial cardiac rhythm, enrollment site) and interim analyses. CPR quality and protocol adherence were monitored (CPR fraction) electronically throughout the trial. Of 4753 randomized patients, 522 (11.0%) met post-enrollment exclusion criteria. Therefore, 2099 (49.6%) received iA-CPR and 2132 (50.4%) M-CPR. Sustained ROSC (emergency department admittance), 24-h survival, and hospital discharge (unknown for 12 cases) for iA-CPR compared to M-CPR were 600 (28.6%) vs. 689 (32.3%), 456 (21.8%) vs. 532 (25.0%), and 196 (9.4%) vs. 233 (11.0%) patients, respectively. The adjusted odds ratio of survival to hospital discharge for iA-CPR compared to M-CPR was 1.06 (95% CI 0.83-1.37), meeting the criteria for equivalence. The 20 min CPR fraction was 80.4% for iA-CPR and 80.2% for M-CPR. Compared to high-quality M-CPR, iA-CPR resulted in statistically equivalent survival to hospital discharge. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
Bullinger, Monika; Quitmann, Julia; Silva, Neuza; Rohenkohl, Anja; Chaplin, John E; DeBusk, Kendra; Mimoun, Emmanuelle; Feigerlova, Eva; Herdman, Michael; Sanz, Dolores; Wollmann, Hartmut; Pleil, Andreas; Power, Michael
2014-01-01
Testing cross-cultural equivalence of patient-reported outcomes requires sufficiently large samples per country, which is difficult to achieve in rare endocrine paediatric conditions. We describe a novel approach to cross-cultural testing of the Quality of Life in Short Stature Youth (QoLISSY) questionnaire in five countries by sequentially taking one country out (TOCO) from the total sample and iteratively comparing the resulting psychometric performance. Development of the QoLISSY proceeded from focus group discussions through pilot testing to field testing in 268 short-statured patients and their parents. To explore cross-cultural equivalence, the iterative TOCO technique was used to examine and compare the validity, reliability, and convergence of patient and parent responses on QoLISSY in the field test dataset, and to predict QoLISSY scores from clinical, socio-demographic and psychosocial variables. Validity and reliability indicators were satisfactory for each sample after iteratively omitting one country. Comparisons with the total sample revealed cross-cultural equivalence in internal consistency and construct validity for patients and parents, high inter-rater agreement and a substantial proportion of QoLISSY variance explained by predictors. The TOCO technique is a powerful method to overcome problems of country-specific testing of patient-reported outcome instruments. It provides empirical support for QoLISSY's cross-cultural equivalence and is recommended for future research.
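In practice the TOCO loop amounts to leave-one-country-out recomputation of each psychometric indicator. A minimal sketch for one such indicator (Cronbach's alpha) follows; the data layout is an assumption for illustration, not taken from the paper.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_subjects x n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

def take_one_country_out(scores, country):
    """Iteratively drop each country and recompute reliability; alphas
    that stay stable across subsets support cross-cultural equivalence."""
    return {c: cronbach_alpha(scores[country != c])
            for c in np.unique(country)}

# Hypothetical field-test data: 268 respondents, 10 items, 5 countries.
rng = np.random.default_rng(0)
scores = rng.integers(1, 6, size=(268, 10))
country = rng.choice(["DE", "ES", "FR", "SE", "UK"], size=268)
print(take_one_country_out(scores, country))
```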
Spark Ignition of Monodisperse Fuel Sprays. Ph.D. Thesis
NASA Technical Reports Server (NTRS)
Danis, Allen M.; Cernansky, Nicholas P.; Namer, Izak
1987-01-01
A study of spark ignition energy requirements was conducted with a monodisperse spray system allowing independent control of droplet size, equivalence ratio, and fuel type. Minimum ignition energies were measured for n-heptane and methanol sprays characterized at the spark gap in terms of droplet diameter, equivalence ratio (number density) and extent of prevaporization. In addition to sprays, minimum ignition energies were measured for completely prevaporized mixtures of the same fuels over a range of equivalence ratios to provide data at the lower limit of droplet size. Results showed that spray ignition was enhanced with decreasing droplet size and increasing equivalence ratio over the ranges of the parameters studied. By comparing spray and prevaporized ignition results, the existence of an optimum droplet size for ignition was indicated for both fuels. Fuel volatility was seen to be a critical factor in spray ignition. The spray ignition results were analyzed using two different empirical ignition models for quiescent mixtures. Both models accurately predicted the experimental ignition energies for the majority of the spray conditions. Spray ignition was observed to be probabilistic in nature, and ignition was quantified in terms of an ignition frequency for a given spark energy. A model was developed to predict ignition frequencies based on the variation in spark energy and equivalence ratio in the spark gap. The resulting ignition frequency simulations were nearly identical to the experimentally observed values.
Design and verification of large-moment transmitter loops for geophysical applications
NASA Astrophysics Data System (ADS)
Sternberg, Ben K.; Dvorak, Steven L.; Feng, Wanjie
2017-01-01
In this paper we discuss the modeling, design and verification of large-moment transmitter (TX) loops for geophysical applications. We first develop two equivalent circuit models for TX loops. We show that the equivalent inductance can be predicted using one of two empirical formulas. The stray capacitance of the loop is then calculated using the measured self-resonant frequency and the loop inductance. We model the losses associated with both the skin effect and the dissipation factor in both of these equivalent circuits. We find that the two equivalent circuit models produce the same results provided that the dissipation factor is small. Next we compare the measured input impedances for three TX loops that were constructed with different wire configurations with the equivalent circuit model. We found excellent agreement between the measured and simulated results after adjusting the dissipation factor. Since the skin effect and dissipation factor yield good agreement with measurements, the proximity effect is negligible in the three TX loops that we tested. We found that the effects of the dissipation factor dominated those of the skin effect when the wires were relatively close together. When the wires were widely separated, then the skin effect was the dominant loss mechanism. We also found that loops with wider wire separations exhibited higher self-resonant frequencies and better high-frequency performance.
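As a rough illustration of such an equivalent-circuit model, the sketch below computes the input impedance of a single series R-L branch in parallel with a lumped stray capacitance, and backs out the stray capacitance from a measured self-resonant frequency. This is a simplified sketch under stated assumptions, not the authors' exact circuit: the frequency-dependent skin-effect and dissipation-factor loss terms are omitted, and the numbers are hypothetical.

```python
import numpy as np

def loop_impedance(f, R, L, C):
    """Input impedance of a simple TX-loop equivalent circuit:
    a series R-L branch in parallel with the stray capacitance C."""
    w = 2 * np.pi * f
    z_rl = R + 1j * w * L
    z_c = 1 / (1j * w * C)
    return z_rl * z_c / (z_rl + z_c)

def stray_capacitance(f_srf, L):
    """Back out the stray capacitance from the measured self-resonant
    frequency and the (predicted or measured) loop inductance."""
    return 1.0 / ((2 * np.pi * f_srf) ** 2 * L)

# Hypothetical large loop: L ~ 500 uH, measured SRF ~ 180 kHz, R ~ 2 ohm.
L = 500e-6
C = stray_capacitance(180e3, L)
f = np.logspace(3, 6, 400)
Z = loop_impedance(f, R=2.0, L=L, C=C)
```

The self-resonant frequency shows up as the peak of |Z|; loss mechanisms such as the skin effect would enter as frequency-dependent corrections to R.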
TRANSFER OF AVERSIVE RESPONDENT ELICITATION IN ACCORDANCE WITH EQUIVALENCE RELATIONS
Valverde, Miguel RodrÍguez; Luciano, Carmen; Barnes-Holmes, Dermot
2009-01-01
The present study investigates the transfer of aversively conditioned respondent elicitation through equivalence classes, using skin conductance as the measure of conditioning. The first experiment is an attempt to replicate Experiment 1 in Dougher, Augustson, Markham, Greenway, and Wulfert (1994), with different temporal parameters in the aversive conditioning procedure employed. Match-to-sample procedures were used to teach 17 participants two 4-member equivalence classes. Then, one member of one class was paired with electric shock and one member of the other class was presented without shock. The remaining stimuli from each class were presented in transfer tests. Unlike the findings in the original study, transfer of conditioning was not achieved. In Experiment 2, similar procedures were used with 30 participants, although several modifications were introduced (formation of five-member classes, direct conditioning with several elements of each class, random sequences of stimulus presentation in transfer tests, reversal in aversive conditioning contingencies). More than 80% of participants who had shown differential conditioning also showed the transfer of function effect. Moreover, this effect was replicated within subjects for 3 participants. This is the first demonstration of the transfer of aversive respondent elicitation through stimulus equivalence classes with the presentation of transfer test trials in random order. The latter prevents the possibility that transfer effects are an artefact of transfer test presentation order. PMID:20119523
ERIC Educational Resources Information Center
Hedeker, Donald; And Others
1996-01-01
Methods are proposed and described for estimating the degree to which relations among variables vary at the individual level. As an example, M. Fishbein and I. Ajzen's theory of reasoned action is examined. This article illustrates the use of empirical Bayes methods based on a random-effects regression model to estimate individual influences…
Lucky in Life, Unlucky in Love? The Effect of Random Income Shocks on Marriage and Divorce
ERIC Educational Resources Information Center
Hankins, Scott; Hoekstra, Mark
2011-01-01
Economists have long been interested in the extent to which economic resources affect decisions to marry and divorce. However, this issue has been difficult to address empirically due to a lack of exogenous income shocks. We overcome this problem by exploiting the randomness of the Florida Lottery and comparing recipients of large prizes to those…
Efficacy of Creative Clay Work for Reducing Negative Mood: A Randomized Controlled Trial
ERIC Educational Resources Information Center
Kimport, Elizabeth R.; Robbins, Steven J.
2012-01-01
Clay work has long been used in art therapy to achieve therapeutic goals. However, little empirical evidence exists to document the efficacy of such work. The present study randomly assigned 102 adult participants to one of four conditions following induction of a negative mood: (a) handling clay with instructions to create a pinch pot, (b)…
Relationship of field and LiDAR estimates of forest canopy cover with snow accumulation and melt
Mariana Dobre; William J. Elliot; Joan Q. Wu; Timothy E. Link; Brandon Glaza; Theresa B. Jain; Andrew T. Hudak
2012-01-01
At the Priest River Experimental Forest in northern Idaho, USA, snow water equivalent (SWE) was recorded over a period of six years on random, equally-spaced plots in ~4.5 ha small watersheds (n=10). Two watersheds were selected as controls and eight as treatments, with two watersheds randomly assigned per treatment as follows: harvest (2007) followed by mastication (...
Stewart, Rebecca E.; Chambless, Dianne L.
2010-01-01
It has been repeatedly demonstrated that clinicians rely more on clinical judgment than on research findings. We hypothesized that psychologists in practice might be more open to adopting empirically supported treatments (ESTs) if outcome results were presented with a case study. Psychologists in private practice (N = 742) were randomly assigned to receive a research review of data from randomized controlled trials of cognitive-behavioral treatment (CBT) and medication for bulimia, a case study of CBT for a fictional patient with bulimia, or both. Results indicated that the inclusion of case examples renders ESTs more compelling and interests clinicians in gaining training. Despite these participants’ training in statistics, the inclusion of the statistical information had no influence on attitudes or training willingness beyond that of the anecdotal case information. PMID:19899142
NASA Astrophysics Data System (ADS)
Buldyrev, S.; Davis, A.; Marshak, A.; Stanley, H. E.
2001-12-01
Two-stream radiation transport models, as used in all current GCM parameterization schemes, are mathematically equivalent to "standard" diffusion theory where the physical picture is a slow propagation of the diffuse radiation by Gaussian random walks. The space/time spread (technically, the Green function) of this diffusion process is described exactly by a Gaussian distribution; from the statistical physics viewpoint, this follows from the convergence of the sum of many (rescaled) steps between scattering events with a finite variance. This Gaussian picture follows directly from first principles (the radiative transfer equation) under the assumptions of horizontal uniformity and large optical depth, i.e., there is a homogeneous plane-parallel cloud somewhere in the column. The first-order effect of 3D variability of cloudiness, the main source of scattering, is to perturb the distribution of single steps between scatterings which, modulo the "1-g" rescaling, can be assumed effectively isotropic. The most natural generalization of the Gaussian distribution is the 1-parameter family of symmetric Lévy-stable distributions because the sum of many zero-mean random variables with infinite variance, but finite moments of order q < α (0 < α < 2), converges to them. It has been shown on heuristic grounds that for these Lévy-based random walks the typical number of scatterings is now (1-g)τ^α for transmitted light. The appearance of a non-rational exponent is why this is referred to as "anomalous" diffusion. Note that standard/Gaussian diffusion is retrieved in the limit α → 2. Lévy transport theory has been successfully used in the statistical physics literature to investigate a wide variety of systems with strongly nonlinear dynamics; these applications range from random advection in turbulent fluids to the erratic behavior of financial time-series and, most recently, self-regulating ecological systems. We will briefly survey the state-of-the-art observations that offer compelling empirical support for the Lévy/anomalous diffusion model in atmospheric radiation: (1) high-resolution spectroscopy of differential absorption in the O2 A-band from ground; (2) temporal transient records of lightning strokes transmitted through clouds to a sensitive detector in space; and (3) the Gamma-distributions of optical depths derived from Landsat cloud scenes at 30-m resolution. We will then introduce a rigorous analytical formulation of Lévy/anomalous transport through finite media based on fractional derivatives and Sonin calculus. A remarkable result from this new theoretical development is an extremal property of the α = 1⁺ case (divergent mean-free-path), as is observed in the cloudy atmosphere. Finally, we will discuss the implications of anomalous transport theory for bulk 3D effects on the current enhanced absorption problem as well as its role as the basis of a next-generation GCM radiation parameterization.
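To see why α < 2 changes the picture, one can simulate Gaussian versus symmetric Lévy-stable random walks and compare how their spread grows with time. A small illustrative sketch with arbitrary parameters (scipy's levy_stable supplies the α-stable step distribution):

```python
import numpy as np
from scipy.stats import levy_stable

rng = np.random.default_rng(1)
n_steps, n_walkers = 1000, 500

# Gaussian (alpha = 2) steps: standard diffusion, spread grows ~ t**0.5.
gauss = rng.standard_normal((n_walkers, n_steps)).cumsum(axis=1)

# Symmetric Levy-stable steps with alpha = 1.2 (beta = 0): heavy tails
# and occasional huge jumps, i.e., "anomalous" superdiffusion.
alpha = 1.2
levy_steps = levy_stable.rvs(alpha, 0.0, size=(n_walkers, n_steps),
                             random_state=2)
levy = levy_steps.cumsum(axis=1)

# Compare a robust width measure (interquartile range) versus time; the
# variance is infinite for the Levy case, so the IQR is used instead.
t = np.arange(1, n_steps + 1)
iqr_gauss = np.subtract(*np.percentile(gauss, [75, 25], axis=0))
iqr_levy = np.subtract(*np.percentile(levy, [75, 25], axis=0))
```

On a log-log plot the Lévy walkers' IQR grows roughly like t^(1/α), visibly steeper than the Gaussian t^0.5, which is the signature of anomalous diffusion.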
Effects of Heterogeneous Social Interactions on Flocking Dynamics
NASA Astrophysics Data System (ADS)
Miguel, M. Carmen; Parley, Jack T.; Pastor-Satorras, Romualdo
2018-02-01
Social relationships characterize the interactions that occur within social species and may have an important impact on collective animal motion. Here, we consider a variation of the standard Vicsek model for collective motion in which interactions are mediated by an empirically motivated scale-free topology that represents a heterogeneous pattern of social contacts. We observe that the degree of order of the model is strongly affected by network heterogeneity: more heterogeneous networks show a more resilient ordered state, while less heterogeneity leads to a more fragile ordered state that can be destroyed by sufficient external noise. Our results challenge the previously accepted equivalence between the static Vicsek model and the equilibrium XY model on the network of connections, and point towards a possible equivalence with models exhibiting a different symmetry.
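A minimal sketch of Vicsek-style alignment dynamics mediated by a fixed contact network follows. A toy homogeneous random graph stands in for the paper's scale-free topology, and all parameters are illustrative assumptions.

```python
import numpy as np

def order_parameter(theta):
    """Polar order parameter: 1 = fully aligned flock, ~0 = disordered."""
    return np.abs(np.exp(1j * theta).mean())

def vicsek_on_network(adj, theta, eta, steps, rng):
    """Alignment dynamics on a fixed contact network: each agent adopts
    the mean heading of itself plus its network neighbours, with angular
    noise uniform in [-eta*pi, eta*pi]."""
    n = len(theta)
    neigh = [np.append(np.flatnonzero(adj[i]), i) for i in range(n)]
    for _ in range(steps):
        new = np.array([np.arctan2(np.sin(theta[idx]).sum(),
                                   np.cos(theta[idx]).sum())
                        for idx in neigh])
        theta = new + eta * np.pi * rng.uniform(-1, 1, n)
    return theta

rng = np.random.default_rng(0)
n = 200
# Toy Erdos-Renyi contact network; swapping in a heterogeneous
# (scale-free) graph is what the paper uses to probe heterogeneity.
adj = rng.random((n, n)) < 0.05
adj = np.triu(adj, 1)
adj = adj | adj.T
theta = vicsek_on_network(adj, rng.uniform(-np.pi, np.pi, n),
                          eta=0.2, steps=200, rng=rng)
print(order_parameter(theta))
```

Sweeping eta and recording the order parameter for graphs of varying degree heterogeneity reproduces the kind of order-disorder comparison discussed above.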
Horn, Kevin M.
2013-07-09
A method reconstructs the charge collection from regions beneath opaque metallization of a semiconductor device, as determined from focused laser charge collection response images, and thereby derives a dose-rate dependent correction factor for subsequent broad-area, dose-rate equivalent, laser measurements. The position- and dose-rate dependencies of the charge-collection magnitude of the device are determined empirically and can be combined with a digital reconstruction methodology to derive an accurate metal-correction factor that permits subsequent absolute dose-rate response measurements to be derived from laser measurements alone. Broad-area laser dose-rate testing can thereby be used to accurately determine the peak transient current, dose-rate response of semiconductor devices to penetrating electron, gamma- and x-ray irradiation.
Effect of risk ladder format on risk perception in high- and low-numerate individuals.
Keller, Carmen; Siegrist, Michael; Visschers, Vivianne
2009-09-01
Utilizing a random sample from the general population (N = 257), we examined the effect of the radon risk ladder on risk perception, as qualified by respondents' numeracy. The radon risk ladder provides comparative risk information about the radon equivalent of smoking risk. We compared a risk ladder providing smoking risk information with a risk ladder not providing this information. A 2 (numeracy: high, low) × 3 (risk level: high, medium, low) × 2 (smoking risk comparison: with/without) between-subjects experimental design was used. A significant (p < 0.045) three-way interaction between format, risk level, and numeracy was identified. Participants with low numeracy skills, as well as participants with high numeracy skills, generally distinguished between low, medium, and high risk levels when the risk ladder with comparative smoking risk information was presented. When the risk ladder without the comparative information about the smoking risk was presented, low-numerate individuals differentiated between risk levels to a much lesser extent than high-numerate individuals did. These results provide empirical evidence that the risk ladder can be a useful tool in enabling people to interpret various risk levels. Additionally, these results allow us to conclude that providing comparative information within a risk ladder is particularly helpful to the understanding of different risk levels by people with low numeracy skills.
Structure factor of liquid alkali metals using a classical-plasma reference system
NASA Astrophysics Data System (ADS)
Pastore, G.; Tosi, M. P.
1984-06-01
This paper presents calculations of the liquid structure factor of the alkali metals near freezing, starting from the classical plasma of bare ions as the reference liquid. The indirect ion-ion interaction arising from electronic screening is treated by an optimized random phase approximation (ORPA), imposing physical requirements as in the original ORPA scheme developed by Weeks, Chandler and Andersen for liquids with strongly repulsive core potentials. A comparison of the results with computer simulation data for a model of liquid rubidium shows that the present approach overcomes the well-known difficulties met in applying the standard ORPA, based on a reference liquid of neutral hard spheres, to these metals. The optimization scheme is also shown to be equivalent to a reduction of the range of the indirect interaction in momentum space, as proposed empirically in an earlier work. Comparison with experiment for the other alkalis shows that a good overall representation of the data can be obtained for sodium, potassium and cesium, but not for lithium, when one uses a very simple form of the electron-ion potential adjusted to the liquid compressibility. The small-angle scattering region is finally examined more carefully in the light of recent data of Waseda, with a view to possible refinements of the pseudopotential model.
Wolfson, Julia A; Graham, Dan J; Bleich, Sara N
2017-01-01
Investigate attention to Nutrition Facts Labels (NFLs) with numeric only vs both numeric and activity-equivalent calorie information, and attitudes toward activity-equivalent calories. An eye-tracking camera monitored participants' viewing of NFLs for 64 packaged foods with either standard NFLs or modified NFLs. Participants self-reported demographic information and diet-related attitudes and behaviors. Participants came to the Behavioral Medicine Lab at Colorado State University in spring, 2015. The researchers randomized 234 participants to view NFLs with numeric calorie information only (n = 108) or numeric and activity-equivalent calorie information (n = 126). Attention to and attitudes about activity-equivalent calorie information. Differences by experimental condition and weight loss intention (overall and within experimental condition) were assessed using t tests and Pearson's chi-square tests of independence. Overall, participants viewed numeric calorie information on 20% of NFLs for 249 ms. Participants in the modified NFL condition viewed activity-equivalent information on 17% of NFLs for 231 ms. Most participants indicated that activity-equivalent calorie information would help them decide whether to eat a food (69%) and that they preferred both numeric and activity-equivalent calorie information on NFLs (70%). Participants used activity-equivalent calorie information on NFLs and found this information helpful for making food decisions. Copyright © 2016 Society for Nutrition Education and Behavior. Published by Elsevier Inc. All rights reserved.
An empirical model for inverted-velocity-profile jet noise prediction
NASA Technical Reports Server (NTRS)
Stone, J. R.
1977-01-01
An empirical model for predicting the noise from inverted-velocity-profile coaxial or coannular jets is presented and compared with small-scale static and simulated flight data. The model considers the combined contributions of as many as four uncorrelated constituent sources: the premerged-jet/ambient mixing region, the merged-jet/ambient mixing region, outer-stream shock/turbulence interaction, and inner-stream shock/turbulence interaction. The noise from the merged region occurs at relatively low frequency and is modeled as the contribution of a circular jet at merged conditions and total exhaust area, with the high frequencies attenuated. The noise from the premerged region occurs at high frequency and is modeled as the contribution of an equivalent plug nozzle at outer-stream conditions, with the low frequencies attenuated.
Foundations of quantum gravity: The role of principles grounded in empirical reality
NASA Astrophysics Data System (ADS)
Holman, Marc
2014-05-01
When attempting to assess the strengths and weaknesses of various principles in their potential role of guiding the formulation of a theory of quantum gravity, it is crucial to distinguish between principles which are strongly supported by empirical data - either directly or indirectly - and principles which instead (merely) rely heavily on theoretical arguments for their justification. Principles in the latter category are not necessarily invalid, but their a priori foundational significance should be regarded with due caution. These remarks are illustrated in terms of the current standard models of cosmology and particle physics, as well as their respective underlying theories, i.e., essentially general relativity and quantum (field) theory. For instance, it is clear that both standard models are severely constrained by symmetry principles: an effective homogeneity and isotropy of the known universe on the largest scales in the case of cosmology and an underlying exact gauge symmetry of nuclear and electromagnetic interactions in the case of particle physics. However, in sharp contrast to the cosmological situation, where the relevant symmetry structure is more or less established directly on observational grounds, all known, nontrivial arguments for the "gauge principle" are purely theoretical (and far less conclusive than usually advocated). Similar remarks apply to the larger theoretical structures represented by general relativity and quantum (field) theory, where - actual or potential - empirical principles, such as the (Einstein) equivalence principle or EPR-type nonlocality, should be clearly differentiated from theoretical ones, such as general covariance or renormalizability. It is argued that if history is to be of any guidance, the best chance to obtain the key structural features of a putative quantum gravity theory is by deducing them, in some form, from the appropriate empirical principles (analogous to the manner in which, say, the idea that gravitation is a curved spacetime phenomenon is arguably implied by the equivalence principle). Theoretical principles may still be useful however in formulating a concrete theory (analogous to the manner in which, say, a suitable form of general covariance can still act as a sieve for separating theories of gravity from one another). It is subsequently argued that the appropriate empirical principles for deducing the key structural features of quantum gravity should at least include (i) quantum nonlocality, (ii) irreducible indeterminacy (or, essentially equivalently, given (i), relativistic causality), (iii) the thermodynamic arrow of time, (iv) homogeneity and isotropy of the observable universe on the largest scales. In each case, it is explained - when appropriate - how the principle in question could be implemented mathematically in a theory of quantum gravity, why it is considered to be of fundamental significance and also why contemporary accounts of it are insufficient. For instance, the high degree of uniformity observed in the Cosmic Microwave Background is usually regarded as theoretically problematic because of the existence of particle horizons, whereas the currently popular attempts to resolve this situation in terms of inflationary models are, for a number of reasons, less than satisfactory. 
However, rather than trying to account for the required empirical features dynamically, an arguably much more fruitful approach consists in attempting to account for these features directly, in the form of a lawlike initial condition within a theory of quantum gravity.
The equivalent thermal properties of a single fracture
NASA Astrophysics Data System (ADS)
Sangaré, D.; Thovert, J.-F.; Adler, P. M.
2008-10-01
The normal resistance and the tangential conductivity of a single fracture with Gaussian or self-affine surfaces are systematically studied as functions of the nature of the materials in contact and of the geometrical parameters. Analytical formulas are provided in the lubrication limit for fractures with sinusoidal apertures; these formulas are used to substantiate empirical formulas for resistance and conductivity. Other approximations based on the combination of series and parallel formulas are tested.
Planning for Coupling Effects in Bitoric Mixed Astigmatism Ablative Treatments.
Alpins, Noel; Ong, James K Y; Stamatelatos, George
2017-08-01
To demonstrate how to determine the historical coupling adjustments of bitoric mixed astigmatism ablative treatments and how to use these historical coupling adjustments to adjust future bitoric treatments. The individual coupling adjustments of the myopic and hyperopic cylindrical components of a bitoric treatment were derived empirically from a retrospective study where the theoretical combined treatment effect on spherical equivalent was compared to the actual change in refractive spherical equivalent. The coupling adjustments that provided the best fit in both mean and standard deviation were determined to be the historical coupling adjustments. Theoretical treatments that incorporated the historical coupling adjustments were then calculated. The actual distribution of postoperative spherical equivalent errors was compared to the theoretically adjusted distribution. The study group comprised 242 eyes and included 118 virgin right eyes and 124 virgin left eyes of 155 individuals. For the laser used, the myopic coupling adjustment was -0.02 and the hyperopic coupling adjustment was 0.30, as derived by global nonlinear optimization. This implies that almost no adjustment of the myopic component of the bitoric treatment is necessary, but that the hyperopic component of the bitoric treatment generates a large amount of unintended spherical shift. The theoretically adjusted treatments targeted zero mean spherical equivalent error, as intended, and the distribution of the theoretical spherical equivalent errors had the same spread as the distribution of actual postoperative spherical equivalent errors. Bitoric mixed astigmatism ablative treatments may display non-trivial coupling effects. Historical coupling adjustments should be taken into consideration when planning mixed astigmatism treatments to improve surgical outcomes. [J Refract Surg. 2017;33(8):545-551.]. Copyright 2017, SLACK Incorporated.
Fung, Monica; Kim, Jane; Marty, Francisco M; Schwarzinger, Michaël; Koo, Sophia
2015-01-01
Invasive fungal disease (IFD) causes significant morbidity and mortality in hematologic malignancy patients with high-risk febrile neutropenia (FN). These patients therefore often receive empirical antifungal therapy. Diagnostic test-guided pre-emptive antifungal therapy has been evaluated as an alternative treatment strategy in these patients. We conducted an electronic search for literature comparing empirical versus pre-emptive antifungal strategies in FN among adult hematologic malignancy patients. We systematically reviewed 9 studies, including randomized-controlled trials, cohort studies, and feasibility studies. Random and fixed-effect models were used to generate pooled relative risk estimates of IFD detection, IFD-related mortality, overall mortality, and rates and duration of antifungal therapy. Heterogeneity was measured via Cochran's Q test, the I² statistic, and between-study τ². Incorporating these parameters and direct costs of drugs and diagnostic testing, we constructed a comparative costing model for the two strategies. We conducted probabilistic sensitivity analysis on pooled estimates and one-way sensitivity analyses on other key parameters with uncertain estimates. Nine published studies met inclusion criteria. Compared to empirical antifungal therapy, pre-emptive strategies were associated with significantly lower antifungal exposure (RR 0.48, 95% CI 0.27-0.85) and duration without an increase in IFD-related mortality (RR 0.82, 95% CI 0.36-1.87) or overall mortality (RR 0.95, 95% CI 0.46-1.99). The pre-emptive strategy cost $324 less (95% credible interval -$291.88 to $418.65, pre-emptive compared to empirical) than the empirical approach per FN episode. However, the cost difference was influenced by relatively small changes in costs of antifungal therapy and diagnostic testing. Compared to empirical antifungal therapy, pre-emptive antifungal therapy in patients with high-risk FN may decrease antifungal use without increasing mortality. We demonstrate a state of economic equipoise between empirical and diagnostic-directed pre-emptive antifungal treatment strategies, influenced by small changes in cost of antifungal therapy and diagnostic testing, in the current literature. This work emphasizes the need for optimization of existing fungal diagnostic strategies, development of more efficient diagnostic strategies, and less toxic and more cost-effective antifungals.
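The pooled relative-risk machinery used here is standard random-effects meta-analysis. A compact sketch of the DerSimonian-Laird estimator follows; the study-level numbers are hypothetical, not the review's data.

```python
import numpy as np

def dersimonian_laird(log_rr, var):
    """Random-effects pooled estimate of log relative risks using the
    DerSimonian-Laird between-study variance tau^2."""
    w = 1 / var
    ybar = np.sum(w * log_rr) / w.sum()
    q = np.sum(w * (log_rr - ybar) ** 2)        # Cochran's Q
    df = len(log_rr) - 1
    c = w.sum() - (w ** 2).sum() / w.sum()
    tau2 = max(0.0, (q - df) / c)               # between-study variance
    w_star = 1 / (var + tau2)
    est = np.sum(w_star * log_rr) / w_star.sum()
    se = np.sqrt(1 / w_star.sum())
    return est, se, tau2

# Hypothetical per-study log RRs of antifungal exposure and variances:
log_rr = np.log(np.array([0.45, 0.55, 0.40, 0.62]))
var = np.array([0.04, 0.06, 0.09, 0.05])
est, se, tau2 = dersimonian_laird(log_rr, var)
rr = np.exp(est)
ci = np.exp([est - 1.96 * se, est + 1.96 * se])
```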
Zhang, J; Feng, J-Y; Ni, Y-L; Wen, Y-J; Niu, Y; Tamba, C L; Yue, C; Song, Q; Zhang, Y-M
2017-06-01
Multilocus genome-wide association studies (GWAS) have become the state-of-the-art procedure to identify quantitative trait nucleotides (QTNs) associated with complex traits. However, implementation of multilocus models in GWAS is still difficult. In this study, we integrated least angle regression with empirical Bayes to perform multilocus GWAS under polygenic background control. We used an algorithm of model transformation that whitened the covariance matrix of the polygenic matrix K and environmental noise. Markers on one chromosome were included simultaneously in a multilocus model and least angle regression was used to select the most potentially associated single-nucleotide polymorphisms (SNPs), whereas the markers on the other chromosomes were used to calculate the kinship matrix as polygenic background control. The selected SNPs in the multilocus model were further tested for their association with the trait by empirical Bayes and a likelihood ratio test. We herein refer to this method as pLARmEB (polygenic-background-control-based least angle regression plus empirical Bayes). Results from simulation studies showed that pLARmEB was more powerful in QTN detection and more accurate in QTN effect estimation, had a lower false positive rate and required less computing time than the Bayesian hierarchical generalized linear model, efficient mixed model association (EMMA) and least angle regression plus empirical Bayes. pLARmEB, multilocus random-SNP-effect mixed linear model and fast multilocus random-SNP-effect EMMA methods had almost equal power of QTN detection in simulation experiments. However, only pLARmEB identified 48 previously reported genes for 7 flowering time-related traits in Arabidopsis thaliana.
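The first stage of such a pipeline, least angle regression pre-selection of candidate SNPs, is easy to sketch with scikit-learn. This is an illustrative sketch only: the polygenic background correction (whitening by the kinship matrix) and the empirical Bayes re-estimation of pLARmEB are omitted, and the simulated genotypes and QTN positions are invented.

```python
import numpy as np
from sklearn.linear_model import Lars

rng = np.random.default_rng(0)
n, p = 200, 1000                      # individuals x SNPs on one chromosome
X = rng.integers(0, 3, size=(n, p)).astype(float)  # 0/1/2 genotype codes
beta = np.zeros(p)
beta[[17, 430, 808]] = [0.8, -0.6, 0.5]            # hypothetical true QTNs
y = X @ beta + rng.standard_normal(n)  # phenotype (kinship/whitening
                                       # correction omitted in this sketch)

# Stage 1: least angle regression pre-selects candidate SNPs.
lars = Lars(n_nonzero_coefs=20).fit(X, y)
candidates = np.flatnonzero(lars.coef_)

# Stage 2 in pLARmEB would re-estimate these effects with empirical Bayes
# shrinkage and keep those passing a likelihood-ratio test; here we just
# report the pre-selected set.
print(sorted(candidates))
```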
Nonlocal torque operators in ab initio theory of the Gilbert damping in random ferromagnetic alloys
NASA Astrophysics Data System (ADS)
Turek, I.; Kudrnovský, J.; Drchal, V.
2015-12-01
We present an ab initio theory of the Gilbert damping in substitutionally disordered ferromagnetic alloys. The theory rests on introduced nonlocal torques which replace traditional local torque operators in the well-known torque-correlation formula and which can be formulated within the atomic-sphere approximation. The formalism is sketched in a simple tight-binding model and worked out in detail in the relativistic tight-binding linear muffin-tin orbital method and the coherent potential approximation (CPA). The resulting nonlocal torques are represented by nonrandom, non-site-diagonal, and spin-independent matrices, which simplifies the configuration averaging. The CPA-vertex corrections play a crucial role for the internal consistency of the theory and for its exact equivalence to other first-principles approaches based on the random local torques. This equivalence is also illustrated by the calculated Gilbert damping parameters for binary NiFe and FeCo random alloys, for pure iron with a model atomic-level disorder, and for stoichiometric FePt alloys with a varying degree of L1₀ atomic long-range order.
NASA Astrophysics Data System (ADS)
Bora, Sanjay; Scherbaum, Frank; Kuehn, Nicolas; Stafford, Peter
2016-04-01
Often, the scaling of response spectral amplitudes (e.g., spectral acceleration) obtained from empirical ground motion prediction equations (GMPEs) with respect to commonly used seismological parameters, such as magnitude, distance, and site condition, is assumed to represent a similar scaling of the Fourier spectral amplitudes. For instance, the distance scaling of response spectral amplitudes is associated with the geometrical spreading of seismic waves. Such a comparison is motivated by the fact that the functional forms of response spectral GMPEs are often derived using concepts borrowed from Fourier spectral modeling of ground motion. As these GMPEs are subsequently calibrated with empirical observations, this may not appear to pose any major problems in the prediction of ground motion for a particular earthquake scenario. However, the assumption that Fourier spectral concepts persist for response spectra can lead to undesirable consequences when it comes to adjusting response spectral GMPEs to represent conditions not covered in the original empirical data set. In this context, a couple of important questions arise: what are the distinctions and/or similarities between Fourier and response spectra of ground motions? And, if they are different, what mechanism is responsible for the differences, and how do adjustments made to FAS manifest in response spectra? We explore the relationship between the Fourier and response spectrum of ground motion by using random vibration theory (RVT). With a simple Brune (1970, 1971) source model, RVT-generated acceleration spectra for a fixed magnitude and distance scenario are used. The RVT analyses reveal that the scaling of low oscillator-frequency response spectral ordinates can be treated as being equivalent to the scaling of the corresponding Fourier spectral ordinates. However, the high oscillator-frequency response spectral ordinates are controlled by a rather wide band of Fourier spectral ordinates. In fact, the peak ground acceleration (PGA), counter to the popular perception that it reflects the high-frequency characteristics of ground motion, is controlled by the entire Fourier spectrum of ground motion. Finally, it is demonstrated how an adjustment made in Fourier spectral amplitudes differs from, or resembles, the same adjustment made in the response spectral amplitudes. For this purpose, two cases are considered: adjustments to the stress parameter (Δσ, a source term) and to attributes reflecting site response (VS-κ0).
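The RVT link between the two spectra can be sketched as follows: pass the Fourier amplitude spectrum through a single-degree-of-freedom oscillator transfer function, obtain the rms response from Parseval's theorem, and scale by an asymptotic peak factor. The code below is a deliberately simplified sketch (Cartwright-Longuet-Higgins peak factor, no finite-duration or oscillator-duration corrections) and not the authors' implementation; the spectral shape and numbers are hypothetical.

```python
import numpy as np

def rvt_peak_response(f, fas, duration, f0, zeta=0.05):
    """Simplified RVT estimate of the peak response of an SDOF oscillator
    (natural frequency f0, damping zeta) to ground motion with Fourier
    amplitude spectrum fas(f)."""
    integ = lambda y: np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(f))
    # Squared modulus of the oscillator's acceleration transfer function
    h2 = f0**4 / ((f0**2 - f**2)**2 + (2.0 * zeta * f * f0)**2)
    y2 = fas**2 * h2
    m0 = 2.0 * integ(y2)                          # zeroth spectral moment
    m2 = 2.0 * integ((2 * np.pi * f)**2 * y2)     # second spectral moment
    rms = np.sqrt(m0 / duration)                  # Parseval's theorem
    n_z = max(duration / np.pi * np.sqrt(m2 / m0), 2.0)  # zero crossings
    pf = np.sqrt(2 * np.log(n_z)) + 0.5772 / np.sqrt(2 * np.log(n_z))
    return pf * rms                               # peak ~ peak factor * rms

# Hypothetical smooth FAS and a 1 Hz oscillator over a 10 s duration:
f = np.linspace(0.1, 50, 2000)
fas = np.exp(-f / 30.0)                           # toy spectral shape
sa = rvt_peak_response(f, fas, duration=10.0, f0=1.0)
```

Evaluating this for a grid of f0 values makes the point of the abstract visible: at low f0 the result tracks fas(f0), while at high f0 the integral is dominated by the whole spectrum below f0.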
Duchesne, Thierry; Fortin, Daniel; Rivest, Louis-Paul
2015-01-01
Animal movement has a fundamental impact on population and community structure and dynamics. Biased correlated random walks (BCRW) and step selection functions (SSF) are commonly used to study movements. Because no studies have contrasted the parameters and the statistical properties of their estimators for models constructed under these two Lagrangian approaches, it remains unclear whether or not they allow for similar inference. First, we used the Weak Law of Large Numbers to demonstrate that the log-likelihood function for estimating the parameters of BCRW models can be approximated by the log-likelihood of SSFs. Second, we illustrated the link between the two approaches by fitting BCRW with maximum likelihood and with SSF to simulated movement data in virtual environments and to the trajectory of bison (Bison bison L.) trails in natural landscapes. Using simulated and empirical data, we found that the parameters of a BCRW estimated directly from maximum likelihood and by fitting an SSF were remarkably similar. Movement analysis is increasingly used as a tool for understanding the influence of landscape properties on animal distribution. In the rapidly developing field of movement ecology, management and conservation biologists must decide which method they should implement to accurately assess the determinants of animal movement. We showed that BCRW and SSF can provide similar insights into the environmental features influencing animal movements. Both techniques have advantages. BCRW has already been extended to allow for multi-state modeling. Unlike BCRW, however, SSF can be estimated using most statistical packages, it can simultaneously evaluate habitat selection and movement biases, and can easily integrate a large number of movement taxes at multiple scales. SSF thus offers a simple, yet effective, statistical technique to identify movement taxis.
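As an illustration of the first of these models, a biased correlated random walk can be simulated in a few lines: each new heading mixes persistence (correlation with the previous heading) with attraction toward a target (bias), plus von Mises angular noise. This is a toy sketch with arbitrary parameters, not the paper's fitted model.

```python
import numpy as np

def simulate_bcrw(n_steps, kappa, bias_point, rho, rng):
    """Biased correlated random walk: the expected heading is a
    weighted compromise between the previous heading (weight 1 - rho)
    and the direction toward an attractive target (weight rho), with
    von Mises noise of concentration kappa."""
    pos = np.zeros((n_steps + 1, 2))
    heading = rng.uniform(-np.pi, np.pi)
    for t in range(n_steps):
        d = bias_point - pos[t]
        to_target = np.arctan2(d[1], d[0])
        mean_dir = np.arctan2(
            (1 - rho) * np.sin(heading) + rho * np.sin(to_target),
            (1 - rho) * np.cos(heading) + rho * np.cos(to_target))
        heading = rng.vonmises(mean_dir, kappa)
        step = rng.exponential(1.0)            # random step length
        pos[t + 1] = pos[t] + step * np.array([np.cos(heading),
                                               np.sin(heading)])
    return pos

rng = np.random.default_rng(3)
track = simulate_bcrw(500, kappa=4.0,
                      bias_point=np.array([50.0, 0.0]),
                      rho=0.3, rng=rng)
```

An SSF would typically be fitted to such data by conditional logistic regression that compares each observed step against random candidate steps, which is where the approximate likelihood equivalence discussed above comes in.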
2016-09-01
The method of this paper is to fit empirical Beta distributions to the observed data, and then to use a randomization approach to make inferences on the difference between distributions, rather than a Ridit analysis on the often sparse data sets in many Flying Qualities applications. One such measure is the discrete-probability-distribution version of the (squared) 'Hellinger Distance' (Yang & Le Cam, 2000): H²(p, q) = 1 − Σᵢ √(pᵢ qᵢ).
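For reference, the discrete squared Hellinger distance is straightforward to compute. In the sketch below, the two rating distributions are invented placeholders (e.g., hypothetical Cooper-Harper rating histograms), not data from the report.

```python
import numpy as np

def squared_hellinger(p, q):
    """Discrete squared Hellinger distance between probability vectors:
    H^2(p, q) = 1 - sum_i sqrt(p_i * q_i)."""
    p, q = np.asarray(p, float), np.asarray(q, float)
    return 1.0 - np.sqrt(p * q).sum()

# Hypothetical rating distributions over ten ordinal levels:
p = np.array([0.05, 0.15, 0.30, 0.25, 0.15, 0.05, 0.03, 0.01, 0.01, 0.00])
q = np.array([0.02, 0.10, 0.20, 0.28, 0.20, 0.10, 0.05, 0.03, 0.01, 0.01])
print(squared_hellinger(p, q))
```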
Rapee, Ronald M; Lyneham, Heidi J; Wuthrich, Viviana; Chatterton, Mary Lou; Hudson, Jennifer L; Kangas, Maria; Mihalopoulos, Cathrine
2017-10-01
Stepped care is embraced as an ideal model of service delivery but is minimally evaluated. The aim of this study was to evaluate the efficacy of cognitive-behavioral therapy (CBT) for child anxiety delivered via a stepped-care framework compared against a single, empirically validated program. A total of 281 youth with anxiety disorders (6-17 years of age) were randomly allocated to receive either empirically validated treatment or stepped care involving the following: (1) low intensity; (2) standard CBT; and (3) individually tailored treatment. Therapist qualifications increased at each step. Interventions did not differ significantly on any outcome measures. Total therapist time per child was significantly shorter to deliver stepped care (774 minutes) compared with best practice (897 minutes). Within stepped care, the first 2 steps returned the strongest treatment gains. Stepped care and a single empirically validated program for youth with anxiety produced similar efficacy, but stepped care required slightly less therapist time. Restricting stepped care to only steps 1 and 2 would have led to considerable time saving with modest loss in efficacy. Clinical trial registration information-A Randomised Controlled Trial of Standard Care Versus Stepped Care for Children and Adolescents With Anxiety Disorders; http://anzctr.org.au/; ACTRN12612000351819. Copyright © 2017 American Academy of Child and Adolescent Psychiatry. Published by Elsevier Inc. All rights reserved.
Ngamukote, Sathaporn; Khannongpho, Teerawat; Siriwatanapaiboon, Marent; Sirikwanpong, Sukrit; Dahlan, Winai; Adisakwattana, Sirichai
2016-12-29
To investigate the effect of Moringa oleifera leaf extract (MOLE) on plasma glucose concentration and antioxidant status in healthy volunteers. A randomized crossover design was used in this study. Healthy volunteers were randomly assigned to receive either 200 mL of warm water (10 cases) or 200 mL of MOLE (500 mg dried extract, 10 cases). Blood samples were drawn at 0, 30, 60, 90, and 120 min for measuring fasting plasma glucose (FPG), ferric reducing ability of plasma (FRAP), Trolox equivalent antioxidant capacity (TEAC) and malondialdehyde (MDA). FPG concentration was not significantly different between warm water and MOLE. The consumption of MOLE acutely improved both FRAP and TEAC, with increases after 30 min of 30 μmol/L FeSO4 equivalents and 0.18 μmol/L Trolox equivalents, respectively. The change in MDA level from baseline was significantly lowered after the ingestion of MOLE at 30, 60, and 90 min. In addition, FRAP level was negatively correlated with plasma MDA level after an intake of MOLE. MOLE increased plasma antioxidant capacity without hypoglycemia in humans. The consumption of MOLE may reduce the risk factors associated with chronic degenerative diseases.
NASA Astrophysics Data System (ADS)
Fletcher, Stephen; Kirkpatrick, Iain; Dring, Roderick; Puttock, Robert; Thring, Rob; Howroyd, Simon
2017-03-01
Supercapacitors are an emerging technology with applications in pulse power, motive power, and energy storage. However, their carbon electrodes show a variety of non-ideal behaviours that have so far eluded explanation. These include Voltage Decay after charging, Voltage Rebound after discharging, and Dispersed Kinetics at long times. In the present work, we establish that a vertical ladder network of RC components can reproduce all these puzzling phenomena. Both software and hardware realizations of the network are described. In general, porous carbon electrodes contain random distributions of resistance R and capacitance C, with a wider spread of log R values than log C values. To understand what this implies, a simplified model is developed in which log R is treated as a Gaussian random variable while log C is treated as a constant. From this model, a new family of equivalent circuits is developed in which the continuous distribution of log R values is replaced by a discrete set of log R values drawn from a geometric series. We call these Pascal Equivalent Circuits. Their behaviour is shown to resemble closely that of real supercapacitors. The results confirm that distributions of RC time constants dominate the behaviour of real supercapacitors.
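The claim that a spread of log R values reproduces Voltage Decay can be checked numerically: charge a bank of parallel R-C branches whose resistances follow a geometric series, then watch the open-circuit terminal voltage sag as charge redistributes from fast branches to slow ones. The sketch below uses arbitrary component values and is not the paper's fitted Pascal circuit.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Pascal-style equivalent circuit: parallel R-C branches with resistances
# drawn from a geometric series (a spread in log R), equal capacitances.
n_branches = 8
R = 1.0 * 4.0 ** np.arange(n_branches)   # 1, 4, 16, ... ohms
C = np.full(n_branches, 1.0)             # farads (arbitrary units)

V0, t_charge = 2.7, 60.0
# After charging at fixed V0 for t_charge, each capacitor has reached a
# branch-dependent fraction of V0 (fast branches full, slow ones not):
v = V0 * (1 - np.exp(-t_charge / (R * C)))

def open_circuit(t, v):
    # With the terminal open, the terminal voltage is the conductance-
    # weighted mean of the capacitor voltages; charge redistributes
    # between branches through the resistances.
    V = np.sum(v / R) / np.sum(1 / R)
    return (V - v) / (R * C)

ts = np.linspace(0, 1000, 200)
sol = solve_ivp(open_circuit, (0, 1000), v, t_eval=ts)
V_term = np.sum(sol.y / R[:, None], axis=0) / np.sum(1 / R)
# V_term sags from ~V0 toward the common redistributed voltage: the
# "Voltage Decay" seen in real supercapacitors after charging.
```

Running the same network after a partial discharge produces the complementary Voltage Rebound, and the geometric spread of time constants yields the slow, dispersed kinetics at long times.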
Helmholtz and Gibbs ensembles, thermodynamic limit and bistability in polymer lattice models
NASA Astrophysics Data System (ADS)
Giordano, Stefano
2017-12-01
Representing polymers by random walks on a lattice is a fruitful approach largely exploited to study configurational statistics of polymer chains and to develop efficient Monte Carlo algorithms. Nevertheless, the stretching and the folding/unfolding of polymer chains within the Gibbs (isotensional) and the Helmholtz (isometric) ensembles of statistical mechanics have not yet been thoroughly analysed by means of the lattice methodology. This topic, motivated by the recent introduction of several single-molecule force spectroscopy techniques, is investigated in the present paper. In particular, we analyse the force-extension curves under the Gibbs and Helmholtz conditions and we give a proof of the equivalence of the ensembles in the thermodynamic limit for polymers represented by a standard random walk on a lattice. Then, we generalize these concepts for lattice polymers that can undergo conformational transitions or, equivalently, for chains composed of bistable or two-state elements (that can be either folded or unfolded). In this case, the isotensional condition leads to a plateau-like force-extension response, whereas the isometric condition causes a sawtooth-like force-extension curve, as predicted by numerous experiments. The equivalence of the ensembles is finally proved also for lattice polymer systems exhibiting conformational transitions.
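For the simplest case, a one-dimensional chain of ±1 lattice steps, both ensembles can be evaluated exactly by enumeration, which makes the finite-N difference and its disappearance in the thermodynamic limit easy to see. The sketch below works in reduced units (kT = 1, unit step length) and is an illustrative assumption, not the paper's model.

```python
import numpy as np
from math import comb, log

N = 20  # number of +/-1 lattice steps

# Gibbs (isotensional) ensemble: fix the force f; for this chain the
# partition function is (2 cosh f)^N, so <x>(f) = N * tanh(f) exactly.
f = np.linspace(0.05, 2.0, 40)
x_gibbs = N * np.tanh(f)

# Helmholtz (isometric) ensemble: fix the extension x; the force follows
# from the entropy S(x) = ln binom(N, (N + x)/2) as f = -dS/dx,
# approximated here by a central finite difference.
x = np.arange(-N, N + 1, 2)
S = np.array([log(comb(N, (N + xi) // 2)) for xi in x])
f_helmholtz = -(S[2:] - S[:-2]) / (x[2:] - x[:-2])

# For small N the two force-extension curves differ slightly; rerunning
# with larger N shows them converging to the same curve, illustrating
# ensemble equivalence in the thermodynamic limit.
```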
ERIC Educational Resources Information Center
St.Clair, Travis; Cook, Thomas D.; Hallberg, Kelly
2014-01-01
Although evaluators often use an interrupted time series (ITS) design to test hypotheses about program effects, there are few empirical tests of the design's validity. We take a randomized experiment on an educational topic and compare its effects to those from a comparative ITS (CITS) design that uses the same treatment group as the experiment…
ERIC Educational Resources Information Center
Toby, Megan; Ma, Boya; Jaciw, Andrew; Cabalo, Jessica
2008-01-01
PCI Education sought scientifically based evidence on the effectiveness of the "PCI Reading Program--Level One" for students with severe disabilities. During the 2007-2008 academic year, Empirical Education conducted a randomized control trial (RCT) in two Florida districts, Brevard and Miami-Dade County Public Schools. For this…
ERIC Educational Resources Information Center
Empirical Education Inc., 2008
2008-01-01
PCI Education sought scientifically based evidence on the effectiveness of the "PCI Reading Program--Level One" for students with severe disabilities. During the 2007-2008 academic year, Empirical Education conducted a randomized control trial (RCT) in two Florida districts, Brevard and Miami-Dade County Public Schools. For this…
USDA-ARS?s Scientific Manuscript database
Dietary recommendations suggest decreased consumption of SFA to minimize CVD risk; however, not all foods rich in SFA are equivalent. To evaluate the effects of SFA in a dairy food matrix, as Cheddar cheese, v. SFA from a vegan-alternative test meal on postprandial inflammatory markers, a randomized...
A random forest learning assisted "divide and conquer" approach for peptide conformation search.
Chen, Xin; Yang, Bing; Lin, Zijing
2018-06-11
Computational determination of peptide conformations is challenging as it is a problem of finding minima in a high-dimensional space. The "divide and conquer" approach is promising for reliably reducing the search space size. A random forest learning model is proposed here to expand the scope of applicability of the "divide and conquer" approach. A random forest classification algorithm is used to characterize the distributions of the backbone φ-ψ units ("words"). A random forest supervised learning model is developed to analyze the combinations of the φ-ψ units ("grammar"). It is found that amino acid residues may be grouped as equivalent "words", while the φ-ψ combinations in low-energy peptide conformations follow a distinct "grammar". The finding of equivalent words empowers the "divide and conquer" method with the flexibility of fragment substitution. The learnt grammar is used to improve the efficiency of the "divide and conquer" method by removing unfavorable φ-ψ combinations without the need of dedicated human effort. The machine learning assisted search method is illustrated by efficiently searching the conformations of GGG/AAA/GGGG/AAAA/GGGGG through assembling the structures of GFG/GFGG. Moreover, the computational cost of the new method is shown to increase rather slowly with the peptide length.
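To make the two learning stages concrete, the sketch below trains a random forest to classify φ-ψ pairs into favorable/unfavorable regions and then uses its predicted probabilities to prune candidate combinations assembled by the divide-and-conquer step. Everything here (features, labels, thresholds) is a hypothetical stand-in for the paper's training data.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical training set: (phi, psi) backbone angles of residues in
# low-energy conformers, labelled by whether they fall in a favorable
# basin (a crude toy rule stands in for real conformer labels).
phi_psi = rng.uniform(-180, 180, size=(2000, 2))
labels = ((phi_psi[:, 0] < 0) & (np.abs(phi_psi[:, 1]) < 60)).astype(int)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(phi_psi, labels)

# "Grammar" filter: score candidate phi-psi combinations and discard
# those the forest deems unfavorable before any quantum-chemistry step.
candidates = rng.uniform(-180, 180, size=(10, 2))
keep = clf.predict_proba(candidates)[:, 1] > 0.5
print(candidates[keep])
```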
NASA Technical Reports Server (NTRS)
Berlad, Abraham L
1954-01-01
Flame quenching by a variable-width rectangular-slot burner as a function of pressure for various propane-oxygen-nitrogen mixtures was investigated. It was found that for cold gas temperatures of 27 degrees C, pressures of 0.1 to 1.0 atmosphere, and volumetric oxygen fractions of the oxidant of 0.17, 0.21, 0.30, 0.50, and 0.70, the relation between pressure p and quenching distance d is approximately given by d ∝ p^(-r), with r = 1 for equivalence ratios approximately equal to one. The quenching equation of Simon and Belles was tested. For equivalence ratios less than or equal to unity, this equation may be used, together with one empirical constant, to predict the observed quenching distance within 4.2 percent. The equation in its present form does not appear to be suitable for values of the equivalence ratio greater than unity. A quantitative theoretical investigation has also been made of the error implicit in the assumption that flame quenching by plane parallel plates of infinite extent is equivalent to that of a rectangular burner. A curve is presented which relates the magnitude of this error to the length-to-width ratio of the rectangular burner.
Design optimization of large-size format edge-lit light guide units
NASA Astrophysics Data System (ADS)
Hastanin, J.; Lenaerts, C.; Fleury-Frenette, K.
2016-04-01
In this paper, we present an original method of dot pattern generation dedicated to the design optimization of large-size format light guide plates (LGPs), such as those used in photo-bioreactors, in which the number of dots greatly exceeds the maximum allowable number of optical objects supported by most common ray-tracing software. In the proposed method, in order to simplify the computational problem, the original optical system is replaced by an equivalent one. Accordingly, the original dot pattern is split into multiple small sections, inside which the dot size variation is less than the typical resolution of ink dot printing. Then, these sections are replaced by equivalent cells with a continuous diffusing film. After that, we adjust the TIS (Total Integrated Scatter) two-dimensional distribution over the grid of equivalent cells, using an iterative optimization procedure. Finally, the obtained optimal TIS distribution is converted into the dot size distribution by applying an appropriate conversion rule. An original semi-empirical equation dedicated to rectangular large-size LGPs is proposed for the initial guess of the TIS distribution. It allows a significant reduction in the total time needed for dot pattern optimization.
Allen, Megan; Leslie, Kate; Hebbard, Geoffrey; Jones, Ian; Mettho, Tejinder; Maruff, Paul
2015-11-01
This study aimed to determine if the incidence of recall was equivalent between light and deep sedation for colonoscopy. Secondary analysis included complications, patient clinical recovery, and post-procedure cognitive impairment. Two hundred patients undergoing elective outpatient colonoscopy were randomized to light (bispectral index [BIS] 70-80) or deep (BIS < 60) sedation with propofol and fentanyl. Recall was assessed by the modified Brice questionnaire, and cognition at baseline and discharge was assessed using a Cogstate test battery. The median (interquartile range [IQR]) BIS values were different in the two groups (69 [65-74] light sedation vs 53 [46-59] deep sedation; P < 0.0001). The incidence of recall was 12% in the light sedation group and 1% in the deep sedation group. The risk difference for recall was 0.11 (90% confidence interval, 0.05 to 0.17) in the intention-to-treat analysis, thus refuting equivalence in recall between light and deep sedation (0.05 significance level; 10% equivalence margin). Overall sedation-related complications were more frequent with deep sedation than with light sedation (66% vs 47%, respectively; P = 0.008). Recovery was more rapid with light sedation than with deep sedation as determined by the mean (SD) time to reach a score of 5 on the Modified Observer's Assessment of Alertness/Sedation Scale [3 (4) min vs 7 (4) min, respectively; P < 0.001] and by the median [IQR] time to readiness for hospital discharge (65 [57-80] min vs 74 [63-86] min, respectively; P = 0.001). The incidence of post-procedural cognitive impairment was similar in those randomized to light (19%) vs deep (16%) sedation (P = 0.554). Light sedation was not equivalent to deep sedation for procedural recall, the spectrum of complications, or recovery times. This study provides evidence to inform discussions with patients about sedation for colonoscopy. This trial was registered at the Australian and New Zealand Clinical Trials Registry, number 12611000320954.
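The equivalence reasoning in this abstract can be reproduced with a Wald confidence interval for the risk difference. The sketch below assumes roughly 100 patients per arm (hypothetical round numbers consistent with the 200 randomized patients; the exact arm sizes are not given here) and checks the 90% CI against the 10% equivalence margin:

```python
import math

def risk_difference_ci(x1, n1, x2, n2, z=1.645):
    """Wald confidence interval for a difference in proportions
    (z = 1.645 gives a 90% CI)."""
    p1, p2 = x1 / n1, x2 / n2
    rd = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return rd, rd - z * se, rd + z * se

# Illustrative counts: 12% recall with light sedation vs 1% with deep
# sedation, ~100 patients per arm (assumption).
rd, lo, hi = risk_difference_ci(12, 100, 1, 100)
margin = 0.10
equivalent = (-margin <= lo) and (hi <= margin)
print(f"RD = {rd:.2f}, 90% CI = ({lo:.2f}, {hi:.2f}), equivalent: {equivalent}")
```

With these assumed counts the interval reproduces the reported 0.05 to 0.17; because its upper limit exceeds the 10% margin, equivalence is refuted, exactly as the abstract concludes.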
Day, Frank C.; Srinivasan, Malathi; Der-Martirosian, Claudia; Griffin, Erin; Hoffman, Jerome R.; Wilkes, Michael S.
2014-01-01
Purpose Few studies have compared the effect of web-based eLearning versus small-group learning on medical student outcomes. Palliative and end-of-life (PEOL) education is ideal for this comparison, given uneven access to PEOL experts and content nationally. Method In 2010, the authors enrolled all third-year medical students at the University of California, Davis School of Medicine into a quasi-randomized controlled trial of web-based interactive education (eDoctoring) compared to small-group education (Doctoring) on PEOL clinical content over two months. All students participated in three 3-hour PEOL sessions with similar content. Outcomes included a 24-item PEOL-specific self-efficacy scale with three domains (diagnosis/treatment [Cronbach's alpha = 0.92, CI: 0.91–0.93], communication/prognosis [alpha = 0.95; CI: 0.93–0.96], and social impact/self-care [alpha = 0.91; CI: 0.88–0.92]); eight knowledge items; ten curricular advantages/disadvantages; and curricular satisfaction (both student and faculty). Results Students were randomly assigned to the web-based eDoctoring (n = 48) or small-group Doctoring (n = 71) curricula. Self-efficacy and knowledge improved equivalently between groups: e.g., prognosis self-efficacy, 19%; knowledge, 10–42%. Student and faculty ratings of the web-based eDoctoring curriculum and the small-group Doctoring curriculum were equivalent for most goals, and overall satisfaction was equivalent for each, with a trend towards decreased eDoctoring student satisfaction. Conclusions Findings showed equivalent gains in self-efficacy and knowledge between students participating in a web-based PEOL curriculum and students learning similar content in a small-group format. Web-based curricula can standardize content presentation when local teaching expertise is limited, but may lead to decreased user satisfaction. PMID:25539518
Muhammad, Amber A.; Shafiq, Yasir; Shah, Saima; Kumar, Naresh; Ahmed, Imran; Azam, Iqbal; Pasha, Omrana; Zaidi, Anita K. M.
2017-01-01
Abstract Background. Integrated Management of Childhood Illness recommends that young infants with isolated fast breathing be referred to a hospital for antibiotic treatment, which is often impractical in resource-limited settings. Additionally, antibiotics may be unnecessary for physiologic tachypnea in otherwise well newborns. We tested the hypothesis that ambulatory treatment with oral amoxicillin for 7 days was equivalent (similarity margin of 3%) to placebo in young infants with isolated fast breathing in primary care settings where hospital referral is often unfeasible. Methods. This randomized equivalence trial was conducted in 4 primary health centers of Karachi, Pakistan. Infants presenting with isolated fast breathing and oxygen saturation ≥90% were randomly assigned to receive either oral amoxicillin or placebo twice daily for 7 days. Enrolled infants were followed on days 1–8, 11, and 14. The primary outcome was treatment failure by day 8, analyzed per protocol. The trial was stopped by the data safety monitoring board due to higher treatment failure rate and the occurrence of 2 deaths in the placebo arm in an interim analysis. Results. Four hundred twenty-three infants fulfilled per protocol criteria in the amoxicillin arm and 426 in the placebo arm. Twelve infants (2.8%) had treatment failure in the amoxicillin arm and 25 (5.9%) in the placebo arm (risk difference, 3.1; P value .04). Two infants in the placebo arm died, whereas no deaths occurred in the amoxicillin arm. Other adverse outcomes, as well as the proportions of relapse, were evenly distributed across both study arms. Conclusions. This trial failed to show equivalence of placebo to amoxicillin in the management of isolated fast breathing without hypoxemia or other clinical signs of illness in term young infants. Clinical Trials Registration. NCT01533818. PMID:27941119
Briand, Valérie; Bottero, Julie; Noël, Harold; Masse, Virginie; Cordel, Hugues; Guerra, José; Kossou, Hortense; Fayomi, Benjamin; Ayemonna, Paul; Fievet, Nadine; Massougbodji, Achille; Cot, Michel
2009-09-15
In the context of increasing resistance to sulfadoxine-pyrimethamine (SP), we evaluated the efficacy of mefloquine (MQ) for intermittent preventive treatment during pregnancy (IPTp). A multicenter, open-label equivalence trial was conducted in Benin from July 2005 through April 2008. Women of all gravidities were randomized to receive SP (1500 mg of sulfadoxine and 75 mg of pyrimethamine) or 15 mg/kg MQ in a single intake twice during pregnancy. The primary end point was the proportion of low-birth-weight (LBW) infants (body weight, <2500 g; equivalence margin, 5%). A total of 1601 women were randomized to receive MQ (n = 802) or SP (n = 799). In the modified intention-to-treat analysis, which assessed only live singleton births, 59 (8%) of 735 women who were given MQ and 72 (9.8%) of 730 women who were given SP gave birth to LBW infants (difference between treatment groups, -1.8%; 95% confidence interval [CI], -4.8% to 1.1%), establishing equivalence between the drugs. The per-protocol analysis showed consistent results. MQ was more efficacious than SP in preventing placental malaria (prevalence, 1.7% vs 4.4% of women; P = .005), clinical malaria (incidence rate, 26 cases/10,000 person-months vs 68 cases/10,000 person-months; P = .007), and maternal anemia at delivery (defined as a hemoglobin level <10 g/dL) (prevalence, 16% vs 20%; marginally significant at P = .09). Adverse events (mainly vomiting, dizziness, tiredness, and nausea) were more commonly associated with the use of MQ (prevalence, 78% vs 32%; P < 10^-3). One woman in the MQ group had severe neuropsychiatric symptoms. MQ proved to be highly efficacious--both clinically and parasitologically--for use as IPTp. However, its low tolerability might impair its effectiveness and requires further investigation.
Citerio, Giuseppe; Franzosi, Maria Grazia; Latini, Roberto; Masson, Serge; Barlera, Simona; Guzzetti, Stefano; Pesenti, Antonio
2009-04-06
Many studies have attempted to determine the "best" anaesthetic technique for neurosurgical procedures in patients without intracranial hypertension. So far, no study comparing intravenous (IA) with volatile-based neuroanaesthesia (VA) has been able to demonstrate major outcome differences or the superiority of one of the two strategies in patients undergoing elective supratentorial neurosurgery. Therefore, current practice varies and includes the use of either volatile or intravenous anaesthetics in addition to narcotics. In practice, the choice of anaesthesiological strategy depends only on the anaesthetists' preferences or institutional policies. This trial, named NeuroMorfeo, aims to assess the equivalence between volatile and intravenous anaesthetics for neurosurgical procedures. NeuroMorfeo is a multicenter, randomized, open-label, controlled trial based on an equivalence design. Patients aged between 18 and 75 years, scheduled for elective craniotomy for a supratentorial lesion without signs of intracranial hypertension, in good physical state (ASA I-III) and with a Glasgow Coma Scale (GCS) score equal to 15, are randomly assigned to one of three anaesthesiological strategies (two VA arms, sevoflurane + fentanyl or sevoflurane + remifentanil, and one IA arm, propofol + remifentanil). The equivalence between intravenous and volatile-based neuroanaesthesia will be evaluated by comparing the intervals required to reach, after anaesthesia discontinuation, a modified Aldrete score ≥9 (primary end point). Two statistical comparisons have been planned: 1) sevoflurane + fentanyl vs. propofol + remifentanil; 2) sevoflurane + remifentanil vs. propofol + remifentanil. Secondary end points include: an assessment of neurovegetative stress based on (a) measurement of urinary catecholamines and plasma and urinary cortisol and (b) an estimate of sympathetic/parasympathetic balance by power spectrum analyses of electrocardiographic tracings recorded during anaesthesia; intraoperative adverse events; evaluation of the surgical field; postoperative adverse events; patient satisfaction; and an analysis of costs. 411 patients will be recruited in 14 Italian centers during an 18-month period. We have presented the development phase of this ongoing anaesthesiological trial. Recruitment started on December 4th, 2007, and up to December 4th, 2008, 314 patients had been enrolled.
Some Aspects of the Investigation of Random Vibration Influence on Ride Comfort
NASA Astrophysics Data System (ADS)
DEMIĆ, M.; LUKIĆ, J.; MILIĆ, Ž.
2002-05-01
Contemporary vehicles must satisfy high ride comfort criteria. This paper attempts to develop criteria for ride comfort improvement. The highest loading levels have been found to be in the vertical direction and the lowest in the lateral direction in passenger cars and trucks. These results formed the basis for further laboratory and field investigations. An investigation of human body behaviour under random vibrations is reported in this paper. The research included two phases: biodynamic research and a ride comfort investigation. A group of 30 subjects was tested. The influence of broadband random vibrations on the human body was examined through the seat-to-head transmissibility function (STHT). Initially, vertical and fore-and-aft vibrations were considered. Multi-directional vibration was also investigated. In the biodynamic research, subjects were exposed to 0.55, 1.75 and 2.25 m/s2 r.m.s. vibration levels in the 0.5-40 Hz frequency domain. The influence of sitting position on human body behaviour under two-directional vibrations was also examined. Data analysis showed that human body behaviour under two-directional random vibrations cannot be approximated by superposition of one-directional random vibrations. Non-linearity of the seated human body in the vertical and fore-and-aft directions was observed. The seat-backrest angle also influenced the STHT. In the second phase of the experimental research, a new method for assessing the influence of narrowband random vibration on the human body was formulated and tested. It included determination of equivalent comfort curves in the vertical and fore-and-aft directions under one- and two-directional narrowband random vibrations. Equivalent comfort curves for durations of 2.5, 4 and 8 h were determined.
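A transmissibility function of this kind is typically estimated as the ratio of the cross-spectral density between seat and head accelerations to the seat auto-spectral density (the H1 estimator). A minimal sketch with synthetic stand-in signals and SciPy; the sampling rate, window length, and signal model are illustrative assumptions, not the study's measurement setup:

```python
import numpy as np
from scipy.signal import csd, welch

fs = 256.0                           # sampling rate, Hz (assumed)
t = np.arange(0, 60, 1 / fs)
rng = np.random.default_rng(1)

# Stand-in signals: seat acceleration as broadband noise, "head"
# acceleration as a smoothed copy plus measurement noise.
seat = rng.standard_normal(t.size)
head = np.convolve(seat, np.ones(8) / 8, mode="same") \
       + 0.1 * rng.standard_normal(t.size)

f, G_ss = welch(seat, fs=fs, nperseg=1024)     # seat auto-spectrum
_, G_sh = csd(seat, head, fs=fs, nperseg=1024)  # seat-head cross-spectrum

# H1 estimate of the seat-to-head transmissibility (STHT).
stht = np.abs(G_sh / G_ss)
band = (f >= 0.5) & (f <= 40.0)      # frequency range used in the study
print(stht[band][:5])
```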
Salminen, Paulina; Helmiö, Mika; Ovaska, Jari; Juuti, Anne; Leivonen, Marja; Peromaa-Haavisto, Pipsa; Hurme, Saija; Soinio, Minna; Nuutila, Pirjo; Victorzon, Mikael
2018-01-16
Laparoscopic sleeve gastrectomy for treatment of morbid obesity has increased substantially despite the lack of long-term results compared with laparoscopic Roux-en-Y gastric bypass. To determine whether laparoscopic sleeve gastrectomy and laparoscopic Roux-en-Y gastric bypass are equivalent for weight loss at 5 years in patients with morbid obesity. The Sleeve vs Bypass (SLEEVEPASS) multicenter, multisurgeon, open-label, randomized clinical equivalence trial was conducted from March 2008 until June 2010 in Finland. The trial enrolled 240 morbidly obese patients aged 18 to 60 years, who were randomly assigned to sleeve gastrectomy or gastric bypass with a 5-year follow-up period (last follow-up, October 14, 2015). Laparoscopic sleeve gastrectomy (n = 121) or laparoscopic Roux-en-Y gastric bypass (n = 119). The primary end point was weight loss evaluated by percentage excess weight loss. Prespecified equivalence margins for the clinical significance of weight loss differences between gastric bypass and sleeve gastrectomy were -9% to +9% excess weight loss. Secondary end points included resolution of comorbidities, improvement of quality of life (QOL), all adverse events (overall morbidity), and mortality. Among 240 patients randomized (mean age, 48 [SD, 9] years; mean baseline body mass index, 45.9 [SD, 6.0]; 69.6% women), 80.4% completed the 5-year follow-up. At baseline, 42.1% had type 2 diabetes, 34.6% dyslipidemia, and 70.8% hypertension. The estimated mean percentage excess weight loss at 5 years was 49% (95% CI, 45%-52%) after sleeve gastrectomy and 57% (95% CI, 53%-61%) after gastric bypass (difference, 8.2 percentage units [95% CI, 3.2%-13.2%], higher in the gastric bypass group) and did not meet criteria for equivalence. Complete or partial remission of type 2 diabetes was seen in 37% (n = 15/41) after sleeve gastrectomy and in 45% (n = 18/40) after gastric bypass (P > .99). Medication for dyslipidemia was discontinued in 47% (n = 14/30) after sleeve gastrectomy and 60% (n = 24/40) after gastric bypass (P = .15) and for hypertension in 29% (n = 20/68) and 51% (n = 37/73) (P = .02), respectively. There was no statistically significant difference in QOL between groups (P = .85) and no treatment-related mortality. At 5 years the overall morbidity rate was 19% (n = 23) for sleeve gastrectomy and 26% (n = 31) for gastric bypass (P = .19). Among patients with morbid obesity, use of laparoscopic sleeve gastrectomy compared with use of laparoscopic Roux-en-Y gastric bypass did not meet criteria for equivalence in terms of percentage excess weight loss at 5 years. Although gastric bypass compared with sleeve gastrectomy was associated with greater percentage excess weight loss at 5 years, the difference was not statistically significant, based on the prespecified equivalence margins. clinicaltrials.gov Identifier: NCT00793143.
Randomness in denoised stock returns: The case of Moroccan family business companies
NASA Astrophysics Data System (ADS)
Lahmiri, Salim
2018-02-01
In this paper, we scrutinize entropy in family business stocks listed on the Casablanca stock exchange and in the market index to assess randomness in their returns. For this purpose, we adopt a novel approach based on the combination of the stationary wavelet transform and Tsallis entropy for empirical analysis of the return series. The empirical results show strong evidence that the respective entropy functions are characterized by opposite dynamics: the information contents of the two dynamics are statistically and significantly different. Information on regular events carried by family business returns is more certain, whilst that carried by market returns is uncertain. These results are useful for understanding the nonlinear dynamics of the returns of family business companies and those of the market, and they could be helpful for quantitative portfolio managers and investors.
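Tsallis entropy is straightforward to compute from the empirical histogram of a return series. The sketch below is a simplified stand-in: it omits the stationary-wavelet-transform denoising step used in the paper, and the entropic index q, bin count, and synthetic series are arbitrary illustrative choices:

```python
import numpy as np

def tsallis_entropy(x, q=1.5, bins=30):
    """Tsallis entropy S_q = (1 - sum(p_i^q)) / (q - 1) of the empirical
    distribution of a series (q -> 1 recovers Shannon entropy)."""
    counts, _ = np.histogram(x, bins=bins)
    p = counts[counts > 0] / counts.sum()
    return (1.0 - np.sum(p ** q)) / (q - 1.0)

rng = np.random.default_rng(2)
returns_family = rng.standard_t(df=3, size=2000) * 0.01  # heavy-tailed stand-in
returns_market = rng.standard_normal(2000) * 0.01        # Gaussian stand-in
print(tsallis_entropy(returns_family), tsallis_entropy(returns_market))
```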
NASA Astrophysics Data System (ADS)
Wang, Gang-Jin; Xie, Chi; Chen, Shou; Yang, Jiao-Jiao; Yang, Ming-Yan
2013-09-01
In this study, we first build two empirical cross-correlation matrices in the US stock market by two different methods, namely the Pearson correlation coefficient and the detrended cross-correlation coefficient (DCCA coefficient). Then, combining the two matrices with the method of random matrix theory (RMT), we investigate the statistical properties of cross-correlations in the US stock market. We choose the daily closing prices of 462 constituent stocks of the S&P 500 index as the research objects and select the sample data from January 3, 2005 to August 31, 2012. In the empirical analysis, we examine the statistical properties of the cross-correlation coefficients, the distribution of eigenvalues, the distribution of eigenvector components, and the inverse participation ratio. From the two methods, we find several features of the cross-correlations in the US stock market that differ from the conclusions reached by previous studies. The empirical cross-correlation matrices constructed by the DCCA coefficient show several interesting properties at different time scales, which are useful for risk management and optimal portfolio selection, especially for diversifying the asset portfolio. Finding the theoretical eigenvalue distribution of a completely random matrix R for the DCCA coefficient remains an interesting open problem, because it does not obey the Marčenko-Pastur distribution.
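For the Pearson case, the RMT benchmark is the Marčenko-Pastur band: the eigenvalues of a purely random correlation matrix fall (asymptotically) within [(1 - sqrt(N/T))^2, (1 + sqrt(N/T))^2]. A minimal sketch, with N chosen to echo the study's 462 stocks but using simulated noise rather than the S&P 500 data (T is an illustrative sample length):

```python
import numpy as np

T, N = 2000, 462                    # observations x assets (N as in the study)
rng = np.random.default_rng(3)
R = rng.standard_normal((T, N))     # pure-noise stand-in for returns
C = np.corrcoef(R, rowvar=False)    # empirical correlation matrix
eigvals = np.linalg.eigvalsh(C)

# Marcenko-Pastur bounds for a random correlation matrix (T > N).
q = N / T
lam_min = (1 - np.sqrt(q)) ** 2
lam_max = (1 + np.sqrt(q)) ** 2
n_outside = np.sum((eigvals < lam_min) | (eigvals > lam_max))
print(f"MP band: [{lam_min:.3f}, {lam_max:.3f}], eigenvalues outside: {n_outside}")
```

With real returns, eigenvalues escaping this band are the candidates for genuine correlation structure (market and sector modes); for pure noise essentially none should.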
Bateli, Maria; Ben Rahal, Ghada; Christmann, Marin; Vach, Kirstin; Kohal, Ralf-Joachim
2018-01-01
Objective To test whether or not the modified design of the test implant (intended to increase primary stability) has an equivalent effect on marginal bone loss (MBL) compared to the control. Methods Forty patients were randomly assigned to receive test or control implants installed in identically dimensioned bony beds. Implants were radiographically monitored at installation, at prosthetic delivery, and after one year. Treatments were considered equivalent if the 90% confidence interval (CI) for the mean difference (MD) in MBL was between −0.25 and 0.25 mm. Additionally, several soft tissue parameters and patient-reported outcome measures (PROMs) were evaluated. Linear mixed models were fitted for each patient to assess time effects on response variables. Results Thirty-three patients (21 males, 12 females; 58.2 ± 15.2 years old) with 81 implants (47 test, 34 control) were available for analysis after a mean observation period of 13.9 ± 4.5 months (3 dropouts, 3 missed appointments, and 1 missing file). The adjusted MD in MBL after one year was −0.13 mm (90% CI: −0.46 to 0.19; test group: −0.49; control group: −0.36; p = 0.507). Conclusion Both implant systems can be considered successful after one year of observation. Concerning MBL in the presented setup, equivalence of the treatments cannot be concluded. Registration This trial is registered with the German Clinical Trials Register (ID: DRKS00007877). PMID:29610765
Frequency of RNA–RNA interaction in a model of the RNA World
STRIGGLES, JOHN C.; MARTIN, MATTHEW B.; SCHMIDT, FRANCIS J.
2006-01-01
The RNA World model for prebiotic evolution posits the selection of catalytic/template RNAs from random populations. The mechanisms by which these random populations could be generated de novo are unclear. Non-enzymatic and RNA-catalyzed nucleic acid polymerizations are poorly processive, which means that the resulting short-chain RNA population could contain only limited diversity. Nonreciprocal recombination of smaller RNAs provides an alternative mechanism for the assembly of larger species with concomitantly greater structural diversity; however, the frequency of any specific recombination event in a random RNA population is limited by the low probability of an encounter between any two given molecules. This low probability could be overcome if the molecules capable of productive recombination were redundant, with many nonhomologous but functionally equivalent RNAs being present in a random population. Here we report fluctuation experiments to estimate the redundancy of the set of RNAs in a population of random sequences that are capable of non-Watson-Crick interaction with another RNA. Parallel SELEX experiments showed that at least one in 10^6 random 20-mers binds to the P5.1 stem–loop of Bacillus subtilis RNase P RNA with affinities equal to that of its naturally occurring partner. This high frequency predicts that a single RNA in an RNA World would encounter multiple interacting RNAs within its lifetime, supporting recombination as a plausible mechanism for prebiotic RNA evolution. The large number of equivalent species implies that the selection of any single interacting species in the RNA World would be a contingent event, i.e., one resulting from historical accident. PMID:16495233
Dong, Nianbo; Lipsey, Mark W
2017-01-01
It is unclear whether propensity score analysis (PSA) based on pretest and demographic covariates will meet the ignorability assumption for replicating the results of randomized experiments. This study applies within-study comparisons to assess whether pre-Kindergarten (pre-K) treatment effects on achievement outcomes estimated using PSA based on a pretest and demographic covariates can approximate those found in a randomized experiment. Data: Four studies with samples of pre-K children each provided data on two math achievement outcome measures with baseline pretests and child demographic variables that included race, gender, age, language spoken at home, and mother's highest education. Research Design and Data Analysis: A randomized study of a pre-K math curriculum provided benchmark estimates of effects on achievement measures. Comparison samples from other pre-K studies were then substituted for the original randomized control group and the effects were reestimated using PSA. The correspondence was evaluated using multiple criteria. Results: The effect estimates using PSA were in the same direction as the benchmark estimates, had similar but not identical statistical significance, and did not differ from the benchmarks at statistically significant levels. However, the magnitude of the effect sizes differed and displayed both absolute and relative bias larger than required to show statistical equivalence with formal tests, although those results were not definitive because of the limited statistical power. We conclude that treatment effect estimates based on a single pretest and demographic covariates in PSA correspond to those from a randomized experiment on the most general criteria for equivalence.
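One common PSA variant consistent with this design is inverse-probability-of-treatment weighting: fit a logistic model for treatment on the pretest and demographic covariates, then reweight outcomes by the inverse of the estimated propensity. The sketch below uses synthetic data with a known effect of 0.5; the covariates, coefficients, and sample size are all illustrative, not taken from the four pre-K studies:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n = 1000

# Synthetic stand-ins: a pretest score plus one demographic covariate.
pretest = rng.normal(50, 10, n)
female = rng.integers(0, 2, n)
X = np.column_stack([pretest, female])

# Nonrandom treatment assignment that depends on the covariates.
p_treat = 1 / (1 + np.exp(-(0.05 * (pretest - 50) + 0.3 * female)))
z = rng.binomial(1, p_treat)
y = 0.5 * z + 0.08 * pretest + rng.normal(0, 1, n)   # true effect = 0.5

# Propensity scores and inverse-probability-of-treatment weights.
ps = LogisticRegression().fit(X, z).predict_proba(X)[:, 1]
w = np.where(z == 1, 1 / ps, 1 / (1 - ps))

ate = (np.average(y[z == 1], weights=w[z == 1])
       - np.average(y[z == 0], weights=w[z == 0]))
print(f"IPTW estimate of the treatment effect: {ate:.3f}")
```

Because the weighting model here matches the true assignment mechanism, the estimate recovers 0.5; the paper's question is precisely whether a pretest plus demographics suffices for this in real pre-K data.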
Vast Portfolio Selection with Gross-exposure Constraints*
Fan, Jianqing; Zhang, Jingjin; Yu, Ke
2012-01-01
We introduce large portfolio selection using gross-exposure constraints. We show that, with a gross-exposure constraint, the empirically selected optimal portfolios based on estimated covariance matrices perform similarly to the theoretical optimal ones, and there is no error accumulation effect from the estimation of vast covariance matrices. This gives theoretical justification to the empirical results in Jagannathan and Ma (2003). We also show that the no-short-sale portfolio can be improved by allowing some short positions. Applications to portfolio selection, tracking, and improvement are also addressed. The utility of our new approach is illustrated by simulation and empirical studies on the 100 Fama-French industrial portfolios and 600 stocks randomly selected from the Russell 3000. PMID:23293404
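The gross-exposure idea constrains the portfolio's total absolute weight sum(|w_i|) <= c, interpolating between the no-short-sale rule (c = 1) and the unconstrained optimum (c = infinity). A minimal sketch with a synthetic covariance matrix and a general-purpose solver; a production implementation would reformulate the non-smooth l1 constraint with auxiliary variables, and the asset count and c are illustrative:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
p = 20                                     # number of assets (illustrative)
A = rng.standard_normal((p, p))
Sigma = A @ A.T / p + 0.05 * np.eye(p)     # synthetic covariance matrix

def risk(w):
    """Portfolio variance w' Sigma w."""
    return w @ Sigma @ w

c = 1.6   # gross-exposure bound: sum |w_i| <= c (c = 1 is no-short-sale)
cons = [{"type": "eq",   "fun": lambda w: np.sum(w) - 1.0},      # fully invested
        {"type": "ineq", "fun": lambda w: c - np.sum(np.abs(w))}]  # gross exposure
w0 = np.full(p, 1.0 / p)
res = minimize(risk, w0, constraints=cons, method="SLSQP")
print(f"variance: {risk(res.x):.4f}, gross exposure: {np.abs(res.x).sum():.3f}")
```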
Numerical Homogenization of Jointed Rock Masses Using Wave Propagation Simulation
NASA Astrophysics Data System (ADS)
Gasmi, Hatem; Hamdi, Essaïeb; Bouden Romdhane, Nejla
2014-07-01
Homogenization in fractured rock analyses is essentially based on the calculation of equivalent elastic parameters. In this paper, a new numerical homogenization method, programmed as a MATLAB code called HLA-Dissim, is presented. The developed approach simulates a discontinuity network of real rock masses based on the International Society for Rock Mechanics (ISRM) scanline field mapping methodology. Then, it evaluates a series of classic joint parameters to characterize density (RQD, specific length of discontinuities). A pulse wave, characterized by its amplitude, central frequency, and duration, is propagated from a source point to a receiver point of the simulated jointed rock mass using a complex recursive method for evaluating the transmission and reflection coefficients for each simulated discontinuity. The seismic parameters, such as delay, velocity, and attenuation, are then calculated. Finally, the equivalent medium model parameters of the rock mass are computed numerically while taking into account the natural discontinuity distribution. This methodology was applied to 17 bench fronts from six aggregate quarries located in Tunisia, Spain, Austria, and Sweden. It made it possible to characterize the rock mass discontinuity network, the resulting seismic performance, and the equivalent medium stiffness. The relationship between the equivalent Young's modulus and the rock discontinuity parameters was also analyzed. For these different bench fronts, the proposed numerical approach was also compared to several empirical formulas, based on RQD and fracture density values, published in previous research studies, showing its usefulness and efficiency in rapidly estimating the Young's modulus of the equivalent medium for wave propagation analysis.
ON NONSTATIONARY STOCHASTIC MODELS FOR EARTHQUAKES.
Safak, Erdal; Boore, David M.
1986-01-01
A seismological stochastic model for earthquake ground-motion description is presented. Seismological models are based on the physical properties of the source and the medium and have significant advantages over the widely used empirical models. The model discussed here provides a convenient form for estimating structural response by using random vibration theory. A commonly used random process for ground acceleration, filtered white noise multiplied by an envelope function, introduces some errors in response calculations for structures whose periods are longer than the faulting duration. An alternative random process, the filtered shot-noise process, eliminates these errors.
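The filtered-white-noise-with-envelope construction criticized here is easy to state concretely: pass white noise through a second-order shaping filter and multiply by a deterministic envelope. The sketch below uses illustrative ground frequency, damping, and envelope parameters (assumptions, not values from the paper), with a simple AR(2) discretization of the damped-oscillator filter:

```python
import numpy as np
from scipy.signal import lfilter

fs = 100.0                          # sampling rate, Hz (assumed)
t = np.arange(0, 20, 1 / fs)
rng = np.random.default_rng(6)
white = rng.standard_normal(t.size)

# Second-order (Kanai/Tajimi-like) shaping filter; wg and zg are
# illustrative ground frequency and damping values.
wg, zg = 2 * np.pi * 2.5, 0.6
dt = 1 / fs
wd = wg * np.sqrt(1 - zg ** 2)
# AR(2) recursion approximating the damped oscillator response.
a = [1.0,
     -2.0 * np.exp(-zg * wg * dt) * np.cos(wd * dt),
     np.exp(-2.0 * zg * wg * dt)]
filtered = lfilter([1.0], a, white)

# Multiplicative envelope giving the nonstationary build-up and decay.
envelope = (t / 2.0) ** 2 * np.exp(-t / 2.0)
accel = envelope * filtered
print(accel[:5])
```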
New modeling method for the dielectric relaxation of a DRAM cell capacitor
NASA Astrophysics Data System (ADS)
Choi, Sujin; Sun, Wookyung; Shin, Hyungsoon
2018-02-01
This study proposes a new method for automatically synthesizing the equivalent circuit of the dielectric relaxation (DR) characteristic in dynamic random access memory (DRAM) without frequency-dependent capacitance measurements. Charge loss due to DR can be observed as a voltage drop at the storage node, and this phenomenon can be analyzed with an equivalent circuit. The Havriliak-Negami model is used to accurately determine the electrical characteristic parameters of the equivalent circuit. The DRAM sensing operation is performed in HSPICE simulations to verify the new method. The simulation demonstrates that the storage node voltage drop resulting from DR and the reduction in the sensing voltage margin, which has a critical impact on the DRAM read operation, can be accurately estimated using this new method.
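The Havriliak-Negami model gives the complex permittivity as eps(w) = eps_inf + delta_eps / (1 + (i w tau)^alpha)^beta, reducing to the Debye relaxation for alpha = beta = 1. A minimal evaluation sketch with illustrative parameter values (not the fitted DRAM-capacitor values from the paper):

```python
import numpy as np

def havriliak_negami(omega, eps_inf, delta_eps, tau, alpha, beta):
    """Complex permittivity of the Havriliak-Negami relaxation model:
    eps(w) = eps_inf + delta_eps / (1 + (1j*w*tau)**alpha)**beta."""
    return eps_inf + delta_eps / (1 + (1j * omega * tau) ** alpha) ** beta

# Illustrative parameters (assumptions, not from the paper).
omega = np.logspace(2, 8, 7)        # angular frequency, rad/s
eps = havriliak_negami(omega, eps_inf=3.9, delta_eps=1.5,
                       tau=1e-5, alpha=0.8, beta=0.6)
for w, e in zip(omega, eps):
    print(f"w = {w:9.1e}  eps' = {e.real:6.3f}  eps'' = {-e.imag:6.3f}")
```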
Wang, Deli; Xu, Wei; Zhao, Xiangrong
2016-03-01
This paper deals with the stationary responses of a Rayleigh viscoelastic system with zero-barrier impacts under external random excitation. First, the original stochastic viscoelastic system is converted to an equivalent stochastic system without viscoelastic terms by approximately adding the equivalent stiffness and damping. By means of a non-smooth transformation of the state variables, the above system is then replaced by a new system without an impact term. The stationary probability density functions of the system are obtained analytically through the stochastic averaging method. By considering the effects of the biquadratic nonlinear damping coefficient and the noise intensity on the system responses, the effectiveness of the theoretical method is tested by comparing the analytical results with those generated from Monte Carlo simulations. It is also worth noting that some system parameters can induce stochastic P-bifurcation.
From random microstructures to representative volume elements
NASA Astrophysics Data System (ADS)
Zeman, J.; Šejnoha, M.
2007-06-01
A unified treatment of random microstructures proposed in this contribution opens the way to efficient solutions of large-scale real-world problems. The paper introduces the notion of a statistically equivalent periodic unit cell (SEPUC) that replaces, in a computational step, the actual complex geometries on an arbitrary scale. A SEPUC is constructed such that its morphology conforms with images of real microstructures. Here, the well-established two-point probability function and the lineal path function are employed to classify, from the statistical point of view, the geometrical arrangement of various material systems. Examples of statistically equivalent unit cells constructed for a unidirectional fibre tow, a plain weave textile composite and an irregular-coursed masonry wall are given. A specific result promoting the applicability of the SEPUC as a tool for the derivation of homogenized effective properties that are subsequently used in an independent macroscopic analysis is also presented.
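The two-point probability function S2(r) used for this statistical classification is the probability that two points separated by r both fall in the phase of interest; for a pixelized binary microstructure it can be computed as an FFT autocorrelation of the indicator field. A minimal sketch on a synthetic two-phase image, assuming periodic boundaries (the image size and volume fraction are illustrative):

```python
import numpy as np

def two_point_probability(img):
    """Two-point probability function S2(r) of a binary microstructure,
    computed via the FFT-based (periodic) autocorrelation of the phase
    indicator field. S2 at zero lag equals the phase volume fraction."""
    ind = img.astype(float)
    F = np.fft.fft2(ind)
    corr = np.fft.ifft2(F * np.conj(F)).real / ind.size
    return np.fft.fftshift(corr)           # zero lag moved to the center

rng = np.random.default_rng(7)
micro = rng.random((128, 128)) < 0.35      # synthetic two-phase medium
S2 = two_point_probability(micro)
ci, cj = np.array(S2.shape) // 2
print(f"S2(0) = {S2[ci, cj]:.3f} (volume fraction ~ 0.35)")
```

In the SEPUC workflow, a statistic of this kind computed on the candidate unit cell is matched against the same statistic measured on images of the real microstructure.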
Synthesized tissue-equivalent dielectric phantoms using salt and polyvinylpyrrolidone solutions.
Ianniello, Carlotta; de Zwart, Jacco A; Duan, Qi; Deniz, Cem M; Alon, Leeor; Lee, Jae-Seung; Lattanzi, Riccardo; Brown, Ryan
2018-07-01
To explore the use of polyvinylpyrrolidone (PVP) for simulated materials with tissue-equivalent dielectric properties, PVP and salt were used to control, respectively, the relative permittivity and electrical conductivity in a collection of 63 samples with a range of solute concentrations. Their dielectric properties were measured with a commercial probe and fitted to a 3D polynomial in order to establish an empirical recipe. The material's thermal properties and MR spectra were measured. The empirical polynomial recipe (available at https://www.amri.ninds.nih.gov/cgi-bin/phantomrecipe) provides the PVP and salt concentrations required for dielectric materials with permittivity and electrical conductivity values between approximately 45 and 78, and 0.1 to 2 siemens per meter, respectively, from 50 MHz to 4.5 GHz. The second-order (solute concentrations) and seventh-order (frequency) polynomial recipe provided less than 2.5% relative error between the measured and target properties. PVP side peaks in the spectra were minor and unaffected by temperature changes. PVP-based phantoms are easy to prepare and nontoxic, and their semitransparency makes air bubbles easy to identify. The polymer can be used to create simulated materials with a range of dielectric properties, negligible spectral side peaks, and long T2 relaxation time, which are favorable in many MR applications. Magn Reson Med 80:413-419, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
NASA Astrophysics Data System (ADS)
Bellver, Fernando Gimeno; Garratón, Manuel Caravaca; Soto Meca, Antonio; López, Juan Antonio Vera; Guirao, Juan L. G.; Fernández-Martínez, Manuel
In this paper, we explore the chaotic behavior of resistively and capacitively shunted Josephson junctions via the so-called Network Simulation Method. This numerical approach establishes a formal equivalence between physical transport processes and electrical networks, and hence it can be applied to deal efficiently with a wide range of differential systems. The generality underlying that electrical equivalence makes it possible to apply circuit theory to several scientific and technological problems. In this work, the Fast Fourier Transform has been applied for chaos detection purposes, and the calculations have been carried out in PSpice, an electrical circuit software package. Overall, this numerical approach solves Josephson differential models quickly. An empirical application regarding the study of the Josephson model completes the paper.
Kraeutler, Matthew J; Reynolds, Kirk A; Long, Cyndi; McCarty, Eric C
2015-06-01
The purpose of this study was to compare the effect of compressive cryotherapy (CC) vs. ice on postoperative pain in patients undergoing shoulder arthroscopy for rotator cuff repair or subacromial decompression. A commercial device was used for postoperative CC; a standard ice wrap (IW) was used for postoperative cryotherapy alone. Patients scheduled for rotator cuff repair or subacromial decompression were consented and randomized to use either CC or a standard IW for the first postoperative week. All patients were asked to complete a "diary" each day, which included visual analog scale scores for average daily pain and worst daily pain as well as total pain medication usage. Pain medications were then converted to a morphine equivalent dosage. Forty-six patients completed the study and were available for analysis; 25 patients were randomized to CC and 21 patients to the standard IW. No significant differences were found in average pain, worst pain, or morphine equivalent dosage on any day. There does not appear to be a significant benefit to the use of CC over a standard IW in patients undergoing shoulder arthroscopy for rotator cuff repair or subacromial decompression. Further study is needed to determine if CC devices are a cost-effective option for postoperative pain management in this population of patients. Copyright © 2015 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Elsevier Inc. All rights reserved.
Empirical Analysis and Refinement of Expert System Knowledge Bases
1988-08-31
refinement. Both a simulated case generation program and a random rule basher were developed to enhance rule refinement experimentation. Substantial… the second fiscal year 88 objective was fully met. (System diagram labels: Rule Refinement System, Simulated Rule Basher, Case Generator, Stored Cases, Expert System Knowledge Base.) Cases are generated until the rule is satisfied. Cases may be randomly generated for a given rule or hypothesis. Rule Basher: Given that one has a correct
Response of space shuttle insulation panels to acoustic noise pressure
NASA Technical Reports Server (NTRS)
Vaicaitis, R.
1976-01-01
The response of reusable space shuttle insulation panels to random acoustic pressure fields is studied. The basic analytical approach in formulating the governing equations of motion uses a Rayleigh-Ritz technique. The input pressure field is modeled as a stationary Gaussian random process for which the cross-spectral density function is known empirically from experimental measurements. The response calculations are performed in both the frequency and time domains.
2013-01-01
Purpose Nondegradable steel- and titanium-based implants are commonly used in orthopedic surgery. Although they provide maximal stability, they are also associated with interference on imaging modalities, may induce stress shielding, and additional explantation procedures may be necessary. Alternatively, degradable polymer implants are mechanically weaker and induce foreign body reactions. Degradable magnesium-based stents are currently being investigated in clinical trials for use in cardiovascular medicine. The magnesium alloy MgYREZr demonstrates good biocompatibility and osteoconductive properties. The aim of this prospective, randomized, clinical pilot trial was to determine whether magnesium-based MgYREZr screws are equivalent to standard titanium screws for fixation during chevron osteotomy in patients with a mild hallux valgus. Methods Patients (n=26) were randomly assigned to undergo osteosynthesis using either titanium or degradable magnesium-based implants of the same design. The 6-month follow-up period included clinical, laboratory, and radiographic assessments. Results No significant differences were found in terms of the American Orthopaedic Foot and Ankle Society (AOFAS) score for hallux, the visual analog scale for pain assessment, or range of motion (ROM) of the first metatarsophalangeal joint (MTPJ). No foreign body reactions, osteolysis, or systemic inflammatory reactions were detected. The groups were not significantly different in terms of radiographic or laboratory results. Conclusion The radiographic and clinical results of this prospective controlled study demonstrate that degradable magnesium-based screws are equivalent to titanium screws for the treatment of mild hallux valgus deformities. PMID:23819489
Stevens, Richard D.; Tello, J. Sebastián; Gavilanez, María Mercedes
2013-01-01
Inference involving diversity gradients typically is gathered by mechanistic tests involving single dimensions of biodiversity such as species richness. Nonetheless, because traits such as geographic range size, trophic status or phenotypic characteristics are tied to a particular species, mechanistic effects driving broad diversity patterns should manifest across numerous dimensions of biodiversity. We develop an approach of stronger inference based on numerous dimensions of biodiversity and apply it to evaluate one such putative mechanism: the mid-domain effect (MDE). Species composition of 10,000-km2 grid cells was determined by overlaying geographic range maps of 133 noctilionoid bat taxa. We determined empirical diversity gradients in the Neotropics by calculating species richness and three indices each of phylogenetic, functional and phenetic diversity for each grid cell. We also created 1,000 simulated gradients of each examined metric of biodiversity based on a MDE model to estimate patterns expected if species distributions were randomly placed within the Neotropics. For each simulation run, we regressed the observed gradient onto the MDE-expected gradient. If a MDE drives empirical gradients, then coefficients of determination from such an analysis should be high, the intercept no different from zero and the slope no different than unity. Species richness gradients predicted by the MDE fit empirical patterns. The MDE produced strong spatially structured gradients of taxonomic, phylogenetic, functional and phenetic diversity. Nonetheless, expected values generated from the MDE for most dimensions of biodiversity exhibited poor fit to most empirical patterns. The MDE cannot account for most empirical patterns of biodiversity. Fuller understanding of latitudinal gradients will come from simultaneous examination of relative effects of random, environmental and historical mechanisms to better understand distribution and abundance of the current biota. PMID:23451099
Equivalent Theories and Changing Hamiltonian Observables in General Relativity
NASA Astrophysics Data System (ADS)
Pitts, J. Brian
2018-03-01
Change and local spatial variation are missing in Hamiltonian general relativity according to the most common definition of observables as having 0 Poisson bracket with all first-class constraints. But other definitions of observables have been proposed. In pursuit of Hamiltonian-Lagrangian equivalence, Pons, Salisbury and Sundermeyer use the Anderson-Bergmann-Castellani gauge generator G, a tuned sum of first-class constraints. Kuchař waived the 0 Poisson bracket condition for the Hamiltonian constraint to achieve changing observables. A systematic combination of the two reforms might use the gauge generator but permit non-zero Lie derivative Poisson brackets for the external gauge symmetry of General Relativity. Fortunately one can test definitions of observables by calculation using two formulations of a theory, one without gauge freedom and one with gauge freedom. The formulations, being empirically equivalent, must have equivalent observables. For de Broglie-Proca non-gauge massive electromagnetism, all constraints are second-class, so everything is observable. Demanding equivalent observables from gauge Stueckelberg-Utiyama electromagnetism, one finds that the usual definition fails while the Pons-Salisbury-Sundermeyer definition with G succeeds. This definition does not readily yield change in GR, however. Should the external gauge freedom of General Relativity share with internal gauge symmetries the 0 Poisson bracket (invariance), or is covariance (a transformation rule) sufficient? A graviton mass breaks the gauge symmetry (general covariance), but it can be restored by parametrization with clock fields. By requiring equivalent observables, one can test whether observables should have 0 or the Lie derivative as the Poisson bracket with the gauge generator G. The latter definition is vindicated by calculation. While this conclusion has been reported previously, here the calculation is given in some detail.
A unifying framework for marginalized random intercept models of correlated binary outcomes
Swihart, Bruce J.; Caffo, Brian S.; Crainiceanu, Ciprian M.
2013-01-01
We demonstrate that many current approaches for marginal modeling of correlated binary outcomes produce likelihoods that are equivalent to the copula-based models herein. These general copula models of underlying latent threshold random variables yield likelihood-based models for marginal fixed effects estimation and interpretation in the analysis of correlated binary data with exchangeable correlation structures. Moreover, we propose a nomenclature and set of model relationships that substantially elucidates the complex area of marginalized random intercept models for binary data. A diverse collection of didactic mathematical and numerical examples are given to illustrate concepts. PMID:25342871
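The latent-threshold construction underlying these copula models can be sketched directly: an exchangeable correlation structure arises when each latent Gaussian variable is the sum of a shared cluster effect and an independent residual, thresholded at the quantile matching the marginal probability. A minimal simulation; the marginal probability p, latent correlation rho, and cluster sizes are illustrative choices:

```python
import numpy as np
from scipy.stats import norm

def correlated_binary(n_clusters, cluster_size, p=0.3, rho=0.4, seed=0):
    """Exchangeable correlated binary outcomes via a latent-threshold
    (Gaussian copula / probit random intercept) construction:
    Z = sqrt(rho)*b_cluster + sqrt(1-rho)*e, with Y = 1{Z < Phi^-1(p)},
    so the marginal of Z is N(0,1) and P(Y=1) = p exactly."""
    rng = np.random.default_rng(seed)
    b = rng.standard_normal((n_clusters, 1))           # shared cluster effect
    e = rng.standard_normal((n_clusters, cluster_size))  # independent residuals
    z = np.sqrt(rho) * b + np.sqrt(1 - rho) * e
    return (z < norm.ppf(p)).astype(int)

y = correlated_binary(2000, 4)
print(y.mean())   # ~ 0.3, with within-cluster latent correlation rho = 0.4
```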
The blocked-random effect in pictures and words.
Toglia, M P; Hinman, P J; Dayton, B S; Catalano, J F
1997-06-01
Picture and word recall was examined in conjunction with list organization. 60 subjects studied a list of 30 items, either words or their pictorial equivalents. The 30 words/pictures, members of five conceptual categories, each represented by six exemplars, were presented either blocked by category or in a random order. While pictures were recalled better than words and a standard blocked-random effect was observed, the interaction indicated that the recall advantage of a blocked presentation was restricted to the word lists. A similar pattern emerged for clustering. These findings are discussed in terms of limitations upon the pictorial superiority effect.
Simulation of random road microprofile based on specified correlation function
NASA Astrophysics Data System (ADS)
Rykov, S. P.; Rykova, O. A.; Koval, V. S.; Vlasov, V. G.; Fedotov, K. V.
2018-03-01
The paper aims to develop a numerical simulation method and an algorithm for generating a random microprofile of special roads based on a specified correlation function. The paper uses methods of correlation, spectral and numerical analysis. It shows that, for known expressions of the spectral characteristics of the filter input and output, the transfer function of the generating (shaping) filter can be calculated using a theorem on the factorization of nonnegative fractional-rational functions and an integral transformation. A random-function model equivalent to the real road-surface microprofile enables assessment of springing-system parameters and identification of their ranges of variation.
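For the common special case of an exponential correlation function R(tau) = sigma^2 * exp(-alpha*|tau|), the shaping filter is first order and the discrete simulation reduces to an AR(1) recursion driven by white noise. A minimal sketch; dx, sigma, and alpha are illustrative values, not parameters of any particular road class:

```python
import numpy as np

def simulate_microprofile(n, dx=0.1, sigma=0.01, alpha=0.2, seed=0):
    """Generate a road microprofile with exponential correlation
    R(tau) = sigma^2 * exp(-alpha*|tau|) by passing white noise through
    the equivalent first-order shaping filter: z[i] = a*z[i-1] + e[i]
    with a = exp(-alpha*dx) and noise variance sigma^2*(1 - a^2)."""
    a = np.exp(-alpha * dx)
    rng = np.random.default_rng(seed)
    e = rng.normal(0.0, sigma * np.sqrt(1 - a * a), n)
    z = np.empty(n)
    z[0] = rng.normal(0.0, sigma)
    for i in range(1, n):
        z[i] = a * z[i - 1] + e[i]
    return z

profile = simulate_microprofile(10000)
# Empirical lag-1 correlation should match the target exp(-alpha*dx).
print(np.corrcoef(profile[:-1], profile[1:])[0, 1], np.exp(-0.2 * 0.1))
```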
Ghiglietti, Andrea; Scarale, Maria Giovanna; Miceli, Rosalba; Ieva, Francesca; Mariani, Luigi; Gavazzi, Cecilia; Paganoni, Anna Maria; Edefonti, Valeria
2018-03-22
Recently, response-adaptive designs have been proposed in randomized clinical trials to achieve ethical and/or cost advantages by using sequential accrual information collected during the trial to dynamically update the probabilities of treatment assignments. In this context, urn models, in which the probability of assigning patients to treatments is interpreted as the proportion of balls of different colors available in a virtual urn, have been used as response-adaptive randomization rules. We propose the use of Randomly Reinforced Urn (RRU) models in a simulation study based on a published randomized clinical trial on the efficacy of home enteral nutrition in cancer patients after major gastrointestinal surgery. We compare results obtained with the RRU design with those previously published with the non-adaptive approach. We also provide code, written in R, to implement the RRU design in practice. In detail, we simulate 10,000 trials based on the RRU model in three set-ups with different total sample sizes. We report the number of patients allocated to the inferior treatment and the empirical power of the t-test for the treatment coefficient in the ANOVA model. We carry out a sensitivity analysis to assess the effect of different urn compositions. For each sample size, in approximately 75% of the simulation runs, the number of patients allocated to the inferior treatment by the RRU design is lower than with the non-adaptive design. The empirical power of the t-test for the treatment effect is similar in the two designs.
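The RRU mechanism itself is compact: an arm is drawn with probability proportional to its urn weight, and the drawn arm's weight is reinforced by the observed response, so better-performing treatments accumulate allocation probability. The paper's own implementation is in R; the two-arm Python re-sketch below uses illustrative binary response rates and an illustrative initial urn composition:

```python
import numpy as np

def rru_trial(n_patients, mean_a=0.7, mean_b=0.5, u0=(1.0, 1.0), seed=0):
    """Simulate a two-arm trial allocated by a Randomly Reinforced Urn:
    draw an arm with probability proportional to the urn weights, then
    reinforce that arm's weight by the observed (nonnegative) response."""
    rng = np.random.default_rng(seed)
    urn = np.array(u0, dtype=float)
    assignments, responses = [], []
    for _ in range(n_patients):
        arm = rng.choice(2, p=urn / urn.sum())
        resp = rng.binomial(1, mean_a if arm == 0 else mean_b)
        urn[arm] += resp                      # reinforcement step
        assignments.append(arm)
        responses.append(resp)
    return np.array(assignments), np.array(responses)

arms, _ = rru_trial(300)
print(f"share allocated to the inferior arm B: {np.mean(arms == 1):.2%}")
```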
NASA Astrophysics Data System (ADS)
Most, S.; Jia, N.; Bijeljic, B.; Nowak, W.
2016-12-01
Pre-asymptotic characteristics are almost ubiquitous when analyzing solute transport processes in porous media. These pre-asymptotic aspects are caused by spatial coherence in the velocity field and by its heterogeneity. From the Lagrangian perspective of particle displacements, the causes of pre-asymptotic, non-Fickian transport are a skewed velocity distribution, statistical dependence between subsequent increments of particle positions (memory), and cross-dependence among the x-, y- and z-components of particle increments. Valid simulation frameworks should account for these factors. We propose a particle tracking random walk (PTRW) simulation technique that can use empirical pore-space velocity distributions as input, enforces memory between subsequent random walk steps, and considers cross-dependence. It is thus able to simulate pre-asymptotic non-Fickian transport phenomena. Our PTRW framework contains an advection/dispersion term plus a diffusion term. The advection/dispersion term produces time series of particle increments from the velocity CDFs. These time series are equipped with memory by enforcing that the CDF values of subsequent velocities change only slightly. The latter is achieved through a random walk on the axis of CDF values between 0 and 1. The virtual diffusion coefficient for that random walk is our only fitting parameter. Cross-dependence can be enforced by constraining the random walk to certain combinations of CDF values among the three velocity components in x, y and z. We show that this modelling framework is capable of simulating non-Fickian transport by comparison with a pore-scale transport simulation, and we analyze the approach to asymptotic behavior.
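A minimal one-dimensional sketch of the mechanism described above: velocities are drawn through the empirical quantile function, and memory is enforced by letting each particle's CDF value perform a bounded random walk whose step size (the "virtual diffusion coefficient") controls how strongly subsequent velocities are correlated. The lognormal velocity sample, step counts, and d_cdf value are illustrative assumptions; boundary handling by clipping is a simplification:

```python
import numpy as np

def ptrw_with_memory(n_steps, n_particles, velocities, d_cdf=0.02, seed=0):
    """Particle-tracking random walk: each particle's velocity is drawn
    from an empirical distribution, with memory enforced by a bounded
    random walk of its CDF value u in [0, 1]. Small d_cdf means strongly
    correlated subsequent velocities (long memory)."""
    rng = np.random.default_rng(seed)
    v_sorted = np.sort(velocities)
    u = rng.random(n_particles)                   # initial CDF values
    x = np.zeros(n_particles)
    for _ in range(n_steps):
        u = np.clip(u + rng.normal(0.0, d_cdf, n_particles), 0.0, 1.0 - 1e-12)
        # Map CDF values back to velocities via the empirical quantile.
        v = v_sorted[(u * v_sorted.size).astype(int)]
        x += v                                    # advective displacement
    return x

emp_v = np.random.default_rng(8).lognormal(mean=0.0, sigma=1.0, size=5000)
x = ptrw_with_memory(200, 1000, emp_v)
print(f"mean plume position: {x.mean():.1f}, spread: {x.std():.1f}")
```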
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ali, Melkamu; Ye, Sheng; Li, Hongyi
2014-07-19
Subsurface stormflow is an important component of the rainfall-runoff response, especially in steep forested regions. However, its contribution is poorly represented in the current generation of land surface hydrological models (LSMs) and catchment-scale rainfall-runoff models. The lack of a physical basis for common parameterizations precludes a priori estimation (i.e., without calibration), which is a major drawback for prediction in ungauged basins or for use in global models. This paper is aimed at deriving physically based parameterizations of the storage-discharge relationship relating to subsurface flow. These parameterizations are derived through a two-step up-scaling procedure: firstly, through simulations with a physically based (Darcian) subsurface flow model for idealized three-dimensional rectangular hillslopes, accounting for within-hillslope random heterogeneity of soil hydraulic properties, and secondly, through subsequent up-scaling to the catchment scale by accounting for between-hillslope and within-catchment heterogeneity of topographic features (e.g., slope). These theoretical simulation results produced parameterizations of the storage-discharge relationship in terms of soil hydraulic properties, topographic slope and their heterogeneities, which were consistent with the results of previous studies. Yet, regionalization of the resulting storage-discharge relations across 50 actual catchments in the eastern United States, and a comparison of the regionalized results with equivalent empirical results obtained from analysis of observed streamflow recession curves, revealed a systematic inconsistency. It was found that the difference between the theoretical and empirically derived results could be explained, to first order, by climate in the form of a climatic aridity index. This suggests a possible co-dependence of climate, soil, vegetation and topographic properties, and indicates that subsurface flow parameterizations needed for ungauged locations must account for both the physics of flow in heterogeneous landscapes and the co-dependence of soil and topographic properties with climate, including possibly the mediating role of vegetation.
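The empirical side of such a comparison typically starts from recession-curve analysis of the form -dQ/dt = a*Q^b fitted on the falling limbs of observed hydrographs. A minimal sketch on a synthetic recession with a known exponent; real applications would first extract recession limbs from streamflow records:

```python
import numpy as np

def recession_fit(q, dt=1.0):
    """Fit -dQ/dt = a * Q^b from a streamflow recession limb by linear
    regression of log(-dQ/dt) on log(Q) (Brutsaert-Nieber style)."""
    dq = np.diff(q) / dt
    mask = dq < 0                       # keep only recession (falling) steps
    x = np.log(q[:-1][mask])
    y = np.log(-dq[mask])
    b, log_a = np.polyfit(x, y, 1)      # slope = b, intercept = log(a)
    return np.exp(log_a), b

# Synthetic recession from a nonlinear reservoir: q(t) = (1 + 0.05*t)^-2,
# for which -dq/dt = 0.1 * q^1.5, so the fit should recover b ~ 1.5.
t = np.arange(0, 100.0, 1.0)
q = (1.0 + 0.05 * t) ** (-2.0)
a, b = recession_fit(q)
print(f"a = {a:.4f}, b = {b:.2f}")
```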
Jang, J-Y; Chang, Y R; Kim, S-W; Choi, S H; Park, S J; Lee, S E; Lim, C-S; Kang, M J; Lee, H; Heo, J S
2016-05-01
There is no consensus on the best method of preventing postoperative pancreatic fistula (POPF) after pancreaticoduodenectomy (PD). This multicentre, parallel group, randomized equivalence trial investigated the effect of two ways of pancreatic stenting after PD on the rate of POPF. Patients undergoing elective PD or pylorus-preserving PD with duct-to-mucosa pancreaticojejunostomy were enrolled from four tertiary referral hospitals. Randomization was stratified according to surgeon with a 1 : 1 allocation ratio to avoid any related technical factors. The primary endpoint was clinically relevant POPF rate. Secondary endpoints were nutritional index, remnant pancreatic volume, long-term complications and quality of life 2 years after PD. A total of 328 patients were randomized to the external (164 patients) or internal (164) stent group between August 2010 and January 2014. The rates of clinically relevant POPF were 24·4 per cent in the external and 18·9 per cent in the internal stent group (risk difference 5·5 per cent). As the 90 per cent confidence interval (-2·0 to 13·0 per cent) did not fall within the predefined equivalence limits (-10 to 10 per cent), the clinically relevant POPF rates in the two groups were not equivalent. Similar results were observed for patients with soft pancreatic texture and high fistula risk score. Other postoperative outcomes were comparable between the two groups. Five stent-related complications occurred in the external stent group. Multivariable analysis revealed that soft pancreatic texture, non-pancreatic disease and high body mass index (23·3 kg/m² or above) predicted clinically relevant POPF. External stenting after PD was associated with a higher rate of clinically relevant POPF than internal stenting. Registration number: NCT01023594 (https://www.clinicaltrials.gov). © 2016 BJS Society Ltd Published by John Wiley & Sons Ltd.
SOLAR CYCLE 25: ANOTHER MODERATE CYCLE?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Cameron, R. H.; Schüssler, M.; Jiang, J., E-mail: cameron@mps.mpg.de
2016-06-01
Surface flux transport simulations for the descending phase of Cycle 24 using random sources (emerging bipolar magnetic regions) with empirically determined scatter of their properties provide a prediction of the axial dipole moment during the upcoming activity minimum together with a realistic uncertainty range. The expectation value for the dipole moment around 2020 (2.5 ± 1.1 G) is comparable to that observed at the end of Cycle 23 (about 2 G). The empirical correlation between the dipole moment during solar minimum and the strength of the subsequent cycle thus suggests that Cycle 25 will be of moderate amplitude, not much higher than that of the current cycle. However, the intrinsic uncertainty of such predictions resulting from the random scatter of the source properties is considerable and fundamentally limits the reliability with which such predictions can be made before activity minimum is reached.
Auxiliary Parameter MCMC for Exponential Random Graph Models
NASA Astrophysics Data System (ADS)
Byshkin, Maksym; Stivala, Alex; Mira, Antonietta; Krause, Rolf; Robins, Garry; Lomi, Alessandro
2016-11-01
Exponential random graph models (ERGMs) are a well-established family of statistical models for analyzing social networks. Computational complexity has so far limited the appeal of ERGMs for the analysis of large social networks. Efficient computational methods are highly desirable in order to extend the empirical scope of ERGMs. In this paper we report results of a research project on the development of snowball sampling methods for ERGMs. We propose an auxiliary parameter Markov chain Monte Carlo (MCMC) algorithm for sampling from the relevant probability distributions. The method is designed to decrease the number of allowed network states without worsening the mixing of the Markov chains, and suggests a new approach for the developments of MCMC samplers for ERGMs. We demonstrate the method on both simulated and actual (empirical) network data and show that it reduces CPU time for parameter estimation by an order of magnitude compared to current MCMC methods.
Almost sure convergence in quantum spin glasses
DOE Office of Scientific and Technical Information (OSTI.GOV)
Buzinski, David, E-mail: dab197@case.edu; Meckes, Elizabeth, E-mail: elizabeth.meckes@case.edu
2015-12-15
Recently, Keating, Linden, and Wells [Markov Processes Relat. Fields 21(3), 537-555 (2015)] showed that the density of states measure of a nearest-neighbor quantum spin glass model is approximately Gaussian when the number of particles is large. The density of states measure is the ensemble average of the empirical spectral measure of a random matrix; in this paper, we use concentration of measure and entropy techniques together with the result of Keating, Linden, and Wells to show that in fact the empirical spectral measure of such a random matrix is almost surely approximately Gaussian itself, with no ensemble averaging. We also extend this result to a spherical quantum spin glass model and to the more general coupling geometries investigated by Erdős and Schröder [Math. Phys., Anal. Geom. 17(3-4), 441-464 (2014)].
Calibration of semi-stochastic procedure for simulating high-frequency ground motions
Seyhan, Emel; Stewart, Jonathan P.; Graves, Robert
2013-01-01
Broadband ground motion simulation procedures typically utilize physics-based modeling at low frequencies, coupled with semi-stochastic procedures at high frequencies. The high-frequency procedure considered here combines deterministic Fourier amplitude spectra (dependent on source, path, and site models) with random phase. Previous work showed that high-frequency intensity measures from this simulation methodology attenuate faster with distance and have lower intra-event dispersion than in empirical equations. We address these issues by increasing crustal damping (Q) to reduce distance attenuation bias and by introducing random site-to-site variations to Fourier amplitudes using a lognormal standard deviation ranging from 0.45 for Mw < 7 to zero for Mw 8. Ground motions simulated with the updated parameterization exhibit significantly reduced distance attenuation bias and revised dispersion terms are more compatible with those from empirical models but remain lower at large distances (e.g., > 100 km).
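A minimal sketch of the core semi-stochastic step, under simplified assumptions of our own: combine a deterministic Fourier amplitude spectrum with uniformly random phase and a lognormal site-to-site amplitude factor, then invert to a time series. The spectrum shape is a toy placeholder; the sigma value 0.45 is the Mw < 7 figure quoted above.

```python
import numpy as np

rng = np.random.default_rng(2)

n, dt = 2048, 0.01                       # samples, time step (s)
freqs = np.fft.rfftfreq(n, dt)

# Toy target amplitude spectrum (stand-in for a source/path/site model).
amp = freqs / (1.0 + (freqs / 5.0)**2)   # peaks near 5 Hz, decays above

sigma_site = 0.45                        # lognormal std, as for Mw < 7
amp = amp * np.exp(sigma_site * rng.standard_normal())   # one site draw

phase = rng.uniform(0.0, 2.0 * np.pi, freqs.size)        # random phase
spectrum = amp * np.exp(1j * phase)
acc = np.fft.irfft(spectrum, n)          # synthetic high-frequency motion

print(acc[:5])
```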
Describing spatial pattern in stream networks: A practical approach
Ganio, L.M.; Torgersen, C.E.; Gresswell, R.E.
2005-01-01
The shape and configuration of branched networks influence ecological patterns and processes. Recent investigations of network influences in riverine ecology stress the need to quantify spatial structure not only in a two-dimensional plane, but also in networks. An initial step in understanding data from stream networks is discerning non-random patterns along the network. On the other hand, data collected in the network may be spatially autocorrelated and thus not suitable for traditional statistical analyses. Here we provide a method that uses commercially available software to construct an empirical variogram to describe spatial pattern in the relative abundance of coastal cutthroat trout in headwater stream networks. We describe the mathematical and practical considerations involved in calculating a variogram using a non-Euclidean distance metric to incorporate the network pathway structure in the analysis of spatial variability, and use a non-parametric technique to ascertain if the pattern in the empirical variogram is non-random.
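A hedged sketch of the empirical variogram computation with a supplied pairwise distance matrix, which is where a network (non-Euclidean) distance would enter; the binning and toy data are our own choices, and the non-parametric randomness test is omitted.

```python
import numpy as np

def empirical_variogram(dist, z, bins):
    """dist: (n, n) pairwise distances (e.g., along the stream network);
    z: (n,) attribute such as relative fish abundance;
    bins: lag-bin edges. Returns bin centers and semivariances."""
    i, j = np.triu_indices(len(z), k=1)
    h = dist[i, j]                         # pairwise lags
    gamma = 0.5 * (z[i] - z[j])**2         # pairwise semivariance
    idx = np.digitize(h, bins) - 1
    centers, semivar = [], []
    for b in range(len(bins) - 1):
        sel = idx == b
        if sel.any():
            centers.append(0.5 * (bins[b] + bins[b + 1]))
            semivar.append(gamma[sel].mean())
    return np.array(centers), np.array(semivar)

rng = np.random.default_rng(3)
pts = rng.uniform(0, 100, size=(50, 1))
d = np.abs(pts - pts.T)                    # 1D stand-in for a network distance
z = np.sin(pts[:, 0] / 15.0) + 0.2 * rng.standard_normal(50)
print(empirical_variogram(d, z, np.linspace(0, 50, 11)))
```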
Estimation of treatment effect in a subpopulation: An empirical Bayes approach.
Shen, Changyu; Li, Xiaochun; Jeong, Jaesik
2016-01-01
It is well recognized that the benefit of a medical intervention may not be distributed evenly in the target population due to patient heterogeneity, and conclusions based on conventional randomized clinical trials may not apply to every person. Given the increasing cost of randomized trials and difficulties in recruiting patients, there is a strong need to develop analytical approaches to estimate treatment effect in subpopulations. In particular, due to limited sample size for subpopulations and the need for multiple comparisons, standard analysis tends to yield wide confidence intervals of the treatment effect that are often noninformative. We propose an empirical Bayes approach to combine both information embedded in a target subpopulation and information from other subjects to construct confidence intervals of the treatment effect. The method is appealing in its simplicity and tangibility in characterizing the uncertainty about the true treatment effect. Simulation studies and a real data analysis are presented.
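As a rough illustration of the borrowing-of-strength idea (a standard normal-normal empirical Bayes shrinkage, not necessarily the authors' exact formulation): the noisy subgroup estimate is pulled toward the overall effect with a weight set by the between-subgroup variance.

```python
import numpy as np

def eb_interval(theta_sub, se_sub, theta_all, tau2, z=1.96):
    """Shrink a subgroup estimate toward the overall effect.
    tau2: between-subgroup variance, estimated from all subgroups."""
    w = tau2 / (tau2 + se_sub**2)               # shrinkage weight
    post_mean = w * theta_sub + (1 - w) * theta_all
    post_sd = np.sqrt(w) * se_sub               # posterior sd under the model
    return post_mean, (post_mean - z * post_sd, post_mean + z * post_sd)

# Illustrative numbers: a wide subgroup estimate borrows from the full trial.
print(eb_interval(theta_sub=0.8, se_sub=0.5, theta_all=0.3, tau2=0.04))
```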
Gallagher, Anthony G; Seymour, Neal E; Jordan-Black, Julie-Anne; Bunting, Brendan P; McGlade, Kieran; Satava, Richard Martin
2013-06-01
We assessed the effectiveness of transfer of training (ToT) from virtual reality (VR) laparoscopic simulation training in 2 studies. In the second study, we also assessed the transfer effectiveness ratio (TER). ToT is a detectable performance improvement between equivalent groups, and TER is the observed percentage performance difference between 2 matched groups carrying out the same task but with 1 group pretrained on VR simulation. Concordance between simulated and in-vivo procedure performance was also assessed. Both studies were prospective, randomized, and blinded. In Study 1, experienced laparoscopic surgeons (n = 195) and in Study 2 laparoscopic novices (n = 30) were randomized to either train on VR simulation before completing an equivalent real-world task or complete the real-world task only. Experienced laparoscopic surgeons and novices who trained on the simulator performed significantly better than their controls, thus demonstrating ToT. Their performance showed a TER between 7% and 42% from the virtual to the real tasks. Simulation training had its greatest impact on procedural error reduction in both studies (32-42%). The correlation observed between the VR and real-world task performance was r > 0·96 (Study 2). VR simulation training offers a powerful and effective platform for training safer skills.
A Whole Word and Number Reading Machine Based on Two Dimensional Low Frequency Fourier Transforms
1990-12-01
they are energy normalized. The normalization process accounts for brightness variations and is equivalent to graphing each 2DFT onto the surface of an n...determined empirically (trial and error). Each set is energy normalized based on the number of coefficients within the set. Therefore, the actual...using the 6 font group case with the top 1000 words, where the energy has been renormalized based on the particular number of coefficients being used
Implementation details of the coupled QMR algorithm
NASA Technical Reports Server (NTRS)
Freund, Roland W.; Nachtigal, Noel M.
1992-01-01
The original quasi-minimal residual method (QMR) relies on the three-term look-ahead Lanczos process to generate basis vectors for the underlying Krylov subspaces. However, empirical observations indicate that, in finite precision arithmetic, three-term vector recurrences are less robust than mathematically equivalent coupled two-term recurrences. Therefore, we recently proposed a new implementation of the QMR method based on a coupled two-term look-ahead Lanczos procedure. In this paper, we describe implementation details of this coupled QMR algorithm, and we present results of numerical experiments.
Compressive Properties of Extruded Polytetrafluoroethylene
2007-07-01
against equivalent temperature (T_map) at a single strain rate (ε̇_map). This is a pragmatic, empirically based linearization and extension to large strains... one of the strain rates that was used in the experimental program, and in this case two rates were used: 0.1 s⁻¹ and 3200 s⁻¹. The value T_map is defined as T_map = T_exp + A(log ε̇_map − log ε̇_exp) (11), where the subscript exp indicates the experimental values of strain rate and temperature.
Simultaneous confidence bands for Cox regression from semiparametric random censorship.
Mondal, Shoubhik; Subramanian, Sundarraman
2016-01-01
Cox regression is combined with semiparametric random censorship models to construct simultaneous confidence bands (SCBs) for subject-specific survival curves. Simulation results are presented to compare the performance of the proposed SCBs with SCBs based only on the standard Cox model. The new SCBs provide correct empirical coverage and are more informative. The proposed SCBs are illustrated with two real examples. An extension to handle missing censoring indicators is also outlined.
Infusion of solutions of pre-irradiated components in rats.
Pappas, Georgina; Arnaud, Francoise; Haque, Ashraful; Kino, Tomoyuki; Facemire, Paul; Carroll, Erica; Auker, Charles; McCarron, Richard; Scultetus, Anke
2016-06-01
The objective of this study was to conduct a 14-day toxicology assessment for intravenous solutions prepared from irradiated resuscitation fluid components and sterile water. Healthy Sprague Dawley rats (7-10/group) were instrumented and randomized to receive one of the following Field IntraVenous Resuscitation (FIVR) or commercial fluids: Normal Saline (NS), Lactated Ringer's, or 5% Dextrose in NS. Daily clinical observation, chemistry and hematology on days 1, 7, and 14, and urinalysis on day 14 were evaluated for equivalence using a two-sample t-test (p<0.05). A board-certified pathologist evaluated organ histopathology on day 14. Equivalence was established for all observation parameters, lactate, sodium, liver enzymes, creatinine, WBC and differential, and urinalysis values. Lack of equivalence for hemoglobin (p=0.055), pH (p=0.0955), glucose (p=0.0889), Alanine-Aminotransferase (p=0.1938), albumin (p=0.1311), and weight (p=0.0555, p=0.1896) was deemed not clinically relevant because the means were within physiologically normal ranges. Common microscopic findings randomly distributed among animals of all groups were endocarditis/myocarditis and pulmonary lesions. These findings are consistent with complications due to long-term catheter use and suggest no clinically relevant differences in end-organ toxicity between animals infused with FIVR versus commercial fluids. Copyright © 2016 Elsevier GmbH. All rights reserved.
Association between Refractive Errors and Ocular Biometry in Iranian Adults
Hashemi, Hassan; Khabazkhoob, Mehdi; Emamian, Mohammad Hassan; Shariati, Mohammad; Miraftab, Mohammad; Yekta, Abbasali; Ostadimoghaddam, Hadi; Fotouhi, Akbar
2015-01-01
Purpose: To investigate the association between ocular biometrics such as axial length (AL), anterior chamber depth (ACD), lens thickness (LT), vitreous chamber depth (VCD) and corneal power (CP) with different refractive errors. Methods: In a cross-sectional study on the 40 to 64-year-old population of Shahroud, random cluster sampling was performed. Ocular biometrics were measured using the Allegro Biograph (WaveLight AG, Erlangen, Germany) for all participants. Refractive errors were determined using cycloplegic refraction. Results: In the first model, the strongest correlations were found between spherical equivalent with axial length and corneal power. Spherical equivalent was strongly correlated with axial length in high myopic and high hyperopic cases, and with corneal power in high hyperopic cases; 69.5% of variability in spherical equivalent was attributed to changes in these variables. In the second model, the correlations between vitreous chamber depth and corneal power with spherical equivalent were stronger in myopes than hyperopes, while the correlations between lens thickness and anterior chamber depth with spherical equivalent were stronger in hyperopic cases than myopic ones. In the third model, anterior chamber depth + lens thickness correlated with spherical equivalent only in moderate and severe cases of hyperopia, and this index was not correlated with spherical equivalent in moderate to severe myopia. Conclusion: In individuals aged 40-64 years, corneal power and axial length make the greatest contribution to spherical equivalent in high hyperopia and high myopia. Anterior segment biometric components have a more important role in hyperopia than myopia. PMID:26730304
NASA Technical Reports Server (NTRS)
Boyce, L.
1992-01-01
A probabilistic general material strength degradation model has been developed for structural components of aerospace propulsion systems subjected to diverse random effects. The model has been implemented in two FORTRAN programs, PROMISS (Probabilistic Material Strength Simulator) and PROMISC (Probabilistic Material Strength Calibrator). PROMISS calculates the random lifetime strength of an aerospace propulsion component due to as many as eighteen diverse random effects. Results are presented in the form of probability density functions and cumulative distribution functions of lifetime strength. PROMISC calibrates the model by calculating the values of empirical material constants.
Small-World Network Spectra in Mean-Field Theory
NASA Astrophysics Data System (ADS)
Grabow, Carsten; Grosskinsky, Stefan; Timme, Marc
2012-05-01
Collective dynamics on small-world networks emerge in a broad range of systems with their spectra characterizing fundamental asymptotic features. Here we derive analytic mean-field predictions for the spectra of small-world models that systematically interpolate between regular and random topologies by varying their randomness. These theoretical predictions agree well with the actual spectra (obtained by numerical diagonalization) for undirected and directed networks and from fully regular to strongly random topologies. These results may provide analytical insights to empirically found features of dynamics on small-world networks from various research fields, including biology, physics, engineering, and social science.
NASA Astrophysics Data System (ADS)
Wang, Qing; Dong, Chuang; Liaw, Peter K.
2015-08-01
Structural stabilities of β-Ti alloys are generally investigated via an empirical Mo equivalent, which quantifies the stability contribution of each alloying element, M, in comparison to that of the major β-Ti stabilizer, Mo. In the present work, a new Mo equivalent (Moeq)Q is proposed, which uses the slopes of the boundary lines between the β and (α + β) phase zones in binary Ti-M phase diagrams. This (Moeq)Q reflects a simple fact: β-Ti stability is enhanced when the β phase zone is enlarged by a β-Ti stabilizer. It is expressed as (Moeq)Q = 1.0 Mo + 0.74 V + 1.01 W + 0.23 Nb + 0.30 Ta + 1.23 Fe + 1.10 Cr + 1.09 Cu + 1.67 Ni + 1.81 Co + 1.42 Mn + 0.38 Sn + 0.34 Zr + 0.99 Si - 0.57 Al (at. pct), where the equivalent coefficient of each element is the ratio of the slope of the [β/(α + β)] boundary line of the binary Ti-M phase diagram to that of Ti-Mo. This (Moeq)Q is shown to reliably characterize the critical stability limit of multi-component β-Ti alloys with low Young's moduli, where the critical lower limit for β stabilization is (Moeq)Q = 6.25 at. pct or 11.8 wt pct Mo.
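A short worked example evaluating (Moeq)Q from the coefficients quoted above; the alloy composition is hypothetical, chosen only to illustrate a value near the 6.25 at. pct stability limit.

```python
# Equivalent coefficients (at. pct basis) from the expression above.
coeff = {"Mo": 1.0, "V": 0.74, "W": 1.01, "Nb": 0.23, "Ta": 0.30,
         "Fe": 1.23, "Cr": 1.10, "Cu": 1.09, "Ni": 1.67, "Co": 1.81,
         "Mn": 1.42, "Sn": 0.38, "Zr": 0.34, "Si": 0.99, "Al": -0.57}

def mo_eq(composition_at_pct):
    """Sum coefficient * content (at. pct) over the alloying elements."""
    return sum(coeff[el] * c for el, c in composition_at_pct.items())

# Hypothetical Ti-20Nb-4Zr-2Sn alloy (contents in at. pct, balance Ti).
alloy = {"Nb": 20.0, "Zr": 4.0, "Sn": 2.0}
print(f"(Mo_eq)_Q = {mo_eq(alloy):.2f} at. pct")   # 6.72, above the 6.25 limit
```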
Safety and efficacy of overnight orthokeratology in myopic children.
Mika, Renée; Morgan, Bruce; Cron, Michael; Lotoczky, Josh; Pole, John
2007-05-01
This prospective case series was conducted to describe the safety and efficacy of orthokeratology with the Emerald Contact Lens for Overnight Orthokeratology (Oprifocon A; Euclid Systems Corporation, Herndon, Virginia) among young myopes. Twenty subjects (ages 10 to 16) were enrolled in the 6-month pilot study. Subjects were fit empirically with overnight orthokeratology lenses and evaluated at 1 day, 1 week, 1 month, 2 months, 3 months, and 6 months. Sixteen subjects completed the study. The mean baseline spherical equivalent refraction (SER) was -2.06 diopters (D) (±0.75). The mean SER at 6 months was -0.16 D (±0.38). The mean baseline uncorrected acuity was 0.78 (±0.28) logarithmic minimum angle of resolution (logMAR) equivalent (20/100 Snellen). The mean logMAR equivalent at 6 months was -0.03 ± 0.12 (<20/20 Snellen). On average, 40% of eyes showed some type of corneal staining between the 1-week and 6-month visits. No serious adverse events occurred during the study. In contrast to previously published studies that reported maximum results at 2 weeks, subjects reached maximum reduction in myopia at the 1-week visit and, on average, obtained a 92.2% reduction in spherical equivalent refractive error at 6 months. This pilot study adds to a growing body of evidence that short-term correction of mild to moderate myopia with overnight orthokeratology is safe and efficacious in children and adolescents.
Realistic Subsurface Anomaly Discrimination Using Electromagnetic Induction and an SVM Classifier
NASA Astrophysics Data System (ADS)
Pablo Fernández, Juan; Shubitidze, Fridon; Shamatava, Irma; Barrowes, Benjamin E.; O'Neill, Kevin
2010-12-01
The environmental research program of the United States military has set up blind tests for detection and discrimination of unexploded ordnance. One such test consists of measurements taken with the EM-63 sensor at Camp Sibert, AL. We review the performance on this test of a procedure that combines a field-potential (HAP) method to locate targets, the normalized surface magnetic source (NSMS) model to characterize them, and a support vector machine (SVM) to classify them. The HAP method infers location from the scattered magnetic field and its associated scalar potential, the latter reconstructed using equivalent sources. NSMS replaces the target with an enclosing spheroid of equivalent radial magnetization whose integral it uses as a discriminator. SVM generalizes from empirical evidence and can be adapted for multiclass discrimination using a voting system. Our method identifies all potentially dangerous targets correctly and has a false-alarm rate of about 5%.
NASA Astrophysics Data System (ADS)
Dmitrenko, Artur V.
2017-11-01
The stochastic equations of continuum are used for determining heat transfer coefficients. As a result, formulas for the Nusselt (Nu) number that depend on the turbulence intensity and scale, instead of only on the Reynolds (Peclet) number, are proposed for the classic flows of a nonisothermal fluid in a round smooth tube. It is shown that the classical expressions for the heat transfer coefficient Nu, which depend only on the Reynolds number, are recovered from these new general formulas when well-known experimental data for the initial turbulence are used. It is found that the limitations of classical empirical and semiempirical formulas for heat transfer coefficients, and their deviation from the experimental data, depend on the parameters of the initial fluctuations in the flow, which differ between experiments across a wide range of Reynolds or Peclet numbers. Based on these new dependences, it is possible to explain that the differences between experimental results at fixed Reynolds or Peclet numbers are caused by differences in the flow fluctuations of each experiment, rather than only by systematic error in the processing of the experiments. Accordingly, the obtained general dependences of Nu for a smooth round tube can serve as a basis for clarifying the experimental results and the empirical formulas used for continuum flows in various power devices. The results show that, for both isothermal and nonisothermal flows, the transition from a deterministic state to a turbulent one is governed by the physical law of equivalence of measures between them. The theory of stochastic equations and the law of equivalence of measures may also underlie the mechanics at work in various phenomena of self-organization and chaos theory.
Developing a Learning Algorithm-Generated Empirical Relaxer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mitchell, Wayne; Kallman, Josh; Toreja, Allen
2016-03-30
One of the main difficulties when running Arbitrary Lagrangian-Eulerian (ALE) simulations is determining how much to relax the mesh during the Eulerian step. This determination is currently made by the user on a simulation-by-simulation basis. We present a Learning Algorithm-Generated Empirical Relaxer (LAGER), which uses a random forest regression algorithm to automate this decision process. We also demonstrate that LAGER successfully relaxes a variety of test problems, maintains simulation accuracy, and has the potential to significantly decrease both the person-hours and computational hours needed to run a successful ALE simulation.
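A hedged sketch of the general approach, using scikit-learn's random forest regressor as a stand-in for LAGER's learned relaxer; the zone features and training targets below are invented for illustration, not the paper's data.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(4)

# Toy per-zone features: distortion, aspect ratio, local gradient ...
X = rng.uniform(0, 1, size=(500, 3))
# ... and a made-up "ideal relaxation amount" target in [0, 1].
y = np.clip(0.7 * X[:, 0] + 0.2 * X[:, 1] + 0.1 * rng.standard_normal(500), 0, 1)

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
zone = np.array([[0.9, 0.5, 0.2]])         # a badly distorted zone
print(f"suggested relaxation: {model.predict(zone)[0]:.2f}")
```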
Doing better by getting worse: posthypnotic amnesia improves random number generation.
Terhune, Devin Blair; Brugger, Peter
2011-01-01
Although forgetting is often regarded as a deficit that we need to control to optimize cognitive functioning, it can have beneficial effects in a number of contexts. We examined whether disrupting memory for previous numerical responses would attenuate repetition avoidance (the tendency to avoid repeating the same number) during random number generation and thereby improve the randomness of responses. Low suggestible, low dissociative highly suggestible, and high dissociative highly suggestible individuals completed a random number generation task in a control condition, following a posthypnotic amnesia suggestion to forget previous numerical responses, and in a second control condition following the cancellation of the suggestion. High dissociative highly suggestible participants displayed a selective increase in repetitions during posthypnotic amnesia, with repetition frequency equivalent to that of a random system, whereas the other two groups exhibited repetition avoidance across conditions. Our results demonstrate that temporarily disrupting memory for previous numerical responses improves random number generation.
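A small sketch quantifying repetition avoidance as described above: count the fraction of immediate repeats in a digit sequence and compare it with the 1/10 expected from a truly random system.

```python
import numpy as np

rng = np.random.default_rng(5)

def repeat_fraction(seq):
    """Fraction of immediate repetitions in a digit sequence."""
    seq = np.asarray(seq)
    return float(np.mean(seq[1:] == seq[:-1]))

random_digits = rng.integers(0, 10, size=10_000)   # a "random system"
print(f"random system: {repeat_fraction(random_digits):.3f} (expect ~0.100)")

# A fully repetition-avoidant sequence, mimicking typical human output.
human = [int(rng.integers(0, 10))]
for _ in range(9_999):
    human.append(int(rng.choice([d for d in range(10) if d != human[-1]])))
print(f"repetition-avoidant: {repeat_fraction(human):.3f}")
```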
Ausín, Berta; Muñoz, Manuel; Martín, Teresa; Pérez-Santos, Eloísa; Castellanos, Miguel Ángel
2018-01-08
The UCLA Loneliness Scale-Revised (UCLA LS-R) is the most extensively used scale to assess loneliness. However, few studies examine the scale's use with older individuals. The goal of the study is to analyse the suitability of the scale's structure for assessing older individuals. The UCLA LS-R was administered to a random sample of 409 community-dwelling residents of Madrid (53% women) aged 65-84 years (obtained from the MentDis_ICF65+ study). Confirmatory factor analysis was used to assess the factor structure of the UCLA LS-R. The internal consistency of the scale obtained a Cronbach's alpha of .85. All the analysed models of the factor structure of the UCLA LS-R achieved a fairly good fit and RMSEA values over .80. The models that best fit the empirical data are those of Hojat (1982) and Borges et al. (2008). The data suggest an equivalent effectiveness of the UCLA LS-R in adults under 65 and over 65, which may indicate a similar structure of the loneliness construct in both populations. This outcome is consistent with the idea that loneliness has two dimensions: emotional loneliness and social loneliness. The use of short measures that are easy to apply and interpret should help primary care professionals identify loneliness problems in older individuals sooner and more accurately.
Finite-range Coulomb gas models of banded random matrices and quantum kicked rotors
NASA Astrophysics Data System (ADS)
Pandey, Akhilesh; Kumar, Avanish; Puri, Sanjay
2017-11-01
Dyson demonstrated an equivalence between infinite-range Coulomb gas models and classical random matrix ensembles for the study of eigenvalue statistics. We introduce finite-range Coulomb gas (FRCG) models via a Brownian matrix process, and study them analytically and by Monte Carlo simulations. These models yield new universality classes, and provide a theoretical framework for the study of banded random matrices (BRMs) and quantum kicked rotors (QKRs). We demonstrate that, for a BRM of bandwidth b and a QKR of chaos parameter α, the appropriate FRCG model has the effective range d = b²/N = α²/N, for large matrix dimensionality N. As d increases, there is a transition from Poisson to classical random matrix statistics.
Psychiatric Resident and Attending Diagnostic and Prescribing Practices
ERIC Educational Resources Information Center
Tripp, Adam C.; Schwartz, Thomas L.
2008-01-01
Objective: This study investigates whether two patient population groups, under resident or attending treatment, are equivalent or different in the distribution of patient characteristics, diagnoses, or pharmacotherapy. Methods: Demographic data, psychiatric diagnoses, and pharmacotherapy data were collected for 100 random patient charts of…
Kanematsu, Nobuyuki
2009-03-07
Dose calculation for radiotherapy with protons and heavier ions deals with a large volume of path integrals involving a scattering power of body tissue. This work provides a simple model for such demanding applications. There is an approximate linearity between RMS end-point displacement and range of incident particles in water, empirically found in measurements and detailed calculations. This fact was translated into a simple linear formula, from which a scattering power inversely proportional to the residual range was derived. The simplicity enabled an analytical formulation for ions stopping in water, which was designed to be equivalent to the extended Highland model and agreed with measurements within 2% or 0.02 cm in RMS displacement. The simplicity will also improve the efficiency of numerical path integrals in the presence of heterogeneity.
NASA Astrophysics Data System (ADS)
Ghysels, M.; Mondelain, D.; Kassi, S.; Nikitin, A. V.; Rey, M.; Campargue, A.
2018-07-01
The methane absorption spectrum is studied at 297 K and 80 K in the center of the Tetradecad between 5695 and 5850 cm⁻¹. The spectra are recorded by differential absorption spectroscopy (DAS) with a noise equivalent absorption of about α_min ≈ 1.5 × 10⁻⁷ cm⁻¹. Two empirical line lists are constructed, including about 4000 and 2300 lines at 297 K and 80 K, respectively. Lines due to ¹³CH₄ present in natural abundance were identified by comparison with a spectrum of pure ¹³CH₄ recorded under the same temperature conditions. About 1700 empirical values of the lower state energy level, E_emp, were derived from the ratios of the line intensities at 80 K and 296 K. They provide accurate temperature dependence for most of the absorption in the region (93% and 82% at 80 K and 296 K, respectively). The quality of the derived empirical values is illustrated by the clear propensity of the corresponding lower state rotational quantum number, J_emp, to be close to integer values. Using an effective Hamiltonian model derived from a previously published ab initio potential energy surface, about 2060 lines are rovibrationally assigned, adding about 1660 new assignments to those provided in the HITRAN database for ¹²CH₄ in the region.
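A hedged worked example of the two-temperature method for E_emp: invert the Boltzmann factor in the line-intensity ratio, neglecting stimulated emission and frequency factors. The partition-sum values are rough stand-ins and the intensity ratio is invented.

```python
import numpy as np

C2 = 1.4388  # second radiation constant hc/k, in cm*K

def lower_state_energy(ratio_80_296, Q80, Q296):
    """Invert S(T) ~ exp(-C2*E''/T)/Q(T) for E'' (cm^-1), given the
    measured 80 K / 296 K intensity ratio and the partition sums."""
    return -np.log(ratio_80_296 * Q80 / Q296) / (C2 * (1/80.0 - 1/296.0))

# Example: a line three times stronger at 80 K than at 296 K.
print(f"E'' = {lower_state_energy(3.0, Q80=82.0, Q296=590.0):.1f} cm^-1")
```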
Testing the strong equivalence principle with the triple pulsar PSR J0337+1715
NASA Astrophysics Data System (ADS)
Shao, Lijing
2016-04-01
Three conceptually different masses appear in the equations of motion for objects under gravity, namely, the inertial mass, m_I, the passive gravitational mass, m_P, and the active gravitational mass, m_A. It is assumed that, for any object, m_I = m_P = m_A in Newtonian gravity, and m_I = m_P in Einsteinian gravity, oblivious to the object's sophisticated internal structure. Empirical examination of the equivalence probes deep into gravity theories. We study the possibility of carrying out new tests based on pulsar timing of the stellar triple system, PSR J0337+1715. Various machine-precision three-body simulations are performed, from which the equivalence-violating parameters are extracted with Markov chain Monte Carlo sampling that takes full correlations into account. We show that the difference in masses could be probed to 3 × 10⁻⁸, improving the current constraints from lunar laser ranging on the post-Newtonian parameters that govern violations of m_P = m_I and m_A = m_P by thousands and millions, respectively. The test of m_P = m_A would represent the first test of Newton's third law with compact objects.
New Objective Refraction Metric Based on Sphere Fitting to the Wavefront
Martínez-Finkelshtein, Andreí
2017-01-01
Purpose: To develop an objective refraction formula based on the ocular wavefront error (WFE) expressed in terms of Zernike coefficients and pupil radius, which would be an accurate predictor of subjective spherical equivalent (SE) for different pupil sizes. Methods: A sphere is fitted to the ocular wavefront at the center and at a variable distance, t. The optimal fitting distance, t_opt, is obtained empirically from a dataset of 308 eyes as a function of objective refraction pupil radius, r_0, and used to define the formula of a new wavefront refraction metric (MTR). The metric is tested in another, independent dataset of 200 eyes. Results: For pupil radii r_0 ≤ 2 mm, the new metric predicts the equivalent sphere with similar accuracy (<0.1D); however, for r_0 > 2 mm, the mean error of traditional metrics can increase beyond 0.25D, and the MTR remains accurate. Conclusions: The proposed metric allows clinicians to obtain an accurate clinical spherical equivalent value without rescaling/refitting of the wavefront coefficients. It has the potential to be developed into a metric which will be able to predict full spherocylindrical refraction for the desired illumination conditions and corresponding pupil size. PMID:29104804
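For context, a sketch of the traditional paraxial spherical-equivalent metric that the MTR is compared against (not the MTR itself): SE from the OSA defocus coefficient, SE = -4*sqrt(3)*c20/r0², with c20 in microns and r0 in millimetres so that the unit factors cancel. Values are illustrative.

```python
import numpy as np

def paraxial_se(c20_um, r0_mm):
    """Spherical equivalent in diopters from the OSA defocus coefficient
    c20 (microns) and pupil radius r0 (mm); the unit prefactors cancel."""
    return -4.0 * np.sqrt(3.0) * c20_um / r0_mm**2

print(f"SE = {paraxial_se(c20_um=1.0, r0_mm=2.0):.2f} D")   # -1.73 D
```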
The Principle of General Tovariance
NASA Astrophysics Data System (ADS)
Heunen, C.; Landsman, N. P.; Spitters, B.
2008-06-01
We tentatively propose two guiding principles for the construction of theories of physics, which should be satisfied by a possible future theory of quantum gravity. These principles are inspired by those that led Einstein to his theory of general relativity, viz. his principle of general covariance and his equivalence principle, as well as by the two mysterious dogmas of Bohr's interpretation of quantum mechanics, i.e. his doctrine of classical concepts and his principle of complementarity. An appropriate mathematical language for combining these ideas is topos theory, a framework earlier proposed for physics by Isham and collaborators. Our principle of general tovariance states that any mathematical structure appearing in the laws of physics must be definable in an arbitrary topos (with natural numbers object) and must be preserved under so-called geometric morphisms. This principle identifies geometric logic as the mathematical language of physics and restricts the constructions and theorems to those valid in intuitionism: neither Aristotle's principle of the excluded third nor Zermelo's Axiom of Choice may be invoked. Subsequently, our equivalence principle states that any algebra of observables (initially defined in the topos Sets) is empirically equivalent to a commutative one in some other topos.
Critical Deposition Condition of CoNiCrAlY Cold Spray Based on Particle Deformation Behavior
NASA Astrophysics Data System (ADS)
Ichikawa, Yuji; Ogawa, Kazuhiro
2017-02-01
Previous research has demonstrated deposition of MCrAlY coating via the cold spray process; however, the deposition mechanism of cold spraying has not been clearly explained, only empirically described in terms of impact velocity. The purpose of this study was to elucidate the critical deposition condition. Microscale experimental measurements of individual particle deposit dimensions were combined with numerical simulation to investigate particle deformation behavior. Dimensional parameters were determined from scanning electron microscopy analysis of focused ion beam-fabricated cross sections of deposited particles to describe the deposition threshold. From Johnson-Cook finite element method simulation results, there is a direct correlation between the dimensional parameters and the impact velocity. Therefore, the critical velocity can describe the deposition threshold. Moreover, the maximum equivalent plastic strain is also strongly dependent on the impact velocity. Thus, the threshold condition required for particle deposition can instead be represented by the equivalent plastic strain of the particle and substrate. For particle-substrate combinations of similar materials, the substrate is more difficult to deform. Thus, this study establishes that the dominant factor in particle deposition in the cold spray process is the maximum equivalent plastic strain of the substrate, which occurs during impact and deformation.
Chopped random-basis quantum optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Caneva, Tommaso; Calarco, Tommaso; Montangero, Simone
2011-08-15
In this work, we describe in detail the chopped random basis (CRAB) optimal control technique recently introduced to optimize time-dependent density matrix renormalization group simulations [P. Doria, T. Calarco, and S. Montangero, Phys. Rev. Lett. 106, 190501 (2011)]. Here, we study the efficiency of this control technique in optimizing different quantum processes, and we show that in the considered cases we obtain results equivalent to those obtained via different optimal control methods while using fewer resources. We propose the CRAB optimization as a general and versatile optimal control technique.
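A toy CRAB-style sketch under assumptions of our own (a single qubit, not the DMRG setting of the paper): the control is expanded in a few Fourier components with randomized frequencies, and the expansion coefficients are optimized with a derivative-free search.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.linalg import expm

rng = np.random.default_rng(6)

T, n_t, n_c = 1.0, 100, 3
t = np.linspace(0.0, T, n_t)
# CRAB trick: randomize the basis frequencies around the harmonics.
omegas = 2.0 * np.pi * (np.arange(1, n_c + 1) + rng.uniform(-0.5, 0.5, n_c)) / T

sx = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)
psi0 = np.array([1.0, 0.0], dtype=complex)      # start in |0>
target = np.array([0.0, 1.0], dtype=complex)    # aim for |1>

def infidelity(coeffs):
    """1 - |<target|psi(T)>|^2 for control u(t) built from the CRAB basis."""
    a, b = coeffs[:n_c], coeffs[n_c:]
    u = (a[:, None] * np.sin(omegas[:, None] * t)
         + b[:, None] * np.cos(omegas[:, None] * t)).sum(axis=0)
    psi, dt = psi0, t[1] - t[0]
    for uk in u:                                # piecewise-constant propagation
        psi = expm(-1j * uk * sx * dt) @ psi
    return 1.0 - abs(target.conj() @ psi)**2

res = minimize(infidelity, x0=0.1 * rng.standard_normal(2 * n_c),
               method="Nelder-Mead", options={"maxiter": 2000})
print(f"final infidelity: {res.fun:.3e}")
```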
Magnet/Hall-Effect Random-Access Memory
NASA Technical Reports Server (NTRS)
Wu, Jiin-Chuan; Stadler, Henry L.; Katti, Romney R.
1991-01-01
In proposed magnet/Hall-effect random-access memory (MHRAM), bits of data stored magnetically in Permalloy (or equivalent) film memory elements and read out by using Hall-effect sensors to detect magnetization. Value of each bit represented by polarity of magnetization. Retains data for indefinite time or until data rewritten. Speed of Hall-effect sensors in MHRAM results in readout times of about 100 nanoseconds. Other characteristics include high immunity to ionizing radiation and storage densities of order 10⁶ bits/cm² or more.
De Oliveira, Gildasio S; Duncan, Kenyon; Fitzgerald, Paul; Nader, Antoun; Gould, Robert W; McCarthy, Robert J
2014-02-01
Few multimodal strategies to minimize postoperative pain and improve recovery have been examined in morbidly obese patients undergoing laparoscopic bariatric surgery. The main objective of this study was to evaluate the effect of systemic intraoperative lidocaine on postoperative quality of recovery when compared to saline. The study was a prospective, randomized, double-blinded, placebo-controlled clinical trial. Subjects undergoing laparoscopic bariatric surgery were randomized to receive lidocaine (1.5 mg/kg bolus followed by a 2 mg/kg/h infusion until the end of the surgical procedure) or the same volume of saline. The primary outcome was the Quality of Recovery 40 (QoR-40) questionnaire score at 24 h after surgery. Fifty-one subjects were recruited and 50 completed the study. The global QoR-40 scores at 24 h were greater in the lidocaine group, median (IQR) of 165 (151 to 170), compared to the saline group, median (IQR) of 146 (130 to 169), P = 0.01. Total 24 h opioid consumption was lower in the lidocaine group, median (IQR) of 26 (19 to 46) mg IV morphine equivalents, compared to the saline group, median (IQR) of 36 (24 to 65) mg IV morphine equivalents, P = 0.03. Linear regression demonstrated an inverse relationship between total 24 h opioid consumption (IV morphine equivalents) and 24 h postoperative quality of recovery (P < 0.0001). Systemic lidocaine improves postoperative quality of recovery in patients undergoing laparoscopic bariatric surgery. Patients who received lidocaine had a lower opioid consumption, which translated to a better quality of recovery.
Glickman, Marc; Gheissari, Ali; Money, Samuel; Martin, John; Ballard, Jeffrey L
2002-03-01
An experimental polymeric sealant (CoSeal [Cohesion Technologies, Palo Alto, Calif]) provides equivalent anastomotic sealing to Gelfoam (Upjohn, Kalamazoo, Mich)/thrombin during surgical placement of prosthetic vascular grafts. Randomized controlled trial. Nine university-affiliated medical centers. One hundred forty-eight patients scheduled for implantation of polytetrafluoroethylene grafts, mainly for infrainguinal revascularization procedures or the creation of dialysis access shunts, who were treated randomly with either an experimental intervention (n = 74) or control (n = 74). Following polytetrafluoroethylene graft placement, anastomotic suture hole bleeding was treated intraoperatively in all control subjects with Gelfoam/thrombin. Subjects in the experimental group had the polymeric sealant applied directly to the suture lines without concomitant manual compression. Primary treatment success was defined as the proportion of subjects in each group that achieved complete anastomotic sealing within 10 minutes. The proportion of subjects that achieved immediate sealing and the time required to fully inhibit suture hole bleeding also were compared between treatment groups. Overall 10-minute sealing success was equivalent (86% vs 80%; P = .29) between experimental and control subjects, respectively. However, subjects treated with CoSeal achieved immediate anastomotic sealing at more than twice the rate of subjects treated with Gelfoam/thrombin (47% vs 20%; P < .001). Consequently, the median time needed to inhibit bleeding was more than 10 times longer in control subjects than in experimental subjects (189.0 vs 16.5 seconds; P = .01). Strikingly similar findings for all comparisons were observed separately for subgroups of subjects having infrainguinal bypass grafting and for those undergoing placement of dialysis access shunts. The experimental sealant offers equivalent anastomotic sealing performance compared with Gelfoam/thrombin, but it provides this desired effect in a significantly more rapid time frame.
A new neural network model for solving random interval linear programming problems.
Arjmandzadeh, Ziba; Safi, Mohammadreza; Nazemi, Alireza
2017-05-01
This paper presents a neural network model for solving random interval linear programming problems. The original problem involving random interval variable coefficients is first transformed into an equivalent convex second-order cone programming problem. A neural network model is then constructed for solving the obtained convex second-order cone problem. Employing a Lyapunov function approach, it is also shown that the proposed neural network model is stable in the sense of Lyapunov and is globally convergent to an exact satisfactory solution of the original problem. Several illustrative examples are solved in support of this technique. Copyright © 2017 Elsevier Ltd. All rights reserved.
Model Checking with Multi-Threaded IC3 Portfolios
2015-01-15
different runs varies randomly depending on the thread interleaving. The use of a portfolio of solvers to maximize the likelihood of a quick solution is...empirically show (cf. Sec. 5.2) that the predictions based on this formula have high accuracy. Note that each solver in the portfolio potentially searches...speedup of over 300. We also show that widening the proof search of ic3 by randomizing its SAT solver is not as effective as parallelization
Bayes to the Rescue: Continuous Positive Airway Pressure Has Less Mortality Than High-Flow Oxygen.
Modesto I Alapont, Vicent; Khemani, Robinder G; Medina, Alberto; Del Villar Guerra, Pablo; Molina Cambra, Alfred
2017-02-01
The merits of high-flow nasal cannula oxygen versus bubble continuous positive airway pressure are debated in children with pneumonia, with suggestions that randomized controlled trials are needed. In light of a previous randomized controlled trial showing a trend for lower mortality with bubble continuous positive airway pressure, we sought to determine the probability that a new randomized controlled trial would find high-flow nasal cannula oxygen superior to bubble continuous positive airway pressure through a "robust" Bayesian analysis. Sample data were extracted from the trial by Chisti et al and, as requisite for a "robust" Bayesian analysis, we specified three prior distributions to represent clinically meaningful assumptions. These priors (reference, pessimistic, and optimistic) were used to generate three scenarios to represent the range of possible hypotheses. 1) "Reference": we believe bubble continuous positive airway pressure and high-flow nasal cannula oxygen are equally effective with the same uninformative reference priors; 2) "Sceptic on high-flow nasal cannula oxygen": we believe that bubble continuous positive airway pressure is better than high-flow nasal cannula oxygen (bubble continuous positive airway pressure has an optimistic prior and high-flow nasal cannula oxygen has a pessimistic prior); and 3) "Enthusiastic on high-flow nasal cannula oxygen": we believe that high-flow nasal cannula oxygen is better than bubble continuous positive airway pressure (high-flow nasal cannula oxygen has an optimistic prior and bubble continuous positive airway pressure has a pessimistic prior). Finally, posterior empiric Bayesian distributions were obtained through 100,000 Markov Chain Monte Carlo simulations. In all three scenarios, there was a high probability for more death from high-flow nasal cannula oxygen compared with bubble continuous positive airway pressure (reference, 0.98; sceptic on high-flow nasal cannula oxygen, 0.982; enthusiastic on high-flow nasal cannula oxygen, 0.742). The posterior 95% credible interval on the difference in mortality identified a future randomized controlled trial would be extremely unlikely to find a mortality benefit for high-flow nasal cannula oxygen over bubble continuous positive airway pressure, regardless of the scenario. Interpreting these findings using the "range of practical equivalence" framework would recommend rejecting the hypothesis that high-flow nasal cannula oxygen is superior to bubble continuous positive airway pressure for these children. For children younger than 5 years with pneumonia, high-flow nasal cannula oxygen has higher mortality than bubble continuous positive airway pressure. A future randomized controlled trial in this population is unlikely to find high-flow nasal cannula oxygen superior to bubble continuous positive airway pressure.
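A simplified re-creation of this style of analysis (a conjugate Beta-binomial model rather than the authors' MCMC setup; the event counts below are invented placeholders, not the Chisti et al. data):

```python
import numpy as np

rng = np.random.default_rng(7)

def posterior_draws(deaths, n, a_prior=1.0, b_prior=1.0, size=100_000):
    """Beta(a0 + deaths, b0 + survivors) posterior draws for a mortality rate."""
    return rng.beta(a_prior + deaths, b_prior + n - deaths, size)

p_hfnc = posterior_draws(deaths=11, n=79)   # placeholder counts
p_cpap = posterior_draws(deaths=3, n=79)    # placeholder counts

diff = p_hfnc - p_cpap
print(f"P(HFNC worse) = {np.mean(diff > 0):.3f}")
print("95% credible interval for the mortality difference:",
      np.percentile(diff, [2.5, 97.5]).round(3))
```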
A global empirical system for probabilistic seasonal climate prediction
NASA Astrophysics Data System (ADS)
Eden, J. M.; van Oldenborgh, G. J.; Hawkins, E.; Suckling, E. B.
2015-12-01
Preparing for episodes with risks of anomalous weather a month to a year ahead is an important challenge for governments, non-governmental organisations, and private companies and is dependent on the availability of reliable forecasts. The majority of operational seasonal forecasts are made using process-based dynamical models, which are complex, computationally challenging and prone to biases. Empirical forecast approaches built on statistical models to represent physical processes offer an alternative to dynamical systems and can provide either a benchmark for comparison or independent supplementary forecasts. Here, we present a simple empirical system based on multiple linear regression for producing probabilistic forecasts of seasonal surface air temperature and precipitation across the globe. The global CO2-equivalent concentration is taken as the primary predictor; subsequent predictors, including large-scale modes of variability in the climate system and local-scale information, are selected on the basis of their physical relationship with the predictand. The focus given to the climate change signal as a source of skill and the probabilistic nature of the forecasts produced constitute a novel approach to global empirical prediction. Hindcasts for the period 1961-2013 are validated against observations using deterministic (correlation of seasonal means) and probabilistic (continuous rank probability skill scores) metrics. Good skill is found in many regions, particularly for surface air temperature and most notably in much of Europe during the spring and summer seasons. For precipitation, skill is generally limited to regions with known El Niño-Southern Oscillation (ENSO) teleconnections. The system is used in a quasi-operational framework to generate empirical seasonal forecasts on a monthly basis.
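A toy sketch of the regression core described above, with the CO2-equivalent concentration as the leading predictor plus one circulation index; all data are synthetic, and the probabilistic forecast is a Gaussian around the regression mean.

```python
import numpy as np

rng = np.random.default_rng(8)

years = np.arange(1961, 2014)
co2eq = 330.0 + 1.8 * (years - 1961)                  # ppm, schematic trend
enso = rng.standard_normal(years.size)                # stand-in ENSO index
temp = (0.02 * (co2eq - co2eq.mean()) + 0.3 * enso
        + 0.2 * rng.standard_normal(years.size))      # synthetic predictand

X = np.column_stack([np.ones_like(co2eq), co2eq, enso])
beta, *_ = np.linalg.lstsq(X, temp, rcond=None)
resid_sd = np.std(temp - X @ beta, ddof=X.shape[1])

# Probabilistic hindcast/forecast: Gaussian around the regression mean.
x_new = np.array([1.0, co2eq[-1] + 1.8, 0.5])
print(f"forecast mean {x_new @ beta:.2f}, spread {resid_sd:.2f}")
```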
Random walks exhibiting anomalous diffusion: elephants, urns and the limits of normality
NASA Astrophysics Data System (ADS)
Kearney, Michael J.; Martin, Richard J.
2018-01-01
A random walk model is presented which exhibits a transition from standard to anomalous diffusion as a parameter is varied. The model is a variant on the elephant random walk and differs in respect of the treatment of the initial state, which in the present work consists of a given number N of fixed steps. This also links the elephant random walk to other types of history dependent random walk. As well as being amenable to direct analysis, the model is shown to be asymptotically equivalent to a non-linear urn process. This provides fresh insights into the limiting form of the distribution of the walker’s position at large times. Although the distribution is intrinsically non-Gaussian in the anomalous diffusion regime, it gradually reverts to normal form when N is large under quite general conditions.
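A simulation sketch of the model as we read it: an elephant random walk whose initial state consists of a given number N of fixed steps; the parameter values are illustrative (p > 3/4 is the superdiffusive, anomalous regime of the standard walk).

```python
import numpy as np

rng = np.random.default_rng(9)

def erw_position(n_steps, p, N_fixed):
    """Elephant random walk with N_fixed initial +1 steps: each new step
    repeats a uniformly sampled past step with probability p."""
    steps = [1] * N_fixed
    for _ in range(n_steps):
        past = steps[rng.integers(len(steps))]
        steps.append(past if rng.random() < p else -past)
    return sum(steps)

finals = [erw_position(2_000, p=0.85, N_fixed=10) for _ in range(200)]
print(np.mean(finals), np.std(finals))
```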
NASA Technical Reports Server (NTRS)
Castles, Walter, Jr.; Gray, Robin B.
1951-01-01
The empirical relation between the induced velocity, thrust, and rate of vertical descent of a helicopter rotor was calculated from wind tunnel force tests on four model rotors by the application of blade-element theory to the measured values of the thrust, torque, blade angle, and equivalent free-stream rate of descent. The model tests covered the useful range of C(sub t)/sigma(sub e) (where C(sub t) is the thrust coefficient and sigma(sub e) is the effective solidity) and the range of vertical descent from hovering to descent velocities slightly greater than those for autorotation. The three bladed models, each of which had an effective solidity of 0.05 and NACA 0015 blade airfoil sections, were as follows: (1) constant-chord, untwisted blades of 3-ft radius; (2) untwisted blades of 3-ft radius having a 3/1 taper; (3) constant-chord blades of 3-ft radius having a linear twist of 12 degrees (washout) from axis of rotation to tip; and (4) constant-chord, untwisted blades of 2-ft radius. Because of the incorporation of a correction for blade dynamic twist and the use of a method of measuring the approximate equivalent free-stream velocity, it is believed that the data obtained from this program are more applicable to free-flight calculations than the data from previous model tests.
How random is a random vector?
NASA Astrophysics Data System (ADS)
Eliazar, Iddo
2015-12-01
Over 80 years ago Samuel Wilks proposed that the "generalized variance" of a random vector is the determinant of its covariance matrix. To date, the notion and use of the generalized variance is confined only to very specific niches in statistics. In this paper we establish that the "Wilks standard deviation", the square root of the generalized variance, is indeed the standard deviation of a random vector. We further establish that the "uncorrelation index", a derivative of the Wilks standard deviation, is a measure of the overall correlation between the components of a random vector. Both the Wilks standard deviation and the uncorrelation index are, respectively, special cases of two general notions that we introduce: "randomness measures" and "independence indices" of random vectors. In turn, these general notions give rise to "randomness diagrams": tangible planar visualizations that answer the question: How random is a random vector? The notion of "independence indices" yields a novel measure of correlation for Lévy laws. In general, the concepts and results presented in this paper are applicable to any field of science and engineering with random-vectors empirical data.
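A small numerical illustration of the quantities involved, assuming a simple stand-in for the uncorrelation index (the determinant of the correlation matrix; the paper's exact normalization may differ):

```python
import numpy as np

rng = np.random.default_rng(2)

# Random-vector sample: three correlated components.
n = 5000
z = rng.standard_normal((n, 3))
x = z @ np.array([[1.0, 0.6, 0.0],
                  [0.0, 1.0, 0.3],
                  [0.0, 0.0, 1.0]])

cov = np.cov(x, rowvar=False)
gen_var = np.linalg.det(cov)             # Wilks' generalized variance
wilks_sd = np.sqrt(gen_var)              # "Wilks standard deviation"

# The determinant of the correlation matrix is 1 for uncorrelated components
# and tends to 0 as they become collinear; we use it here as a simple proxy
# for the paper's uncorrelation index.
corr = np.corrcoef(x, rowvar=False)
print(f"Wilks SD = {wilks_sd:.3f}, uncorrelation proxy = {np.linalg.det(corr):.3f}")
```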
Double row equivalent for rotator cuff repair: A biomechanical analysis of a new technique.
Robinson, Sean; Krigbaum, Henry; Kramer, Jon; Purviance, Connor; Parrish, Robin; Donahue, Joseph
2018-06-01
There are numerous configurations of double-row fixation for rotator cuff tears; however, there is still no consensus on the best method. In this study, we evaluated three different double-row configurations, including a new method. Our primary question was whether the new anchor and technique compares in biomechanical strength to standard double-row techniques. Eighteen prepared fresh-frozen bovine infraspinatus tendons were randomized to one of three groups: the new double-row equivalent, the Arthrex Speedbridge, and a transosseous equivalent using standard Stabilynx anchors. Biomechanical testing was performed on sawbone humeri, and ultimate load, strain, yield strength, contact area, contact pressure, and survival plots were evaluated. The new double-row equivalent method demonstrated increased survival as well as greater ultimate strength, at 415 N, compared with the remaining test groups, along with contact area and pressure equivalent to standard double-row techniques. This new anchor system and technique demonstrated higher survival rates and loads to failure than standard double-row techniques. These data provide a new method of rotator cuff fixation that should be further evaluated in the clinical setting. Basic science biomechanical study.
Absorption and scattering by fractal aggregates and by their equivalent coated spheres
NASA Astrophysics Data System (ADS)
Kandilian, Razmig; Heng, Ri-Liang; Pilon, Laurent
2015-01-01
This paper demonstrates that the absorption and scattering cross-sections and the asymmetry factor of randomly oriented fractal aggregates of spherical monomers can be rapidly estimated as those of coated spheres with equivalent volume and average projected area. This was established for fractal aggregates with fractal dimension ranging from 2.0 to 3.0 and composed of up to 1000 monodisperse or polydisperse monomers with a wide range of size parameter and relative complex index of refraction. This equivalent coated sphere approximation was able to capture the effects of both multiple scattering and shading among constituent monomers on the integral radiation characteristics of the aggregates. It was shown to be superior to the Rayleigh-Debye-Gans approximation and to the equivalent coated sphere approximation proposed by Latimer. However, the scattering matrix element ratios of equivalent coated spheres featured large angular oscillations caused by internal reflection in the coating which were not observed in those of the corresponding fractal aggregates. Finally, the scattering phase function and the scattering matrix elements of aggregates with large monomer size parameter were found to have unique features that could be used in remote sensing applications.
Computer-Based Linguistic Analysis.
ERIC Educational Resources Information Center
Wright, James R.
Noam Chomsky's transformational-generative grammar model may effectively be translated into an equivalent computer model. Phrase-structure rules and transformations are tested as to their validity and ordering by the computer via the process of random lexical substitution. Errors appearing in the grammar are detected and rectified, and formal…
Estimation of hysteretic damping of structures by stochastic subspace identification
NASA Astrophysics Data System (ADS)
Bajrić, Anela; Høgsberg, Jan
2018-05-01
Output-only system identification techniques can estimate modal parameters of structures represented by linear time-invariant systems. However, the extension of these techniques to structures exhibiting non-linear behavior has not received much attention. This paper presents an output-only system identification method suitable for the random response of dynamic systems with hysteretic damping. The method applies the concept of Stochastic Subspace Identification (SSI) to estimate the model parameters of a dynamic system with hysteretic damping. The restoring force is represented by the Bouc-Wen model, for which an equivalent linear relaxation model is derived. Hysteretic properties can be encountered in engineering structures exposed to severe cyclic environmental loads, as well as in vibration mitigation devices, such as Magneto-Rheological (MR) dampers. The identification technique incorporates the equivalent linear damper model in the estimation procedure. Synthetic data, representing the random vibrations of systems with hysteresis, validate the system parameters estimated by the presented identification method at low and high levels of excitation amplitude.
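For readers unfamiliar with the Bouc-Wen model, the sketch below simulates a single-degree-of-freedom Bouc-Wen oscillator under white-noise forcing; all parameter values are illustrative, and the identification step itself (the paper's SSI variant) is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(3)

# Bouc-Wen SDOF oscillator driven by white noise (explicit Euler integration).
m, c, k, alpha = 1.0, 0.05, 1.0, 0.5         # mass, damping, stiffness, rigidity ratio
A, beta, gamma, n_exp = 1.0, 0.5, 0.5, 1.0   # Bouc-Wen hysteresis parameters
dt, T = 1e-3, 200.0
steps = int(T / dt)

x = v = z = 0.0
xs = np.empty(steps)
for i in range(steps):
    f = rng.standard_normal() / np.sqrt(dt)  # discretized white-noise forcing
    # Bouc-Wen evolution: zdot = A*v - beta*|v||z|^(n-1) z - gamma*v*|z|^n
    zdot = A * v - beta * abs(v) * abs(z) ** (n_exp - 1) * z - gamma * v * abs(z) ** n_exp
    # restoring force: elastic part alpha*k*x plus hysteretic part (1-alpha)*k*z
    a = (f - c * v - alpha * k * x - (1 - alpha) * k * z) / m
    x, v, z = x + v * dt, v + a * dt, z + zdot * dt
    xs[i] = x

print(f"rms displacement response: {xs.std():.3f}")
```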
The distribution of genetic variance across phenotypic space and the response to selection.
Blows, Mark W; McGuigan, Katrina
2015-05-01
The role of adaptation in biological invasions will depend on the availability of genetic variation for traits under selection in the new environment. Although genetic variation is present for most traits in most populations, selection is expected to act on combinations of traits, not individual traits in isolation. The distribution of genetic variance across trait combinations can be characterized by the empirical spectral distribution of the genetic variance-covariance (G) matrix. Empirical spectral distributions of G from a range of trait types and taxa all exhibit a characteristic shape; some trait combinations have large levels of genetic variance, while others have very little genetic variance. In this study, we review what is known about the empirical spectral distribution of G and show how it predicts the response to selection across phenotypic space. In particular, trait combinations that form a nearly null genetic subspace with little genetic variance respond only inconsistently to selection. We go on to set out a framework for understanding how the empirical spectral distribution of G may differ from the random expectations that have been developed under random matrix theory (RMT). Using a data set containing a large number of gene expression traits, we illustrate how hypotheses concerning the distribution of multivariate genetic variance can be tested using RMT methods. We suggest that the relative alignment between novel selection pressures during invasion and the nearly null genetic subspace is likely to be an important component of the success or failure of invasion, and for the likelihood of rapid adaptation in small populations in general. © 2014 John Wiley & Sons Ltd.
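A minimal sketch of an empirical spectral distribution: eigendecompose a (here synthetic) G matrix and read off how genetic variance concentrates in a few trait combinations while a nearly null subspace holds almost none:

```python
import numpy as np

rng = np.random.default_rng(4)

# Illustrative G matrix for 8 traits: strong genetic variance concentrated in
# a few trait combinations, mimicking the characteristic empirical shape.
scales = np.array([2.0, 1.0, 0.5, 0.2, 0.1, 0.05, 0.02, 0.01])
L = rng.standard_normal((8, 8)) * scales
G = L @ L.T

# Empirical spectral distribution: the eigenvalues of G. Eigenvectors with
# tiny eigenvalues span the "nearly null" subspace in which responses to
# selection are weak and inconsistent.
evals = np.linalg.eigvalsh(G)[::-1]
print("eigenvalue spectrum:", np.round(evals, 3))
print("fraction of variance in top 2 axes:", round(evals[:2].sum() / evals.sum(), 3))
```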
Perceptions of randomized security schedules.
Scurich, Nicholas; John, Richard S
2014-04-01
Security of infrastructure is a major concern. Traditional security schedules are unable to provide omnipresent coverage; consequently, adversaries can exploit predictable vulnerabilities to their advantage. Randomized security schedules, which randomly deploy security measures, overcome these limitations, but public perceptions of such schedules have not been examined. In this experiment, participants were asked to make a choice between attending a venue that employed a traditional (i.e., search everyone) or a random (i.e., a probability of being searched) security schedule. The absolute probability of detecting contraband was manipulated (i.e., 1/10, 1/4, 1/2) but equivalent between the two schedule types. In general, participants were indifferent to either security schedule, regardless of the probability of detection. The randomized schedule was deemed more convenient, but the traditional schedule was considered fairer and safer. There were no differences between the traditional and random schedules in terms of perceived effectiveness or deterrence. Policy implications for the implementation and utilization of randomized schedules are discussed. © 2013 Society for Risk Analysis.
A generalization of random matrix theory and its application to statistical physics.
Wang, Duan; Zhang, Xin; Horvatic, Davor; Podobnik, Boris; Eugene Stanley, H
2017-02-01
To study the statistical structure of cross-correlations in empirical data, we generalize random matrix theory and propose a new method of cross-correlation analysis, known as autoregressive random matrix theory (ARRMT). ARRMT takes into account the influence of auto-correlations in the study of cross-correlations in multiple time series. We first analytically and numerically determine how auto-correlations affect the eigenvalue distribution of the correlation matrix. Then we introduce ARRMT with a detailed procedure of how to implement the method. Finally, we illustrate the method using two examples taken from inflation rates and air pressure data for 95 US cities.
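The following sketch illustrates the problem ARRMT addresses rather than the method itself: auto-correlated but cross-sectionally independent series inflate the eigenvalue spread of the sample correlation matrix beyond the iid Marchenko-Pastur edge:

```python
import numpy as np

rng = np.random.default_rng(5)

def max_eig(series):
    """Largest eigenvalue of the sample cross-correlation matrix."""
    z = (series - series.mean(0)) / series.std(0)
    return np.linalg.eigvalsh(z.T @ z / z.shape[0]).max()

N, T, phi = 50, 500, 0.7
mp_edge = (1 + np.sqrt(N / T)) ** 2      # Marchenko-Pastur upper edge, iid data

iid = rng.standard_normal((T, N))
ar1 = np.empty((T, N))                   # independent AR(1) series
ar1[0] = rng.standard_normal(N)
for t in range(1, T):
    ar1[t] = phi * ar1[t - 1] + rng.standard_normal(N)

print(f"MP upper edge:         {mp_edge:.2f}")
print(f"max eigenvalue, iid:   {max_eig(iid):.2f}")
print(f"max eigenvalue, AR(1): {max_eig(ar1):.2f}  (inflated by auto-correlation)")
```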
Tavakkolizadeh, Moein; Love‐Jones, Sarah; Patel, Nikunj K.; Gu, Jianwen Wendy; Bains, Amarpreet; Doan, Que; Moffitt, Michael
2017-01-01
Objective The PROCO RCT is a multicenter, double‐blind, crossover, randomized controlled trial (RCT) that investigated the effects of rate on analgesia in kilohertz frequency (1–10 kHz) spinal cord stimulation (SCS). Materials and Methods Patients were implanted with SCS systems and underwent an eight‐week search to identify the best location (“sweet spot”) of stimulation at 10 kHz within the searched region (T8–T11). An electronic diary (e‐diary) prompted patients for pain scores three times per day. Patients who responded to 10 kHz per e‐diary numeric rating scale (ED‐NRS) pain scores proceeded to double‐blind rate randomization. Patients received 1, 4, 7, and 10 kHz SCS at the same sweet spot found for 10 kHz in randomized order (four weeks at each frequency). For each frequency, pulse width and amplitude were titrated to optimize therapy. Results All frequencies provided equivalent pain relief as measured by ED‐NRS (p ≤ 0.002). However, mean charge per second differed across frequencies, with 1 kHz SCS requiring 60–70% less charge than higher frequencies (p ≤ 0.0002). Conclusions The PROCO RCT provides Level I evidence for equivalent pain relief from 1 to 10 kHz with appropriate titration of pulse width and amplitude. 1 kHz required significantly less charge than higher frequencies. PMID:29220121
Structure of a randomly grown 2-d network.
Ajazi, Fioralba; Napolitano, George M; Turova, Tatyana; Zaurbek, Izbassar
2015-10-01
We introduce a growing random network on a plane as a model of a growing neuronal network. The properties of the structure of the induced graph are derived. We compare our results with available data. In particular, it is shown that, depending on the parameters of the model, the structure of the system passes through different phases over time. We conclude with a possible explanation of some empirical data on the connections between neurons. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Harris, David B.; Gibbons, Steven J.; Rodgers, Arthur J.; ...
2012-05-01
In this approach, small scale-length medium perturbations not modeled in the tomographic inversion might be described as random fields, characterized by particular distribution functions (e.g., normal with specified spatial covariance). Conceivably, random field parameters (scatterer density or scale length) might themselves be the targets of tomographic inversions of the scattered wave field. As a result, such augmented models may provide processing gain through the use of probabilistic signal subspaces rather than deterministic waveforms.
NASA Astrophysics Data System (ADS)
Xu, M., III; Liu, X.
2017-12-01
In the past 60 years, both the runoff and sediment load in the Yellow River Basin showed significant decreasing trends owing to the influences of human activities and climate change. Quantifying the impact of each factor (e.g., precipitation, sediment trapping dams, pasture, terraces) on the runoff and sediment load is among the key issues in guiding the implementation of water and soil conservation measures and in predicting future trends. Hundreds of methods have been developed for studying the runoff and sediment load in the Yellow River Basin. Generally, these methods can be classified into empirical methods and physically based models. The empirical methods, including the hydrological method and the soil and water conservation method, are widely used in Yellow River management engineering. These methods generally apply statistical analyses, such as regression analysis, to build empirical relationships between the main characteristic variables in a river basin. The elasticity method, used extensively in hydrological research, can also be classified as an empirical method, since it can be shown mathematically to be equivalent to the hydrological method. Physically based models mainly include conceptual models and distributed models. The conceptual models are usually lumped models (e.g., the SYMHD model) and can be regarded as transitional between empirical and distributed models. The literature shows that fewer studies have applied distributed models than empirical models, as the runoff and sediment loads simulated by distributed models (e.g., the Digital Yellow Integrated Model and the Geomorphology-Based Hydrological Model) were usually unsatisfactory owing to the intensive human activities in the Yellow River Basin. Therefore, this study primarily summarizes the empirical models applied in the Yellow River Basin and theoretically analyzes the main causes of the significantly different results obtained with different empirical methods. We also put forward an assessment framework for methods of studying runoff and sediment load variations in the Yellow River Basin, covering input data, model structure, and result output, and apply the framework to the Huangfuchuan River.
Greenberg, Rachel G; Benjamin, Daniel K; Gantz, Marie G; Cotten, C Michael; Stoll, Barbara J; Walsh, Michele C; Sánchez, Pablo J; Shankaran, Seetha; Das, Abhik; Higgins, Rosemary D; Miller, Nancy A; Auten, Kathy J; Walsh, Thomas J; Laptook, Abbot R; Carlo, Waldemar A; Kennedy, Kathleen A; Finer, Neil N; Duara, Shahnaz; Schibler, Kurt; Ehrenkranz, Richard A; Van Meurs, Krisa P; Frantz, Ivan D; Phelps, Dale L; Poindexter, Brenda B; Bell, Edward F; O'Shea, T Michael; Watterberg, Kristi L; Goldberg, Ronald N; Smith, P Brian
2012-08-01
To assess the impact of empiric antifungal therapy for invasive candidiasis on subsequent outcomes in premature infants. This was a cohort study of infants with a birth weight ≤ 1000 g receiving care at Neonatal Research Network sites. All infants had at least one positive culture for Candida. Empiric antifungal therapy was defined as receipt of a systemic antifungal on the day of or the day before the first positive culture for Candida was drawn. We created Cox proportional hazards and logistic regression models stratified on propensity score quartiles to determine the effect of empiric antifungal therapy on survival, time to clearance of infection, retinopathy of prematurity, bronchopulmonary dysplasia, end-organ damage, and neurodevelopmental impairment (NDI). A total of 136 infants developed invasive candidiasis. The incidence of death or NDI was lower in infants who received empiric antifungal therapy (19 of 38; 50%) compared with those who had not (55 of 86; 64%; OR, 0.27; 95% CI, 0.08-0.86). There was no significant difference between the groups for any single outcome or other combined outcomes. Empiric antifungal therapy was associated with increased survival without NDI. A prospective randomized trial of this strategy is warranted. Copyright © 2012 Mosby, Inc. All rights reserved.
Examining the impact of cell phone conversations on driving using meta-analytic techniques
DOT National Transportation Integrated Search
2006-01-01
Synopsis Younger and older drivers conversing on a hands-free cell phone were found to have slower responses to random braking by the vehicle ahead. Cell phone use slowed the younger drivers' responses to an extent that they were equivalent t...
Profiles in driver distraction : effects of cell phone conversations on younger and older drivers
DOT National Transportation Integrated Search
2004-01-01
Synopsis Younger and older drivers conversing on a hands-free cell phone were found to have slower responses to random braking by the vehicle ahead. Cell phone use slowed the younger drivers' responses to an extent that they were equivalent t...
The ED95 of Nalbuphine in Outpatient-Induced Abortion Compared to Equivalent Sufentanil.
Chen, Limei; Zhou, Yamei; Cai, Yaoyao; Bao, Nana; Xu, Xuzhong; Shi, Beibei
2018-04-07
This prospective study evaluated the 95% effective dose (ED95) of nalbuphine in inhibiting body movement during outpatient-induced abortion and its clinical efficacy versus an equivalent dose of sufentanil. The study was divided into two parts. In the first part, voluntary first-trimester patients who needed induced abortions were recruited to measure the ED95 of nalbuphine in inhibiting body movement during induced abortion using the sequential method (the Dixon up-and-down method). The second part was a double-blind, randomized study. Sixty first-trimester patients were recruited and randomly divided into two groups (n = 30): group N (nalbuphine at the ED95 dose) and group S (sufentanil at an equivalent dose). Propofol was given to both groups as the sedative. The circulation, respiration and body movement of the two groups during surgery were observed. The amount of propofol, the awakening time, the time to leave the hospital and the analgesic effect were recorded. The ED95 of nalbuphine in inhibiting body movement during painless surgical abortion was 0.128 mg/kg (95% confidence interval 0.098-0.483 mg/kg). Both nalbuphine and the equivalent dose of sufentanil provided a good intraoperative and post-operative analgesic effect in outpatient-induced abortion. However, the post-operative incidence of dizziness was lower with nalbuphine than with sufentanil (p < 0.05), and the awakening time and the time to leave the hospital were significantly shorter than those with sufentanil (p < 0.05). Nalbuphine at 0.128 mg/kg, used as an intraoperative and post-operative analgesic in outpatient-induced abortion, showed a better effect than the equivalent dose of sufentanil. © 2018 Nordic Association for the Publication of BCPT (former Nordic Pharmacological Society).
Kleinman, L; Leidy, N K; Crawley, J; Bonomi, A; Schoenfeld, P
2001-02-01
Although most health-related quality of life questionnaires are self-administered by means of paper and pencil, new technologies for automated computer administration are becoming more readily available. Novel methods of instrument administration must be assessed for score equivalence in addition to consistency in reliability and validity. The present study compared the psychometric characteristics (score equivalence and structure, internal consistency, and reproducibility reliability and construct validity) of the Quality of Life in Reflux And Dyspepsia (QOLRAD) questionnaire when self-administered by means of paper and pencil versus touch-screen computer. The influence of age, education, and prior experience with computers on score equivalence was also examined. This crossover trial randomized 134 patients with gastroesophageal reflux disease to 1 of 2 groups: paper-and-pencil questionnaire administration followed by computer administration or computer administration followed by use of paper and pencil. To minimize learning effects and respondent fatigue, administrations were scheduled 3 days apart. A random sample of 32 patients participated in a 1-week reproducibility evaluation of the computer-administered QOLRAD. QOLRAD scores were equivalent across the 2 methods of administration regardless of subject age, education, and prior computer use. Internal consistency levels were very high (alpha = 0.93-0.99). Interscale correlations were strong and generally consistent across methods (r = 0.7-0.87). Correlations between the QOLRAD and Short Form 36 (SF-36) were high, with no significant differences by method. Test-retest reliability of the computer-administered QOLRAD was also very high (ICC = 0.93-0.96). Results of the present study suggest that the QOLRAD is reliable and valid when self-administered by means of computer touch-screen or paper and pencil.
Hanes, Vladimir; Chow, Vincent; Zhang, Nan; Markus, Richard
2017-05-01
This study compared the pharmacokinetic (PK) profiles of the proposed biosimilar ABP 980 and trastuzumab in healthy males. In this single-blind study, 157 healthy males were randomized 1:1:1 to a single 6 mg/kg intravenous infusion of ABP 980, FDA-licensed trastuzumab [trastuzumab (US)], or EU-authorized trastuzumab [trastuzumab (EU)]. Primary endpoints were area under the serum concentration-time curve from time 0 to infinity (AUCinf) and maximum observed serum concentration (Cmax). To establish equivalence, the geometric mean ratio (GMR) and 90% confidence interval (CI) for Cmax and AUCinf had to be within the equivalence criteria of 0.80-1.25. The GMRs and 90% CIs for Cmax and AUCinf, respectively, were: 1.04 (0.99-1.08) and 1.06 (1.00-1.12) for ABP 980 versus trastuzumab (US); 0.99 (0.95-1.03) and 1.00 (0.95-1.06) for ABP 980 versus trastuzumab (EU); and 0.96 (0.92-1.00) and 0.95 (0.90-1.01) for trastuzumab (US) versus trastuzumab (EU). All comparisons were within the equivalence criteria of 0.80-1.25. Treatment-emergent adverse events (TEAEs) were reported in 84.0%, 75.0%, and 78.2% of subjects in the ABP 980, trastuzumab (US), and trastuzumab (EU) groups, respectively. There were no deaths or TEAEs leading to study discontinuation, and no binding or neutralizing anti-drug antibodies were detected. This study demonstrated the PK similarity of ABP 980 to both trastuzumab (US) and trastuzumab (EU), and of trastuzumab (US) to trastuzumab (EU). No differences in safety and tolerability between treatments were noted; no subject tested positive for binding antibodies.
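A minimal sketch of the equivalence calculation on hypothetical log-normal AUC data (not the trial's data; the study's actual statistical model, e.g. an ANOVA on log-transformed parameters, may differ):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)

# Hypothetical log-normal AUC values for two parallel arms.
auc_test = rng.lognormal(mean=8.00, sigma=0.25, size=52)
auc_ref = rng.lognormal(mean=8.02, sigma=0.25, size=52)

# Equivalence is assessed on the log scale: the GMR with a 90% CI must lie
# entirely within 0.80-1.25.
lt, lr = np.log(auc_test), np.log(auc_ref)
diff = lt.mean() - lr.mean()
se = np.sqrt(lt.var(ddof=1) / lt.size + lr.var(ddof=1) / lr.size)
tcrit = stats.t.ppf(0.95, lt.size + lr.size - 2)   # two-sided 90% CI
gmr = np.exp(diff)
lo, hi = np.exp(diff - tcrit * se), np.exp(diff + tcrit * se)
print(f"GMR = {gmr:.3f}, 90% CI = ({lo:.3f}, {hi:.3f}), "
      f"equivalent = {0.80 <= lo and hi <= 1.25}")
```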
Costello, John M; Dunbar-Masterson, Carolyn; Allan, Catherine K; Gauvreau, Kimberlee; Newburger, Jane W; McGowan, Francis X; Wessel, David L; Mayer, John E; Salvin, Joshua W; Dionne, Roger E; Laussen, Peter C
2014-07-01
We sought to determine whether empirical nesiritide or milrinone would improve the early postoperative course after Fontan surgery. We hypothesized that compared with milrinone or placebo, patients assigned to receive nesiritide would have improved early postoperative outcomes. In a single-center, randomized, double-blinded, placebo-controlled, multi-arm parallel-group clinical trial, patients undergoing primary Fontan surgery were assigned to receive nesiritide, milrinone, or placebo. A loading dose of study drug was administered on cardiopulmonary bypass followed by a continuous infusion for ≥12 hours and ≤5 days after cardiac intensive care unit admission. The primary outcome was days alive and out of the hospital within 30 days of surgery. Secondary outcomes included measures of cardiovascular function, renal function, resource use, and adverse events. Among 106 enrolled subjects, 35, 36, and 35 were randomized to the nesiritide, milrinone, and placebo groups, respectively, and all were analyzed based on intention to treat. Demographics, patient characteristics, and operative factors were similar among treatment groups. No significant treatment group differences were found for median days alive and out of the hospital within 30 days of surgery (nesiritide, 20 [minimum to maximum, 0-24]; milrinone, 18 [0-23]; placebo, 20 [0-23]; P=0.38). Treatment groups did not significantly differ in cardiac index, arrhythmias, peak lactate, inotropic scores, urine output, duration of mechanical ventilation, intensive care or chest tube drainage, or adverse events. Compared with placebo, empirical perioperative nesiritide or milrinone infusions are not associated with improved early clinical outcomes after Fontan surgery. http://www.clinicaltrials.gov. Unique identifier: NCT00543309. © 2014 American Heart Association, Inc.
Weinberg, Igor; Ronningstam, Elsa; Goldblatt, Mark J; Schechter, Mark; Wheelis, Joan; Maltsberger, John T
2010-06-01
Many reports of treatments for suicidal patients claim effectiveness in reducing suicidal behavior but fail to demonstrate which treatment interventions, or combinations thereof, diminish suicidality. In this study, treatment manuals for empirically supported psychological treatments for suicidal patients were examined to identify which interventions they had in common and which interventions were treatment-specific. Empirically supported treatments for suicidality were identified through a literature search of PsychLit and MEDLINE for the years 1970-2007, employing the following search strategy: [suicide OR parasuicide] AND [therapy OR psychotherapy OR treatment] AND [random OR randomized]. After identifying the reports on randomized controlled studies that tested effectiveness of different treatments, the reference list of each report was searched for further studies. Only reports published in English were included. To ensure that rated manuals actually correspond to the delivered and tested treatments, we included only treatment interventions with explicit adherence rating and scoring and with adequate adherence ratings in the published studies. Five manualized treatments demonstrating efficacy in reducing suicide risk were identified and were independently evaluated by raters using a list of treatment interventions. The common interventions included a clear treatment framework; a defined strategy for managing suicide crises; close attention to affect; an active, participatory therapist style; and use of exploratory and change-oriented interventions. Some treatments encouraged a multimodal approach and identification of suicidality as an explicit target behavior, and some concentrated on the patient-therapist relationship. Emphasis on interpretation and supportive interventions varied. Not all methods encouraged systematic support for therapists. This study identified candidate interventions for possible effectiveness in reducing suicidality. These interventions seem to address central characteristics of suicidal patients. Further studies are needed to confirm which interventions and which combinations thereof are most effective. 2010 Physicians Postgraduate Press, Inc.
Essays on pricing electricity and electricity derivatives in deregulated markets
NASA Astrophysics Data System (ADS)
Popova, Julia
2008-10-01
This dissertation is composed of four essays on the behavior of wholesale electricity prices and their derivatives. The first essay provides an empirical model that takes into account the spatial features of a transmission network on the electricity market. The spatial structure of the transmission grid plays a key role in determining electricity prices, but it has not been incorporated into previous empirical models. The econometric model in this essay incorporates a simple representation of the transmission system into a spatial panel data model of electricity prices, and also accounts for the effect of dynamic transmission system constraints on electricity market integration. Empirical results using PJM data confirm the existence of spatial patterns in electricity prices and show that spatial correlation diminishes as transmission lines become more congested. The second essay develops and empirically tests a model of the influence of natural gas storage inventories on the electricity forward premium. I link a model of the effect of gas storage constraints on the higher moments of the distribution of electricity prices to a model of the effect of those moments on the forward premium. Empirical results using PJM data support the model's predictions that gas storage inventories sharply reduce the electricity forward premium when demand for electricity is high and space-heating demand for gas is low. The third essay examines the efficiency of PJM electricity markets. A market is efficient if prices reflect all relevant information, so that prices follow a random walk. The random walk hypothesis is examined using empirical tests, including the Portmanteau, Augmented Dickey-Fuller, KPSS, and multiple variance ratio tests. The results are mixed, though evidence of some level of market efficiency is found. The last essay investigates the possibility that previous researchers have drawn spurious conclusions based on classical unit root tests incorrectly applied to wholesale electricity prices. It is well known that electricity prices exhibit both cyclicity and high volatility that varies through time. Results indicate that heterogeneity in unconditional variance, which is not detected by classical unit root tests, may contribute to the appearance of non-stationarity.
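As a sketch of the unit-root testing in the third essay, the following contrasts ADF and KPSS tests on a simulated random walk and a mean-reverting series (statsmodels is assumed; the Portmanteau and multiple variance ratio tests the essay also uses are omitted here):

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller, kpss

rng = np.random.default_rng(7)

# Stand-in "price" series: a random walk (efficient-market null) versus a
# mean-reverting AR(1) (predictable). Real tests would use log prices.
T = 1000
walk = np.cumsum(rng.standard_normal(T))
ar1 = np.zeros(T)
for t in range(1, T):
    ar1[t] = 0.5 * ar1[t - 1] + rng.standard_normal()

for name, x in [("random walk", walk), ("AR(1)", ar1)]:
    adf_p = adfuller(x)[1]                              # H0: unit root
    kpss_p = kpss(x, regression="c", nlags="auto")[1]   # H0: stationary
    print(f"{name}: ADF p={adf_p:.3f}, KPSS p={kpss_p:.3f}")
```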
Karunaratne, Nicholas
2013-12-01
To compare the accuracy of the Pentacam Holladay equivalent keratometry readings with IOL Master 500 keratometry in calculating intraocular lens power. Non-randomized, prospective clinical study conducted in private practice. Forty-five consecutive normal patients undergoing cataract surgery. Forty-five consecutive patients had Pentacam equivalent keratometry readings at the 2-, 3- and 4.5-mm corneal zones and IOL Master keratometry measurements prior to cataract surgery. For each Pentacam equivalent keratometry reading zone and IOL Master measurement, the difference between the observed and expected refractive error was calculated using the Holladay 2 and Sanders, Retzlaff and Kraff theoretic (SRKT) formulas. Mean keratometric value and mean absolute refractive error. There was a statistically significant difference between the mean keratometric values of the IOL Master and the Pentacam equivalent keratometry reading 2-, 3- and 4.5-mm measurements (P < 0.0001, analysis of variance). There was no statistically significant difference between the mean absolute refraction error for the IOL Master and equivalent keratometry readings at the 2-, 3- and 4.5-mm zones for either the Holladay 2 formula (P = 0.14) or the SRKT formula (P = 0.47). The lowest mean absolute refraction error for the Holladay 2 equivalent keratometry reading was at the 4.5-mm zone (mean 0.25 D ± 0.17 D). The lowest mean absolute refraction error for the SRKT equivalent keratometry reading was at the 4.5-mm zone (mean 0.25 D ± 0.19 D). Comparing the absolute refraction error of the IOL Master and the Pentacam equivalent keratometry reading, the best agreement was with Holladay 2 and the equivalent keratometry reading at 4.5 mm, with a mean difference of 0.02 D and 95% limits of agreement of -0.35 and 0.39 D. The IOL Master keratometry and Pentacam equivalent keratometry reading were not equivalent when used only for corneal power measurements. However, the keratometry measurements of the IOL Master and the Pentacam equivalent keratometry reading at 4.5 mm may be similarly effective when used in intraocular lens power calculation formulas, following constant optimization. © 2013 Royal Australian and New Zealand College of Ophthalmologists.
Rotational characterization of methyl methacrylate: Internal dynamics and structure determination
NASA Astrophysics Data System (ADS)
Herbers, Sven; Wachsmuth, Dennis; Obenchain, Daniel A.; Grabow, Jens-Uwe
2018-01-01
Rotational constants, Watson's S centrifugal distortion coefficients, and internal rotation parameters of the two most stable conformers of methyl methacrylate were retrieved from the microwave spectrum. Splittings of rotational energy levels were caused by two non-equivalent methyl tops. Constraining the centrifugal distortion coefficients and internal rotation parameters to the values of the main isotopologues, the rotational constants of all singly substituted 13C and 18O isotopologues were determined. From these rotational constants, the substitution structures and semi-empirical zero-point structures of both conformers were precisely determined.
Model Comparisons For Space Solar Cell End-Of-Life Calculations
NASA Astrophysics Data System (ADS)
Messenger, Scott; Jackson, Eric; Warner, Jeffrey; Walters, Robert; Evans, Hugh; Heynderickx, Daniel
2011-10-01
Space solar cell end-of-life (EOL) calculations are performed over a wide range of space radiation environments for GaAs-based single and multijunction solar cell technologies. Two general semi-empirical approaches were used to generate these EOL calculation results: 1) the JPL equivalent fluence approach (EQFLUX) and 2) the NRL displacement damage dose approach (SCREAM). This paper also includes the first results using the Monte Carlo-based version of SCREAM, called MC-SCREAM, which is now freely available online as part of the SPENVIS suite of programs.
Künzel, R; Herdade, S B; Costa, P R; Terini, R A; Levenhagen, R S
2006-04-21
In this study, scattered x-ray distributions were produced by irradiating a tissue equivalent phantom under clinical mammographic conditions by using Mo/Mo, Mo/Rh and W/Rh anode/filter combinations, for 25 and 30 kV tube voltages. Energy spectra of the scattered x-rays have been measured with a Cd0.9Zn0.1Te (CZT) detector for scattering angles between 30 degrees and 165 degrees. Measurement and correction processes have been evaluated through the comparison between the values of the half-value layer (HVL) and air kerma calculated from the corrected spectra and measured with an ionization chamber in a nonclinical x-ray system with a W/Mo anode/filter combination. The shape of the corrected x-ray spectra measured in the nonclinical system was also compared with those calculated using semi-empirical models published in the literature. Scattered x-ray spectra measured in the clinical x-ray system have been characterized through the calculation of HVL and mean photon energy. Values of the air kerma, ambient dose equivalent and effective dose have been evaluated through the corrected x-ray spectra. Mean conversion coefficients relating the air kerma to the ambient dose equivalent and to the effective dose from the scattered beams for Mo/Mo, Mo/Rh and W/Rh anode/filter combinations were also evaluated. Results show that for the scattered radiation beams the ambient dose equivalent provides an overestimate of the effective dose by a factor of about 5 in the mammography energy range. These results can be used in the control of the dose limits around a clinical unit and in the calculation of more realistic protective shielding barriers in mammography.
Economic and ecological outcomes of flexible biodiversity offset systems.
Habib, Thomas J; Farr, Daniel R; Schneider, Richard R; Boutin, Stan
2013-12-01
The commonly expressed goal of biodiversity offsets is to achieve no net loss of specific biological features affected by development. However, strict equivalency requirements may complicate trading of offset credits, increase costs due to restricted offset placement options, and force offset activities to focus on features that may not represent regional conservation priorities. Using the oil sands industry of Alberta, Canada, as a case study, we evaluated the economic and ecological performance of alternative offset systems targeting either ecologically equivalent areas (vegetation types) or regional conservation priorities (caribou and the Dry Mixedwood natural subregion). Exchanging dissimilar biodiversity elements requires assessment via a generalized metric; we used an empirically derived index of biodiversity intactness to link offsets with losses incurred by development. We considered 2 offset activities: land protection, with costs estimated as the net present value of profits of petroleum and timber resources to be paid as compensation to resource tenure holders, and restoration of anthropogenic footprint, with costs estimated from existing restoration projects. We used the spatial optimization tool MARXAN to develop hypothetical offset networks that met either the equivalent-vegetation or conservation-priority targets. Networks that required offsetting equivalent vegetation cost 2-17 times more than priority-focused networks. This finding calls into question the prudence of equivalency-based systems, particularly in relatively undeveloped jurisdictions, where conservation focuses on limiting and directing future losses. Priority-focused offsets may offer benefits to industry and environmental stakeholders by allowing for lower-cost conservation of valued ecological features and may invite discussion on what land-use trade-offs are acceptable when trading biodiversity via offsets. © 2013 Society for Conservation Biology.
Equivalent statistics and data interpretation.
Francis, Gregory
2017-08-01
Recent reform efforts in psychological science have led to a plethora of choices for scientists to analyze their data. A scientist making an inference about their data must now decide whether to report a p value, summarize the data with a standardized effect size and its confidence interval, report a Bayes Factor, or use other model comparison methods. To make good choices among these options, it is necessary for researchers to understand the characteristics of the various statistics used by the different analysis frameworks. Toward that end, this paper makes two contributions. First, it shows that for the case of a two-sample t test with known sample sizes, many different summary statistics are mathematically equivalent in the sense that they are based on the very same information in the data set. When the sample sizes are known, the p value provides as much information about a data set as the confidence interval of Cohen's d or a JZS Bayes factor. Second, this equivalence means that different analysis methods differ only in their interpretation of the empirical data. At first glance, it might seem that mathematical equivalence of the statistics suggests that it does not matter much which statistic is reported, but the opposite is true because the appropriateness of a reported statistic is relative to the inference it promotes. Accordingly, scientists should choose an analysis method appropriate for their scientific investigation. A direct comparison of the different inferential frameworks provides some guidance for scientists to make good choices and improve scientific practice.
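A small sketch of the equivalence for the two-sample t test with known sample sizes: the p value and Cohen's d are one-to-one functions of t (the JZS Bayes factor is likewise a function of t, n1, and n2, but its integral is omitted here for brevity):

```python
import numpy as np
from scipy import stats

# For a two-sample t test with known n1, n2, the statistics below are all
# one-to-one functions of the t value, illustrating the paper's point that
# they carry the same information about the data set.
t, n1, n2 = 2.3, 25, 25
df = n1 + n2 - 2

p = 2 * stats.t.sf(abs(t), df)           # two-sided p value
d = t * np.sqrt(1 / n1 + 1 / n2)         # Cohen's d recovered from t

# ...and back again: d -> t (so p, d, and t are interchangeable summaries).
t_back = d / np.sqrt(1 / n1 + 1 / n2)
print(f"p = {p:.4f}, d = {d:.3f}, t recovered from d = {t_back:.3f}")
```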
Structural confounding of area-level deprivation and segregation: an empirical example
The neighborhood effects literature has grown, but its utility is limited by the lack of attention paid to non-random selection into neighborhoods. Confounding occurs when an exposure and an outcome share an underlying common cause. Confounding resulting from differential allocat...
Huprich, Steven K; Defife, Jared; Westen, Drew
2014-01-01
We sought to determine whether meaningful subtypes of Dysthymic patients could be identified when grouping them by similar personality profiles. A random, national sample of psychiatrists and clinical psychologists (n=1201) described a randomly selected current patient with personality pathology using the descriptors in the Shedler-Westen Assessment Procedure-II (SWAP-II), completed assessments of patients' adaptive functioning, and provided DSM-IV Axis I and II diagnoses. We applied Q-factor cluster analyses to those patients diagnosed with Dysthymic Disorder. Four clusters were identified: High Functioning, Anxious/Dysphoric, Emotionally Dysregulated, and Narcissistic. These factor scores corresponded with a priori hypotheses regarding diagnostic comorbidity and level of adaptive functioning. We compared these groups to diagnostic constructs described and empirically identified in the past literature. The results converge with past and current ideas about the ways in which chronic depression and personality are related and offer an enhanced means by which to understand a heterogeneous diagnostic category that is empirically grounded and clinically useful. © 2013 Published by Elsevier B.V.
Zipf's law from scale-free geometry.
Lin, Henry W; Loeb, Abraham
2016-03-01
The spatial distribution of people exhibits clustering across a wide range of scales, from household (~10^-2 km) to continental (~10^4 km) scales. Empirical data indicate simple power-law scalings for the size distribution of cities (known as Zipf's law) and the population density fluctuations as a function of scale. Using techniques from random field theory and statistical physics, we show that these power laws are fundamentally a consequence of the scale-free spatial clustering of human populations and the fact that humans inhabit a two-dimensional surface. In this sense, the symmetries of scale invariance in two spatial dimensions are intimately connected to urban sociology. We test our theory by empirically measuring the power spectrum of population density fluctuations and show that the logarithmic slope α = 2.04 ± 0.09, in excellent agreement with our theoretical prediction α = 2. The model enables the analytic computation of many new predictions by importing the mathematical formalism of random fields.
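A sketch of the spectral-slope measurement on a synthetic scale-free field (the real analysis uses population density maps; the binning choices here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(8)

# Synthesize a 2-d field with power spectrum P(k) ~ k^-alpha, then recover
# alpha from a log-log fit, mimicking the measurement on density maps.
n, alpha_true = 256, 2.0
kx = np.fft.fftfreq(n)[:, None]
ky = np.fft.fftfreq(n)[None, :]
k = np.hypot(kx, ky)
k[0, 0] = 1.0                            # avoid divide-by-zero at k=0
amp = k ** (-alpha_true / 2)
amp[0, 0] = 0.0                          # zero out the mean mode
field = np.fft.ifft2(amp * np.fft.fft2(rng.standard_normal((n, n)))).real

# Radially averaged power spectrum and least-squares slope.
power = np.abs(np.fft.fft2(field)) ** 2
kbins = np.linspace(0.01, 0.5, 25)
idx = np.digitize(k.ravel(), kbins)
pk = np.array([power.ravel()[idx == i].mean() for i in range(1, len(kbins))])
kc = 0.5 * (kbins[1:] + kbins[:-1])
slope = np.polyfit(np.log(kc), np.log(pk), 1)[0]
print(f"recovered spectral slope alpha ≈ {-slope:.2f} (input {alpha_true})")
```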
DOE Office of Scientific and Technical Information (OSTI.GOV)
Novaes, Marcel
2015-06-15
We consider the statistics of time delay in a chaotic cavity having M open channels, in the absence of time-reversal invariance. In the random matrix theory approach, we compute the average value of polynomial functions of the time delay matrix Q = −iħ S† dS/dE, where S is the scattering matrix. Our results do not assume M to be large. In a companion paper, we develop a semiclassical approximation to S-matrix correlation functions, from which the statistics of Q can also be derived. Together, these papers contribute to establishing the conjectured equivalence between the random matrix and the semiclassical approaches.
Nonlinear random response prediction using MSC/NASTRAN
NASA Technical Reports Server (NTRS)
Robinson, J. H.; Chiang, C. K.; Rizzi, S. A.
1993-01-01
An equivalent linearization technique was incorporated into MSC/NASTRAN to predict the nonlinear random response of structures by means of Direct Matrix Abstract Programming (DMAP) modifications and inclusion of the nonlinear differential stiffness module inside the iteration loop. An iterative process was used to determine the rms displacements. Numerical results obtained for validation on simple plates and beams are in good agreement with existing solutions in both the linear and linearized regions. The versatility of the implementation will enable the analyst to determine the nonlinear random responses for complex structures under combined loads. The thermo-acoustic response of a hexagonal thermal protection system panel is used to highlight some of the features of the program.
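The fixed-point idea behind equivalent linearization can be shown on a Duffing oscillator under white noise; the sketch below is not the NASTRAN/DMAP implementation, and the parameter values are illustrative:

```python
import numpy as np

# Equivalent linearization of a Duffing oscillator under white noise, the
# same fixed-point idea the NASTRAN implementation iterates.
m, c, k, eps, S0 = 1.0, 0.2, 1.0, 0.5, 0.1   # mass, damping, stiffness, nonlinearity, PSD

# For a Gaussian response, E[x^4] = 3 sigma^4, so the cubic spring is replaced
# by k_eq = k*(1 + 3*eps*sigma^2); the linear SDOF result
# sigma^2 = pi*S0/(c*k_eq) then closes the fixed-point loop.
sigma2 = np.pi * S0 / (c * k)                # linear first guess
for it in range(50):
    k_eq = k * (1 + 3 * eps * sigma2)
    new = np.pi * S0 / (c * k_eq)
    if abs(new - sigma2) < 1e-12:
        break
    sigma2 = new

print(f"converged in {it} iterations: k_eq = {k_eq:.4f}, rms x = {np.sqrt(sigma2):.4f}")
```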
Probabilistic analysis of structures involving random stress-strain behavior
NASA Technical Reports Server (NTRS)
Millwater, H. R.; Thacker, B. H.; Harren, S. V.
1991-01-01
The present methodology for analysis of structures with random stress-strain behavior characterizes the uniaxial stress-strain curve in terms of (1) the elastic modulus, (2) the engineering stress at initial yield, (3) the initial plastic-hardening slope, (4) the engineering stress at the point of ultimate load, and (5) the engineering strain at the point of ultimate load. The methodology is incorporated into the Numerical Evaluation of Stochastic Structures Under Stress code for probabilistic structural analysis. The illustrative problem of a thick cylinder under internal pressure, where both the internal pressure and the stress-strain curve are random, is addressed by means of the code. The response value is the cumulative distribution function of the equivalent plastic strain at the inner radius.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chung, Moses; Gilson, Erik P.; Davidson, Ronald C.
2009-04-10
A random noise-induced beam degradation that can affect intense beam transport over long propagation distances has been experimentally studied by making use of the transverse beam dynamics equivalence between an alternating-gradient (AG) focusing system and a linear Paul trap system. For the present studies, machine imperfections in the quadrupole focusing lattice are considered, which are emulated by adding small random noise on the voltage waveform of the quadrupole electrodes in the Paul trap. It is observed that externally driven noise continuously produces a nonthermal tail of trapped ions, and increases the transverse emittance almost linearly with the duration of the noise.
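A toy model of the observed behavior, assuming uncorrelated random momentum kicks rather than the quadrupole-voltage noise used in the experiment; it reproduces the near-linear emittance growth:

```python
import numpy as np

rng = np.random.default_rng(9)

# Particles in a linear focusing channel receive a small random momentum kick
# each period; the rms emittance then grows almost linearly with time.
n, turns, theta, kick = 5000, 2000, 0.3, 2e-3
x = rng.standard_normal(n)
p = rng.standard_normal(n)
cs, sn = np.cos(theta), np.sin(theta)

emit = np.empty(turns)
for t in range(turns):
    x, p = cs * x + sn * p, -sn * x + cs * p   # linear focusing (phase-space rotation)
    p += kick * rng.standard_normal(n)         # random noise kick
    # rms emittance: sqrt(<x^2><p^2> - <xp>^2)
    emit[t] = np.sqrt(x.var() * p.var() - np.cov(x, p)[0, 1] ** 2)

growth = np.polyfit(np.arange(turns), emit, 1)[0]
print(f"emittance growth per turn ≈ {growth:.2e} (near-linear, as observed)")
```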
Unreliable Retrial Queues in a Random Environment
2007-09-01
equivalent to the stochasticity of the matrix Ĝ. It is generally known from Perron-Frobenius theory that a given square matrix M is stochastic if and only if its maximum positive eigenvalue (i.e., its Perron eigenvalue) sp(M) is equal to unity. A simple analytical condition that guarantees the
40 CFR 86.1823-08 - Durability demonstration procedures for exhaust emissions.
Code of Federal Regulations, 2012 CFR
2012-07-01
... delivers the appropriate exhaust flow, exhaust constituents, and exhaust temperature to the face of the... vehicles. (2) This data set must consist of randomly procured vehicles from actual customer use. The... equivalency factor. (C) The manufacturer must submit an analysis which evaluates whether the durability...
ERIC Educational Resources Information Center
Zelman, Diane C.; And Others
1992-01-01
Randomly assigned smokers (n=126) to six-session smoking cessation treatments consisting of skills training or support counseling strategies and nicotine gum or rapid smoking nicotine exposure strategies. Counseling and nicotine strategies were completely crossed; all four combinations resulted in equivalent one-year abstinence rates. Treatments…
ERIC Educational Resources Information Center
Shelton, John L.; Madrazo-Peterson, Rita
1978-01-01
Anxious students were randomly assigned to a wait-list control group; to three groups aided by experienced behavior therapists; or to three groups led by paraprofessionals. Results show paraprofessionals can achieve outcome and maintenance effects equivalent to more rigorously trained professionals. Paraprofessionals can conduct desensitization in…
Miyao, Kotaro; Sawa, Masashi; Kurata, Mio; Suzuki, Ritsuro; Sakemura, Reona; Sakai, Toshiyasu; Kato, Tomonori; Sahashi, Satomi; Tsushita, Natsuko; Ozawa, Yukiyasu; Tsuzuki, Motohiro; Kohno, Akio; Adachi, Tatsuya; Watanabe, Keisuke; Ohbayashi, Kaneyuki; Inagaki, Yuichiro; Atsuta, Yoshiko; Emi, Nobuhiko
2017-01-01
Invasive fungal infection (IFI) is a major life-threatening problem encountered by patients with hematological malignancies receiving intensive chemotherapy. Empirical antifungal agents are therefore important. Despite the availability of antifungal agents for such situations, the optimal agents and administration methods remain unclear. We conducted a prospective phase 2 study of empirical 1 mg/kg/day liposomal amphotericin B (L-AMB) in 80 patients receiving intensive chemotherapy for hematological malignancies. All enrolled patients were high-risk and had recurrent prolonged febrile neutropenia despite having received broad-spectrum antibacterial therapy for at least 72 hours. Fifty-three patients (66.3%) achieved the primary endpoint of successful treatment, thus exceeding the predefined threshold success rate. No patients developed IFI. The treatment completion rate was 73.8%, and only two cases ceased treatment because of adverse events. The most frequent events were reversible electrolyte abnormalities. We consider low-dose L-AMB to provide comparable efficacy and improved safety and cost-effectiveness when compared with other empirical antifungal therapies. Additional large-scale randomized studies are needed to determine the clinical usefulness of L-AMB relative to other empirical antifungal therapies.
Equivalent Circuit for Magnetoelectric Read and Write Operations
NASA Astrophysics Data System (ADS)
Camsari, Kerem Y.; Faria, Rafatul; Hassan, Orchi; Sutton, Brian M.; Datta, Supriyo
2018-04-01
We describe an equivalent circuit model applicable to a wide variety of magnetoelectric phenomena and use SPICE simulations to benchmark this model against experimental data. We use this model to suggest a different mode of operation where the 1 and 0 states are represented not by states with net magnetization (like mx, my, or mz) but by different easy axes, quantitatively described by (mx^2 - my^2), which switches from 0 to 1 through the write voltage. This change is directly detected as a read signal through the inverse effect. The use of (mx^2 - my^2) to represent a bit is a radical departure from the standard convention of using the magnetization (m) to represent information. We then show how the equivalent circuit can be used to build a device exhibiting tunable randomness and suggest possibilities for extending it to nonvolatile memory with read and write capabilities, without the use of external magnetic fields or magnetic tunnel junctions.
Constitutive Modeling of Nanotube/Polymer Composites with Various Nanotube Orientations
NASA Technical Reports Server (NTRS)
Odegard, Gregory M.; Gates, Thomas S.
2002-01-01
In this study, a technique has been proposed for developing constitutive models for polymer composite systems reinforced with single-walled carbon nanotubes (SWNT) with various orientations with respect to the bulk material coordinates. A nanotube, the local polymer adjacent to the nanotube, and the nanotube/polymer interface have been modeled as an equivalent-continuum fiber by using an equivalent-continuum modeling method. The equivalent-continuum fiber accounts for the local molecular structure and bonding information and serves as a means for incorporating micromechanical analyses for the prediction of bulk mechanical properties of SWNT/polymer composites. As an example, the proposed approach is used for the constitutive modeling of a SWNT/LaRC-SI (with a PmPV interface) composite system, with aligned nanotubes, three-dimensionally randomly oriented nanotubes, and nanotubes oriented with varying degrees of axisymmetry. It is shown that the Young's modulus is highly dependent on the SWNT orientation distribution.
Snow parameters from Nimbus-6 electrically scanned microwave radiometer. [(ESMR-6)
NASA Technical Reports Server (NTRS)
Abrams, G.; Edgerton, A. T.
1977-01-01
Two sites in Canada were selected for detailed analysis of the ESMR-6/snow relationships. Data were analyzed for February 1976 for site 1 and January, February and March 1976 for site 2. Snowpack water equivalents were less than 4.5 inches for site 1 and, depending on the month, were between 2.9 and 14.5 inches for site 2. A statistically significant relationship was found between ESMR-6 measurements and snowpack water equivalents for the site 2 February and March data. Associated analysis findings presented are the effects of random measurement errors, snow site physiography, and weather conditions on the ESMR-6/snow relationship.
Mechanical equivalent of quantum heat engines.
Arnaud, Jacques; Chusseau, Laurent; Philippe, Fabrice
2008-06-01
Quantum heat engines employ as working agents multilevel systems instead of classical gases. We show that under some conditions quantum heat engines are equivalent to a series of reservoirs at different altitudes containing balls of various weights. A cycle consists of picking up at random a ball from one reservoir and carrying it to the next, thereby performing or absorbing some work. In particular, quantum heat engines, employing two-level atoms as working agents, are modeled by reservoirs containing balls of weight 0 or 1. The mechanical model helps us prove that the maximum efficiency of quantum heat engines is the Carnot efficiency. Heat pumps and negative temperatures are considered.
Renault, Nisa K E; Pritchett, Sonja M; Howell, Robin E; Greer, Wenda L; Sapienza, Carmen; Ørstavik, Karen Helene; Hamilton, David C
2013-01-01
In eutherian mammals, one X-chromosome in every XX somatic cell is transcriptionally silenced through the process of X-chromosome inactivation (XCI). Females are thus functional mosaics, where some cells express genes from the paternal X, and the others from the maternal X. The relative abundance of the two cell populations (X-inactivation pattern, XIP) can have significant medical implications for some females. In mice, the ‘choice’ of which X to inactivate, maternal or paternal, in each cell of the early embryo is genetically influenced. In humans, the timing of XCI choice and whether choice occurs completely randomly or under a genetic influence is debated. Here, we explore these questions by analysing the distribution of XIPs in large populations of normal females. Models were generated to predict XIP distributions resulting from completely random or genetically influenced choice. Each model describes the discrete primary distribution at the onset of XCI, and the continuous secondary distribution accounting for changes to the XIP as a result of development and ageing. Statistical methods are used to compare models with empirical data from Danish and Utah populations. A rigorous data treatment strategy maximises information content and allows for unbiased use of unphased XIP data. The Anderson–Darling goodness-of-fit statistics and likelihood ratio tests indicate that a model of genetically influenced XCI choice better fits the empirical data than models of completely random choice. PMID:23652377
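A minimal sketch of the two model families compared in the paper, with illustrative parameter values (the precursor pool size, drift magnitude, and form of the genetic influence are our assumptions, not the fitted values):

```python
import numpy as np

rng = np.random.default_rng(10)

def xip_sample(n_females, n_cells=16, p_choice=0.5, drift_sd=0.05):
    """X-inactivation patterns: each embryo makes an independent choice in
    n_cells precursor cells (discrete primary distribution), then the
    proportion drifts during development/ageing (continuous secondary)."""
    primary = rng.binomial(n_cells, p_choice, n_females) / n_cells
    return np.clip(primary + rng.normal(0, drift_sd, n_females), 0, 1)

# Completely random choice (p = 0.5 for every female) versus a genetically
# influenced choice (p varies between females), which broadens the XIPs.
random_choice = xip_sample(100_000)
p_biased = np.clip(rng.normal(0.5, 0.1, 100_000), 0, 1)
influenced = np.clip(rng.binomial(16, p_biased) / 16
                     + rng.normal(0, 0.05, 100_000), 0, 1)

print(f"sd of XIP, completely random choice: {random_choice.std():.3f}")
print(f"sd of XIP, influenced choice:        {influenced.std():.3f}")
```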
Stochastic tools hidden behind the empirical dielectric relaxation laws
NASA Astrophysics Data System (ADS)
Stanislavsky, Aleksander; Weron, Karina
2017-03-01
The paper is devoted to recent advances in stochastic modeling of anomalous kinetic processes observed in dielectric materials, which are prominent examples of disordered (complex) systems. Theoretical studies of dynamical properties of 'structures with variations' (Goldenfeld and Kadanoff 1999 Science 284 87-9) require application of such mathematical tools, by means of which their random nature can be analyzed and, independently of the details distinguishing various systems (dipolar materials, glasses, semiconductors, liquid crystals, polymers, etc), the empirical universal kinetic patterns can be derived. We begin with a brief survey of the historical background of the dielectric relaxation study. After a short outline of the theoretical ideas providing the random tools applicable to modeling of relaxation phenomena, we present probabilistic implications for the study of relaxation-rate distribution models. In the framework of the probability distribution of relaxation rates we consider description of complex systems, in which relaxing entities form random clusters interacting with each other and single entities. Then we focus on stochastic mechanisms of the relaxation phenomenon. We discuss the diffusion approach and its usefulness for understanding of anomalous dynamics of relaxing systems. We also discuss extensions of the diffusive approach to systems under tempered random processes. Useful relationships among different stochastic approaches to the anomalous dynamics of complex systems allow us to get a fresh look at this subject. The paper closes with a final discussion on achievements of stochastic tools describing the anomalous time evolution of complex systems.
In Search of Meaning: Are School Rampage Shootings Random and Senseless Violence?
Madfis, Eric
2017-01-02
This article discusses Joel Best's (1999) notion of random violence and applies his concepts of pointlessness, patternlessness, and deterioration to the reality about multiple-victim school shootings gleaned from empirical research about the phenomenon. Best describes how violence is rarely random, as scholarship reveals myriad observable patterns, many discernible motives and causes, and often far too much fear-mongering over how bad society is getting and how violent we are becoming. In contrast, it is vital that the media, scholars, and the public better understand crime patterns, criminal motivations, and the causes of fluctuating crime rates. As an effort toward such progress, this article reviews the academic literature on school rampage shootings and explores the extent to which these attacks are and are not random acts of violence.
Stadler, Tanja; Degnan, James H.; Rosenberg, Noah A.
2016-01-01
Classic null models for speciation and extinction give rise to phylogenies that differ in distribution from empirical phylogenies. In particular, empirical phylogenies are less balanced and have branching times closer to the root compared to phylogenies predicted by common null models. This difference might be due to null models of the speciation and extinction process being too simplistic, or due to the empirical datasets not being representative of random phylogenies. A third possibility arises because phylogenetic reconstruction methods often infer gene trees rather than species trees, producing an incongruity between models that predict species tree patterns and empirical analyses that consider gene trees. We investigate the extent to which the difference between gene trees and species trees under a combined birth–death and multispecies coalescent model can explain the difference in empirical trees and birth–death species trees. We simulate gene trees embedded in simulated species trees and investigate their difference with respect to tree balance and branching times. We observe that the gene trees are less balanced and typically have branching times closer to the root than the species trees. Empirical trees from TreeBase are also less balanced than our simulated species trees, and model gene trees can explain an imbalance increase of up to 8% compared to species trees. However, we see a much larger imbalance increase in empirical trees, about 100%, meaning that additional features must also be causing imbalance in empirical trees. This simulation study highlights the necessity of revisiting the assumptions made in phylogenetic analyses, as these assumptions, such as equating the gene tree with the species tree, might lead to a biased conclusion. PMID:26968785
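The tree-balance side of this comparison can be illustrated with a classical property of the Yule (pure-birth) null model: the root split of an n-leaf Yule tree is uniform on {1, ..., n-1}. The sketch below uses this to sample the Colless imbalance of null species trees; simulating gene trees embedded under the multispecies coalescent, as the paper does, is beyond this fragment:

```python
# Sketch: Colless imbalance of Yule (pure-birth) species trees, using the
# classical result that the root split of an n-leaf Yule tree is uniform
# on {1, ..., n-1}; the same recursion applies within each subtree.
import random

def colless_yule(n):
    """Colless index (sum of |left - right| over internal nodes) of a
    random Yule tree with n leaves."""
    if n <= 2:
        return 0
    k = random.randint(1, n - 1)          # leaves in one root subtree
    return abs(n - 2 * k) + colless_yule(k) + colless_yule(n - k)

random.seed(1)
n, reps = 64, 2000
mean_ic = sum(colless_yule(n) for _ in range(reps)) / reps
print(f"mean Colless index, Yule trees with {n} leaves: {mean_ic:.1f}")
```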
Dialectical Behavior Therapy for Adolescents: Theory, Treatment Adaptations, and Empirical Outcomes
ERIC Educational Resources Information Center
MacPherson, Heather A.; Cheavens, Jennifer S.; Fristad, Mary A.
2013-01-01
Dialectical behavior therapy (DBT) was originally developed for chronically suicidal adults with borderline personality disorder (BPD) and emotion dysregulation. Randomized controlled trials (RCTs) indicate DBT is associated with improvements in problem behaviors, including suicide ideation and behavior, non-suicidal self-injury (NSSI), attrition,…
Randomized Controlled Trial of a Brief Problem-Orientation Intervention for Suicidal Ideation
ERIC Educational Resources Information Center
Fitzpatrick, Kathleen Kara; Witte, Tracy K.; Schmidt, Norman B.
2005-01-01
Empirical evaluations suggest that problem orientation, the initial reaction to problems, differentiates suicidal youth from nonclinical controls and nonideating psychiatric controls. One promising area for intervention with suicidal youth relates to enhancing this specific coping skill. Nonclinical participants (N = 110) with active suicidal…
Anxiety sensitivity risk reduction in smokers: A randomized control trial examining effects on panic
Schmidt, Norman B.; Raines, Amanda M.; Allan, Nicholas P.; Zvolensky, Michael J.
2016-01-01
Empirical evidence has identified several risk factors for panic psychopathology, including smoking and anxiety sensitivity (AS; the fear of anxiety-related sensations). Smokers with elevated AS are therefore a particularly vulnerable population for panic. Yet, there is little knowledge about how to reduce risk of panic among high AS smokers. The present study prospectively evaluated panic outcomes within the context of a controlled randomized risk reduction program for smokers. Participants (N = 526) included current smokers who all received a state-of-the-art smoking cessation intervention with approximately half randomized to the AS reduction intervention termed Panic-smoking Program (PSP). The primary hypotheses focus on examining the effects of a PSP on panic symptoms in the context of this vulnerable population. Consistent with prediction, there was a significant effect of treatment condition on AS, such that individuals in the PSP condition, compared to those in the control condition, demonstrated greater decreases in AS throughout treatment and the follow-up period. In addition, PSP treatment resulted in lower rates of panic-related symptomatology. Moreover, mediation analyses indicated that reductions in AS resulted in lower panic symptoms. The present study provides the first empirical evidence that brief, targeted psychoeducational interventions can mitigate panic risk among smokers. PMID:26752327
Smith, Jennifer L; Sturrock, Hugh J W; Olives, Casey; Solomon, Anthony W; Brooker, Simon J
2013-01-01
Implementation of trachoma control strategies requires reliable district-level estimates of trachomatous inflammation-follicular (TF), generally collected using the recommended gold-standard cluster randomized surveys (CRS). Integrated Threshold Mapping (ITM) has been proposed as an integrated and cost-effective means of rapidly surveying trachoma in order to classify districts according to treatment thresholds. ITM differs from CRS in a number of important ways, including the use of a school-based sampling platform for children aged 1-9 and a different age distribution of participants. This study uses computerised sampling simulations to compare the performance of these survey designs and evaluate the impact of varying key parameters. Realistic pseudo gold standard data for 100 districts were generated that maintained the relative risk of disease between important sub-groups and incorporated empirical estimates of disease clustering at the household, village and district level. To simulate the different sampling approaches, 20 clusters were selected from each district, with individuals sampled according to the protocol for ITM and CRS. Results showed that ITM generally under-estimated the true prevalence of TF over a range of epidemiological settings and introduced more district misclassification according to treatment thresholds than did CRS. However, the extent of underestimation and resulting misclassification was found to be dependent on three main factors: (i) the district prevalence of TF; (ii) the relative risk of TF between enrolled and non-enrolled children within clusters; and (iii) the enrollment rate in schools. Although in some contexts the two methodologies may be equivalent, ITM can introduce a downward bias that grows as the prevalence of TF increases, resulting in a greater risk of misclassification around treatment thresholds. In addition to strengthening the evidence base around the choice of trachoma survey methodologies, this study illustrates the use of a simulated approach in addressing operational research questions for trachoma and other NTDs.
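The core of such a sampling simulation is compact. A simplified, hypothetical version (Beta-distributed village prevalence, a fixed enrollment rate, and a relative risk of TF for enrolled children; all parameters are illustrative, not the study's calibrated values) reproduces the qualitative result that a school-based sample is biased whenever enrolled children differ in risk:

```python
# Sketch: school-based (ITM-like) vs community-based (CRS-like) sampling.
# School-based sampling only reaches enrolled children, who carry relative
# risk rr_enrolled of TF, so its district estimate is biased when rr != 1.
import numpy as np

rng = np.random.default_rng(2)

def simulate_district(p_tf=0.15, rr_enrolled=0.8, enrol=0.7,
                      n_clusters=20, kids_per_cluster=50):
    # village-level clustering of prevalence via a Beta distribution (mean p_tf)
    village_p = rng.beta(4, 4 / p_tf - 4, n_clusters)
    est_crs, est_itm = [], []
    for p in village_p:
        enrolled = rng.random(kids_per_cluster) < enrol
        p_kid = np.where(enrolled, p * rr_enrolled, p)   # enrolled kids' risk
        tf = rng.random(kids_per_cluster) < np.clip(p_kid, 0, 1)
        est_crs.append(tf.mean())            # community-based sample
        est_itm.append(tf[enrolled].mean())  # school-based sample
    return np.mean(est_crs), np.mean(est_itm)

crs, itm = simulate_district()
print(f"CRS-like estimate: {crs:.3f}, ITM-like estimate: {itm:.3f}")
```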
Lund, S S; Tarnow, L; Stehouwer, C D A; Schalkwijk, C G; Frandsen, M; Smidt, U M; Pedersen, O; Parving, H-H; Vaag, A
2007-05-01
Metformin is the 'drug-of-first-choice' in obese patients with type 2 diabetes mellitus (T2DM) due to its antihyperglycaemic and cardiovascular protective potentials. In non-obese patients with T2DM, insulin secretagogues are empirically used as first choice. In this investigator-initiated trial, we evaluated the effect of metformin vs. an insulin secretagogue, repaglinide, on glycaemic regulation and markers of inflammation and insulin sensitivity in non-obese patients with T2DM. A single-centre, double-masked, double-dummy, crossover study during 2 × 4 months involved 96 non-obese (body mass index ≤ 27 kg/m²) insulin-naïve patients with T2DM. At enrolment, previous oral hypoglycaemic agents (OHA) were stopped and patients entered a 1-month run-in on diet-only treatment. Thereafter, patients were randomized to either repaglinide 2 mg thrice daily followed by metformin 1 g twice daily or vice versa, each during 4 months with a 1-month washout between interventions. End-of-treatment levels of haemoglobin A1c (HbA1c), fasting plasma glucose, mean of seven-point home-monitored plasma glucose and fasting levels of high-sensitivity C-reactive protein and adiponectin were not significantly different between treatments. However, body weight, waist circumference, and fasting serum levels of insulin and C-peptide were lower, and fewer patients experienced hypoglycaemia, during treatment with metformin vs. repaglinide. Both drugs were well tolerated. In non-obese patients with T2DM, overall glycaemic regulation was equivalent with less hypoglycaemia during metformin vs. repaglinide treatment for 2 × 4 months. Metformin was more effective targeting non-glycaemic cardiovascular risk markers related to total and abdominal body fat stores as well as fasting insulinaemia. These findings may suggest the use of metformin as the preferred OHA also in non-obese patients with T2DM.
Partial transpose of random quantum states: Exact formulas and meanders
NASA Astrophysics Data System (ADS)
Fukuda, Motohisa; Śniady, Piotr
2013-04-01
We investigate the asymptotic behavior of the empirical eigenvalues distribution of the partial transpose of a random quantum state. The limiting distribution was previously investigated via Wishart random matrices indirectly (by approximating the matrix of trace 1 by the Wishart matrix of random trace) and shown to be the semicircular distribution or the free difference of two free Poisson distributions, depending on how dimensions of the concerned spaces grow. Our use of Wishart matrices gives exact combinatorial formulas for the moments of the partial transpose of the random state. We find three natural asymptotic regimes in terms of geodesics on the permutation groups. Two of them correspond to the above two cases; the third one turns out to be a new matrix model for the meander polynomials. Moreover, we prove the convergence to the semicircular distribution together with its extreme eigenvalues under weaker assumptions, and show large deviation bound for the latter.
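The limiting behavior described here is easy to probe numerically. A short sketch, assuming the "square" case in which the environment dimension equals d² (under that assumption the rescaled spectrum should approach a semicircle centered at 1 with support roughly [-1, 3]; slightly negative eigenvalues signal NPT entanglement):

```python
# Sketch: empirical eigenvalue distribution of the partial transpose of a
# random induced state on C^d (x) C^d (Ginibre construction, square case).
import numpy as np

rng = np.random.default_rng(3)
d = 40                                          # local dimension
G = rng.normal(size=(d*d, d*d)) + 1j * rng.normal(size=(d*d, d*d))
rho = G @ G.conj().T
rho /= np.trace(rho).real                       # random density matrix, trace 1

# partial transpose on the second tensor factor
rho_pt = rho.reshape(d, d, d, d).transpose(0, 3, 2, 1).reshape(d*d, d*d)
eig = np.linalg.eigvalsh(rho_pt) * d**2         # rescale mean eigenvalue to 1

print(f"support: [{eig.min():.2f}, {eig.max():.2f}] (semicircle ~ [-1, 3]); "
      f"negative fraction: {(eig < 0).mean():.3f}")
```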
Sjövall, Fredrik; Perner, Anders; Hylander Møller, Morten
2017-04-01
To assess benefits and harms of empirical mono- vs. combination antibiotic therapy in adult patients with severe sepsis in the intensive care unit (ICU). We performed a systematic review according to the Cochrane Collaboration methodology, including meta-analysis, risk of bias assessment and trial sequential analysis (TSA). We included randomised clinical trials (RCT) assessing empirical mono-antibiotic therapy versus a combination of two or more antibiotics in adult ICU patients with severe sepsis. We exclusively assessed patient-important outcomes, including mortality. Two reviewers independently evaluated studies for inclusion, extracted data, and assessed risk of bias. Risk ratios (RRs) with 95% confidence intervals (CIs) were estimated and the risk of random errors was assessed by TSA. Thirteen RCTs (n = 2633) were included; all were judged as having high risk of bias. Carbapenems were the most frequently used mono-antibiotic (8 of 13 trials). There was no difference in mortality (RR 1.11, 95% CI 0.95-1.29; p = 0.19) or in any other patient-important outcomes between mono- vs. combination therapy. In TSA of mortality, the Z-curve reached the futility area, indicating that a 20% relative risk difference in mortality may be excluded between the two groups. For the other outcomes, TSA indicated lack of data and high risk of random errors. This systematic review of RCTs with meta-analysis and TSA demonstrated no differences in mortality or other patient-important outcomes between empirical mono- vs. combination antibiotic therapy in adult ICU patients with severe sepsis. The quantity and quality of data was low without firm evidence for benefit or harm of combination therapy. Copyright © 2016 The British Infection Association. Published by Elsevier Ltd. All rights reserved.
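For readers unfamiliar with the pooling step, a fixed-effect risk-ratio meta-analysis on the log scale looks roughly like the sketch below (illustrative 2×2 counts, not the review's data; the TSA futility analysis is not reproduced):

```python
# Sketch: inverse-variance pooled risk ratio from per-trial event counts.
import numpy as np

# (events_mono, n_mono, events_comb, n_comb) for hypothetical trials
trials = [(30, 100, 28, 100), (45, 150, 50, 148), (12, 60, 15, 62)]

log_rr, weights = [], []
for e1, n1, e2, n2 in trials:
    rr = (e1 / n1) / (e2 / n2)
    var = 1/e1 - 1/n1 + 1/e2 - 1/n2        # variance of log RR
    log_rr.append(np.log(rr))
    weights.append(1 / var)

log_rr, weights = np.array(log_rr), np.array(weights)
pooled = np.sum(weights * log_rr) / np.sum(weights)
se = np.sqrt(1 / np.sum(weights))
lo, hi = np.exp(pooled - 1.96 * se), np.exp(pooled + 1.96 * se)
print(f"pooled RR = {np.exp(pooled):.2f} (95% CI {lo:.2f}-{hi:.2f})")
```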
Pitch chroma discrimination, generalization, and transfer tests of octave equivalence in humans.
Hoeschele, Marisa; Weisman, Ronald G; Sturdy, Christopher B
2012-11-01
Octave equivalence occurs when notes separated by an octave (a doubling in frequency) are judged as being perceptually similar. Considerable evidence points to the importance of the octave in music and speech. Yet, experimental demonstration of octave equivalence has been problematic. Using go/no-go operant discrimination and generalization, we studied octave equivalence in humans. In Experiment 1, we found that a procedure that failed to show octave equivalence in European starlings also failed in humans. In Experiment 2, we modified the procedure to control for the effects of pitch height perception by training participants in Octave 4 and testing in Octave 5. We found that the pattern of responding developed by discrimination training in Octave 4 generalized to Octave 5. We replicated and extended our findings in Experiment 3 by adding a transfer phase: Participants were trained with either the same or a reversed pattern of rewards in Octave 5. Participants transferred easily to the same pattern of reward in Octave 5 but struggled to learn the reversed pattern. We provided minimal instruction, presented no ordered sequences of notes, and used only sine-wave tones, but participants nonetheless constructed pitch chroma information from randomly ordered sequences of notes. Training in music weakly hindered octave generalization but moderately facilitated both positive and negative transfer.
Reitsma, Angela; Chu, Rong; Thorpe, Julia; McDonald, Sarah; Thabane, Lehana; Hutton, Eileen
2014-09-26
Clustering of outcomes at centers involved in multicenter trials is a type of center effect. The Consolidated Standards of Reporting Trials Statement recommends that multicenter randomized controlled trials (RCTs) should account for center effects in their analysis, however most do not. The Early External Cephalic Version (EECV) trials published in 2003 and 2011 stratified by center at randomization, but did not account for center in the analyses, and due to the nature of the intervention and number of centers, may have been prone to center effects. Using data from the EECV trials, we undertook an empirical study to compare various statistical approaches to account for center effect while estimating the impact of external cephalic version timing (early or delayed) on the outcomes of cesarean section, preterm birth, and non-cephalic presentation at the time of birth. The data from the EECV pilot trial and the EECV2 trial were merged into one dataset. Fisher's exact method was used to test the overall effect of external cephalic version timing unadjusted for center effects. Seven statistical models that accounted for center effects were applied to the data. The models included: i) the Mantel-Haenszel test, ii) logistic regression with fixed center effect and fixed treatment effect, iii) center-size weighted and iv) un-weighted logistic regression with fixed center effect and fixed treatment-by-center interaction, v) logistic regression with random center effect and fixed treatment effect, vi) logistic regression with random center effect and random treatment-by-center interaction, and vii) generalized estimating equations. For each of the three outcomes of interest, the approaches used to account for center effect did not alter the overall findings of the trial. The results were similar for the majority of the methods used to adjust for center, illustrating the robustness of the findings. Despite literature that suggests center effect can change the estimate of effect in multicenter trials, this empirical study does not show a difference in the outcomes of the EECV trials when accounting for center effect. The EECV2 trial was registered on 30 July 2005 with Current Controlled Trials: ISRCTN 56498577.
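Two of the listed approaches can be sketched on a toy dataset with statsmodels (illustrative data only, not the EECV trials): logistic regression with a fixed center effect, and GEE with exchangeable within-center correlation:

```python
# Sketch: fixed-center-effect logit and GEE on synthetic multicenter data.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n_centers, n_per = 12, 40
center = np.repeat(np.arange(n_centers), n_per)
treat = rng.integers(0, 2, n_centers * n_per)
center_eff = rng.normal(0, 0.5, n_centers)[center]      # center heterogeneity
p = 1 / (1 + np.exp(-(-0.3 + 0.4 * treat + center_eff)))
df = pd.DataFrame({"y": rng.binomial(1, p), "treat": treat, "center": center})

fixed = smf.logit("y ~ treat + C(center)", data=df).fit(disp=0)
gee = sm.GEE.from_formula("y ~ treat", groups="center", data=df,
                          family=sm.families.Binomial(),
                          cov_struct=sm.cov_struct.Exchangeable()).fit()
print(f"fixed-effect logit: {fixed.params['treat']:.2f}, "
      f"GEE: {gee.params['treat']:.2f}")
```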
Borbolla, J R; López-Hernández, M A; González-Avante, M; DeDiego, J; Trueba, E; Alvarado, M L; Jiménez, R M
2001-01-01
High-intensity regimes of chemotherapy have led to longer and more severe episodes of neutropenia with a resulting increase in morbidity and mortality due to infections. Which empiric antibiotic regimen to use in these cases is still under debate. We performed a randomized comparative study to evaluate the efficacy of cefepime versus ceftriaxone plus amikacin as the initial treatment in an escalating, empirical, antibiotic therapy regimen in febrile neutropenic patients. Both adults and children were included. All patients had less than 500 neutrophils/microl at the time of infection. Patients were randomized to receive either cefepime or ceftriaxone plus amikacin. If infection continued 72 h later, patients in both groups received vancomycin, and if infection had not disappeared 7 days after starting antibiotics, amphotericin B was started. Twenty patients were included in each group. Both treatment and control groups were comparable for age and sex, among other factors. There were 18 cures in the cefepime group and 17 in the ceftriaxone plus amikacin group (p = 0.9). No patient discontinued therapy because of toxicity. Cefepime is a safe and very effective therapy for patients with acute leukemia and febrile neutropenia; in addition, it is a cheaper regimen in our country, and lacks the potential toxicity of the aminoglycosides. Copyright 2001 S. Karger AG, Basel
Westine, Carl D; Spybrook, Jessaca; Taylor, Joseph A
2013-12-01
Prior research has focused primarily on empirically estimating design parameters for cluster-randomized trials (CRTs) of mathematics and reading achievement. Little is known about how design parameters compare across other educational outcomes. This article presents empirical estimates of design parameters that can be used to appropriately power CRTs in science education and compares them to estimates using mathematics and reading. Estimates of intraclass correlations (ICCs) are computed for unconditional two-level (students in schools) and three-level (students in schools in districts) hierarchical linear models of science achievement. Relevant student- and school-level pretest and demographic covariates are then considered, and estimates of variance explained are computed. The data comprise five consecutive years of Texas student-level data for Grades 5, 8, 10, and 11, with science, mathematics, and reading achievement raw scores as measured by the Texas Assessment of Knowledge and Skills. Findings show that ICCs in science range from .172 to .196 across grades and are generally higher than comparable statistics in mathematics, .163-.172, and reading, .099-.156. When available, a 1-year lagged student-level science pretest explains the most variability in the outcome. The 1-year lagged school-level science pretest is the best alternative in the absence of a 1-year lagged student-level science pretest. Science educational researchers should utilize design parameters derived from science achievement outcomes. © The Author(s) 2014.
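For reference, the ICCs estimated here take the standard variance-ratio form (notation assumed: τ² for cluster-level variance components, σ² for student-level residual variance):

```latex
% Unconditional two-level ICC (students within schools)
\rho = \frac{\tau^2_{S}}{\tau^2_{S} + \sigma^2}
% Three-level model (students within schools within districts)
\rho_{\text{school}} = \frac{\tau^2_{S}}{\tau^2_{D} + \tau^2_{S} + \sigma^2},
\qquad
\rho_{\text{district}} = \frac{\tau^2_{D}}{\tau^2_{D} + \tau^2_{S} + \sigma^2}
```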
Probabilistic empirical prediction of seasonal climate: evaluation and potential applications
NASA Astrophysics Data System (ADS)
Dieppois, B.; Eden, J.; van Oldenborgh, G. J.
2017-12-01
Preparing for episodes with risks of anomalous weather a month to a year ahead is an important challenge for governments, non-governmental organisations, and private companies and is dependent on the availability of reliable forecasts. The majority of operational seasonal forecasts are made using process-based dynamical models, which are complex, computationally challenging and prone to biases. Empirical forecast approaches built on statistical models to represent physical processes offer an alternative to dynamical systems and can provide either a benchmark for comparison or independent supplementary forecasts. Here, we present a new evaluation of an established empirical system used to predict seasonal climate across the globe. Forecasts for surface air temperature, precipitation and sea level pressure are produced by the KNMI Probabilistic Empirical Prediction (K-PREP) system every month and disseminated via the KNMI Climate Explorer (climexp.knmi.nl). K-PREP is based on multiple linear regression and built on physical principles to the fullest extent with predictive information taken from the global CO2-equivalent concentration, large-scale modes of variability in the climate system and regional-scale information. K-PREP seasonal forecasts for the period 1981-2016 will be compared with corresponding dynamically generated forecasts produced by operational forecast systems. While there are many regions of the world where empirical forecast skill is extremely limited, several areas are identified where K-PREP offers comparable skill to dynamical systems. We discuss two key points in the future development and application of the K-PREP system: (a) the potential for K-PREP to provide a more useful basis for reference forecasts than those based on persistence or climatology, and (b) the added value of including K-PREP forecast information in multi-model forecast products, at least for known regions of good skill. We also discuss the potential development of stakeholder-driven applications of the K-PREP system, including empirical forecasts for circumboreal fire activity.
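A minimal regression-based forecast in this spirit can be sketched as follows (synthetic data; a CO2-equivalent trend proxy and a hypothetical ENSO-like index stand in for K-PREP's predictors, and this is not the K-PREP code):

```python
# Sketch: empirical seasonal prediction by multiple linear regression on a
# trend predictor plus a climate-mode index (all series synthetic).
import numpy as np

rng = np.random.default_rng(5)
years = np.arange(1981, 2017)
co2 = 0.02 * (years - years[0])                 # trend proxy (assumed rate)
enso = rng.normal(0, 1, years.size)             # stand-in mode index
temp = 0.8 * co2 + 0.3 * enso + rng.normal(0, 0.2, years.size)

X = np.column_stack([np.ones_like(co2), co2, enso])
beta, *_ = np.linalg.lstsq(X, temp, rcond=None)
forecast = X @ beta
skill = np.corrcoef(forecast, temp)[0, 1]
print(f"coefficients: {beta.round(2)}, in-sample correlation: {skill:.2f}")
```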
Efficient association study design via power-optimized tag SNP selection
Han, Buhm; Kang, Hyun Min; Seo, Myeong Seong; Zaitlen, Noah; Eskin, Eleazar
2008-01-01
Discovering statistical correlation between causal genetic variation and clinical traits through association studies is an important method for identifying the genetic basis of human diseases. Since fully resequencing a cohort is prohibitively costly, genetic association studies take advantage of local correlation structure (or linkage disequilibrium) between single nucleotide polymorphisms (SNPs) by selecting a subset of SNPs to be genotyped (tag SNPs). While many current association studies are performed using commercially available high-throughput genotyping products that define a set of tag SNPs, choosing tag SNPs remains an important problem for both custom follow-up studies as well as designing the high-throughput genotyping products themselves. The most widely used tag SNP selection method optimizes over the correlation between SNPs (r²). However, tag SNPs chosen based on an r² criterion do not necessarily maximize the statistical power of an association study. We propose a study design framework that chooses SNPs to maximize power and efficiently measures the power through empirical simulation. Empirical results based on the HapMap data show that our method gains considerable power over a widely used r²-based method, or equivalently reduces the number of tag SNPs required to attain the desired power of a study. Our power-optimized 100k whole-genome tag set provides equivalent power to the Affymetrix 500k chip for the CEU population. For the design of custom follow-up studies, our method provides up to twice the power increase using the same number of tag SNPs as r²-based methods. Our method is publicly available via web server at http://design.cs.ucla.edu. PMID:18702637
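A greedy caricature of power-optimized selection is sketched below. Here power is approximated analytically through a noncentral chi-square whose noncentrality is proportional to N·r² (a simplification; the paper instead measures power by empirical simulation, and all parameters below are hypothetical):

```python
# Sketch: greedy tag-SNP selection maximizing an average-power proxy
# rather than an r^2 coverage rule.
import numpy as np
from scipy.stats import chi2, ncx2

rng = np.random.default_rng(6)
m, N, alpha, effect = 30, 1000, 5e-4, 0.02   # SNPs, sample size, level, h^2

# hypothetical symmetric pairwise r^2 matrix with distance-decaying LD
base = rng.uniform(0.1, 1.0, (m, m))
dist = np.abs(np.subtract.outer(np.arange(m), np.arange(m)))
r2 = np.clip((base + base.T) / 2 - 0.02 * dist, 0, 1)
np.fill_diagonal(r2, 1.0)

crit = chi2.ppf(1 - alpha, df=1)

def avg_power(tags):
    # indirect-test noncentrality ~ N * effect * best r^2 to any chosen tag
    ncp = N * effect * r2[:, tags].max(axis=1)
    return ncx2.sf(crit, df=1, nc=ncp).mean()

tags = []
while len(tags) < 5:
    best = max(set(range(m)) - set(tags), key=lambda t: avg_power(tags + [t]))
    tags.append(best)
print("tags:", tags, f"avg power: {avg_power(tags):.2f}")
```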
Dynamic aspects of soil water availability for isohydric plants: Focus on root hydraulic resistances
NASA Astrophysics Data System (ADS)
Couvreur, V.; Vanderborght, J.; Draye, X.; Javaux, M.
2014-11-01
Soil water availability for plant transpiration is a key concept in agronomy. The objective of this study is to revisit this concept and discuss how it may be affected by processes locally influencing root hydraulic properties. A physical limitation to soil water availability, in terms of the maximal flow rate available to plant leaves (Q_avail), is defined. It is expressed for isohydric plants in terms of plant-centered variables and properties: the equivalent soil water potential sensed by the plant, ψ_s,eq; the root system equivalent conductance, K_rs; and a threshold leaf water potential, ψ_leaf,lim. The resulting limitation to plant transpiration is compared to commonly used empirical stress functions. Similarities suggest that the slope of empirical functions might correspond to the ratio of K_rs to the plant potential transpiration rate. The sensitivity of Q_avail to local changes of root hydraulic conductances in response to soil matric potential is investigated using model simulations. A decrease of radial conductances when the soil dries induces earlier water stress, but allows maintaining higher night plant water potentials and higher Q_avail during the last week of a simulated one-month drought. Conversely, an increase of radial conductances during soil drying provokes an increase of hydraulic redistribution and Q_avail at short term. This study offers a first insight into the effect of dynamic local root hydraulic properties on soil water availability. By better understanding complex interactions between hydraulic processes involved in soil-plant hydrodynamics, better prospects on how root hydraulic traits mitigate plant water stress might be achieved.
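The abstract does not write the limit out, but given the variables listed, the natural flow-capacity form is a conductance times a water-potential difference (a hedged reconstruction consistent with the text, not a quotation from the paper):

```latex
Q_{\mathrm{avail}} \;=\; K_{rs}\,\bigl(\psi_{s,\mathrm{eq}} - \psi_{\mathrm{leaf,lim}}\bigr)
```

so that transpiration becomes supply-limited whenever atmospheric demand exceeds Q_avail.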
Perner, Josef; Leekam, Susan
2008-01-01
We resume an exchange of ideas with Uta Frith that started before the turn of the century. The curious incident responsible for this exchange was the finding that children with autism fail tests of false belief, while they pass Zaitchik's (1990) photograph task (Leekam & Perner, 1991). This finding led to the conclusion that children with autism have a domain-specific impairment in Theory of Mind (mental representations), because the photograph task and the false-belief task are structurally equivalent except for the nonmental character of photographs. In this paper we argue that the false-belief task and the false-photograph task are not structurally equivalent and are not empirically associated. Instead a truly structurally equivalent task is the false-sign task. Performance on this task is strongly associated with the false-belief task. A version of this task, the misleading-signal task, also poses severe problems for children with autism (Bowler, Briskman, Gurvidi, & Fornells-Ambrojo, 2005). These new findings therefore challenge the earlier interpretation of a domain-specific difficulty in inferring mental states and suggest that children with autism also have difficulty understanding misleading nonmental objects. Brain imaging data using false-belief, "false"-photo, and false-sign scenarios provide further supporting evidence for our conclusions.
Williamson, Ross S.; Sahani, Maneesh; Pillow, Jonathan W.
2015-01-01
Stimulus dimensionality-reduction methods in neuroscience seek to identify a low-dimensional space of stimulus features that affect a neuron’s probability of spiking. One popular method, known as maximally informative dimensions (MID), uses an information-theoretic quantity known as “single-spike information” to identify this space. Here we examine MID from a model-based perspective. We show that MID is a maximum-likelihood estimator for the parameters of a linear-nonlinear-Poisson (LNP) model, and that the empirical single-spike information corresponds to the normalized log-likelihood under a Poisson model. This equivalence implies that MID does not necessarily find maximally informative stimulus dimensions when spiking is not well described as Poisson. We provide several examples to illustrate this shortcoming, and derive a lower bound on the information lost when spiking is Bernoulli in discrete time bins. To overcome this limitation, we introduce model-based dimensionality reduction methods for neurons with non-Poisson firing statistics, and show that they can be framed equivalently in likelihood-based or information-theoretic terms. Finally, we show how to overcome practical limitations on the number of stimulus dimensions that MID can estimate by constraining the form of the non-parametric nonlinearity in an LNP model. We illustrate these methods with simulations and data from primate visual cortex. PMID:25831448
NASA Astrophysics Data System (ADS)
Zhang, Rui; Newhauser, Wayne D.
2009-03-01
In proton therapy, the radiological thickness of a material is commonly expressed in terms of water equivalent thickness (WET) or water equivalent ratio (WER). However, the WET calculations required either iterative numerical methods or approximate methods of unknown accuracy. The objective of this study was to develop a simple deterministic formula to calculate WET values with an accuracy of 1 mm for materials commonly used in proton radiation therapy. Several alternative formulas were derived in which the energy loss was calculated based on the Bragg-Kleeman rule (BK), the Bethe-Bloch equation (BB) or an empirical version of the Bethe-Bloch equation (EBB). Alternative approaches were developed for targets that were 'radiologically thin' or 'thick'. The accuracy of these methods was assessed by comparison to values from an iterative numerical method that utilized evaluated stopping power tables. In addition, we also tested the approximate formula given in the International Atomic Energy Agency's dosimetry code of practice (Technical Report Series No 398, 2000, IAEA, Vienna) and stopping power ratio approximation. The results of these comparisons revealed that most methods were accurate for cases involving thin or low-Z targets. However, only the thick-target formulas provided accurate WET values for targets that were radiologically thick and contained high-Z material.
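For orientation, the thin-target relation behind such formulas equates the proton's energy loss in the slab and in water. With t thickness, ρ density, and S̄ the mean mass stopping power over the relevant energy interval, the usual starting point (a sketch of the standard relation; the paper's thick-target formulas refine this) is

```latex
t_w \;\approx\; t_m \, \frac{\rho_m \, \bar{S}_m}{\rho_w \, \bar{S}_w}
```

where the subscripts m and w denote the material and water, respectively.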
NASA Astrophysics Data System (ADS)
Shankar Kumar, Ravi; Goswami, A.
2015-06-01
The article scrutinises the learning effect of the unit production time on optimal lot size for the uncertain and imprecise imperfect production process, wherein shortages are permissible and partially backlogged. Contextually, we contemplate the fuzzy chance of production process shifting from an 'in-control' state to an 'out-of-control' state and re-work facility of imperfect quality of produced items. The elapsed time until the process shifts is considered as a fuzzy random variable, and consequently, fuzzy random total cost per unit time is derived. Fuzzy expectation and the signed distance method are used to transform the fuzzy random cost function into an equivalent crisp function. The results are illustrated with the help of a numerical example. Finally, sensitivity analysis of the optimal solution with respect to major parameters is carried out.
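As one concrete piece of the machinery, the signed distance of a triangular fuzzy number Ã = (a, b, c) to the fuzzy origin is commonly defined as (this is the standard definition from the signed-distance literature; that the paper uses exactly this variant is an assumption)

```latex
d(\tilde{A}, \tilde{0}) \;=\; \tfrac{1}{4}\,(a + 2b + c)
```

which is how a fuzzy random cost expression can be collapsed to a crisp objective before optimization.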
Random distributed feedback fiber laser at 2.1 μm.
Jin, Xiaoxi; Lou, Zhaokai; Zhang, Hanwei; Xu, Jiangming; Zhou, Pu; Liu, Zejin
2016-11-01
We demonstrate a random distributed feedback fiber laser at 2.1 μm. A high-power pulsed Tm-doped fiber laser operating at 1.94 μm with a temporal duty ratio of 30% was employed as a pump laser to increase the equivalent incident pump power. A piece of 150 m highly GeO2-doped silica fiber that provides strong Raman gain and random distributed feedback was used as the gain medium. The maximum output power reached 0.5 W with an optical efficiency of 9%, which could be further improved with more pump power and an optimized fiber length. To the best of our knowledge, this is the first demonstration of a random distributed feedback fiber laser in the 2 μm band based on Raman gain.
Therapeutic equivalence of budesonide/formoterol delivered via breath-actuated inhaler vs pMDI.
Murphy, Kevin R; Dhand, Rajiv; Trudo, Frank; Uryniak, Tom; Aggarwal, Ajay; Eckerwall, Göran
2015-02-01
To assess equivalence of twice daily (bid) budesonide/formoterol (BUD/FM) 160/4.5 μg via breath-actuated metered-dose inhaler (BAI) versus pressurized metered-dose inhaler (pMDI). This 12-week, double-blind, multicenter, parallel-group study randomized adolescents and adults (aged ≥12 years) with asthma (and ≥3 months daily use of inhaled corticosteroids) to BUD/FM BAI 2 × 160/4.5 μg bid, BUD/FM pMDI 2 × 160/4.5 μg bid, or BUD pMDI 2 × 160 μg bid. Inclusion required prebronchodilator forced expiratory volume in one second (FEV1) ≥45 to ≤85% predicted, and reversibility of ≥12% in FEV1 (ages 12 to <18 years) or ≥12% and 200 mL (ages ≥18 years). Confirmation that 60-min postdose FEV1 response to BUD/FM pMDI was superior to BUD pMDI was required before equivalence testing. Therapeutic equivalence was shown by treatment effect ratio of BUD/FM BAI vs BUD/FM pMDI on 60-min postdose FEV1 and predose FEV1 within confidence intervals (CIs) of 80-125%. Mean age of 214 randomized patients was 42.7 years. BUD/FM pMDI was superior to BUD pMDI (60-min postdose FEV1 treatment effect ratio, 1.10; 95% CI, 1.06-1.14; p < 0.001). Treatment effect ratios for BUD/FM BAI versus pMDI for 60-min postdose FEV1 (1.01; 95% CI, 0.97-1.05) and predose FEV1 (1.03; 95% CI, 0.99-1.08) were within predetermined CIs for therapeutic equivalence. Adverse event profiles, tolerability, and patient-reported ease of use were similar. BUD/FM 2 × 160/4.5 μg bid BAI is therapeutically equivalent to BUD/FM conventional pMDI. The introduction of BUD/FM BAI would expand options for delivering inhaled corticosteroid/long-acting β2-agonist combination therapy to patients with moderate-to-severe asthma. ClinicalTrials.gov NCT01360021. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
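The equivalence criterion reduces to an interval check on the log scale. A sketch using the paper's published point estimate of 1.01 and an assumed standard error (the SE below is hypothetical, back-fitted to the reported CI):

```python
# Sketch: ratio-based therapeutic equivalence check against 80-125% bounds.
import math

def equivalence(log_ratio, se, lo_bound=0.80, hi_bound=1.25):
    lo = math.exp(log_ratio - 1.96 * se)
    hi = math.exp(log_ratio + 1.96 * se)
    return (lo, hi), lo_bound <= lo and hi <= hi_bound

(ci_lo, ci_hi), eq = equivalence(math.log(1.01), 0.02)   # assumed SE
print(f"ratio CI: {ci_lo:.2f}-{ci_hi:.2f}, equivalent: {eq}")
```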
STANDARD STARS AND EMPIRICAL CALIBRATIONS FOR Hα AND Hβ PHOTOMETRY
DOE Office of Scientific and Technical Information (OSTI.GOV)
Joner, Michael D.; Hintz, Eric G., E-mail: joner@byu.edu, E-mail: hintz@byu.edu
2015-12-15
We define an Hα photometric system that is designed as a companion to the well established Hβ index. The new system is built on spectrophotometric observations of field stars as well as stars in benchmark open clusters. We present data for 75 field stars, 12 stars from the Coma star cluster, 24 stars from the Hyades, 17 stars from the Pleiades, and 8 stars from NGC 752 to be used as primary standard stars in the new systems. We show that the system transformations are relatively insensitive to the shape of the filter functions. We make comparisons of the Hα index to the Hβ index and illustrate the relationship between the two systems. In addition, we present relations that relate both hydrogen indices to equivalent width and effective temperature. We derive equations to calibrate both systems for Main Sequence stars with spectral types in the range O9 to K2 for equivalent width and A2 to K2 for effective temperature.
[A short form of the positions on nursing diagnosis scale: development and psychometric testing].
Romero-Sánchez, José Manuel; Paloma-Castro, Olga; Paramio-Cuevas, Juan Carlos; Pastor-Montero, Sonia María; O'Ferrall-González, Cristina; Gabaldón-Bravo, Eva Maria; González-Domínguez, Maria Eugenia; Castro-Yuste, Cristina; Frandsen, Anna J; Martínez-Sabater, Antonio
2013-06-01
The Positions on Nursing Diagnosis (PND) is a scale that uses the semantic differential technique to measure nurses' attitudes towards the nursing diagnosis concept. The aim of this study was to develop a shortened form of the Spanish version of this scale and evaluate its psychometric properties and efficiency. A double theoretical-empirical approach was used to obtain a short form of the PND, the PND-7-SV, which would be equivalent to the original. Using a cross-sectional survey design, the reliability (internal consistency and test-retest reliability), construct (exploratory factor analysis, known-groups technique and discriminant validity) and criterion-related validity (concurrent validity), sensitivity to change and efficiency of the PND-7-SV were assessed in a sample of 476 Spanish nursing students. The results endorsed the utility of the PND-7-SV to measure attitudes toward nursing diagnosis in an equivalent manner to the complete form of the scale and in a shorter time.
NASA Astrophysics Data System (ADS)
Guo, Jinyun; Mu, Dapeng; Liu, Xin; Yan, Haoming; Dai, Honglei
2014-08-01
The Level-2 monthly GRACE gravity field models issued by the Center for Space Research (CSR), GeoForschungsZentrum (GFZ), and Jet Propulsion Laboratory (JPL) are treated as observations used to extract the equivalent water height (EWH) with robust independent component analysis (RICA). Smoothing radii of 300, 400, and 500 km are tested, respectively, in the Gaussian smoothing kernel function to reduce the observation Gaussianity. Three independent components are obtained by RICA in the spatial domain; the first component matches the geophysical signal, and the other two match the north-south stripes and the remaining noise. The first mode is used to estimate the EWHs of CSR, JPL, and GFZ, and is compared with the classical empirical decorrelation method (EDM). The EWH standard deviations for the 12 months of 2010 extracted by RICA and EDM show obvious fluctuations. The results indicate that sharp EWH changes in some areas have an important global effect, as in the Amazon, Mekong, and Zambezi basins.
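The separation step can be sketched with an off-the-shelf ICA. Below, sklearn's FastICA stands in for RICA (an assumption; the robust variant differs in its contrast function) on synthetic monthly maps containing an annual cycle, stripe-like noise, and white noise:

```python
# Sketch: ICA separation of gridded anomaly maps (synthetic, not GRACE data).
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(7)
n_months, n_grid = 120, 500
t = np.arange(n_months)
signal = np.sin(2 * np.pi * t / 12)             # annual hydrology-like cycle
stripes = rng.laplace(size=n_months)            # stripe-noise time series proxy
maps = (np.outer(signal, rng.normal(size=n_grid))
        + np.outer(stripes, rng.normal(size=n_grid))
        + 0.1 * rng.normal(size=(n_months, n_grid)))

ica = FastICA(n_components=3, random_state=0)
sources = ica.fit_transform(maps)               # temporal components
print("correlation with annual cycle:",
      np.round([abs(np.corrcoef(sources[:, k], signal)[0, 1]) for k in range(3)], 2))
```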
Comparing topography-based verbal behavior with stimulus selection-based verbal behavior
Sundberg, Carl T.; Sundberg, Mark L.
1990-01-01
Michael (1985) distinguished between two types of verbal behavior: topography-based and stimulus selection-based verbal behavior. The current research was designed to empirically examine these two types of verbal behavior while addressing the frequently debated question, Which augmentative communication system should be used with the nonverbal developmentally disabled person? Four mentally retarded adults served as subjects. Each subject was taught to tact an object by either pointing to its corresponding symbol (selection-based verbal behavior), or making the corresponding sign (topography-based verbal behavior). They were then taught an intraverbal relation, and were tested for the emergence of stimulus equivalence relations. The results showed that signed responses were acquired more readily than pointing responses as measured by the acquisition of tacts and intraverbals, and the formation of equivalence classes. These results support Michael's (1985) analysis, and have important implications for the design of language intervention programs for the developmentally disabled. PMID:22477602
A compact D-band monolithic APDP-based sub-harmonic mixer
NASA Astrophysics Data System (ADS)
Zhang, Shengzhou; Sun, Lingling; Wang, Xiang; Wen, Jincai; Liu, Jun
2017-11-01
The paper presents a compact D-band monolithic sub-harmonic mixer (SHM) with 3 μm planar hyperabrupt Schottky-varactor diodes offered by a 70 nm GaAs mHEMT technology. According to empirical equivalent-circuit models, a wide-band large-signal equivalent-circuit model of the diode is proposed. Based on the extracted model, the mixer is implemented and optimized with a shunt-mounted anti-parallel diode pair (APDP) to fulfill the sub-harmonic mixing mechanism. Furthermore, a modified asymmetric three-transmission-line coupler is devised to achieve high-level coupling and minimize the chip size. The measured results show that the conversion gain varies between -13.9 dB and -17.5 dB from 110 GHz to 145 GHz, with a local oscillator (LO) power level of 14 dBm and an intermediate frequency (IF) of 1 GHz. The total chip size including probe GSG pads is 0.57 × 0.68 mm². In conclusion, the mixer exhibits outstanding figures of merit.
NASA Astrophysics Data System (ADS)
Pfeilsticker, K.; Davis, A.; Marshak, A.; Suszcynsky, D. M.; Buldyrev, S.; Barker, H.
2001-12-01
2-stream RT models, as used in all current GCMs, are mathematically equivalent to standard diffusion theory, where the physical picture is a slow propagation of the diffuse radiation by Gaussian random walks. In other words, after the conventional van de Hulst rescaling by 1/(1-g) in R^3 and also by (1-g) in t, solar photons follow convoluted fractal trajectories in the atmosphere. For instance, we know that transmitted light is typically scattered about (1-g)τ^2 times while reflected light is scattered on average about τ times, where τ is the optical depth of the column. The space/time spread of this diffusion process is described exactly by a Gaussian distribution; from the statistical physics viewpoint, this follows from the convergence of the sum of many (rescaled) steps between scattering events with a finite variance. This Gaussian picture follows directly from first principles (the RT equation) under the assumptions of horizontal uniformity and large optical depth, i.e., there is a homogeneous plane-parallel cloud somewhere in the column. The first-order effect of 3D variability of cloudiness, the main source of scattering, is to perturb the distribution of single steps between scatterings which, modulo the '1-g' rescaling, can be assumed effectively isotropic. The most natural generalization of the Gaussian distribution is the 1-parameter family of symmetric Lévy-stable distributions, because the sum of many zero-mean random variables with infinite variance, but finite moments of order q < α (0 < α < 2), converges to them. It has been shown on heuristic grounds that for these Lévy-based random walks the typical number of scatterings is now (1-g)τ^α for transmitted light. The appearance of a non-rational exponent is why this is referred to as anomalous diffusion. Note that standard/Gaussian diffusion is retrieved in the limit α = 2-. Lévy transport theory has been successfully used in statistical physics to investigate a wide variety of systems with strongly nonlinear dynamics; these applications range from random advection in turbulent fluids to the erratic behavior of financial time-series and, most recently, self-regulating ecological systems. We will briefly survey the state-of-the-art observations that offer compelling empirical support for the Lévy/anomalous diffusion model in atmospheric radiation: (1) high-resolution spectroscopy of differential absorption in the O2 A-band from the ground; (2) temporal transient records of lightning strokes transmitted through clouds to a sensitive detector in space; and (3) the Gamma distributions of optical depths derived from Landsat cloud scenes at 30-m resolution. We will then introduce a rigorous analytical formulation of anomalous transport through finite media based on fractional derivatives and Sonin calculus. A remarkable result from this new theoretical development is an extremal property of the α = 1+ case (divergent mean free path), as is observed in the cloudy atmosphere. Finally, we will discuss the implications of anomalous transport theory for bulk 3D effects on the current enhanced-absorption problem as well as its role as the basis of a next-generation GCM RT parameterization.
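The τ^α scaling claim is easy to probe with a toy transport model. The sketch below runs a 1-D random walk across a slab of optical depth τ with Pareto-distributed free paths (all parameters hypothetical; α = 2 is the marginal, quasi-diffusive case, so its exponent should sit near 2 while smaller α gives a smaller exponent):

```python
# Sketch: scattering statistics of 1-D slab transport with heavy-tailed
# (Levy-like) free paths; transmitted photons should scatter ~ tau**alpha times.
import numpy as np

rng = np.random.default_rng(8)

def mean_scatterings(tau, alpha, n_photons=20_000):
    counts = []
    for _ in range(n_photons):
        x, n = 0.0, 0
        while 0.0 <= x < tau:
            step = 1.0 + rng.pareto(alpha)        # heavy-tailed free path
            x += step * rng.choice((-1.0, 1.0))   # isotropic 1-D direction
            n += 1
        if x >= tau:                              # transmitted (not reflected)
            counts.append(n)
    return np.mean(counts)

for alpha in (1.2, 2.0):
    n10, n20 = mean_scatterings(10, alpha), mean_scatterings(20, alpha)
    print(f"alpha={alpha}: doubling tau multiplies scatterings by "
          f"2**{np.log2(n20 / n10):.2f}")
```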
NASA Astrophysics Data System (ADS)
Moyer, Steve; Uhl, Elizabeth R.
2015-05-01
For more than 50 years, the U.S. Army RDECOM CERDEC Night Vision and Electronic Sensors Directorate (NVESD) has been studying and modeling the human visual discrimination process as it pertains to military imaging systems. In order to develop sensor performance models, human observers are trained to expert levels in the identification of military vehicles. From 1998 until 2006, the experimental stimuli were block randomized, meaning that stimuli with similar difficulty levels (for example, in terms of distance from target, blur, noise, etc.) were presented together in blocks of approximately 24 images, but the order of images within the block was random. Starting in 2006, complete randomization came into vogue, meaning that difficulty could change image to image. It was thought that this would provide a more statistically robust result. In this study we investigated the impact of the two types of randomization on performance in two groups of observers matched for skill to create equivalent groups. It is hypothesized that Soldiers in the Complete Randomization condition will have to shift their decision criterion more frequently than Soldiers in the Block Randomization condition; this shifting is expected to impede performance, so that Soldiers in the Block Randomization condition perform better.
Force limits measured on a space shuttle flight
NASA Technical Reports Server (NTRS)
Scharton, T.
2000-01-01
The random vibration forces between a payload and the sidewall of the space shuttle have been measured in flight and compared with the force specifications used in ground vibration tests. The flight data are in agreement with a semi-empirical method, which is widely used to predict vibration test force limits.
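The semi-empirical method referred to here ties the interface force specification to the acceleration specification through the payload mass. In its simplest low-frequency form (the standard relation from force-limiting practice, e.g. NASA-HDBK-7004; whether this exact variant was used in the flight comparison is an assumption):

```latex
S_{FF}(f) \;=\; C^2 \, M_0^2 \, S_{AA}(f)
```

where S_AA is the acceleration spectral density specification, M_0 is the payload mass, and C² is an empirical constant, typically of order a few, chosen from flight and test experience.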
ERIC Educational Resources Information Center
Hirsch, Shanna Eisner; Kennedy, Michael J.; Haines, Shana J.; Thomas, Cathy Newman; Alves, Kat D.
2015-01-01
Functional behavioral assessment (FBA) is an empirically supported intervention associated with decreasing problem behavior and increasing appropriate behavior. To date, few studies have examined multimedia approaches to FBA training. This paper provides the outcomes of a randomized controlled trial across three university sites and evaluates…
ERIC Educational Resources Information Center
Richardson, Ian M.
1990-01-01
A possible syllabus for English for Science and Technology is suggested based upon a set of causal relations, arising from a logical description of the presuppositional rhetoric of scientific passages that underlie most semantic functions. An empirical study is reported of the semantic functions present in 52 randomly selected passages.…
Schools and Drug Markets: Examining the Relationship between Schools and Neighborhood Drug Crime
ERIC Educational Resources Information Center
Willits, Dale; Broidy, Lisa M.; Denman, Kristine
2015-01-01
Research on drug markets indicates that they are not randomly distributed. Instead they are concentrated around specific types of places. Theoretical and empirical literature implicates routine activities and social disorganization processes in this distribution. In the current study, we examine whether, consistent with these theories, drug…
Empirically Driven Variable Selection for the Estimation of Causal Effects with Observational Data
ERIC Educational Resources Information Center
Keller, Bryan; Chen, Jianshen
2016-01-01
Observational studies are common in educational research, where subjects self-select or are otherwise non-randomly assigned to different interventions (e.g., educational programs, grade retention, special education). Unbiased estimation of a causal effect with observational data depends crucially on the assumption of ignorability, which specifies…
Essays on Policy Evaluation with Endogenous Adoption
ERIC Educational Resources Information Center
Gentile, Elisabetta
2011-01-01
Over the last decade, experimental and quasi-experimental methods have been favored by researchers in empirical economics, as they provide unbiased causal estimates. However, when implementing a program, it is often not possible to randomly assign subjects to treatment, leading to a possible endogeneity bias. This dissertation consists of two…
The Vocational Personality of School Psychologists in the United States
ERIC Educational Resources Information Center
Toomey, Kristine D.; Levinson, Edward M.; Morrison, Takea J.
2008-01-01
This study represents the first empirical test of the vocational personality of US school psychologists. Specifically, we investigated the personality of school psychologists using Holland's (1997) well-researched theory of vocational personalities and work environments. The sample consisted of 241 randomly selected members of the National…
Change in Sense of Community: An Empirical Finding
ERIC Educational Resources Information Center
Loomis, Colleen; Dockett, Kathleen H.; Brodsky, Anne E.
2004-01-01
This study investigated changes in students' psychological sense of community (SOC) under two conditions of external threat against their urban, historically Black, public nonresidential university in a U.S. mid-Atlantic city. Two independent stratified random samples (N = 801 and N = 241) consisting of undergraduate and graduate women (61%) and…
Capturing Students' Attention: An Empirical Study
ERIC Educational Resources Information Center
Rosegard, Erik; Wilson, Jackson
2013-01-01
College students ("n" = 846) enrolled in a general education course were randomly assigned to either an arousal (experimental) or no-arousal (control) group. The experimental group was exposed to a topic-relevant, 90-second external stimulus (a technique used to elevate arousal and focus attention). The control group listened to the…
Do Social Workers Make Better Child Welfare Workers than Non-Social Workers?
ERIC Educational Resources Information Center
Perry, Robin E.
2006-01-01
Objective: To empirically examine whether the educational background of child welfare workers in Florida impacts on performance evaluations of their work. Method: A proportionate, stratified random sample of supervisor and peer evaluations of child protective investigators and child protective service workers is conducted. ANOVA procedures are…
ERIC Educational Resources Information Center
Weiss, Michael J.; Mayer, Alexander; Cullinan, Dan; Ratledge, Alyssa; Sommo, Colleen; Diamond, John
2014-01-01
Empirical evidence confirms that increased education is positively associated with higher earnings across a wide spectrum of fields and student demographics (Barrow & Rouse, 2005; Card, 2001; Carneiro, Heckman, & Vytlacil, 2011; Dadgar & Weiss, 2012; Dynarski, 2008; Jacobson & Mokher, 2009; Jepsen, Troske, & Coomes, 2009; Kane…
Designing Authentic Learning Tasks for Online Library Instruction
ERIC Educational Resources Information Center
Finch, Jannette L.; Jefferson, Renee N.
2013-01-01
This empirical study explores whether authentic tasks designed specifically for deliberately grouped students have an effect on student perception of teaching presence and student cognitive gains. In one library research class offered in an express session online, the instructor grouped students randomly. In a second online library research class,…
Pettigrew, Jonathan; Miller-Day, Michelle; Krieger, Janice L.; Zhou, Jiangxiu; Hecht, Michael L.
2014-01-01
Random assignment to groups is the foundation for scientifically rigorous clinical trials. But assignment is challenging in group randomized trials when only a few units (schools) are assigned to each condition. In the DRSR project, we assigned 39 rural Pennsylvania and Ohio schools to three conditions (rural, classic, control). But even with 13 schools per condition, achieving pretest equivalence on important variables is not guaranteed. We collected data on six important school-level variables: rurality, number of grades in the school, enrollment per grade, percent white, percent receiving free/assisted lunch, and test scores. Key to our procedure was the inclusion of school-level drug use data, available for a subset of the schools. Also key was that we handled the partial data with modern missing-data techniques. We chose to create one composite stratifying variable based on the seven school-level variables available. Principal components analysis with the seven variables yielded two factors, which were averaged to form the composite inflate-suppress (CIS) score, which was the basis of stratification. The CIS score was broken into three strata within each state; schools were assigned at random to the three program conditions from within each stratum, within each state. Results showed that program group membership was unrelated to the CIS score, the two factors making up the CIS score, and the seven items making up the factors. Program group membership was not significantly related to pretest measures of drug use (alcohol, cigarettes, marijuana, chewing tobacco; smallest p>.15), thus verifying that pretest equivalence was achieved. PMID:23722619
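The assignment logic is compact enough to sketch end-to-end (synthetic data; the paper's missing-data handling via modern imputation is omitted, and the variable names below are placeholders):

```python
# Sketch: PCA composite score -> tertile strata within state -> randomize
# to three conditions within each stratum.
import numpy as np
import pandas as pd
from sklearn.decomposition import PCA

rng = np.random.default_rng(9)
df = pd.DataFrame(rng.normal(size=(39, 7)),
                  columns=["rurality", "n_grades", "enroll_per_grade",
                           "pct_white", "pct_free_lunch", "test_score",
                           "drug_use"])
df["state"] = rng.choice(["PA", "OH"], 39)

pcs = PCA(n_components=2).fit_transform(df.iloc[:, :7])
df["cis"] = pcs.mean(axis=1)                       # composite (CIS-like) score

df["condition"] = ""
for state, grp in df.groupby("state"):
    strata = pd.qcut(grp["cis"], 3, labels=False)  # tertile strata
    for s in range(3):
        idx = grp.index[strata == s]
        conds = np.resize(["rural", "classic", "control"], idx.size)
        df.loc[idx, "condition"] = rng.permutation(conds)

print(pd.crosstab(df["state"], df["condition"]))
```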
NASA Technical Reports Server (NTRS)
Ingels, F. M.; Mo, C. D.
1978-01-01
An empirical study of the performance of Viterbi decoders in bursty channels was carried out, and an improved algebraic decoder for nonsystematic codes was developed. The hybrid algorithm was simulated for the (2,1) k = 7 code on a computer using 20 channels having various error statistics, ranging from purely random-error to purely bursty channels. The hybrid system outperformed both the algebraic and the Viterbi decoders in every case, except for the 1% random-error channel, where the Viterbi decoder had one fewer bit error.
A unified development of several techniques for the representation of random vectors and data sets
NASA Technical Reports Server (NTRS)
Bundick, W. T.
1973-01-01
Linear vector space theory is used to develop a general representation of a set of data vectors or random vectors by linear combinations of orthonormal vectors such that the mean squared error of the representation is minimized. The orthonormal vectors are shown to be the eigenvectors of an operator. The general representation is applied to several specific problems involving the use of the Karhunen-Loeve expansion, principal component analysis, and empirical orthogonal functions; and the common properties of these representations are developed.
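The minimum-MSE property is easy to verify numerically: project data onto the top-k eigenvectors of the sample covariance and compare the reconstruction error against any other orthonormal basis of the same size (a sketch with synthetic correlated data):

```python
# Sketch: Karhunen-Loeve / PCA representation minimizes mean squared error
# among k-dimensional orthonormal representations.
import numpy as np

rng = np.random.default_rng(10)
n, d, k = 500, 20, 5
X = rng.normal(size=(n, d)) @ rng.normal(size=(d, d)) * 0.3   # correlated data
X -= X.mean(axis=0)

cov = X.T @ X / n
eigval, eigvec = np.linalg.eigh(cov)
top = eigvec[:, np.argsort(eigval)[::-1][:k]]      # top-k eigenvectors

def mse(basis):
    recon = X @ basis @ basis.T                    # project and reconstruct
    return np.mean((X - recon) ** 2)

Q, _ = np.linalg.qr(rng.normal(size=(d, k)))       # random orthonormal basis
print(f"KL basis MSE: {mse(top):.4f}, random basis MSE: {mse(Q):.4f}")
```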
Seismic random noise attenuation method based on empirical mode decomposition of Hausdorff dimension
NASA Astrophysics Data System (ADS)
Yan, Z.; Luan, X.
2017-12-01
Introduction: Empirical mode decomposition (EMD) is a noise-suppression algorithm based on wave-field separation, exploiting the scale differences between the effective signal and noise. However, because the complexity of the real seismic wave field produces serious mode aliasing, denoising with this method alone is not effective. Building on the multi-scale decomposition provided by the EMD algorithm and combining it with Hausdorff-dimension constraints, we propose a new method for seismic random noise attenuation. First, we apply the EMD algorithm to decompose seismic data adaptively and obtain a series of intrinsic mode functions (IMFs) of different scales. Based on the difference in Hausdorff dimension between effective signal and random noise, we identify the IMF components mixed with random noise. We then use a threshold correlation filtering process to separate the effective signal from the random noise. Compared with the traditional EMD method, the results show that the new method achieves better suppression of seismic random noise. Implementation: The EMD algorithm decomposes the seismic signal into a set of IMFs, whose spectra are then analyzed. Since most of the random noise is high-frequency, the IMF set can be divided into three categories: first, larger-scale components carrying the effective wave; second, smaller-scale components that are essentially noise; and third, IMF components containing a mixture of signal and random noise. The third kind of IMF component is then processed with the Hausdorff-dimension algorithm: an appropriate time-window size, initial step, and increment are selected to calculate the instantaneous Hausdorff dimension of each component. The dimension of the random noise lies between 1.0 and 1.05, while the dimension of the effective wave lies between 1.05 and 2.0. Finally, exploiting this dimension difference, sample points whose fractal dimension is less than or equal to 1.05 are extracted from each IMF component to separate the residual noise. Reconstructing from the dimension-filtered IMF components together with the effective-wave IMF components retained in the first selection yields the denoised result.
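A heavily simplified sketch of this pipeline's skeleton. It assumes the third-party PyEMD package for the EMD step and substitutes Higuchi's fractal-dimension estimator for the paper's Hausdorff-dimension calculation; note that with Higuchi's estimator noise-dominated components score near 2 rather than below 1.05, so the cutoff direction differs from the paper's convention.

```python
# Sketch only: EMD decomposition, per-IMF fractal-dimension estimate, and
# dimension-based selection of signal-like components.
import numpy as np
from PyEMD import EMD   # third-party package, assumed installed (pip install EMD-signal)

def higuchi_fd(x, kmax=8):
    """Higuchi fractal-dimension estimate for a 1-D series (stand-in for Hausdorff)."""
    N = len(x)
    logL, logk = [], []
    for k in range(1, kmax + 1):
        Lk = 0.0
        for m in range(k):
            d = np.abs(np.diff(x[m::k]))
            if len(d):
                Lk += d.sum() * (N - 1) / (len(d) * k * k)
        logL.append(np.log(Lk / k))
        logk.append(np.log(1.0 / k))
    return np.polyfit(logk, logL, 1)[0]   # slope of log L vs log(1/k) estimates the dimension

rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 2000)
trace = np.sin(2 * np.pi * 12 * t) + 0.4 * rng.normal(size=t.size)  # toy seismic trace

imfs = EMD().emd(trace)                                  # adaptive multi-scale decomposition
kept = [imf for imf in imfs if higuchi_fd(imf) < 1.5]    # keep smooth, signal-like IMFs
denoised = np.sum(kept, axis=0) if kept else np.zeros_like(trace)
```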
Modeling depth from motion parallax with the motion/pursuit ratio
Nawrot, Mark; Ratzlaff, Michael; Leonard, Zachary; Stroyan, Keith
2014-01-01
The perception of unambiguous scaled depth from motion parallax relies on both retinal image motion and an extra-retinal pursuit eye movement signal. The motion/pursuit ratio represents a dynamic geometric model linking these two proximal cues to the ratio of depth to viewing distance. An important step in understanding the visual mechanisms serving the perception of depth from motion parallax is to determine the relationship between these stimulus parameters and empirically determined perceived depth magnitude. Observers compared perceived depth magnitude of dynamic motion parallax stimuli to static binocular disparity comparison stimuli at three different viewing distances, in both head-moving and head-stationary conditions. A stereo-viewing system provided ocular separation for stereo stimuli and monocular viewing of parallax stimuli. For each motion parallax stimulus, a point of subjective equality (PSE) was estimated for the amount of binocular disparity that generates the equivalent magnitude of perceived depth from motion parallax. Similar to previous results, perceived depth from motion parallax had significant foreshortening. Head-moving conditions produced even greater foreshortening due to the differences in the compensatory eye movement signal. An empirical version of the motion/pursuit law, termed the empirical motion/pursuit ratio, which models perceived depth magnitude from these stimulus parameters, is proposed. PMID:25339926
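For reference, the geometric relation the abstract describes can be written as follows (a hedged reconstruction from the description above, not quoted from the paper):

```latex
% Motion/pursuit ratio: relative depth scales with the ratio of retinal
% image motion to the pursuit eye-movement signal, where d is the depth of
% a point relative to fixation and f the viewing distance.
\frac{d}{f} \approx \frac{d\theta/dt}{d\alpha/dt}
```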
Developing Business Writing Skills and Reducing Writing Anxiety of EFL Learners through Wikis
ERIC Educational Resources Information Center
Kassem, Mohamed Ali Mohamed
2017-01-01
The present study aimed at investigating the effect of using wikis on developing business writing skills and reducing writing anxiety of Business Administration students at Prince Sattam bin Abdul Aziz University, KSA. Sixty randomly chosen students, divided into two equivalent groups (control and experimental), participated in the…
Impact of Thematic Approach on Communication Skill in Preschool
ERIC Educational Resources Information Center
Ashokan, Varun; Venugopal, Kalpana
2016-01-01
The study investigated the effects of a thematic approach on communication skills for preschool children. The study was a quasi-experimental non-equivalent pretest-posttest control group design whereby 5-6 year old preschool children (n = 49) were randomly assigned to an experimental and a control group. The experimental group students were exposed…
Anxiety and Self-Concept Among American and Chinese College Students
ERIC Educational Resources Information Center
Paschal, Billy J.; You-Yuh, Kuo
1973-01-01
In this study, 60 pairs of Ss were randomly selected and individually matched on age, sex, grade equivalence, and birth order. The seven null hypotheses dealt with culture, sex, birth order, and their interactions. The main self-rating scales employed were the IPAT Anxiety Scale and the Tennessee Self Concept Scale. (Author/EK)
Regression Discontinuity Design in Gifted and Talented Education Research
ERIC Educational Resources Information Center
Matthews, Michael S.; Peters, Scott J.; Housand, Angela M.
2012-01-01
This Methodological Brief introduces the reader to the regression discontinuity design (RDD), which is a method that when used correctly can yield estimates of research treatment effects that are equivalent to those obtained through randomized control trials and can therefore be used to infer causality. However, RDD does not require the random…
Gender-Based Differential Item Performance in Mathematics Achievement Items.
ERIC Educational Resources Information Center
Doolittle, Allen E.; Cleary, T. Anne
1987-01-01
Eight randomly equivalent samples of high school seniors were each given a unique form of the ACT Assessment Mathematics Usage Test (ACTM). Signed measures of differential item performance (DIP) were obtained for each item in the eight ACTM forms. DIP estimates were analyzed and a significant item category effect was found. (Author/LMO)
ERIC Educational Resources Information Center
Ruble, Lisa; McGrew, John H.; Toland, Michael D.
2012-01-01
Goal attainment scaling (GAS) holds promise as an idiographic approach for measuring outcomes of psychosocial interventions in community settings. GAS has been criticized for untested assumptions of scaling level (i.e., interval or ordinal), inter-individual equivalence and comparability, and reliability of coding across different behavioral…
Outcomes of Parent Education Programs Based on Reevaluation Counseling
ERIC Educational Resources Information Center
Wolfe, Randi B.; Hirsch, Barton J.
2003-01-01
We report two studies in which a parent education program based on Reevaluation Counseling was field-tested on mothers randomly assigned to treatment groups or equivalent, no-treatment comparison groups. The goal was to evaluate the program's viability, whether there were measurable effects, whether those effects were sustained over time, and…
Using Kernel Equating to Assess Item Order Effects on Test Scores
ERIC Educational Resources Information Center
Moses, Tim; Yang, Wen-Ling; Wilson, Christine
2007-01-01
This study explored the use of kernel equating for integrating and extending two procedures proposed for assessing item order effects in test forms that have been administered to randomly equivalent groups. When these procedures are used together, they can provide complementary information about the extent to which item order effects impact test…
49 CFR 219.4 - Recognition of a foreign railroad's workplace testing program.
Code of Federal Regulations, 2013 CFR
2013-10-01
... (Continued) FEDERAL RAILROAD ADMINISTRATION, DEPARTMENT OF TRANSPORTATION CONTROL OF ALCOHOL AND DRUG USE... contains equivalents to subparts B, E, F, and G of this part: (i) Pre-employment drug testing; (ii) A policy dealing with co-worker and self-reporting of alcohol and drug abuse problems; (iii) Random drug...
49 CFR 219.4 - Recognition of a foreign railroad's workplace testing program.
Code of Federal Regulations, 2014 CFR
2014-10-01
... (Continued) FEDERAL RAILROAD ADMINISTRATION, DEPARTMENT OF TRANSPORTATION CONTROL OF ALCOHOL AND DRUG USE... contains equivalents to subparts B, E, F, and G of this part: (i) Pre-employment drug testing; (ii) A policy dealing with co-worker and self-reporting of alcohol and drug abuse problems; (iii) Random drug...
49 CFR 219.4 - Recognition of a foreign railroad's workplace testing program.
Code of Federal Regulations, 2011 CFR
2011-10-01
... (Continued) FEDERAL RAILROAD ADMINISTRATION, DEPARTMENT OF TRANSPORTATION CONTROL OF ALCOHOL AND DRUG USE... contains equivalents to subparts B, E, F, and G of this part: (i) Pre-employment drug testing; (ii) A policy dealing with co-worker and self-reporting of alcohol and drug abuse problems; (iii) Random drug...
49 CFR 219.4 - Recognition of a foreign railroad's workplace testing program.
Code of Federal Regulations, 2012 CFR
2012-10-01
... (Continued) FEDERAL RAILROAD ADMINISTRATION, DEPARTMENT OF TRANSPORTATION CONTROL OF ALCOHOL AND DRUG USE... contains equivalents to subparts B, E, F, and G of this part: (i) Pre-employment drug testing; (ii) A policy dealing with co-worker and self-reporting of alcohol and drug abuse problems; (iii) Random drug...
ERIC Educational Resources Information Center
Wang, Shudong; Wang, Ning; Hoadley, David
2007-01-01
This study used confirmatory factor analysis (CFA) to examine the comparability of the National Nurse Aide Assessment Program (NNAAP[TM]) test scores across language and administration condition groups for calibration and validation samples that were randomly drawn from the same population. Fit statistics supported both the calibration and…
Risko, Evan F.; Laidlaw, Kaitlin E. W.; Freeth, Megan; Foulsham, Tom; Kingstone, Alan
2012-01-01
Cognitive neuroscientists often study social cognition by using simple but socially relevant stimuli, such as schematic faces or images of other people. Whilst this research is valuable, important aspects of genuine social encounters are absent from these studies, a fact that has recently drawn criticism. In the present review we argue for an empirical approach to the determination of the equivalence of different social stimuli. This approach involves the systematic comparison of different types of social stimuli ranging in their approximation to a real social interaction. In garnering support for this cognitive ethological approach, we focus on recent research in social attention that has involved stimuli ranging from simple schematic faces to real social interactions. We highlight both meaningful similarities and differences in various social attentional phenomena across these different types of social stimuli thus validating the utility of the research initiative. Furthermore, we argue that exploring these similarities and differences will provide new insights into social cognition and social neuroscience. PMID:22654747
Klotz, Dino; Grave, Daniel A; Dotan, Hen; Rothschild, Avner
2018-03-15
Photoelectrochemical impedance spectroscopy (PEIS) is a useful tool for the characterization of photoelectrodes for solar water splitting. However, the analysis of PEIS spectra often involves a priori assumptions that might bias the results. This work puts forward an empirical method that analyzes the distribution of relaxation times (DRT), obtained directly from the measured PEIS spectra of a model hematite photoanode. By following how the DRT evolves as a function of control parameters such as the applied potential and composition of the electrolyte solution, we obtain unbiased insights into the underlying mechanisms that shape the photocurrent. In a subsequent step, we fit the data to a process-oriented equivalent circuit model (ECM) whose makeup is derived from the DRT analysis in the first step. This yields consistent quantitative trends of the dominant polarization processes observed. Our observations reveal a common step for the photo-oxidation reactions of water and H2O2 in alkaline solution.
Skyshine photon doses from 6 and 10 MV medical linear accelerators.
de Paiva, Eduardo; da Rosa, Luiz A R
2012-01-05
The skyshine radiation phenomenon consists of the scattering of primary photon beams in the atmosphere above the roof of a medical linear accelerator facility, generating an additional dose at ground level in the vicinity of the treatment room. With respect to radioprotection, this situation therefore plays an important role when the roof is designed with little shielding and there are buildings next to the radiotherapy treatment room. Few skyshine dose measurements have been reported in the literature, and these show poor agreement with empirical calculations. In this work, we carried out measurements of skyshine photon dose rates produced by eight different 6 and 10 MV medical accelerators. Each measurement was performed outside the room facility, with the beam positioned in the upward direction, at a horizontal distance from the target and for a 40 cm × 40 cm maximum photon field size at the accelerator isocenter. Measured dose-equivalent rates were compared with calculations obtained from an empirical expression, and the two differed by one or more orders of magnitude.
Determination of astrophysical parameters of quasars within the Gaia mission
NASA Astrophysics Data System (ADS)
Delchambre, L.
2018-01-01
We describe methods designed to determine the astrophysical parameters of quasars based on spectra coming from the red and blue spectrophotometers of the Gaia satellite. These methods principally rely on two already published algorithms: the weighted principal component analysis and the weighted phase correlation. The presented approach benefits from a fast implementation, an intuitive interpretation, and strong diagnostic tools on the potential errors that may arise during predictions. The production of a semi-empirical library of spectra as they will be observed by Gaia is also covered and subsequently used for validation purposes. We detail the pre-processing that is necessary in order for these spectra to be fully exploitable by our algorithms, along with the procedures used to predict the redshifts of the quasars, their continuum slopes, the total equivalent width of their emission lines, and whether these are broad absorption line (BAL) quasars or not. The performance of these procedures was assessed in comparison with the extremely randomized trees learning method and was shown to provide better results on the redshift predictions and on the ratio of correctly classified observations, though the probability of detection of BAL quasars remains restricted by the low resolution of these spectra as well as by their limited signal-to-noise ratio. Finally, the triggering of some warning flags allows us to obtain an extremely pure subset of redshift predictions where approximately 99 per cent of the observations come along with absolute errors below 0.1.
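The weighted phase correlation used here is a published Gaia-specific algorithm; the unweighted essence of phase-correlation shift estimation, on a log-wavelength grid where a redshift becomes a uniform translation, can be sketched as follows (all names and values illustrative).

```python
# Toy phase-correlation redshift estimate: find the shift that aligns an
# observed spectrum with a template on a log10(wavelength) grid.
import numpy as np

def phase_correlation_shift(a, b):
    """Estimate the integer pixel shift that best aligns b with a."""
    A, B = np.fft.rfft(a), np.fft.rfft(b)
    cross = A * np.conj(B)
    cross /= np.abs(cross) + 1e-12            # keep phase information only
    corr = np.fft.irfft(cross, n=len(a))
    shift = int(np.argmax(corr))
    return shift if shift <= len(a) // 2 else shift - len(a)

loglam = np.linspace(3.5, 4.0, 4096)          # log10(wavelength) grid
template = np.exp(-0.5 * ((loglam - 3.7) / 0.002) ** 2)   # toy emission line
observed = np.roll(template, 37)              # toy "redshifted" spectrum

pix = phase_correlation_shift(observed, template)
dloglam = loglam[1] - loglam[0]
z = 10 ** (pix * dloglam) - 1                 # log-wavelength shift -> redshift
```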
Using dolls for therapeutic purposes: A study on nursing home residents with severe dementia.
Cantarella, A; Borella, E; Faggian, S; Navuzzi, A; De Beni, R
2018-04-19
Among the psychosocial interventions intended to reduce the behavioral and psychological symptoms of dementia (BPSD), doll therapy (DT) is increasingly used in clinical practice. However, few studies on DT have been based on empirical data obtained with an adequate procedure, none have assessed its efficacy using an active control group, and the scales used to assess changes in BPSD are usually unreliable. The aim of the present study was to measure the impact of DT on people with severe dementia using a reliable, commonly used scale for assessing their BPSD and the related distress in formal caregivers. Effects of DT on everyday abilities (i.e., eating behavior) were also examined. Twenty-nine nursing home residents aged 76 to 96 years, with severe dementia (Alzheimer's or vascular dementia), took part in the experiment. They were randomly assigned to an experimental group that used dolls or an active control group that used hand warmers with sensory characteristics equivalent to the dolls. Benefits of DT on BPSD and related formal caregiver distress were examined with the Neuropsychiatric Inventory. The effects of DT on eating behavior were examined with the Eating Behavior Scale. Only the DT group showed a reduction in BPSD scores and related caregiver distress. DT did not benefit eating behavior, however. This study suggests that DT is a promising approach for reducing BPSD in people with dementia, supporting evidence emerging from previous anecdotal studies. Copyright © 2018 John Wiley & Sons, Ltd.
Maintaining homeostasis by decision-making.
Korn, Christoph W; Bach, Dominik R
2015-05-01
Living organisms need to maintain energetic homeostasis. For many species, this implies taking actions with delayed consequences. For example, humans may have to decide between foraging for high-calorie but hard-to-get, and low-calorie but easy-to-get food, under threat of starvation. Homeostatic principles prescribe decisions that maximize the probability of sustaining appropriate energy levels across the entire foraging trajectory. Here, predictions from biological principles contrast with predictions from economic decision-making models based on maximizing the utility of the endpoint outcome of a choice. To empirically arbitrate between the predictions of biological and economic models for individual human decision-making, we devised a virtual foraging task in which players chose repeatedly between two foraging environments, lost energy by the passage of time, and gained energy probabilistically according to the statistics of the environment they chose. Reaching zero energy was framed as starvation. We used the mathematics of random walks to derive endpoint outcome distributions of the choices. This also furnished equivalent lotteries, presented in a purely economic, casino-like frame, in which starvation corresponded to winning nothing. Bayesian model comparison showed that--in both the foraging and the casino frames--participants' choices depended jointly on the probability of starvation and the expected endpoint value of the outcome, but could not be explained by economic models based on combinations of statistical moments or on rank-dependent utility. This implies that under precisely defined constraints biological principles are better suited to explain human decision-making than economic models based on endpoint utility maximization.
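A toy Monte Carlo sketch of the task structure (parameters are illustrative, not the study's): energy drains with time, the chosen environment adds stochastic gains, and hitting zero counts as starvation, so the two decision-relevant quantities are the starvation probability and the endpoint distribution.

```python
# Illustrative foraging random walk: compare a "high-calorie, hard-to-get"
# environment against a "low-calorie, easy-to-get" one.
import numpy as np

rng = np.random.default_rng(3)

def forage(p_gain, gain, steps=40, energy0=10.0, cost=0.5, n_sims=20_000):
    """Return (probability of starvation, mean endpoint energy)."""
    starved = 0
    endpoints = np.zeros(n_sims)
    for i in range(n_sims):
        e = energy0
        for _ in range(steps):
            e -= cost                                    # passage of time
            e += gain if rng.random() < p_gain else 0.0  # stochastic foraging gain
            if e <= 0:                                   # starvation framing
                starved += 1
                break
        endpoints[i] = max(e, 0.0)
    return starved / n_sims, endpoints.mean()

print(forage(p_gain=0.2, gain=3.0))    # high-calorie, hard-to-get
print(forage(p_gain=0.8, gain=0.75))   # low-calorie, easy-to-get
```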
Maintaining Homeostasis by Decision-Making
Korn, Christoph W.; Bach, Dominik R.
2015-01-01
Living organisms need to maintain energetic homeostasis. For many species, this implies taking actions with delayed consequences. For example, humans may have to decide between foraging for high-calorie but hard-to-get, and low-calorie but easy-to-get food, under threat of starvation. Homeostatic principles prescribe decisions that maximize the probability of sustaining appropriate energy levels across the entire foraging trajectory. Here, predictions from biological principles contrast with predictions from economic decision-making models based on maximizing the utility of the endpoint outcome of a choice. To empirically arbitrate between the predictions of biological and economic models for individual human decision-making, we devised a virtual foraging task in which players chose repeatedly between two foraging environments, lost energy by the passage of time, and gained energy probabilistically according to the statistics of the environment they chose. Reaching zero energy was framed as starvation. We used the mathematics of random walks to derive endpoint outcome distributions of the choices. This also furnished equivalent lotteries, presented in a purely economic, casino-like frame, in which starvation corresponded to winning nothing. Bayesian model comparison showed that—in both the foraging and the casino frames—participants’ choices depended jointly on the probability of starvation and the expected endpoint value of the outcome, but could not be explained by economic models based on combinations of statistical moments or on rank-dependent utility. This implies that under precisely defined constraints biological principles are better suited to explain human decision-making than economic models based on endpoint utility maximization. PMID:26024504
NASA Astrophysics Data System (ADS)
James, Ryan G.; Mahoney, John R.; Crutchfield, James P.
2017-06-01
One of the most basic characterizations of the relationship between two random variables, X and Y, is the value of their mutual information. Unfortunately, calculating it analytically and estimating it empirically are often stymied by the extremely large dimension of the variables. One might hope to replace such a high-dimensional variable by a smaller one that preserves its relationship with the other. It is well known that either X (or Y) can be replaced by its minimal sufficient statistic about Y (or X) while preserving the mutual information. While intuitively reasonable, it is not obvious or straightforward that both variables can be replaced simultaneously. We demonstrate that this is in fact possible: the information X's minimal sufficient statistic preserves about Y is exactly the information that Y's minimal sufficient statistic preserves about X. We call this procedure information trimming. As an important corollary, we consider the case where one variable is a stochastic process's past and the other its future. In this case, the mutual information is the channel transmission rate between the channel's effective states. That is, the past-future mutual information (the excess entropy) is the amount of information about the future that can be predicted using the past. Translating our result about minimal sufficient statistics, this is equivalent to the mutual information between the forward- and reverse-time causal states of computational mechanics. We close by discussing multivariate extensions to this use of minimal sufficient statistics.
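A small numerical illustration of the invariance at work (our example, not the paper's): lumping together symbols of X that induce identical conditional distributions p(y|x) is a sufficient-statistic coarse-graining, and it leaves the mutual information unchanged.

```python
# Mutual information is preserved when equivalent symbols of X are merged.
import numpy as np

def mutual_information(pxy):
    """I(X;Y) in bits for a joint distribution given as a 2-D array."""
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

# x = 0 and x = 1 induce the same conditional p(y|x) = (0.5, 0.5)
pxy = np.array([[0.10, 0.10],
                [0.20, 0.20],
                [0.05, 0.35]])

merged = np.vstack([pxy[0] + pxy[1], pxy[2]])   # lump the equivalent symbols
assert np.isclose(mutual_information(pxy), mutual_information(merged))
```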
Map LineUps: Effects of spatial structure on graphical inference.
Beecham, Roger; Dykes, Jason; Meulemans, Wouter; Slingsby, Aidan; Turkay, Cagatay; Wood, Jo
2017-01-01
Fundamental to the effective use of visualization as an analytic and descriptive tool is the assurance that presenting data visually provides the capability of making inferences from what we see. This paper explores two related approaches to quantifying the confidence we may have in making visual inferences from mapped geospatial data. We adapt Wickham et al.'s 'Visual Line-up' method as a direct analogy with Null Hypothesis Significance Testing (NHST) and propose a new approach for generating more credible spatial null hypotheses. Rather than using as a spatial null hypothesis the unrealistic assumption of complete spatial randomness, we propose spatially autocorrelated simulations as alternative nulls. We conduct a set of crowdsourced experiments (n=361) to determine the just noticeable difference (JND) between pairs of choropleth maps of geographic units controlling for spatial autocorrelation (Moran's I statistic) and geometric configuration (variance in spatial unit area). Results indicate that people's abilities to perceive differences in spatial autocorrelation vary with baseline autocorrelation structure and the geometric configuration of geographic units. These results allow us, for the first time, to construct a visual equivalent of statistical power for geospatial data. Our JND results add to those provided in recent years by Klippel et al. (2011), Harrison et al. (2014) and Kay & Heer (2015) for correlation visualization. Importantly, they provide an empirical basis for an improved construction of visual line-ups for maps and the development of theory to inform geospatial tests of graphical inference.
Schumann, Anja; John, Ulrich; Ulbricht, Sabina; Rüge, Jeannette; Bischof, Gallus; Meyer, Christian
2008-11-01
This study examines tailored feedback letters of a smoking cessation intervention that is conceptually based on the transtheoretical model, from a content-based perspective. Data from 2 population-based intervention studies, both randomized controlled trials (total N = 1044), were used. The procedure of the intervention, the tailoring principle for the feedback letters, and the content of the intervention materials are described in detail. Theoretical and empirical frequencies of unique feedback letters are presented. The intervention system was able to generate a total of 1040 unique letters with normative feedback only, and almost half a million unique letters with normative and ipsative feedback. Almost every single smoker in contemplation, preparation, action, and maintenance had an empirically unique combination of tailoring variables and received a unique letter. In contrast, many smokers in precontemplation shared a combination of tailoring variables and received identical letters. The transtheoretical model provides an enormous theoretical and empirical variability of tailoring. However, tailoring for a major subgroup of smokers, i.e. those who do not intend to quit, needs improvement. Conceptual ideas for additional tailoring variables are discussed.
Localization in random bipartite graphs: Numerical and empirical study
NASA Astrophysics Data System (ADS)
Slanina, František
2017-05-01
We investigate adjacency matrices of bipartite graphs with a power-law degree distribution. Motivation for this study is twofold: first, vibrational states in granular matter and jammed sphere packings; second, graphs encoding social interaction, especially electronic commerce. We establish the position of the mobility edge and show that it strongly depends on the power in the degree distribution and on the ratio of the sizes of the two parts of the bipartite graph. At the jamming threshold, where the two parts have the same size, localization vanishes. We found that the multifractal spectrum is nontrivial in the delocalized phase, but still near the mobility edge. We also study an empirical bipartite graph, namely, the Amazon reviewer-item network. We found that in this specific graph the mobility edge disappears, and we draw a conclusion from this fact regarding earlier empirical studies of the Amazon network.
Localization in random bipartite graphs: Numerical and empirical study.
Slanina, František
2017-05-01
We investigate adjacency matrices of bipartite graphs with a power-law degree distribution. Motivation for this study is twofold: first, vibrational states in granular matter and jammed sphere packings; second, graphs encoding social interaction, especially electronic commerce. We establish the position of the mobility edge and show that it strongly depends on the power in the degree distribution and on the ratio of the sizes of the two parts of the bipartite graph. At the jamming threshold, where the two parts have the same size, localization vanishes. We found that the multifractal spectrum is nontrivial in the delocalized phase, but still near the mobility edge. We also study an empirical bipartite graph, namely, the Amazon reviewer-item network. We found that in this specific graph the mobility edge disappears, and we draw a conclusion from this fact regarding earlier empirical studies of the Amazon network.
Application of the Semi-Empirical Force-Limiting Approach for the CoNNeCT SCAN Testbed
NASA Technical Reports Server (NTRS)
Staab, Lucas D.; McNelis, Mark E.; Akers, James C.; Suarez, Vicente J.; Jones, Trevor M.
2012-01-01
The semi-empirical force-limiting vibration method was developed and implemented for payload testing to limit the structural impedance mismatch (high force) that occurs during shaker vibration testing. The method has since been extended for use in analytical models. The Space Communications and Navigation Testbed (SCAN Testbed) project, known at NASA as the Communications, Navigation, and Networking re-Configurable Testbed (CoNNeCT), utilized force-limiting testing and analysis following the semi-empirical approach. This paper presents the steps in performing a force-limiting analysis and then compares the results to test data recovered during the CoNNeCT force-limiting random vibration qualification test that took place at NASA Glenn Research Center (GRC) in the Structural Dynamics Laboratory (SDL) December 19, 2010 to January 7, 2011. A compilation of lessons learned and considerations for future force-limiting tests is also included.
[Is it still the "royal way"? The dream as a junction of neurobiology and psychoanalysis].
Simon, Mária
2011-01-01
Some decades ago, dreams were thought to be randomly generated by brain stem mechanisms acting on cortical and subcortical neuronal networks. However, recent empirical data, studies on brain lesions, and functional neuroimaging results have refuted this theory. Several lines of evidence support the view that motivation pathways and memory systems, especially implicit, emotional memory, play an important role in dream formation. This essay reviews how the results of neurobiology and cognitive psychology can be fitted into the theoretical frameworks and clinical practice of psychoanalysis. The main aim is to demonstrate that the results of neurobiology and the empirical observations of psychoanalysis are complementary rather than contradictory.
Clustering in complex directed networks
NASA Astrophysics Data System (ADS)
Fagiolo, Giorgio
2007-08-01
Many empirical networks display an inherent tendency to cluster, i.e., to form circles of connected nodes. This feature is typically measured by the clustering coefficient (CC). The CC, originally introduced for binary, undirected graphs, has been recently generalized to weighted, undirected networks. Here we extend the CC to the case of (binary and weighted) directed networks and we compute its expected value for random graphs. We distinguish between CCs that count all directed triangles in the graph (independently of the direction of their edges) and CCs that only consider particular types of directed triangles (e.g., cycles). The main concepts are illustrated by employing empirical data on world-trade flows.
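NetworkX's clustering routine implements this directed generalization (its documentation cites Fagiolo, 2007), so the quantity can be computed directly; for a directed Erdos-Renyi graph the expected value is approximately the edge density p.

```python
# Directed clustering coefficient on a random directed graph.
import networkx as nx

G = nx.gnp_random_graph(200, 0.05, directed=True, seed=42)
cc = nx.clustering(G)                        # per-node directed CC (all triangle types)
avg_cc = sum(cc.values()) / len(cc)

print(avg_cc)   # should be close to p = 0.05 for a directed random graph
# For weighted directed networks, pass weight="weight" to nx.clustering.
```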
NASA Technical Reports Server (NTRS)
Torres-Pomales, Wilfredo
2014-01-01
This report describes a modeling and simulation approach for disturbance patterns representative of the environment experienced by a digital system in an electromagnetic reverberation chamber. The disturbance is modeled by a multi-variate statistical distribution based on empirical observations. Extended versions of the Rejection Sampling and Inverse Transform Sampling techniques are developed to generate multi-variate random samples of the disturbance. The results show that Inverse Transform Sampling returns samples with higher fidelity relative to the empirical distribution. This work is part of an ongoing effort to develop a resilience assessment methodology for complex safety-critical distributed systems.
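The univariate core of Inverse Transform Sampling from an empirical distribution is compact; a sketch follows (the report's extension is multi-variate, which this illustration does not capture).

```python
# Inverse transform sampling from an empirical CDF: draw u ~ U(0,1) and
# map it back through the sorted observations.
import numpy as np

rng = np.random.default_rng(7)
observed = rng.lognormal(mean=0.0, sigma=0.8, size=5_000)  # stand-in empirical data

xs = np.sort(observed)
cdf = np.arange(1, len(xs) + 1) / len(xs)                  # empirical CDF values

def sample_empirical(n):
    u = rng.uniform(0.0, 1.0, size=n)
    idx = np.searchsorted(cdf, u)                          # invert the empirical CDF
    return xs[np.clip(idx, 0, len(xs) - 1)]

samples = sample_empirical(10_000)   # fidelity improves with the size of `observed`
```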
Exchangeability, extreme returns and Value-at-Risk forecasts
NASA Astrophysics Data System (ADS)
Huang, Chun-Kai; North, Delia; Zewotir, Temesgen
2017-07-01
In this paper, we propose a new approach to extreme value modelling for the forecasting of Value-at-Risk (VaR). In particular, the block maxima and the peaks-over-threshold methods are generalised to exchangeable random sequences. This caters for the dependencies, such as serial autocorrelation, of financial returns observed empirically. In addition, this approach allows for parameter variations within each VaR estimation window. Empirical prior distributions of the extreme value parameters are attained by using resampling procedures. We compare the results of our VaR forecasts to that of the unconditional extreme value theory (EVT) approach and the conditional GARCH-EVT model for robust conclusions.
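For context, the classical (non-exchangeable) peaks-over-threshold step that the paper generalizes can be sketched as follows, using the standard GPD-based VaR formula; the data and threshold choice are illustrative.

```python
# Peaks-over-threshold VaR: fit a generalized Pareto distribution to losses
# above a threshold u, then plug into VaR_q = u + (beta/xi)*(((n/n_u)*(1-q))**(-xi) - 1).
import numpy as np
from scipy.stats import genpareto, t as student_t

rng = np.random.default_rng(11)
losses = student_t.rvs(df=4, size=5_000, random_state=rng)  # heavy-tailed toy returns

u = np.quantile(losses, 0.95)                 # threshold
exceedances = losses[losses > u] - u
xi, _, beta = genpareto.fit(exceedances, floc=0)  # GPD shape and scale

n, n_u, q = len(losses), len(exceedances), 0.99
var_q = u + (beta / xi) * (((n / n_u) * (1 - q)) ** (-xi) - 1)
```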
The beta distribution: A statistical model for world cloud cover
NASA Technical Reports Server (NTRS)
Falls, L. W.
1973-01-01
Much work has been performed in developing empirical global cloud cover models. This investigation was made to determine an underlying theoretical statistical distribution to represent worldwide cloud cover. The beta distribution is proposed to represent the variability of this random variable. It is shown that the beta distribution possesses the versatile statistical characteristics necessary to assume the wide variety of shapes exhibited by cloud cover. A total of 160 representative empirical cloud cover distributions were investigated, and the conclusion was reached that this study provides sufficient statistical evidence to accept the beta probability distribution as the underlying model for world cloud cover.
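A brief sketch of the modelling idea with SciPy: the beta density f(x) = x^(a-1) (1-x)^(b-1) / B(a, b) on [0, 1] can take U-shaped, J-shaped, and unimodal forms, matching the variety of empirical cloud-cover distributions; the shape parameters are fitted with location and scale pinned to the unit interval.

```python
# Fit a beta distribution to fractional cloud-cover data on [0, 1].
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
cloud_cover = rng.beta(0.6, 0.8, size=1_000)   # stand-in for station observations
                                               # (a, b < 1 gives the clear/overcast U-shape)

a, b, loc, scale = stats.beta.fit(cloud_cover, floc=0, fscale=1)

# crude goodness-of-fit check, in the spirit of accepting/rejecting the model
ks = stats.kstest(cloud_cover, "beta", args=(a, b))
```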
A functional renormalization method for wave propagation in random media
NASA Astrophysics Data System (ADS)
Lamagna, Federico; Calzetta, Esteban
2017-08-01
We develop the exact renormalization group approach as a way to evaluate the effective speed of propagation of a scalar wave in a medium with random inhomogeneities. We use the Martin-Siggia-Rose formalism to translate the problem into a nonequilibrium field theory one, and then consider a sequence of models with a progressively lower infrared cutoff; in the limit where the cutoff is removed we recover the problem of interest. As a test of the formalism, we compute the effective dielectric constant of a homogeneous medium interspersed with randomly located, interpenetrating bubbles. A simple approximation to the renormalization group equations turns out to be equivalent to a self-consistent two-loop evaluation of the effective dielectric constant.
Turner, James D; Henshaw, Daryl S; Weller, Robert S; Jaffe, J Douglas; Edwards, Christopher J; Reynolds, J Wells; Russell, Gregory B; Dobson, Sean W
2018-05-08
To determine whether perineural dexamethasone prolongs peripheral nerve blockade (PNB) when measured objectively, and to determine whether 1 mg and 4 mg doses provide equivalent PNB prolongation compared with PNB without dexamethasone. Multiple studies have reported that perineural dexamethasone added to local anesthetics (LA) can prolong PNB. However, these studies have relied on subjective end-points to quantify PNB duration, and the optimal dose remains unknown. We hypothesized that 1 mg of perineural dexamethasone would be equivalent to 4 mg in prolonging an adductor canal block (ACB), and that both doses would be superior to an ACB performed without dexamethasone. This was a prospective, randomized, double-blind, placebo-controlled equivalency trial involving 85 patients undergoing a unicompartmental knee arthroplasty. All patients received an ACB with 20 ml of 0.25% bupivacaine with 1:400,000 epinephrine. Twelve patients had 0 mg of dexamethasone (placebo) added to the LA mixture; 36 patients had 1 mg of dexamethasone in the LA; and 37 patients had 4 mg of dexamethasone in the LA. The primary outcome was block duration determined by serial neurologic pinprick examinations. Secondary outcomes included time to first analgesic, serial pain scores, and cumulative opioid consumption. The 1 mg (31.8 ± 10.5 h) and 4 mg (37.9 ± 10 h) groups were not equivalent (TOST mean difference [95% CI], -6.1 h [-10.5, -2.3]). Also, the 4 mg group was superior to the 1 mg group (P = 0.035) and to the placebo group (29.7 ± 6.8 h, P = 0.011). There were no differences in opioid consumption or time to analgesic request; however, some pain scores were significantly lower in the dexamethasone groups when compared to placebo. Dexamethasone 4 mg, but not 1 mg, prolonged the duration of an ACB when measured by serial neurologic pinprick exams. NCT02462148. Copyright © 2018 Elsevier Inc. All rights reserved.
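The TOST (two one-sided tests) logic referenced above can be sketched with statsmodels; the numbers below are simulated stand-ins for the reported group summaries, not the trial's data, and the equivalence margin is illustrative.

```python
# Equivalence testing via TOST for two independent samples.
import numpy as np
from statsmodels.stats.weightstats import ttost_ind

rng = np.random.default_rng(13)
dur_1mg = rng.normal(31.8, 10.5, size=36)   # block duration, hours (simulated)
dur_4mg = rng.normal(37.9, 10.0, size=37)

margin = 10.0                                # illustrative equivalence margin, hours
p, lower, upper = ttost_ind(dur_1mg, dur_4mg, -margin, margin)
equivalent = p < 0.05                        # reject non-equivalence at the 5% level
```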
Peters, John C; Beck, Jimikaye; Cardel, Michelle; Wyatt, Holly R; Foster, Gary D; Pan, Zhaoxing; Wojtanowski, Alexis C; Vander Veur, Stephanie S; Herring, Sharon J; Brill, Carrie; Hill, James O
2016-02-01
To evaluate the effects of water versus beverages sweetened with non-nutritive sweeteners (NNS) on body weight in subjects enrolled in a year-long behavioral weight loss treatment program. The study used a randomized equivalence design with NNS or water beverages as the main factor in a trial among 303 weight-stable people with overweight and obesity. All participants participated in a weight loss program plus assignment to consume 24 ounces (710 ml) of water or NNS beverages daily for 1 year. NNS and water treatments were non-equivalent, with NNS treatment showing greater weight loss at the end of 1 year. At 1 year subjects receiving water had maintained a 2.45 ± 5.59 kg weight loss while those receiving NNS beverages maintained a loss of 6.21 ± 7.65 kg (P < 0.001 for difference). Water and NNS beverages were not equivalent for weight loss and maintenance during a 1-year behavioral treatment program. NNS beverages were superior for weight loss and weight maintenance in a population consisting of regular users of NNS beverages who either maintained or discontinued consumption of these beverages and consumed water during a structured weight loss program. These results suggest that NNS beverages can be an effective tool for weight loss and maintenance within the context of a weight management program. © 2015 The Authors, Obesity published by Wiley Periodicals, Inc. on behalf of The Obesity Society (TOS).
NASA Astrophysics Data System (ADS)
Liang, L. L.; Arcus, V. L.; Heskel, M.; O'Sullivan, O. S.; Weerasinghe, L. K.; Creek, D.; Egerton, J. J. G.; Tjoelker, M. G.; Atkin, O. K.; Schipper, L. A.
2017-12-01
Temperature is a crucial factor in determining the rates of ecosystem processes such as leaf respiration (R) - the flux of plant respired carbon dioxide (CO2) from leaves to the atmosphere. Generally, respiration rate increases exponentially with temperature, as modelled by the Arrhenius equation, but a recent study (Heskel et al., 2016) showed a universally convergent temperature response of R using an empirical exponential/polynomial model, whereby the exponent in the Arrhenius model is replaced by a quadratic function of temperature. The exponential/polynomial model has been used elsewhere to describe shoot respiration and plant respiration. What are the principles that underlie these empirical observations? Here, we demonstrate that macromolecular rate theory (MMRT), based on transition state theory for chemical kinetics, is equivalent to the exponential/polynomial model. We re-analyse the data from Heskel et al. (2016) using MMRT to show this equivalence and thus provide an explanation, based on thermodynamics, for the convergent temperature response of R. Using statistical tools, we also show the equivalent explanatory power of MMRT compared to the exponential/polynomial model, and the superiority of both of these models over the Arrhenius function. Three meaningful parameters emerge from the MMRT analysis: the temperature at which the rate of respiration is maximal (the so-called optimum temperature, Topt), the temperature at which the respiration rate is most sensitive to changes in temperature (the inflection temperature, Tinf), and the overall curvature of the log(rate) versus temperature plot (the so-called change in heat capacity for the system, ΔC_P‡). The latter term originates from the change in heat capacity between an enzyme-substrate complex and an enzyme transition-state complex in enzyme-catalysed metabolic reactions. From MMRT, we find the average Topt and Tinf of R are 67.0±1.2 °C and 41.4±0.7 °C across global sites. The average curvature (the average ΔC_P‡, which is negative) is -1.2±0.1 kJ mol⁻¹ K⁻¹. MMRT extends classic transition state theory to enzyme-catalysed reactions and scales up to more complex processes, including micro-organism growth rates and ecosystem processes.
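For reference, the MMRT rate equation as it is commonly written (a hedged reconstruction consistent with the parameters named above, not quoted from the paper):

```latex
% MMRT: transition state theory with a temperature-dependent activation
% enthalpy and entropy, parameterized by the activation heat capacity
% \Delta C_P^\ddagger and a reference temperature T_0.
\ln k(T) = \ln\frac{k_{B}T}{h}
  - \frac{\Delta H^{\ddagger}_{T_{0}} + \Delta C_{P}^{\ddagger}\,(T - T_{0})}{RT}
  + \frac{\Delta S^{\ddagger}_{T_{0}} + \Delta C_{P}^{\ddagger}\,\ln(T/T_{0})}{R}
```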
Rossi, G P; Seccia, T M; Miotto, D; Zucchetta, P; Cecchin, D; Calò, L; Puato, M; Motta, R; Caielli, P; Vincenzi, M; Ramondo, G; Taddei, S; Ferri, C; Letizia, C; Borghi, C; Morganti, A; Pessina, A C
2012-08-01
It is unclear whether revascularization of renal artery stenosis (RAS) by means of percutaneous renal angioplasty and stenting (PTRAS) is advantageous over optimal medical therapy. Hence, we designed a randomized clinical trial based on an optimized patient selection strategy and hard experimental endpoints. The primary objective of this study is to determine whether PTRAS is superior or equivalent to optimal medical treatment for preserving glomerular filtration rate (GFR) in the ischemic kidney as assessed by 99mTc-DTPA sequential renal scintiscan. Secondary objectives are to establish whether the two treatments are equivalent in lowering blood pressure, preserving overall renal function and regressing target organ damage, preventing cardiovascular events, and improving quality of life. The study is designed as a prospective, multicentre, randomized, unblinded, two-arm study. Eligible patients will have clinical and angio-CT evidence of RAS. The inclusion criterion is RAS affecting the main renal artery or its major branches, either >70% or, if <70%, with post-stenotic dilatation. Renal function will be assessed with 99mTc-DTPA renal scintigraphy. Patients will be randomized to either arm considering both the resistance index value in the ischemic kidney and the presence of unilateral/bilateral stenosis. The primary experimental endpoint will be the GFR of the ischemic kidney, assessed as a quantitative variable by 99mTc-DTPA, and the loss of the ischemic kidney, defined as a categorical variable.
Nakajima, Takuya; Roggia, Murilo F; Noda, Yasuo; Ueta, Takashi
2015-09-01
To evaluate the effect of internal limiting membrane (ILM) peeling during vitrectomy for diabetic macular edema. MEDLINE, EMBASE, and CENTRAL were systematically reviewed. Eligible studies included randomized or nonrandomized studies that compared surgical outcomes of vitrectomy with or without ILM peeling for diabetic macular edema. The primary and secondary outcome measures were postoperative best-corrected visual acuity and central macular thickness. Meta-analysis of mean differences between vitrectomy with and without ILM peeling was performed using the inverse-variance method in a random-effects model. Five studies (7 articles) with 741 patients were eligible for analysis. The superiority (95% confidence interval) in postoperative best-corrected visual acuity of the ILM peeling group over the nonpeeling group was 0.04 (-0.05 to 0.13) logMAR (equivalent to 2.0 ETDRS letters, P = 0.37), and the superiority in best-corrected visual acuity change in the ILM peeling group was 0.04 (-0.02 to 0.09) logMAR (equivalent to 2.0 ETDRS letters, P = 0.16). There was no significant difference in postoperative central macular thickness or central macular thickness reduction between the two groups. The visual acuity outcomes of pars plana vitrectomy with ILM peeling versus no ILM peeling were not significantly different. A larger randomized prospective study would be necessary to adequately address the effectiveness of ILM peeling on visual acuity outcomes.
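A compact sketch of the pooling step named above (inverse-variance weights in a random-effects model, with a DerSimonian-Laird estimate of the between-study variance); the input numbers are illustrative, not the review's data.

```python
# DerSimonian-Laird random-effects pooling of per-study mean differences.
import numpy as np

md = np.array([0.06, 0.02, -0.01, 0.08, 0.03])     # per-study mean differences (logMAR)
v = np.array([0.004, 0.006, 0.003, 0.010, 0.005])  # their variances

w_fixed = 1.0 / v
fixed_pooled = np.sum(w_fixed * md) / w_fixed.sum()
q = np.sum(w_fixed * (md - fixed_pooled) ** 2)     # Cochran's Q heterogeneity statistic
k = len(md)
tau2 = max(0.0, (q - (k - 1)) /
           (w_fixed.sum() - (w_fixed ** 2).sum() / w_fixed.sum()))  # between-study variance

w = 1.0 / (v + tau2)                                # random-effects weights
pooled = np.sum(w * md) / w.sum()
se = np.sqrt(1.0 / w.sum())
ci = (pooled - 1.96 * se, pooled + 1.96 * se)       # 95% confidence interval
```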
Cooperation evolution in random multiplicative environments
NASA Astrophysics Data System (ADS)
Yaari, G.; Solomon, S.
2010-02-01
Most real-life systems have a random component: the multitude of endogenous and exogenous factors influencing them results in stochastic fluctuations of the parameters determining their dynamics. These empirical systems are in many cases subject to noise of a multiplicative nature. The special properties of multiplicative noise, as opposed to additive noise, have long been recognized. Even though formally the difference between free additive and multiplicative random walks consists of just a move from normal to log-normal distributions, in practice the implications are much more far-reaching. While in an additive context the emergence and survival of cooperation requires special conditions (especially some level of reward, punishment, or reciprocity), we find that in the multiplicative random context the emergence of cooperation is much more natural and effective. We study the various implications of this observation and its applications in various contexts.
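A toy simulation of the central claim (our illustration, not the paper's model): under multiplicative shocks, agents who pool and share resources each step achieve a higher typical (logarithmic) growth rate than solo agents, with no reward, punishment, or reciprocity mechanism needed.

```python
# Multiplicative random walks: solo agents vs. agents who pool every step.
import numpy as np

rng = np.random.default_rng(17)
steps, n_agents = 2_000, 10
factors = rng.choice([0.6, 1.5], size=(steps, n_agents))  # multiplicative shocks

solo = np.ones(n_agents)
pooled = np.ones(n_agents)
for f in factors:
    solo *= f
    pooled = (pooled * f).mean() * np.ones(n_agents)      # share equally every step

# typical solo growth tends to E[log f] = 0.5*(log 0.6 + log 1.5) < 0,
# while pooling pushes the growth rate toward log E[f] = log 1.05 > 0
print(np.log(solo).mean() / steps, np.log(pooled[0]) / steps)
```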
Berry, Valerie; Thorburn, Christine E.; Knott, Sarah J.; Woodnutt, Gary
1998-01-01
Comparative antibacterial efficacies of erythromycin, clarithromycin, and azithromycin were examined against Streptococcus pneumoniae and Haemophilus influenzae, with amoxicillin-clavulanate used as the active control. In vitro, the macrolides at twice their MICs and at concentrations achieved in humans were bacteriostatic or reduced the numbers of viable S. pneumoniae slowly, whereas amoxicillin-clavulanate showed a rapid antibacterial effect. Against H. influenzae, erythromycin, clarithromycin, and clarithromycin plus 14-hydroxy clarithromycin at twice their MICs produced a slow reduction in bacterial numbers, whereas azithromycin was bactericidal. Azithromycin at the concentrations achieved in the serum of humans was bacteriostatic, whereas erythromycin and clarithromycin were ineffective. In experimental respiratory tract infections in rats, clarithromycin (equivalent to 250 mg twice daily [b.i.d.]) and amoxicillin-clavulanate (equivalent to 500 plus 125 mg b.i.d., respectively) were highly effective against S. pneumoniae, but azithromycin (equivalent to 500 and 250 mg once daily) was significantly less effective (P < 0.01). Against H. influenzae, clarithromycin treatment (equivalent to 250 or 500 mg b.i.d.) was similar to no treatment and was significantly less effective than amoxicillin-clavulanate treatment (P < 0.01). Azithromycin demonstrated significant in vivo activity (P < 0.05) but was significantly less effective than amoxicillin-clavulanate (P < 0.05). Overall, amoxicillin-clavulanate was effective in vitro and in vivo. Clarithromycin and erythromycin were ineffective in vitro and in vivo against H. influenzae, and azithromycin (at concentrations achieved in humans) showed unreliable activity against both pathogens. These results may have clinical implications for the utility of macrolides in the empiric therapy of respiratory tract infections. PMID:9835514
NASA Technical Reports Server (NTRS)
Childs, D. W.
1983-01-01
An improved theory for the prediction of the rotordynamic coefficients of turbulent annular seals was developed. Predictions from the theory are compared to the experimental results, and an approach for the direct calculation of empirical turbulent coefficients from test data is introduced. An improved short seal solution is shown to do a better job of calculating effective stiffness and damping coefficients than either the original short seal solution or a finite length solution. However, the original short seal solution does a much better job of predicting the equivalent added mass coefficient.
Culture in the cockpit: do Hofstede's dimensions replicate?
NASA Technical Reports Server (NTRS)
Merritt, A.; Helmreich, R. L. (Principal Investigator)
2000-01-01
Survey data collected from 9,400 male commercial airline pilots in 19 countries were used in a replication study of Hofstede's indexes of national culture. The analysis that removed the constraint of item equivalence proved superior, both conceptually and empirically, to the analysis using Hofstede's items and formulae as prescribed, and rendered significant replication correlations for all indexes (Individualism-Collectivism .96, Power Distance .87, Masculinity-Femininity .75, and Uncertainty Avoidance .68). The successful replication confirms that national culture exerts an influence on cockpit behavior over and above the professional culture of pilots, and that "one size fits all" training is inappropriate.
Analysis of the effect of numbers of aircraft operations on community annoyance
NASA Technical Reports Server (NTRS)
Connor, W. K.; Patterson, H. P.
1976-01-01
The general validity of the equivalent-energy concept as applied to community annoyance to aircraft noise has been recently questioned by investigators using a peak-dBA concept. Using data previously gathered around nine U.S. airports, empirical tests of both concepts are presented. Results show that annoyance response follows neither concept, that annoyance increases steadily with energy-mean level for constant daily operations and with numbers of operations up to 100-199 per day (then decreases for higher numbers), and that the behavior of certain response descriptors is dependent upon the statistical distributions of numbers and levels.
Johansen, Mette Yun; MacDonald, Christopher Scott; Hansen, Katrine Bagge; Karstoft, Kristian; Christensen, Robin; Pedersen, Maria; Hansen, Louise Seier; Zacho, Morten; Wedell-Neergaard, Anne-Sophie; Nielsen, Signe Tellerup; Iepsen, Ulrik Wining; Langberg, Henning; Vaag, Allan Arthur; Pedersen, Bente Klarlund; Ried-Larsen, Mathias
2017-08-15
It is unclear whether a lifestyle intervention can maintain glycemic control in patients with type 2 diabetes. To test whether an intensive lifestyle intervention results in equivalent glycemic control compared with standard care and, secondarily, leads to a reduction in glucose-lowering medication in participants with type 2 diabetes. Randomized, assessor-blinded, single-center study within Region Zealand and the Capital Region of Denmark (April 2015-August 2016). Ninety-eight adult participants with non-insulin-dependent type 2 diabetes who were diagnosed for less than 10 years were included. Participants were randomly assigned (2:1; stratified by sex) to the lifestyle group (n = 64) or the standard care group (n = 34). All participants received standard care with individual counseling and standardized, blinded, target-driven medical therapy. Additionally, the lifestyle intervention included 5 to 6 weekly aerobic training sessions (duration 30-60 minutes), of which 2 to 3 sessions were combined with resistance training. The lifestyle participants received dietary plans aiming for a body mass index of 25 or less. Participants were followed up for 12 months. Primary outcome was change in hemoglobin A1c (HbA1c) from baseline to 12-month follow-up, and equivalence was prespecified by a CI margin of ±0.4% based on the intention-to-treat population. Superiority analysis was performed on the secondary outcome reductions in glucose-lowering medication. Among 98 randomized participants (mean age, 54.6 years [SD, 8.9]; women, 47 [48%]; mean baseline HbA1c, 6.7%), 93 participants completed the trial. From baseline to 12-month follow-up, the mean HbA1c level changed from 6.65% to 6.34% in the lifestyle group and from 6.74% to 6.66% in the standard care group (mean between-group difference in change of -0.26% [95% CI, -0.52% to -0.01%]), not meeting the criteria for equivalence (P = .15). Reduction in glucose-lowering medications occurred in 47 participants (73.5%) in the lifestyle group and 9 participants (26.4%) in the standard care group (difference, 47.1 percentage points [95% CI, 28.6-65.3]). There were 32 adverse events (most commonly musculoskeletal pain or discomfort and mild hypoglycemia) in the lifestyle group and 5 in the standard care group. Among adults with type 2 diabetes diagnosed for less than 10 years, a lifestyle intervention compared with standard care resulted in a change in glycemic control that did not reach the criterion for equivalence, but was in a direction consistent with benefit. Further research is needed to assess superiority, as well as generalizability and durability of findings. clinicaltrials.gov Identifier: NCT02417012.
Reproduction of exact solutions of Lipkin model by nonlinear higher random-phase approximation
NASA Astrophysics Data System (ADS)
Terasaki, J.; Smetana, A.; Šimkovic, F.; Krivoruchenko, M. I.
2017-10-01
It is shown that the random-phase approximation (RPA) method with its nonlinear higher generalization, which was previously considered an approximation except in a very limited case, reproduces the exact solutions of the Lipkin model. The nonlinear higher RPA is based on an equation nonlinear in the eigenvectors and includes many-particle-many-hole components in the creation operator of the excited states. We demonstrate the exact character of the solutions analytically for particle number N = 2 and numerically for N = 8. This finding indicates that the nonlinear higher RPA is equivalent to the exact Schrödinger equation.
NASA Technical Reports Server (NTRS)
Argentiero, P.; Lowrey, B.
1977-01-01
The least squares collocation algorithm for estimating gravity anomalies from geodetic data is shown to be an application of the well-known regression equations which provide the mean and covariance of a random vector (gravity anomalies) given a realization of a correlated random vector (geodetic data). It is also shown that the collocation solution for gravity anomalies is equivalent to the conventional least-squares-Stokes' function solution when the conventional solution utilizes properly weighted zero a priori estimates. The mathematical and physical assumptions underlying the least squares collocation estimator are described.
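The regression equations referred to are the standard conditional-Gaussian formulas; with x the gravity anomalies, y the geodetic data, and C denoting the covariance blocks,

```latex
% Conditional mean and covariance of x given an observed realization of y:
\hat{x} = \bar{x} + C_{xy} C_{yy}^{-1} \left( y - \bar{y} \right), \qquad
\operatorname{Cov}(x \mid y) = C_{xx} - C_{xy} C_{yy}^{-1} C_{yx}
```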
Pontes, Caridad; Gratacós, Jordi; Torres, Ferran; Avendaño, Cristina; Sanz, Jesús; Vallano, Antoni; Juanola, Xavier; de Miguel, Eugenio; Sanmartí, Raimon; Calvo, Gonzalo
2015-08-20
Dose reduction schedules of tumor necrosis factor antagonists (anti-TNF) as maintenance therapy in patients with spondyloarthritis are used empirically in clinical practice, despite the lack of clinical trials providing evidence for this practice. To address this issue the Spanish Society of Rheumatology (SER) and Spanish Society of Clinical Pharmacology (SEFC) designed a 3-year multicenter, randomized, open-label, controlled clinical trial (2 years for inclusion and 1 year of follow-up). The study is expected to include 190 patients with axial spondyloarthritis on stable maintenance treatment (≥4 months) with any anti-TNF agent at doses recommended in the summary of product characteristics. Patients will be randomized to either a dose reduction arm or maintenance of the dosing regimen as per the official labelling recommendations. Randomization will be stratified according to the anti-TNF agent received before study inclusion. Patient follow-up, visit schedule, and examinations will be maintained as per normal clinical practice recommendations according to SER guidelines. The study aims to test the hypothesis of noninferiority of the dose reduction strategy compared with standard treatment. The first patients were recruited in July 2012, and study completion is scheduled for the end of April 2015. The REDES-TNF study is a pragmatic clinical trial that aims to provide evidence to support a medical decision now made empirically. The study results may help inform clinical decisions relevant to both patients and healthcare decision makers. EudraCT 2011-005871-18 (21 December 2011).
Pathwise upper semi-continuity of random pullback attractors along the time axis
NASA Astrophysics Data System (ADS)
Cui, Hongyong; Kloeden, Peter E.; Wu, Fuke
2018-07-01
The pullback attractor of a non-autonomous random dynamical system is a time-indexed family of random sets, typically having the form {A_t(·)}_{t∈ℝ}, with each A_t(·) a random set. This paper is concerned with the nature of such time-dependence. It is shown that the upper semi-continuity of the mapping t ↦ A_t(ω), for each ω fixed, has an equivalence relationship with the uniform compactness of the local union ⋃_{s∈I} A_s(ω), where I ⊂ ℝ is compact. Applied to a semi-linear degenerate parabolic equation with additive noise and a wave equation with multiplicative noise, we show that no additional conditions are required in order to prove the above locally uniform compactness and upper semi-continuity, in which sense the two properties appear to be general properties satisfied by a large number of real models.
Storch, Eric A; Lewin, Adam B; Collier, Amanda B; Arnold, Elysse; De Nadai, Alessandro S; Dane, Brittney F; Nadeau, Joshua M; Mutch, P Jane; Murphy, Tanya K
2015-03-01
This study examined the efficacy of a personalized, modular cognitive-behavioral therapy (CBT) protocol among early adolescents with high-functioning autism spectrum disorders (ASDs) and co-occurring anxiety, relative to treatment as usual (TAU). Thirty-one children (11-16 years) with ASD and clinically significant anxiety were randomly assigned to receive 16 weekly CBT sessions or an equivalent duration of TAU. Participants were assessed by blinded raters at screening, posttreatment, and 1-month follow-up. Youth randomized to CBT demonstrated superior improvement across primary outcomes relative to those receiving TAU. Eleven of 16 adolescents randomized to CBT were treatment responders, versus 4 of 15 in the TAU condition. Gains were maintained at 1-month follow-up for CBT responders. These data extend findings of the promising effects of CBT in anxious youth with ASD to early adolescents. © 2014 Wiley Periodicals, Inc.
Random vibration analysis of space flight hardware using NASTRAN
NASA Technical Reports Server (NTRS)
Thampi, S. K.; Vidyasagar, S. N.
1990-01-01
During liftoff and ascent flight phases, the Space Transportation System (STS) and payloads are exposed to the random acoustic environment produced by engine exhaust plumes and aerodynamic disturbances. The analysis of payloads for randomly fluctuating loads is usually carried out using Miles' relationship. This approximation technique computes an equivalent load factor as a function of the natural frequency of the structure, the power spectral density of the excitation, and the magnification factor at resonance. Because of the assumptions inherent in Miles' equation, random load factors are often overestimated by this approach. In such cases, the estimates can be refined using alternative techniques such as time-domain simulations or frequency-domain spectral analysis. Described here is the use of NASTRAN to compute more realistic random load factors through spectral analysis. The procedure is illustrated using Spacelab Life Sciences (SLS-1) payloads, and certain unique features of this problem are described. The solutions are compared with Miles' results in order to establish trends of over- or under-prediction.
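The Miles approximation described above reduces, for a single-degree-of-freedom structure under broadband random input, to g_rms = sqrt((π/2) · f_n · Q · PSD(f_n)). The hedged sketch below computes it with illustrative parameter values; the NASTRAN spectral analysis it is compared against in the paper is not reproduced here.

```python
# Miles' equation: RMS acceleration response of a SDOF system under
# broadband random excitation. All numeric values are illustrative.
import math

def miles_grms(fn_hz, psd_g2_per_hz, Q=10.0):
    """Equivalent RMS load from natural frequency, input PSD at fn, and Q."""
    return math.sqrt(math.pi / 2.0 * fn_hz * Q * psd_g2_per_hz)

fn = 80.0      # structural natural frequency, Hz (illustrative)
psd = 0.04     # input acceleration PSD at fn, g^2/Hz (illustrative)
grms = miles_grms(fn, psd)
print(f"g_rms = {grms:.2f} g, 3-sigma load factor = {3 * grms:.2f} g")
```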
A high speed implementation of the random decrement algorithm
NASA Technical Reports Server (NTRS)
Kiraly, L. J.
1982-01-01
The algorithm is useful for measuring net system damping levels in stochastic processes and for the development of equivalent linearized system response models. The algorithm works by summing together all subrecords that occur after a predefined threshold level is crossed. The random decrement signature is normally developed by scanning stored data and adding subrecords together. The high-speed implementation of the random decrement algorithm exploits the digital character of sampled data and uses fixed record lengths of 2^n samples to greatly speed up the process. The contribution of each data point to the random decrement signature is calculated only once, in the same sequence as the data were taken. A hardware implementation of the algorithm using random logic is diagrammed, and the process is shown to be limited only by the record size and the threshold-crossing frequency of the sampled data. With a hardware cycle time of 200 ns and a 1024-point signature, a threshold-crossing frequency of 5000 Hz can be processed and a stably averaged signature presented in real time.
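A software sketch of the basic (scan-and-add) form of the random decrement signature described above: every upward threshold crossing triggers accumulation of the following fixed-length subrecord, and the average over triggers is the signature. The test signal and threshold choice are ours, and the high-speed hardware reorganization is not modeled.

```python
# Basic random decrement: average fixed-length subrecords following
# each upward threshold crossing of the sampled signal.
import numpy as np

def randomdec(x, threshold, rec_len=1024):
    sig = np.zeros(rec_len)
    count = 0
    for i in range(len(x) - rec_len):
        if x[i] < threshold <= x[i + 1]:      # upward crossing
            sig += x[i + 1 : i + 1 + rec_len]
            count += 1
    return sig / max(count, 1), count

# Illustrative use: lightly damped oscillator response to white noise.
rng = np.random.default_rng(1)
kernel = np.exp(-0.002 * np.arange(2000)) * np.sin(0.1 * np.arange(2000))
x = np.convolve(rng.standard_normal(200_000), kernel, mode="same")
sig, n = randomdec(x, threshold=x.std())
print(f"{n} subrecords averaged into the signature")
```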
ERIC Educational Resources Information Center
Quinn, David M.; Kim, James S.
2018-01-01
Theory and empirical work suggest that teachers' social capital influences school improvement efforts. Social ties are prerequisite for social capital, yet little causal evidence exists on how malleable factors, such as instructional management approaches, affect teachers' ties. In this cluster-randomized trial, we apply a decision-making…
ERIC Educational Resources Information Center
Ford, Julian D.; Steinberg, Karen L.; Zhang, Wanli
2011-01-01
Addressing affect dysregulation may provide a complementary alternative or adjunctive approach to the empirically supported trauma memory processing models of cognitive behavior therapy (CBT) for posttraumatic stress disorder (PTSD). A CBT designed to enhance affect regulation without trauma memory processing--trauma affect regulation: guide for…
Designing Large-Scale Multisite and Cluster-Randomized Studies of Professional Development
ERIC Educational Resources Information Center
Kelcey, Ben; Spybrook, Jessaca; Phelps, Geoffrey; Jones, Nathan; Zhang, Jiaqi
2017-01-01
We develop a theoretical and empirical basis for the design of teacher professional development studies. We build on previous work by (a) developing estimates of intraclass correlation coefficients for teacher outcomes using two- and three-level data structures, (b) developing estimates of the variance explained by covariates, and (c) modifying…
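One quantity that design work of this kind turns on is the intraclass correlation coefficient. As a hedged companion to point (a) above, the sketch below computes the one-way ANOVA estimator of the ICC for a two-level structure and the resulting design effect; the data are simulated and the (near-)balanced-cluster assumption is ours, not the authors'.

```python
# One-way ANOVA estimator of the ICC for a two-level design, plus the
# design effect 1 + (n - 1) * ICC. Simulated, near-balanced clusters.
import numpy as np

def icc_oneway(groups):
    """groups: list of 1-D arrays, one per cluster (assumed near-balanced)."""
    k = len(groups)
    n = np.mean([len(g) for g in groups])
    grand = np.mean(np.concatenate(groups))
    msb = n * sum((g.mean() - grand) ** 2 for g in groups) / (k - 1)
    msw = sum(((g - g.mean()) ** 2).sum() for g in groups) / \
          (sum(len(g) for g in groups) - k)
    return (msb - msw) / (msb + (n - 1) * msw)

rng = np.random.default_rng(2)
clusters = [rng.normal(rng.normal(0, 0.5), 1.0, size=20) for _ in range(40)]
rho = icc_oneway(clusters)
print(f"ICC = {rho:.3f}, design effect = {1 + (20 - 1) * rho:.2f}")
```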
Recreational Prescription Drug Use among College Students
ERIC Educational Resources Information Center
Kolek, Ethan A.
2009-01-01
The purpose of this study was to explore recreational prescription drug use among undergraduate students. Although anecdotal accounts on this subject abound, empirical research is extremely limited. Data from a survey of a random sample of 734 students at a large public research university in the Northeast were examined. Results indicate that a…
Does the First Week of Class Matter? A Quasi-Experimental Investigation of Student Satisfaction
ERIC Educational Resources Information Center
Hermann, Anthony D.; Foster, David A.; Hardin, Erin E.
2010-01-01
Teaching experts suggest that establishing clear expectations and a supportive environment at the beginning of a college course has a lasting impact on student attitudes. However, minimal empirical evidence exists to support these suggestions. Consequently, we randomly assigned instructors to either begin their course with a reciprocal interview…
Peak Experiences: Some Empirical Tests.
ERIC Educational Resources Information Center
Wuthnow, Robert
1978-01-01
This article presents findings regarding peak experiences from a systematic random sample of 1,000 persons in the San Francisco-Oakland area. Evidence is presented on the incidence of peak experiences, on the kinds of life styles which tend to be associated with these experiences, and on some of the social implications that these experiences have.…
Applying Generalizability Theory To Evaluate Treatment Effect in Single-Subject Research.
ERIC Educational Resources Information Center
Lefebvre, Daniel J.; Suen, Hoi K.
An empirical investigation of methodological issues associated with evaluating treatment effect in single-subject research (SSR) designs is presented. This investigation: (1) conducted a generalizability (G) study to identify the sources of systematic and random measurement error (SRME); (2) used an analytic approach based on G theory to integrate…
Community of Inquiry Method and Language Skills Acquisition: Empirical Evidence
ERIC Educational Resources Information Center
Preece, Abdul Shakhour Duncan
2015-01-01
The study investigates the effectiveness of community of inquiry method in preparing students to develop listening and speaking skills in a sample of junior secondary school students in Borno state, Nigeria. A sample of 100 students in standard classes was drawn in one secondary school in Maiduguri metropolis through stratified random sampling…
Digital Print Concepts: Conceptualizing a Modern Framework for Measuring Emerging Knowledge
ERIC Educational Resources Information Center
Javorsky, Kristin H.
2014-01-01
This dissertation sought to produce and empirically test a theoretical model for the literacy construct of print concepts that would take into account the unique affordances of digital picture books for emergent readers. The author used an exploratory study of twenty randomly selected digital story applications to identify print conventions, text…
Analyzing Empirical Evaluations of Non-Experimental Methods in Field Settings
ERIC Educational Resources Information Center
Steiner, Peter M.; Wong, Vivian
2016-01-01
Despite recent emphasis on the use of randomized control trials (RCTs) for evaluating education interventions, in most areas of education research, observational methods remain the dominant approach for assessing program effects. Over the last three decades, the within-study comparison (WSC) design has emerged as a method for evaluating the…
ERIC Educational Resources Information Center
Kelly, Robert F.; Ramsey, Sarah H.
1985-01-01
Evaluated legal representation for children in child protection proceedings, using data from a random sample of cases and attorneys in North Carolina. Results showed the attorneys, in general, produced no significant benefits for the children. Based on results, identifies particular types of effective attorneys. (NRB)
The Evolution, Design and Implementation of the Minds in Motion Curriculum
ERIC Educational Resources Information Center
Cottone, Elizabeth; Chen, Wei-Bing; Brock, Laura
2013-01-01
Building on the empirical work of the previous two studies, this paper describes the development of the Minds In Motion curriculum (MIM), as well as the setting and circumstances of a randomized controlled trial conducted to evaluate this intervention. Throughout this paper the authors emphasize the benefits and challenges of assembling an…
ERIC Educational Resources Information Center
Luby, Joan; Lenze, Shannon; Tillman, Rebecca
2012-01-01
Background: Validation for depression in preschool children has been established; however, to date no empirical investigations of interventions for the early onset disorder have been conducted. Based on this and the modest efficacy of available treatments for childhood depression, the need for novel early interventions has been emphasized. Large…
The Richer, the Happier? An Empirical Investigation in Selected European Countries
ERIC Educational Resources Information Center
Seghieri, Chiara; Desantis, Gustavo; Tanturri, Maria Letizia
2006-01-01
This study analyses the relationship between subjective and objective measures of well-being in selected European countries using the data of the European Community Household Panel (ECHP). In the first part of the paper, we develop a random-effect ordered probit model, separately for each country, relating the subjective measure of income…
Observing and Deterring Social Cheating on College Exams
ERIC Educational Resources Information Center
Fendler, Richard J.; Yates, Michael C.; Godbey, Johnathan M.
2018-01-01
This research introduces a unique multiple choice exam design to observe and measure the degree to which students copy answers from their peers. Using data collected from the exam, an empirical experiment is conducted to determine whether random seat assignment deters cheating relative to a control group of students allowed to choose their seats.…
ERIC Educational Resources Information Center
Stockard, Jean; Wood, Timothy W.
2017-01-01
Most evaluators have embraced the goal of evidence-based practice (EBP). Yet, many have criticized EBP review systems that prioritize randomized control trials and use various criteria to limit the studies examined. They suggest this could produce policy recommendations based on small, unrepresentative segments of the literature and recommend a…
Watanabe, Hayafumi; Sano, Yukie; Takayasu, Hideki; Takayasu, Misako
2016-11-01
To elucidate the nontrivial empirical statistical properties of fluctuations in a typical nonsteady time series representing the appearance of words in blogs, we investigated approximately 3×10⁹ Japanese blog articles over a period of six years and analyzed corresponding mathematical models. First, we introduce a solvable nonsteady extension of the random diffusion model, which can be deduced by modeling the behavior of heterogeneous random bloggers. Next, we deduce theoretical expressions for both the temporal and ensemble fluctuation scalings of this model and demonstrate that these expressions can reproduce all empirical scalings over eight orders of magnitude. Furthermore, we show that the model can reproduce other statistical properties of time series representing the appearance of words in blogs, such as the functional forms of the probability density and correlations in the total number of blogs. As an application, we quantify the abnormality of special nationwide events by measuring the fluctuation scalings of 1771 basic adjectives.
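A hedged sketch of how a fluctuation scaling exponent of the kind measured above can be estimated: for each word, compute the mean and standard deviation of its count series, then fit σ ∝ μ^α on log-log axes. The counts here are synthetic Poisson series (which give α ≈ 0.5), not blog data, and the nonsteady extension in the paper is not modeled.

```python
# Estimate a temporal fluctuation scaling exponent via log-log regression
# of per-word count std. dev. against per-word mean. Synthetic data.
import numpy as np

rng = np.random.default_rng(3)
n_words, n_days = 500, 365
mus = 10 ** rng.uniform(-1, 3, n_words)        # wide range of word frequencies
counts = rng.poisson(mus[:, None], (n_words, n_days))

mu = counts.mean(axis=1)
sigma = counts.std(axis=1)
alpha, log_c = np.polyfit(np.log(mu), np.log(sigma), 1)
print(f"fluctuation scaling exponent alpha = {alpha:.2f}")  # ~0.5 for Poisson
```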
Backward jump continuous-time random walk: An application to market trading
NASA Astrophysics Data System (ADS)
Gubiec, Tomasz; Kutner, Ryszard
2010-10-01
The backward-jump modification of the continuous-time random walk model, i.e., the version of the model driven by negative feedback, is derived here for a spatiotemporal continuum in the context of share price evolution on a stock exchange. Within this framework, we describe the stochastic evolution of a typical share price on a stock exchange of moderate liquidity on a high-frequency time scale. The model is validated by the satisfactory agreement of the theoretical velocity autocorrelation function with its empirical counterpart obtained from continuous quotation. This agreement is mainly the result of a sharp backward correlation found and analyzed in this article. The correlation is reminiscent of the bid-ask bounce phenomenon, in which a backward price jump has the same, or almost the same, length as the preceding jump. We suggest that this correlation dominates the dynamics of stock markets with moderate liquidity. Although the assumptions of the model were inspired by high-frequency market data, its potential applications extend beyond the financial market, for instance to the field covered by the Le Chatelier-Braun principle of contrariness.
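A toy version of the mechanism described above, under our own simplifying assumptions: each price jump is followed, with probability p_back, by a jump of (almost) equal length in the opposite direction, mimicking the bid-ask bounce. The lag-1 jump autocorrelation then comes out sharply negative, the qualitative signature the model is built around; this is not the paper's full CTRW derivation.

```python
# Toy backward-jump process: with probability p_back the next jump nearly
# cancels the previous one; otherwise a fresh random jump is drawn.
import numpy as np

rng = np.random.default_rng(4)
n, p_back = 100_000, 0.7
jumps = np.empty(n)
jumps[0] = rng.standard_normal()
for t in range(1, n):
    if rng.random() < p_back:
        jumps[t] = -jumps[t - 1] * (1 + 0.05 * rng.standard_normal())
    else:
        jumps[t] = rng.standard_normal()

j = jumps - jumps.mean()
acf1 = np.dot(j[:-1], j[1:]) / np.dot(j, j)
print(f"lag-1 jump autocorrelation = {acf1:.2f}")   # strongly negative
```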
Modeling stock price dynamics by continuum percolation system and relevant complex systems analysis
NASA Astrophysics Data System (ADS)
Xiao, Di; Wang, Jun
2012-10-01
The continuum percolation system is developed to model a random stock price process in this work. Recent empirical research has demonstrated various statistical features of stock price changes; a financial model that aims to understand price fluctuations must therefore define a mechanism for the formation of the price, in an attempt to reproduce and explain this set of empirical facts. The continuum percolation model, usually referred to as a random coverage process or a Boolean model, is used to construct the local interaction or influence among traders, and a cluster of the continuum percolation is used to define a cluster of traders sharing the same opinion about the market. We investigate and analyze the statistical behaviors of the normalized returns of the price model using several analysis methods, including power-law tail distribution analysis, chaotic behavior analysis, and Zipf analysis. Moreover, we consider the daily returns of the Shanghai Stock Exchange Composite Index from January 1997 to July 2011, and comparisons of the return behaviors of the actual data and the simulation data are presented.
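A minimal sketch of the continuum (Boolean) percolation ingredient described above: traders are random points in the unit square, any two within distance r share an edge, and connected clusters play the role of groups of traders holding a common opinion. The point count and radius are illustrative, and the price-formation rule built on the clusters is not reproduced here.

```python
# Continuum percolation clusters: random points connected within radius r,
# clustered with a union-find over the pairs returned by a k-d tree.
import numpy as np
from scipy.spatial import cKDTree

def clusters(points, r):
    parent = list(range(len(points)))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path halving
            a = parent[a]
        return a
    for a, b in cKDTree(points).query_pairs(r):
        parent[find(a)] = find(b)
    return np.array([find(i) for i in range(len(points))])

rng = np.random.default_rng(5)
pts = rng.random((2000, 2))
labels = clusters(pts, r=0.03)
sizes = np.bincount(labels)
print(f"largest cluster fraction: {sizes.max() / len(pts):.3f}")
```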
Hedeker, D; Flay, B R; Petraitis, J
1996-02-01
Methods are proposed and described for estimating the degree to which relations among variables vary at the individual level. As an example of the methods, M. Fishbein and I. Ajzen's (1975; I. Ajzen & M. Fishbein, 1980) theory of reasoned action is examined, which posits that an individual's behavioral intentions are a function of 2 components: the individual's attitudes toward the behavior and the subjective norms as perceived by the individual. The theory further posits that individuals may weight these 2 components differently in assessing their behavioral intentions. This article illustrates the use of empirical Bayes methods, based on a random-effects regression model, to estimate these individual influences, that is, each individual's weighting of the 2 components (attitudes toward the behavior and subjective norms) in relation to their behavioral intentions. This method can be used when an individual's behavioral intentions, subjective norms, and attitudes toward the behavior are all repeatedly measured. In this case, the empirical Bayes estimates are derived as a function of the data from the individual, strengthened by the overall sample data.
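A hedged sketch of the empirical Bayes idea above using a random-effects (mixed) regression: intention is regressed on attitude and norm with subject-specific random slopes, and the per-subject empirical Bayes (BLUP) estimates shrink individual weights toward the population means. The data are simulated and the variable names are ours; the original analysis used the authors' own repeated-measures data.

```python
# Random-effects regression with random slopes; per-subject empirical
# Bayes (BLUP) deviations recovered from the fitted mixed model.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(6)
n_subj, n_obs = 60, 8
rows = []
for s in range(n_subj):
    b_att = 0.6 + 0.3 * rng.standard_normal()    # subject's attitude weight
    b_norm = 0.4 + 0.3 * rng.standard_normal()   # subject's norm weight
    att, norm = rng.standard_normal((2, n_obs))
    inten = b_att * att + b_norm * norm + 0.5 * rng.standard_normal(n_obs)
    rows += [(s, a, nm, i) for a, nm, i in zip(att, norm, inten)]
df = pd.DataFrame(rows, columns=["subj", "attitude", "norm", "intention"])

model = smf.mixedlm("intention ~ attitude + norm", df, groups=df["subj"],
                    re_formula="~attitude + norm")
fit = model.fit()
print(fit.fe_params)          # population-average weights
print(fit.random_effects[0])  # subject 0's EB deviation from those weights
```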
Rates of profit as correlated sums of random variables
NASA Astrophysics Data System (ADS)
Greenblatt, R. E.
2013-10-01
Profit realization is the dominant feature of market-based economic systems, determining their dynamics to a large extent. Rather than attaining an equilibrium, profit rates vary widely across firms, and the variation persists over time. Differing definitions of profit result in differing empirical distributions. To study the statistical properties of profit rates, I used data from a publicly available database for the US economy for 2009-2010 (Risk Management Association). For each of three profit rate measures, the sample space consists of 771 points. Each point represents aggregate data from a small number of US manufacturing firms of similar size and type (NAICS code of principal product). When the empirical distributions of profit rates were compared, significant 'heavy tails' were observed, corresponding principally to a number of firms with larger profit rates than would be expected from simple models. An apparently novel statistical model, a correlated sum of random variables, was used to model the data. In the case of operating and net profit rates, a number of firms show negative profits (losses), ruling out simple gamma or lognormal distributions as complete models for these data.
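A hedged illustration of why correlation in a sum matters for tails; this is our own toy construction, not the paper's model. Sums of independent positive shocks quickly approach a light-tailed, near-gamma shape, whereas sums whose terms share a common random factor, one simple way to induce correlation, retain a pronounced heavy right tail.

```python
# Tail comparison: sums of independent shocks vs. sums of shocks that
# share a common multiplicative factor (inducing correlation).
import numpy as np

rng = np.random.default_rng(7)
n, k = 200_000, 20

s_indep = rng.exponential(size=(n, k)).sum(axis=1)
factor = rng.gamma(shape=2.0, scale=0.5, size=(n, 1))   # shared across terms
s_corr = (factor * rng.exponential(size=(n, k))).sum(axis=1)

for name, s in [("independent", s_indep), ("correlated", s_corr)]:
    z = (s - s.mean()) / s.std()
    print(f"{name}: P(Z > 4) = {(z > 4).mean():.5f}")   # heavier when correlated
```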
Morrissey, C Orla; Chen, Sharon C-A; Sorrell, Tania C; Bradstock, Kenneth F; Szer, Jeffrey; Halliday, Catriona L; Gilroy, Nicole M; Schwarer, Anthony P; Slavin, Monica A
2011-02-01
Invasive aspergillosis (IA) is a major cause of mortality in patients with hematological malignancies, due largely to the inability of traditional culture and biopsy methods to make an early or accurate diagnosis. Diagnostic accuracy studies suggest that Aspergillus galactomannan (GM) enzyme immunoassay (ELISA) and Aspergillus PCR-based methods may overcome these limitations, but their impact on patient outcomes should be evaluated in a diagnostic randomized controlled trial (D-RCT). This article describes the methodology of a D-RCT which compares a new pre-emptive strategy (GM-ELISA- and Aspergillus PCR-driven antifungal therapy) with the standard fever-driven empiric antifungal treatment strategy. Issues including primary end-point and patient selection, duration of screening, choice of tests for the pre-emptive strategy, antifungal prophylaxis and bias control, which were considered in the design of the trial, are discussed. We suggest that the template presented herein is considered by researchers when evaluating the utility of new diagnostic tests (ClinicalTrials.gov number, NCT00163722).
Etzioni, Ruth; Gulati, Roman
2013-04-01
In our article about limitations of basing screening policy on screening trials, we offered several examples of ways in which modeling, using data from large screening trials and population trends, provided insights that differed somewhat from those based only on empirical trial results. In this editorial, we take a step back and consider the general question of whether randomized screening trials provide the strongest evidence for clinical guidelines concerning population screening programs. We argue that randomized trials provide a process that is designed to protect against certain biases but that this process does not guarantee that inferences based on empirical results from screening trials will be unbiased. Appropriate quantitative methods are key to obtaining unbiased inferences from screening trials. We highlight several studies in the statistical literature demonstrating that conventional survival analyses of screening trials can be misleading and list a number of key questions concerning screening harms and benefits that cannot be answered without modeling. Although we acknowledge the centrality of screening trials in the policy process, we maintain that modeling constitutes a powerful tool for screening trial interpretation and screening policy development.
Understanding spatial connectivity of individuals with non-uniform population density.
Wang, Pu; González, Marta C
2009-08-28
We construct a two-dimensional geometric graph connecting individuals placed in space within a given contact distance. The individuals are distributed using a country's measured population density. We observe that while large clusters (groups of connected individuals) emerge within some regions, they remain trapped in detached urban areas owing to the low population density of the regions bordering them. To understand the emergence of a giant cluster that connects the entire population, we compare the empirical geometric graph with one generated by placing the same number of individuals randomly in space. We find that, for small contact distances, the empirical distribution of population dominates the growth of connected components, but no critical percolation transition is observed, in contrast to the graph generated by a random distribution of population. Our results show that contact distances from real-world situations, such as WiFi and Bluetooth connections, fall in a zone where a fully connected cluster is not observed, hinting that human mobility must play a crucial role in the large-scale spreading of contact-based diseases and wireless viruses.
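A minimal sketch of the random-placement null model above: individuals placed uniformly at random in the unit square (a measured population density would replace the uniform placement), linked when within contact distance d, with the largest-cluster fraction tracked as d grows past the percolation transition. The point count and distances are illustrative.

```python
# Random geometric graph: track the largest connected cluster as the
# contact distance d increases through the percolation transition.
import networkx as nx

n = 5000
for d in (0.01, 0.02, 0.03):
    G = nx.random_geometric_graph(n, d, seed=9)
    giant = max(nx.connected_components(G), key=len)
    print(f"d = {d}: largest cluster fraction = {len(giant) / n:.3f}")
```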
Hosseinipour, Mina C; Bisson, Gregory P; Miyahara, Sachiko; Sun, Xin; Moses, Agnes; Riviere, Cynthia; Kirui, Fredrick K; Badal-Faesen, Sharlaa; Lagat, David; Nyirenda, Mulinda; Naidoo, Kogieleum; Hakim, James; Mugyenyi, Peter; Henostroza, German; Leger, Paul D; Lama, Javier R; Mohapi, Lerato; Alave, Jorge; Mave, Vidya; Veloso, Valdilea G; Pillay, Sandy; Kumarasamy, Nagalingeswaran; Bao, Jing; Hogg, Evelyn; Jones, Lynne; Zolopa, Andrew; Kumwenda, Johnstone; Gupta, Amita
2016-03-19
Mortality within the first 6 months after initiating antiretroviral therapy is common in resource-limited settings and is often due to tuberculosis in patients with advanced HIV disease. Isoniazid preventive therapy is recommended in HIV-positive adults, but subclinical tuberculosis can be difficult to diagnose. We aimed to assess whether empirical tuberculosis treatment would reduce early mortality compared with isoniazid preventive therapy in high-burden settings. We did a multicountry open-label randomised clinical trial comparing empirical tuberculosis therapy with isoniazid preventive therapy in HIV-positive outpatients initiating antiretroviral therapy with CD4 cell counts of less than 50 cells per μL. Participants were recruited from 18 outpatient research clinics in ten countries (Malawi, South Africa, Haiti, Kenya, Zambia, India, Brazil, Zimbabwe, Peru, and Uganda). Individuals were screened for tuberculosis before inclusion using a symptom screen, locally available diagnostics, and the GeneXpert MTB/RIF assay when available. Study candidates with confirmed or suspected tuberculosis were excluded. Inclusion criteria were liver function tests 2·5 times the upper limit of normal or less, a creatinine clearance of at least 30 mL/min, and a Karnofsky score of at least 30. Participants were randomly assigned (1:1) to either the empirical group (antiretroviral therapy and empirical tuberculosis therapy) or the isoniazid preventive therapy group (antiretroviral therapy and isoniazid preventive therapy). The primary endpoint was survival at 24 weeks after randomisation (with death or unknown status counted as events), assessed in the intention-to-treat population. Kaplan-Meier estimates of the primary endpoint across groups were compared by the z-test. All participants were included in the safety analysis of antiretroviral therapy and tuberculosis treatment. This trial is registered with ClinicalTrials.gov, number NCT01380080. Between Oct 31, 2011, and June 9, 2014, we enrolled 850 participants. Of these, we randomly assigned 424 to receive empirical tuberculosis therapy and 426 to the isoniazid preventive therapy group. The median CD4 cell count at baseline was 18 cells per μL (IQR 9-32). At week 24, 22 (5%) participants in each group had died or were of unknown status (95% CI 3·5-7·8 for the empirical group; 95% CI 3·4-7·8 for the isoniazid preventive therapy group), an absolute risk difference of -0·06% (95% CI -3·05 to 2·94). Grade 3 or 4 signs or symptoms occurred in 50 (12%) participants in the empirical group and 46 (11%) participants in the isoniazid preventive therapy group. Grade 3 or 4 laboratory abnormalities occurred in 99 (23%) participants in the empirical group and 97 (23%) participants in the isoniazid preventive therapy group. Empirical tuberculosis therapy did not reduce mortality at 24 weeks compared with isoniazid preventive therapy in outpatient adults with advanced HIV disease initiating antiretroviral therapy. The low mortality rate of the trial supports implementation of systematic tuberculosis screening and isoniazid preventive therapy in outpatients with advanced HIV disease. Funded by the National Institute of Allergy and Infectious Diseases through the AIDS Clinical Trials Group. Copyright © 2016 Elsevier Ltd. All rights reserved.
The Hard but Necessary Task of Gathering Order-One Effect Size Indices in Meta-Analysis
ERIC Educational Resources Information Center
Ortego, Carmen; Botella, Juan
2010-01-01
Meta-analysis of studies with two groups and two measurement occasions must employ order-one effect size indices to represent study outcomes. Especially with non-random assignment, non-equivalent control group designs, a statistical analysis restricted to post-treatment scores can lead to severely biased conclusions. The 109 primary studies…