Statistical analysis of loopy belief propagation in random fields
NASA Astrophysics Data System (ADS)
Yasuda, Muneki; Kataoka, Shun; Tanaka, Kazuyuki
2015-10-01
Loopy belief propagation (LBP), which is equivalent to the Bethe approximation in statistical mechanics, is a message-passing-type inference method that is widely used to analyze systems based on Markov random fields (MRFs). In this paper, we propose a message-passing-type method to analytically evaluate the quenched average of LBP in random fields by using the replica cluster variation method. The proposed analytical method is applicable to general pairwise MRFs with random fields whose distributions differ from each other and can give the quenched averages of the Bethe free energies over random fields, which are consistent with numerical results. The order of its computational cost is equivalent to that of standard LBP. In the latter part of this paper, we describe the application of the proposed method to Bayesian image restoration, in which we observed that our theoretical results are in good agreement with the numerical results for natural images.
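The message-passing scheme summarized above can be sketched in a few lines. Below is a minimal sum-product LBP implementation for a binary pairwise MRF — a toy stand-in for illustration only; the paper's replica cluster variation analysis of quenched averages is not reproduced here, and the graph, potentials, and function names are made up:

```python
def loopy_bp(n, edges, unary, pairwise, iters=100):
    """Sum-product loopy belief propagation for a binary pairwise MRF.

    n        -- number of nodes (each with states 0/1)
    edges    -- list of undirected (i, j) pairs
    unary    -- unary[i][x] = psi_i(x)
    pairwise -- pairwise[(i, j)][xi][xj] = psi_ij(xi, xj)
    """
    nbrs = {i: [] for i in range(n)}
    for i, j in edges:
        nbrs[i].append(j)
        nbrs[j].append(i)

    def psi(i, j, xi, xj):
        # pairwise potentials are stored once per undirected edge
        return pairwise[(i, j)][xi][xj] if (i, j) in pairwise else pairwise[(j, i)][xj][xi]

    # one message per directed edge, initialized uniform
    msg = {(i, j): [0.5, 0.5] for e in edges for (i, j) in (e, e[::-1])}
    for _ in range(iters):
        new = {}
        for (i, j) in msg:
            out = []
            for xj in (0, 1):
                s = 0.0
                for xi in (0, 1):
                    prod = unary[i][xi] * psi(i, j, xi, xj)
                    for k in nbrs[i]:
                        if k != j:
                            prod *= msg[(k, i)][xi]
                    s += prod
                out.append(s)
            z = out[0] + out[1]
            new[(i, j)] = [out[0] / z, out[1] / z]
        msg = new

    # beliefs: local evidence times all incoming messages, normalized
    beliefs = []
    for i in range(n):
        b = [unary[i][0], unary[i][1]]
        for k in nbrs[i]:
            b = [b[x] * msg[(k, i)][x] for x in (0, 1)]
        z = b[0] + b[1]
        beliefs.append([b[0] / z, b[1] / z])
    return beliefs

# 3-node cycle (the smallest loopy graph), uniform fields, attractive couplings
edges = [(0, 1), (1, 2), (2, 0)]
pw = {e: [[2.0, 1.0], [1.0, 2.0]] for e in edges}
beliefs = loopy_bp(3, edges, [[1.0, 1.0]] * 3, pw)
```

With uniform unary potentials and symmetric couplings, every belief comes out as (0.5, 0.5) by symmetry, which is a quick sanity check of the update rule.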
Electromagnetic backscattering from a random distribution of lossy dielectric scatterers
NASA Technical Reports Server (NTRS)
Lang, R. H.
1980-01-01
Electromagnetic backscattering from a sparse distribution of discrete lossy dielectric scatterers occupying a region V was studied. The scatterers are assumed to have random position and orientation. Scattered fields are calculated by first finding the mean field and then using it to define an equivalent medium within the volume V. The scatterers are then viewed as being embedded in the equivalent medium; the distorted Born approximation is then used to find the scattered fields. This technique represents an improvement over the standard Born approximation since it takes into account the attenuation of the incident and scattered waves in the equivalent medium. The method is used to model a leaf canopy in which the leaves are modeled as lossy dielectric discs.
Relationship of field and LiDAR estimates of forest canopy cover with snow accumulation and melt
Mariana Dobre; William J. Elliot; Joan Q. Wu; Timothy E. Link; Brandon Glaza; Theresa B. Jain; Andrew T. Hudak
2012-01-01
At the Priest River Experimental Forest in northern Idaho, USA, snow water equivalent (SWE) was recorded over a period of six years on random, equally-spaced plots in ~4.5 ha small watersheds (n=10). Two watersheds were selected as controls and eight as treatments, with two watersheds randomly assigned per treatment as follows: harvest (2007) followed by mastication (...
Outcomes of Parent Education Programs Based on Reevaluation Counseling
ERIC Educational Resources Information Center
Wolfe, Randi B.; Hirsch, Barton J.
2003-01-01
We report two studies in which a parent education program based on Reevaluation Counseling was field-tested on mothers randomly assigned to treatment groups or equivalent, no-treatment comparison groups. The goal was to evaluate the program's viability, whether there were measurable effects, whether those effects were sustained over time, and…
Coherent electromagnetic waves in the presence of a half space of randomly distributed scatterers
NASA Technical Reports Server (NTRS)
Karam, M. A.; Fung, A. K.
1988-01-01
Upon solving the Foldy-Twersky integral equation for a half-space of small spherical scatterers illuminated by a plane wave at oblique incidence, this investigation of coherent field propagation finds that the coherent field for a horizontally polarized incident wave exhibits a reflectivity and transmissivity consistent with the Fresnel formula for an equivalent continuous effective medium. In the case of a vertically polarized incident wave, both the vertical and longitudinal waves obtained for the coherent field have reflectivities and transmissivities that do not agree with the Fresnel formula.
NASA Astrophysics Data System (ADS)
Ivliev, S. V.
2017-12-01
For calculation of short laser pulse absorption in metals, the imaginary part of the permittivity, which is simply related to the conductivity, is required. Currently, the Kubo-Greenwood formula is most commonly used to find the static and dynamic conductivity. It describes electromagnetic energy absorption in the one-electron approach. In the present study, this formula is derived directly from the expression for the permittivity in the random phase approximation, which is in fact equivalent to the mean-field method. A detailed analysis of the role of electron-electron interaction in the calculation of the matrix elements of the velocity operator is given. It is shown that in the one-electron random phase approximation the single-particle wave functions of conduction electrons in the field of fixed ions should be used. The possibility of including exchange and correlation effects by means of a correction to the local field function is discussed.
ERIC Educational Resources Information Center
Liao, Chi-Wen; Livingston, Samuel A.
2008-01-01
Randomly equivalent forms (REF) of tests in listening and reading for nonnative speakers of English were created by stratified random assignment of items to forms, stratifying on item content and predicted difficulty. The study included 50 replications of the procedure for each test. Each replication generated 2 REFs. The equivalence of those 2…
Sequential time interleaved random equivalent sampling for repetitive signal.
Zhao, Yijiu; Liu, Jingjing
2016-12-01
Compressed sensing (CS) based sampling techniques exhibit many advantages over other existing approaches for sparse signal spectrum sensing; they have also been incorporated into non-uniform sampling signal reconstruction, such as random equivalent sampling (RES), to improve efficiency. However, in CS-based RES, only one sample of each acquisition is considered in the signal reconstruction stage, which results in more acquisition runs and longer sampling time. In this paper, a sampling sequence is taken in each RES acquisition run, and the corresponding block measurement matrix is constructed using the Whittaker-Shannon interpolation formula. All the block matrices are combined into an equivalent measurement matrix with respect to all sampling sequences. We implemented the proposed approach with a multi-core analog-to-digital converter (ADC) whose cores are time-interleaved. A prototype realization of this proposed CS-based sequential random equivalent sampling method has been developed. It is able to capture an analog waveform at an equivalent sampling rate of 40 GHz while physically sampling at 1 GHz. Experiments indicate that, for a sparse signal, the proposed CS-based sequential random equivalent sampling exhibits high efficiency.
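The block measurement matrix described in the abstract is built from the Whittaker-Shannon interpolation formula. A simplified single-block sketch follows; the multi-core ADC sequencing and the CS reconstruction solver are not modeled, and all signal parameters are illustrative:

```python
import math
import random

def sinc(x):
    """Normalized sinc, the Whittaker-Shannon interpolation kernel."""
    return 1.0 if x == 0.0 else math.sin(math.pi * x) / (math.pi * x)

def measurement_matrix(times, n, T):
    """Row m maps the n uniform samples x[k] (taken at t = k*T) to the
    value of a bandlimited signal at the non-uniform instant times[m],
    via truncated Whittaker-Shannon interpolation."""
    return [[sinc((t - k * T) / T) for k in range(n)] for t in times]

# uniform samples of a 3 Hz tone at fs = 100 Hz (well below Nyquist)
T, n = 0.01, 400
x = [math.sin(2 * math.pi * 3.0 * k * T) for k in range(n)]

# random "equivalent-time" sampling instants, kept away from the record edges
rng = random.Random(0)
times = [rng.uniform(1.5, 2.5) for _ in range(10)]
Phi = measurement_matrix(times, n, T)
y = [sum(row[k] * x[k] for k in range(n)) for row in Phi]
```

Applying the matrix to the uniform samples reproduces the underlying tone at the random instants to within the truncation error of the finite interpolation sum, which is the property the block measurement matrix relies on.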
A functional renormalization method for wave propagation in random media
NASA Astrophysics Data System (ADS)
Lamagna, Federico; Calzetta, Esteban
2017-08-01
We develop the exact renormalization group approach as a way to evaluate the effective speed of propagation of a scalar wave in a medium with random inhomogeneities. We use the Martin-Siggia-Rose formalism to translate the problem into a nonequilibrium field theory one, and then consider a sequence of models with a progressively lower infrared cutoff; in the limit where the cutoff is removed we recover the problem of interest. As a test of the formalism, we compute the effective dielectric constant of a homogeneous medium interspersed with randomly located, interpenetrating bubbles. A simple approximation to the renormalization group equations turns out to be equivalent to a self-consistent two-loop evaluation of the effective dielectric constant.
Statistics of partially-polarized fields: beyond the Stokes vector and coherence matrix
NASA Astrophysics Data System (ADS)
Charnotskii, Mikhail
2017-08-01
Traditionally, partially polarized light is characterized by the four Stokes parameters. An equivalent description is also provided by the correlation tensor of the optical field. These statistics specify only the second moments of the complex amplitudes of the narrow-band two-dimensional electric field of the optical wave. The electric field vector of a random quasi-monochromatic wave is a nonstationary, oscillating, two-dimensional real random variable. We introduce a novel statistical description of these partially polarized waves: the Period-Averaged Probability Density Function (PA-PDF) of the field. The PA-PDF contains more information on the polarization state of the field than the Stokes vector. In particular, in addition to the conventional distinction between the polarized and depolarized components of the field, the PA-PDF allows one to separate the coherent and fluctuating components of the field. We present several model examples of fields with identical Stokes vectors and very distinct shapes of the PA-PDF. In the simplest case of a nonstationary, oscillating, normal 2-D probability distribution of the real electric field and a stationary 4-D probability distribution of the complex amplitudes, the newly introduced PA-PDF is determined by 13 parameters that include the first moments and covariance matrix of the quadrature components of the oscillating vector field.
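The central claim — fields with identical Stokes vectors can have very different period-averaged statistics — can be illustrated numerically. The following toy model is an assumption for illustration, not the paper's construction: a single fully polarized field component whose complex amplitude is either deterministic ("coherent") or circular-Gaussian with the same mean square ("fluctuating"):

```python
import math
import random

def pa_samples(draw_amplitude, n=40000, seed=7):
    """Sample the instantaneous real field E = Re[Z exp(-i*phi)] with the
    phase phi uniform over one period (the 'period-averaged' ensemble)."""
    rng = random.Random(seed)
    out = []
    for _ in range(n):
        zr, zi = draw_amplitude(rng)
        phi = rng.uniform(0.0, 2.0 * math.pi)
        out.append(zr * math.cos(phi) + zi * math.sin(phi))
    return out

coherent = pa_samples(lambda r: (1.0, 0.0))                    # fixed amplitude
s = math.sqrt(0.5)
gauss = pa_samples(lambda r: (r.gauss(0, s), r.gauss(0, s)))   # same <|Z|^2> = 1

def moment(xs, p):
    return sum(x ** p for x in xs) / len(xs)

# identical second moments (hence the same Stokes description) ...
var_c, var_g = moment(coherent, 2), moment(gauss, 2)
# ... but different fourth moments: the PA-PDF tells the fields apart
m4_c, m4_g = moment(coherent, 4), moment(gauss, 4)
```

Analytically the coherent case gives a fourth moment of 3/8 while the Gaussian case gives 3/4, with both variances equal to 1/2, so the distinction survives even though second-order (Stokes) statistics coincide.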
Equivalent Circuit for Magnetoelectric Read and Write Operations
NASA Astrophysics Data System (ADS)
Camsari, Kerem Y.; Faria, Rafatul; Hassan, Orchi; Sutton, Brian M.; Datta, Supriyo
2018-04-01
We describe an equivalent circuit model applicable to a wide variety of magnetoelectric phenomena and use SPICE simulations to benchmark this model against experimental data. We use this model to suggest a different mode of operation where the 1 and 0 states are represented not by states with net magnetization (like mx, my, or mz) but by different easy axes, quantitatively described by (mx^2 - my^2), which switches from 0 to 1 through the write voltage. This change is directly detected as a read signal through the inverse effect. The use of (mx^2 - my^2) to represent a bit is a radical departure from the standard convention of using the magnetization (m) to represent information. We then show how the equivalent circuit can be used to build a device exhibiting tunable randomness and suggest possibilities for extending it to nonvolatile memory with read and write capabilities, without the use of external magnetic fields or magnetic tunnel junctions.
A method for determining the weak statistical stationarity of a random process
NASA Technical Reports Server (NTRS)
Sadeh, W. Z.; Koper, C. A., Jr.
1978-01-01
A method for determining the weak statistical stationarity of a random process is presented. The core of this testing procedure consists of generating an equivalent ensemble which approximates a true ensemble. Formation of an equivalent ensemble is accomplished through segmenting a sufficiently long time history of a random process into equal, finite, and statistically independent sample records. The weak statistical stationarity is ascertained based on the time invariance of the equivalent-ensemble averages. Comparison of these averages with their corresponding time averages over a single sample record leads to a heuristic estimate of the ergodicity of a random process. Specific variance tests are introduced for evaluating the statistical independence of the sample records, the time invariance of the equivalent-ensemble autocorrelations, and the ergodicity. Examination and substantiation of these procedures were conducted utilizing turbulent velocity signals.
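The equivalent-ensemble construction lends itself to a short sketch. The test statistic below (the spread over time of the ensemble averages) is an illustrative simplification of the paper's specific variance tests, and all signal parameters are made up:

```python
import random
import statistics

def equivalent_ensemble(x, m):
    """Segment one long time history into m equal, contiguous sample records."""
    n = len(x) // m
    return [x[i * n:(i + 1) * n] for i in range(m)]

def ensemble_average_spread(records):
    """Standard deviation over time of the ensemble mean at each time index.
    A small, time-invariant spread is consistent with weak stationarity."""
    T = len(records[0])
    avgs = [statistics.fmean(r[t] for r in records) for t in range(T)]
    return statistics.pstdev(avgs)

rng = random.Random(3)
noise = [rng.gauss(0.0, 1.0) for _ in range(20000)]        # stationary process
trend = [x + 0.005 * k for k, x in enumerate(noise)]       # nonstationary (drift)

spread_stat = ensemble_average_spread(equivalent_ensemble(noise, 20))
spread_nons = ensemble_average_spread(equivalent_ensemble(trend, 20))
```

For the stationary record the ensemble averages are time-invariant up to sampling noise (spread near 1/sqrt(20) here), while the drifting record produces ensemble averages that ramp with the time index, so its spread is several times larger.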
Infusion of solutions of pre-irradiated components in rats.
Pappas, Georgina; Arnaud, Francoise; Haque, Ashraful; Kino, Tomoyuki; Facemire, Paul; Carroll, Erica; Auker, Charles; McCarron, Richard; Scultetus, Anke
2016-06-01
The objective of this study was to conduct a 14-day toxicology assessment for intravenous solutions prepared from irradiated resuscitation fluid components and sterile water. Healthy Sprague Dawley rats (7-10/group) were instrumented and randomized to receive one of the following Field IntraVenous Resuscitation (FIVR) or commercial fluids: Normal Saline (NS), Lactated Ringer's, or 5% Dextrose in NS. Daily clinical observations, chemistry and hematology on days 1, 7, and 14, and urinalysis on day 14 were evaluated for equivalence using a two-sample t-test (p<0.05). A board-certified pathologist evaluated organ histopathology on day 14. Equivalence was established for all observation parameters, lactate, sodium, liver enzymes, creatinine, WBC and differential, and urinalysis values. Lack of equivalence for hemoglobin (p=0.055), pH (p=0.0955), glucose (p=0.0889), alanine aminotransferase (p=0.1938), albumin (p=0.1311), and weight (p=0.0555, p=0.1896) was deemed not clinically relevant because the means were within physiologically normal ranges. Common microscopic findings randomly distributed among animals of all groups were endocarditis/myocarditis and pulmonary lesions. These findings are consistent with complications of long-term catheter use and suggest no clinically relevant differences in end-organ toxicity between animals infused with FIVR versus commercial fluids. Copyright © 2016 Elsevier GmbH. All rights reserved.
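Equivalence testing of the kind used above can be sketched with two one-sided tests (TOST). The version below is a large-sample normal approximation, not the exact procedure of the study, and the measurements and the ±0.5 margin are invented for illustration:

```python
import math
from statistics import NormalDist, fmean, variance

def tost(a, b, low, high, alpha=0.05):
    """Two one-sided tests for equivalence of two group means.
    Declares equivalence if the mean difference is significantly above
    `low` AND significantly below `high` (normal approximation to the
    t reference distribution, so only indicative at small n)."""
    d = fmean(a) - fmean(b)
    se = math.sqrt(variance(a) / len(a) + variance(b) / len(b))
    p_low = 1.0 - NormalDist().cdf((d - low) / se)    # H1: d > low
    p_high = NormalDist().cdf((d - high) / se)        # H1: d < high
    return p_low < alpha and p_high < alpha

a = [7.1, 7.3, 6.9, 7.0, 7.2, 7.1, 7.0]
b = [7.0, 7.2, 7.1, 6.9, 7.1, 7.0, 7.2]
equiv = tost(a, b, -0.5, 0.5)                        # groups match within margin
shifted = tost(a, [x + 1.0 for x in b], -0.5, 0.5)   # a 1.0 shift breaks equivalence
```

Note that TOST inverts the usual logic of a difference test: a nonsignificant difference alone (as with a plain two-sample t-test) is not evidence of equivalence, whereas rejecting both one-sided hypotheses is.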
Some Aspects of the Investigation of Random Vibration Influence on Ride Comfort
NASA Astrophysics Data System (ADS)
DEMIĆ, M.; LUKIĆ, J.; MILIĆ, Ž.
2002-05-01
Contemporary vehicles must satisfy high ride comfort criteria. This paper attempts to develop criteria for ride comfort improvement. The highest loading levels have been found to be in the vertical direction and the lowest in the lateral direction in passenger cars and trucks. These results have formed the basis for further laboratory and field investigations. An investigation of human body behaviour under random vibrations is reported in this paper. The research included two phases: biodynamic research and ride comfort investigation. A group of 30 subjects was tested. The influence of broadband random vibrations on the human body was examined through the seat-to-head transmissibility function (STHT). Initially, vertical and fore-and-aft vibrations were considered. Multi-directional vibration was also investigated. In the biodynamic research, subjects were exposed to 0.55, 1.75 and 2.25 m/s^2 r.m.s. vibration levels in the 0.5-40 Hz frequency domain. The influence of sitting position on human body behaviour under two-axial vibrations was also examined. Data analysis showed that human body behaviour under two-directional random vibrations could not be approximated by superposition of one-directional random vibrations. Non-linearity of the seated human body in the vertical and fore-and-aft directions was observed. Seat-backrest angle also influenced STHT. In the second phase of experimental research, a new method for the assessment of the influence of narrowband random vibration on the human body was formulated and tested. It included determination of equivalent comfort curves in the vertical and fore-and-aft directions under one- and two-directional narrowband random vibrations. Equivalent comfort curves for durations of 2.5, 4 and 8 h were determined.
Efficient prediction designs for random fields.
Müller, Werner G; Pronzato, Luc; Rendas, Joao; Waldl, Helmut
2015-03-01
For estimation and prediction of random fields, it is increasingly acknowledged that the kriging variance may be a poor representative of the true uncertainty. Experimental designs based on more elaborate criteria that are appropriate for empirical kriging (EK) are then often non-space-filling and very costly to determine. In this paper, we investigate the possibility of using a compound criterion inspired by an equivalence-theorem type relation to build designs quasi-optimal for the EK variance when space-filling designs become unsuitable. Two algorithms are proposed, one relying on stochastic optimization to explicitly identify the Pareto front, whereas the second uses the surrogate criterion as a local heuristic to choose the points at which the (costly) true EK variance is effectively computed. We illustrate the performance of the presented algorithms on both a simple simulated example and a real oceanographic dataset. © 2014 The Authors. Applied Stochastic Models in Business and Industry published by John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Dentoni Litta, Eugenio; Ritzenthaler, Romain; Schram, Tom; Spessot, Alessio; O’Sullivan, Barry; Machkaoutsan, Vladimir; Fazan, Pierre; Ji, Yunhyuck; Mannaert, Geert; Lorant, Christophe; Sebaai, Farid; Thiam, Arame; Ercken, Monique; Demuynck, Steven; Horiguchi, Naoto
2018-04-01
Integration of high-k/metal gate stacks in peripheral transistors is a major candidate to ensure continued scaling of dynamic random access memory (DRAM) technology. In this paper, the CMOS integration of diffusion and gate replacement (D&GR) high-k/metal gate stacks is investigated, evaluating four different approaches for the critical patterning step of removing the N-type field effect transistor (NFET) effective work function (eWF) shifter stack from the P-type field effect transistor (PFET) area. The effect of plasma exposure during the patterning step is investigated in detail and found to have a strong impact on threshold voltage tunability. A CMOS integration scheme based on an experimental wet-compatible photoresist is developed and the fulfillment of the main device metrics [equivalent oxide thickness (EOT), eWF, gate leakage current density, on/off currents, short channel control] is demonstrated.
Invariance property of wave scattering through disordered media
Pierrat, Romain; Ambichl, Philipp; Gigan, Sylvain; Haber, Alexander; Carminati, Rémi; Rotter, Stefan
2014-01-01
A fundamental insight in the theory of diffusive random walks is that the mean length of trajectories traversing a finite open system is independent of the details of the diffusion process. Instead, the mean trajectory length depends only on the system's boundary geometry and is thus unaffected by the value of the mean free path. Here we show that this result is rooted at a much deeper level than that of a random walk, which allows us to extend the reach of this universal invariance property beyond the diffusion approximation. Specifically, we demonstrate that an equivalent invariance relation also holds for the scattering of waves in resonant structures as well as in ballistic, chaotic, or Anderson-localized systems. Our work unifies a number of specific observations made in quite diverse fields of science, ranging from the movement of ants to nuclear scattering theory. Potential experimental realizations using light fields in disordered media are discussed. PMID:25425671
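The invariance property for random walks can be checked directly by Monte Carlo in two dimensions, where the predicted mean trajectory length through a disk of radius R is pi*R/2 (pi times area over perimeter), independent of the mean free path. The sketch below assumes the standard setting for this result — isotropic scattering, exponential free paths, Lambertian (cosine-law) illumination of the boundary — and is not taken from the paper:

```python
import math
import random

def mean_path_length(mfp, radius=1.0, n_walkers=20000, seed=1):
    """Monte Carlo estimate of the mean trajectory length of an
    isotropically scattering random walk crossing a disk.  Walkers enter
    through the boundary with a Lambert (cosine) angular distribution and
    take exponentially distributed free paths of mean `mfp`."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_walkers):
        a = rng.uniform(0.0, 2.0 * math.pi)           # entry point on the circle
        x, y = radius * math.cos(a), radius * math.sin(a)
        t = math.asin(rng.uniform(-1.0, 1.0))         # cosine-law angle from inward normal
        d = a + math.pi + t                           # inward direction
        dx, dy = math.cos(d), math.sin(d)
        L = 0.0
        while True:
            step = rng.expovariate(1.0 / mfp)
            # distance along (dx, dy) to the circular boundary
            bd = x * dx + y * dy
            s_exit = -bd + math.sqrt(bd * bd - (x * x + y * y - radius * radius))
            if step >= s_exit:                        # walker leaves the disk
                L += s_exit
                break
            L += step
            x += step * dx
            y += step * dy
            phi = rng.uniform(0.0, 2.0 * math.pi)     # isotropic scattering event
            dx, dy = math.cos(phi), math.sin(phi)
        total += L
    return total / n_walkers

diffusive = mean_path_length(0.25)   # many scattering events per crossing
ballistic = mean_path_length(5.0)    # nearly straight chords
# both estimates should land close to pi/2, independent of the mean free path
```

Running the estimator with a short and a long mean free path gives nearly the same answer, which is exactly the boundary-geometry-only invariance the abstract generalizes to waves.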
Interplanetary medium data book. Supplement 3: 1977-1985
NASA Technical Reports Server (NTRS)
Couzens, David A.; King, Joseph H.
1986-01-01
The updating of the hourly resolution, near-Earth solar wind data compilation is discussed. Data plots and listings are then presented. In the text, the time shifting of ISEE 3 fine-scale magnetic field and plasma data, using corotation delay, and the normalization of IMP-MIT and ISEE densities and temperatures to equivalent IMP-LANL values are discussed in detail. The levels of arbitrariness in combining data sets, and of random differences between data sets, are elucidated.
Citerio, Giuseppe; Franzosi, Maria Grazia; Latini, Roberto; Masson, Serge; Barlera, Simona; Guzzetti, Stefano; Pesenti, Antonio
2009-04-06
Many studies have attempted to determine the "best" anaesthetic technique for neurosurgical procedures in patients without intracranial hypertension. So far, no study comparing intravenous (IA) with volatile-based (VA) neuroanaesthesia has been able to demonstrate major outcome differences or the superiority of one of the two strategies in patients undergoing elective supratentorial neurosurgery. Therefore, current practice varies and includes the use of either volatile or intravenous anaesthetics in addition to narcotics. In practice, the choice of anaesthesiological strategy depends only on the anaesthetists' preferences or institutional policies. This trial, named NeuroMorfeo, aims to assess the equivalence between volatile and intravenous anaesthetics for neurosurgical procedures. NeuroMorfeo is a multicenter, randomized, open-label, controlled trial based on an equivalence design. Patients aged between 18 and 75 years, scheduled for elective craniotomy for a supratentorial lesion without signs of intracranial hypertension, in good physical state (ASA I-III) and with a Glasgow Coma Scale (GCS) score of 15, are randomly assigned to one of three anaesthesiological strategies (two VA arms, sevoflurane + fentanyl or sevoflurane + remifentanil, and one IA arm, propofol + remifentanil). The equivalence between intravenous and volatile-based neuroanaesthesia will be evaluated by comparing the intervals required to reach, after anaesthesia discontinuation, a modified Aldrete score >= 9 (primary end-point). Two statistical comparisons have been planned: 1) sevoflurane + fentanyl vs. propofol + remifentanil; 2) sevoflurane + remifentanil vs. propofol + remifentanil.
Secondary end-points include: an assessment of neurovegetative stress based on (a) measurement of urinary catecholamines and plasma and urinary cortisol and (b) an estimate of sympathetic/parasympathetic balance by power spectrum analyses of electrocardiographic tracings recorded during anaesthesia; intraoperative adverse events; evaluation of the surgical field; postoperative adverse events; patient satisfaction; and analysis of costs. 411 patients will be recruited in 14 Italian centers over an 18-month period. We present the development phase of this ongoing anaesthesiological trial. Recruitment started December 4th, 2007, and up to December 4th, 2008, 314 patients had been enrolled.
Tahmasebi Birgani, Mohamad J; Chegeni, Nahid; Zabihzadeh, Mansoor; Hamzian, Nima
2014-01-01
Equivalent field is frequently used for central axis depth-dose calculations of rectangular- and irregular-shaped photon beams. As most of the proposed models to calculate the equivalent square field are dosimetry based, a simple physical-based method to calculate the equivalent square field size was used as the basis of this study. The table of the sides of the equivalent square or rectangular fields was constructed and then compared with the well-known tables of BJR and Venselaar et al., with average relative error percentages of 2.5 ± 2.5% and 1.5 ± 1.5%, respectively. To evaluate the accuracy of this method, the percentage depth doses (PDDs) were measured for some special irregular symmetric and asymmetric treatment fields and their equivalent squares on a Siemens Primus Plus linear accelerator for both energies, 6 and 18 MV. The mean relative difference of the PDD measurements for these fields and their equivalent squares was approximately 1% or less. As a result, this method can be employed to calculate the equivalent field not only for rectangular fields but also for any irregular symmetric or asymmetric field. © 2013 American Association of Medical Dosimetrists. Published by American Association of Medical Dosimetrists. All rights reserved.
NASA Astrophysics Data System (ADS)
Paramonov, L. E.
2012-05-01
Light scattering by isotropic ensembles of ellipsoidal particles is considered in the Rayleigh-Gans-Debye approximation. It is proved that randomly oriented ellipsoidal particles are optically equivalent to polydisperse randomly oriented spheroidal particles and polydisperse spherical particles. Density functions of the shape and size distributions for equivalent ensembles of spheroidal and spherical particles are presented. In the anomalous diffraction approximation, equivalent ensembles of particles are shown to also have equal extinction, scattering, and absorption coefficients. Consequences of optical equivalence are considered. The results are illustrated by numerical calculations of the angular dependence of the scattering phase function using the T-matrix method and the Mie theory.
ISAAC Photometric Comparison of ECLIPSE Jitter and the ORAC-DR Equivalent Recipe for ISAAC
NASA Astrophysics Data System (ADS)
Currie, M. J.
2005-12-01
Motivated by a request from astronomers demanding accurate and consistent infrared photometry, I compare the photometry and quality of mosaics generated by the ECLIPSE jitter task and the ORAC-DR JITTER_SELF_FLAT recipe in two fields. The current (v4.9.0) ECLIPSE produces photometry a few percent fainter than ORAC-DR; the systematic trend with magnitude seen in v4.4.1 is now removed. Random errors arising from poor flat-fielding are not resolved. ECLIPSE generates noisier mosaics; ORAC-DR has poorer bias removal in crowded fields and defaults to larger mosaics. ORAC-DR runs a few times slower than ECLIPSE, but its recipe development is measured in weeks, not years.
Theoretical and observational analysis of spacecraft fields
NASA Technical Reports Server (NTRS)
Neubauer, F. M.; Schatten, K. H.
1972-01-01
In order to investigate the nondipolar contributions of spacecraft magnetic fields a simple magnetic field model is proposed. This model consists of randomly oriented dipoles in a given volume. Two sets of formulas are presented which give the rms-multipole field components, for isotropic orientations of the dipoles at given positions and for isotropic orientations of the dipoles distributed uniformly throughout a cube or sphere. The statistical results for an 8 cu m cube together with individual examples computed numerically show the following features: Beyond about 2 to 3 m distance from the center of the cube, the field is dominated by an equivalent dipole. The magnitude of the magnetic moment of the dipolar part is approximated by an expression for equal magnetic moments or generally by the Pythagorean sum of the dipole moments. The radial component is generally greater than either of the transverse components for the dipole portion as well as for the nondipolar field contributions.
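The "Pythagorean sum" statement above is easy to verify numerically: for N dipoles of equal moment m with isotropic random orientations, the rms magnitude of the vector sum is m*sqrt(N), since cross terms average to zero. A quick Monte Carlo check (dipole count and moment are arbitrary illustrative values):

```python
import math
import random

def random_unit_vector(rng):
    """Isotropic direction on the unit sphere (uniform z, uniform azimuth)."""
    z = rng.uniform(-1.0, 1.0)
    phi = rng.uniform(0.0, 2.0 * math.pi)
    r = math.sqrt(1.0 - z * z)
    return (r * math.cos(phi), r * math.sin(phi), z)

def rms_total_moment(n_dipoles, m=1.0, trials=20000, seed=11):
    """RMS magnitude of the vector sum of n randomly oriented dipole moments."""
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(trials):
        sx = sy = sz = 0.0
        for _ in range(n_dipoles):
            x, y, z = random_unit_vector(rng)
            sx += m * x
            sy += m * y
            sz += m * z
        acc += sx * sx + sy * sy + sz * sz
    return math.sqrt(acc / trials)

rms10 = rms_total_moment(10)   # expected near sqrt(10) for unit moments
```

Because the orientations are independent and isotropic, the expected squared magnitude of the sum is exactly the sum of squared moments, which is the Pythagorean-sum rule quoted in the abstract.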
ERIC Educational Resources Information Center
McNeil, Nicole M.; Chesney, Dana L.; Matthews, Percival G.; Fyfe, Emily R.; Petersen, Lori A.; Dunwiddie, April E.; Wheeler, Mary C.
2012-01-01
This experiment tested the hypothesis that organizing arithmetic fact practice by equivalent values facilitates children's understanding of math equivalence. Children (M age = 8 years 6 months, N = 104) were randomly assigned to 1 of 3 practice conditions: (a) equivalent values, in which problems were grouped by equivalent sums (e.g., 3 + 4 = 7, 2…
A simple calculation method for determination of equivalent square field.
Shafiei, Seyed Ali; Hasanzadeh, Hadi; Shafiei, Seyed Ahmad
2012-04-01
Determination of the equivalent square fields for rectangular and shielded fields is of great importance in radiotherapy centers and treatment planning software. This is accomplished using standard tables and empirical formulas. The goal of this paper is to present a formula, based on analysis of scatter reduction due to the inverse square law, to obtain the equivalent field. Tables are published by different agencies, such as the ICRU (International Commission on Radiation Units and Measurements), which are based on experimental data; but there also exist mathematical formulas that yield the equivalent square field of an irregular rectangular field and are used extensively in computational techniques for dose determination. These processes lead to some complicated and time-consuming formulas, which motivated the current study. In this work, considering the portion of scattered radiation in the absorbed dose at a point of measurement, a numerical formula was obtained, on the basis of which a simple formula was developed to calculate the equivalent square field. Using polar coordinates and the inverse square law leads to a simple formula for calculation of the equivalent field. The presented method is an analytical approach by which one can estimate the equivalent square field of a rectangular field, and it may be used for a shielded field or an off-axis point. Moreover, one can calculate the equivalent field of a rectangular field, to a good approximation, from the concept of scatter reduction with the inverse square law. This method may be useful in computing the Percentage Depth Dose and Tissue-Phantom Ratio, which are extensively used in treatment planning.
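For comparison with such table-based and scatter-integral approaches, the widely used area-to-perimeter ("Sterling") rule gives the equivalent square side in one line. This is the classic approximation, not the formula derived in the paper:

```python
def equivalent_square(a, b):
    """Sterling approximation: the equivalent square has the same
    area-to-perimeter ratio as the a x b rectangular field,
    so its side is 4*Area/Perimeter = 2ab/(a+b)."""
    return 2.0 * a * b / (a + b)

side = equivalent_square(10.0, 20.0)   # about 13.3 for a 10 x 20 field
```

The rule is exact for squares by construction and agrees with published equivalent-square tables to within a few percent for clinically typical aspect ratios, which is why it survives as a quick hand calculation.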
NASA Astrophysics Data System (ADS)
Cheraghalizadeh, Jafar; Najafi, Morteza N.; Mohammadzadeh, Hossein
2018-05-01
The effect of metallic nano-particles (MNPs) on the electrostatic potential of a disordered 2D dielectric medium is considered. The disorder in the medium is assumed to be white-noise Coulomb impurities with a normal distribution. To realize the correlations between the MNPs we have used the Ising model with an artificial temperature T that controls the number of MNPs as well as their correlations. In the T → 0 limit, one retrieves the Gaussian free field (GFF), and at finite temperature the problem is equivalent to a GFF on iso-potential islands. The problem is argued to be equivalent to a scale-invariant random surface with critical exponents that vary with T and correspondingly are correlation-dependent. Two types of observables have been considered: local and global quantities. We have observed that the MNPs soften the random potential and reduce its statistical fluctuations. This softening is observed in the local as well as the geometrical quantities. The correlation function of the electrostatic potential and its total variance are observed to be logarithmic, just like the GFF, i.e. the roughness exponent remains zero for all temperatures, whereas the proportionality constants scale with T - T_c. The fractal dimension of iso-potential lines (D_f), the exponents of the distribution functions of the gyration radius (τ_r) and of the loop lengths (τ_l), and the exponent of the loop Green function (x_l) change with T - T_c in a power-law fashion, with critical exponents reported in the text. Importantly, we have observed that D_f(T) - D_f(T_c) ∝ 1/√ξ(T), in which ξ(T) is the spin correlation length of the Ising model.
Goedhart, Paul W; van der Voet, Hilko; Baldacchino, Ferdinando; Arpaia, Salvatore
2014-04-01
Genetic modification of plants may result in unintended effects causing potentially adverse effects on the environment. A comparative safety assessment is therefore required by authorities, such as the European Food Safety Authority, in which the genetically modified plant is compared with its conventional counterpart. Part of the environmental risk assessment is a comparative field experiment in which the effect on non-target organisms is compared. Statistical analysis of such trials comes in two flavors: difference testing and equivalence testing. It is important to know the statistical properties of these, for example the power to detect environmental change of a given magnitude, before the start of an experiment. Such prospective power analysis can best be studied by means of a statistical simulation model. This paper describes a general framework for simulating data typically encountered in environmental risk assessment of genetically modified plants. The simulation model, available as Supplementary Material, can be used to generate count data having different statistical distributions, possibly with excess zeros. In addition, the model employs completely randomized or randomized block experiments, can be used to simulate single or multiple trials across environments, enables genotype-by-environment interaction by adding random variety effects, and finally includes repeated measures in time following a constant, linear or quadratic pattern, possibly with some form of autocorrelation. The model also allows adding a set of reference varieties to the GM plant and its comparator to assess the natural variation, which can then be used to set limits of concern for equivalence testing. The different count distributions are described in some detail, and some examples of how to use the simulation model to study various aspects, including a prospective power analysis, are provided.
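A minimal version of such a data generator is sketched below. It covers only zero-inflated Poisson counts in a randomized complete block design; the variety labels, effect sizes, and parameters are illustrative assumptions, and the full framework additionally supports other count distributions, multiple environments, repeated measures, and autocorrelation:

```python
import math
import random

def zip_sample(rng, mean, p_zero):
    """Zero-inflated Poisson: an excess zero with probability p_zero,
    otherwise a Poisson(mean) draw (Knuth's sampler, fine for small means)."""
    if rng.random() < p_zero:
        return 0
    limit, k, p = math.exp(-mean), 0, 1.0
    while True:
        p *= rng.random()
        if p <= limit:
            return k
        k += 1

def simulate_trial(variety_effects, n_blocks, base_mean=5.0, p_zero=0.2,
                   block_sd=0.3, seed=42):
    """One randomized complete block trial: a log-scale block effect,
    multiplicative variety effects, and a ZIP count per plot.
    Returns a list of (block, variety, count) tuples."""
    rng = random.Random(seed)
    plots = []
    for blk in range(n_blocks):
        b_eff = rng.gauss(0.0, block_sd)
        order = list(variety_effects.items())
        rng.shuffle(order)                      # randomization within each block
        for variety, v_eff in order:
            mu = base_mean * math.exp(b_eff + v_eff)
            plots.append((blk, variety, zip_sample(rng, mu, p_zero)))
    return plots

# GM line, its comparator, and two reference varieties (invented effects)
data = simulate_trial({"GM": 0.0, "comparator": 0.05,
                       "ref1": -0.2, "ref2": 0.15}, n_blocks=50)
```

The reference varieties play the role described in the abstract: their spread across blocks estimates natural variation, from which limits of concern for the equivalence test can be set before comparing GM and comparator counts.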
Comparison of Nonlinear Random Response Using Equivalent Linearization and Numerical Simulation
NASA Technical Reports Server (NTRS)
Rizzi, Stephen A.; Muravyov, Alexander A.
2000-01-01
A recently developed finite-element-based equivalent linearization approach for the analysis of random vibrations of geometrically nonlinear multiple degree-of-freedom structures is validated. The validation is based on comparisons with results from a finite-element-based numerical simulation analysis using a numerical integration technique in physical coordinates. In particular, results for the case of a clamped-clamped beam are considered for an extensive load range to establish the limits of validity of the equivalent linearization approach.
A simple calculation method for determination of equivalent square field
Shafiei, Seyed Ali; Hasanzadeh, Hadi; Shafiei, Seyed Ahmad
2012-01-01
Determination of the equivalent square fields for rectangular and shielded fields is of great importance in radiotherapy centers and treatment-planning software. This is usually accomplished using standard tables and empirical formulas. The goal of this paper is to present a formula, based on an analysis of the reduction of scatter with the inverse-square law, for obtaining the equivalent field. Tables published by agencies such as the ICRU (International Commission on Radiation Units and Measurements) are based on experimental data, but there also exist mathematical formulas that yield the equivalent square field of an irregular rectangular field and are used extensively in computational techniques for dose determination. These approaches lead to complicated and time-consuming formulas, which motivated the current study. In this work, considering the portion of scattered radiation in the absorbed dose at a point of measurement, a numerical formula was obtained, from which a simple formula was developed to calculate the equivalent square field. Using polar coordinates and the inverse-square law leads to a simple formula for calculation of the equivalent field. The presented method is an analytical approach with which one can estimate the equivalent square field of a rectangular field, and it may be used for a shielded field or an off-axis point. Moreover, the equivalent field of a rectangular field can be approximated well from the decrease of scattered radiation with the inverse-square law. This method may be useful in computing the Percentage Depth Dose and Tissue-Phantom Ratio, which are used extensively in treatment planning. PMID:22557801
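The paper's own scatter-based formula is not reproduced in the abstract. For orientation, the classical area-to-perimeter rule against which such formulas are usually benchmarked can be stated in a few lines; this is the textbook rule, not the method proposed in the paper.

```python
def sterling_equivalent_square(a, b):
    """Classic area-to-perimeter rule for photon fields: a rectangular
    a x b field is approximately equivalent to a square of side
    s = 4 * Area / Perimeter = 2ab / (a + b).
    This is the standard benchmark, not the paper's scatter-based formula."""
    return 2.0 * a * b / (a + b)
```

For a square field the rule returns the side itself, and a 5 cm × 20 cm field maps to an 8 cm equivalent square.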
Log-normal distribution from a process that is not multiplicative but is additive.
Mouri, Hideaki
2013-10-01
The central limit theorem ensures that a sum of random variables tends to a Gaussian distribution as their total number tends to infinity. However, for a class of positive random variables, we find that the sum tends faster to a log-normal distribution. Although the sum tends eventually to a Gaussian distribution, the distribution of the sum is always close to a log-normal distribution rather than to any Gaussian distribution if the summands are numerous enough. This is in contrast to the current consensus that any log-normal distribution is due to a product of random variables, i.e., a multiplicative process, or equivalently to nonlinearity of the system. In fact, the log-normal distribution is also observable for a sum, i.e., an additive process that is typical of linear systems. We show conditions for such a sum, an analytical example, and an application to random scalar fields such as those of turbulence.
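The claim that a sum of sufficiently broad positive summands looks log-normal rather than Gaussian is easy to probe numerically. A minimal sketch, with log-normal summands of sigma = 2 chosen as an illustrative assumption (the paper's class of summands is more general):

```python
import numpy as np

rng = np.random.default_rng(0)

def skewness(x):
    """Sample skewness; zero for a symmetric (e.g., Gaussian) sample."""
    x = np.asarray(x, dtype=float)
    m, s = x.mean(), x.std()
    return float(((x - m) ** 3).mean() / s ** 3)

# n = 100 i.i.d. heavy-tailed positive summands per trial.
n, trials = 100, 20000
sums = rng.lognormal(mean=0.0, sigma=2.0, size=(trials, n)).sum(axis=1)

# If the Gaussian limit had already set in, the sums would be nearly
# symmetric. Instead the sum is strongly right-skewed, while its
# logarithm is much closer to symmetric -- the log-normal-like regime
# described in the abstract.
skew_sum = skewness(sums)
skew_log = skewness(np.log(sums))
```

Even with 100 summands the distribution of the sum remains far from Gaussian, while the log of the sum is comparatively well behaved.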
Asynchronous Replication and Autosome-Pair Non-Equivalence in Human Embryonic Stem Cells
Dutta, Devkanya; Ensminger, Alexander W.; Zucker, Jacob P.; Chess, Andrew
2009-01-01
A number of mammalian genes exhibit the unusual properties of random monoallelic expression and random asynchronous replication. Such exceptional genes include genes subject to X inactivation and autosomal genes including odorant receptors, immunoglobulins, interleukins, pheromone receptors, and p120 catenin. In differentiated cells, random asynchronous replication of interspersed autosomal genes is coordinated at the whole chromosome level, indicative of chromosome-pair non-equivalence. Here we have investigated the replication pattern of the random asynchronously replicating genes in undifferentiated human embryonic stem cells, using fluorescence in situ hybridization based assay. We show that allele-specific replication of X-linked genes and random monoallelic autosomal genes occur in human embryonic stem cells. The direction of replication is coordinated at the whole chromosome level and can cross the centromere, indicating the existence of autosome-pair non-equivalence in human embryonic stem cells. These results suggest that epigenetic mechanism(s) that randomly distinguish between two parental alleles are emerging in the cells of the inner cell mass, the source of human embryonic stem cells. PMID:19325893
Discrete Huygens’ modeling for the characterization of a sound absorbing medium
NASA Astrophysics Data System (ADS)
Chai, L.; Kagawa, Y.
2007-07-01
Based on the equivalence between wave propagation in electrical transmission lines and in acoustic tubes, the authors proposed transmission-line matrix (TLM) modeling as a time-domain solution method for the sound field. TLM, well known in the electromagnetic engineering community, is equivalent to discrete Huygens' modeling. The wave propagation is simulated by tracing the sequences of transmission and scattering of impulses. The theory and demonstrated examples are presented in the references, in which a sound-absorbing field was preliminarily modeled as a medium with a simple acoustic resistance, independent of frequency and angle of incidence, for an absorbing layer placed on the room wall surface. The present work is concerned with the time-domain response for the characterization of sound-absorbing materials. A lossy component with variable propagation velocity is introduced for sound-absorbing materials to account for the energy dissipation. The frequency characteristics of the absorption coefficient are also considered for normal, oblique, and random incidence. Numerical demonstrations show that the present approach provides a reasonable time-domain model of homogeneous sound-absorbing materials.
NASA Technical Reports Server (NTRS)
Mishchenko, Michael I.; Dlugach, Janna M.; Zakharova, Nadezhda T.
2016-01-01
The numerically exact superposition T-matrix method is used to model far-field electromagnetic scattering by two types of particulate object. Object 1 is a fixed configuration which consists of N identical spherical particles (with N = 200 or 400) quasi-randomly populating a spherical volume V having a median size parameter of 50. Object 2 is a true discrete random medium (DRM) comprising the same number N of particles randomly moving throughout V. The median particle size parameter is fixed at 4. We show that if Object 1 is illuminated by a quasi-monochromatic parallel beam then it generates a typical speckle pattern having no resemblance to the scattering pattern generated by Object 2. However, if Object 1 is illuminated by a parallel polychromatic beam with a 10% bandwidth then it generates a scattering pattern that is largely devoid of speckles and closely reproduces the quasi-monochromatic pattern generated by Object 2. This result serves to illustrate the capacity of the concept of electromagnetic scattering by a DRM to encompass fixed quasi-random particulate samples provided that they are illuminated by polychromatic light.
Method and apparatus for enhancing vortex pinning by conformal crystal arrays
Janko, Boldizsar; Reichhardt, Cynthia; Reichhardt, Charles; Ray, Dipanjan
2015-07-14
Disclosed is a method and apparatus for strongly enhancing vortex pinning by conformal crystal arrays. The conformal crystal array is constructed by a conformal transformation of a hexagonal lattice, producing a non-uniform structure with a gradient where the local six-fold coordination of the pinning sites is preserved, and with an arching effect. The conformal pinning arrays produce significantly enhanced vortex pinning over a much wider range of field than that found for other vortex pinning geometries with an equivalent number of vortex pinning sites, such as random, square, and triangular.
Hecksel, D; Anferov, V; Fitzek, M; Shahnazi, K
2010-06-01
Conventional proton therapy facilities use double scattering nozzles, which are optimized for delivery of a few fixed field sizes. Similarly, uniform scanning nozzles are commissioned for a limited number of field sizes. However, cases invariably occur where the treatment field is significantly different from these fixed field sizes. The purpose of this work was to determine the impact of the radiation field conformity to the patient-specific collimator on the secondary neutron dose equivalent. Using a WENDI-II neutron detector, the authors experimentally investigated how the neutron dose equivalent at a particular point of interest varied with different collimator sizes, while the beam spreading was kept constant. The measurements were performed for different modes of dose delivery in proton therapy, all of which are available at the Midwest Proton Radiotherapy Institute (MPRI): Double scattering, uniform scanning delivering rectangular fields, and uniform scanning delivering circular fields. The authors also studied how the neutron dose equivalent changes when one changes the amplitudes of the scanned field for a fixed collimator size. The secondary neutron dose equivalent was found to decrease linearly with the collimator area for all methods of dose delivery. The relative values of the neutron dose equivalent for a collimator with a 5 cm diameter opening using 88 MeV protons were 1.0 for the double scattering field, 0.76 for the rectangular uniform field, and 0.6 for the circular uniform field. Furthermore, when single-circle wobbling was optimized for delivery of a uniform field 5 cm in diameter, the secondary neutron dose equivalent was reduced by a factor of 6 compared to the double scattering nozzle. Additionally, when the collimator size was kept constant, the neutron dose equivalent at the given point of interest increased linearly with the area of the scanned proton beam.
The results of these experiments suggest that the patient-specific collimator is a significant contributor to the secondary neutron dose equivalent to a distant organ at risk. Improving conformity of the radiation field to the patient-specific collimator can significantly reduce secondary neutron dose equivalent to the patient. Therefore, it is important to increase the number of available generic field sizes in double scattering systems as well as in uniform scanning nozzles.
Micromechanics-based magneto-elastic constitutive modeling of particulate composites
NASA Astrophysics Data System (ADS)
Yin, Huiming
Modified Green's functions are derived for three situations: a magnetic field caused by a local magnetization, a displacement field caused by a local body force and a displacement field caused by a local prescribed eigenstrain. Based on these functions, an explicit solution is derived for two magnetic particles embedded in the infinite medium under external magnetic and mechanical loading. A general solution for numerable magnetic particles embedded in an infinite domain is then provided in integral form. Two-phase composites containing spherical magnetic particles of the same size are considered for three kinds of microstructures. With chain-structured composites, particle interactions in the same chain are considered and a transversely isotropic effective elasticity is obtained. For periodic composites, an eight-particle interaction model is developed and provides a cubic symmetric effective elasticity. In the random composite, pair-wise particle interactions are integrated from all possible positions and an isotropic effective property is reached. This method is further extended to functionally graded composites. Magneto-mechanical behavior is studied for the chain-structured composite and the random composite. Effective magnetic permeability, effective magnetostriction and field-dependent effective elasticity are investigated. It is seen that the chain-structured composite is more sensitive to the magnetic field than the random composite; a composite consisting of only 5% of chain-structured particles can provide a larger magnetostriction and a larger change of effective elasticity than an equivalent composite consisting of 30% of random dispersed particles. Moreover, the effective shear modulus of the chain-structured composite rapidly increases with the magnetic field, while that for the random composite decreases. 
An effective hyperelastic constitutive model is further developed for a magnetostrictive particle-filled elastomer, which is sampled by using a network of body-centered cubic lattices of particles connected by macromolecular chains. The proposed hyperelastic model is able to characterize overall nonlinear elastic stress-stretch relations of the composites under general three-dimensional loading. It is seen that the effective strain energy density is proportional to the length of stretched chains in unit volume and volume fraction of particles.
Kayupov, Erdan; Fillingham, Yale A; Okroj, Kamil; Plummer, Darren R; Moric, Mario; Gerlinger, Tad L; Della Valle, Craig J
2017-03-01
Tranexamic acid is an antifibrinolytic that has been shown to reduce blood loss and the need for transfusions when administered intravenously in total hip arthroplasty. Oral formulations of the drug are available at a fraction of the cost of the intravenous preparation. The purpose of this randomized controlled trial was to determine if oral and intravenous formulations of tranexamic acid have equivalent blood-sparing properties. In this double-blinded trial, 89 patients undergoing primary total hip arthroplasty were randomized to receive 1.95 g of tranexamic acid orally 2 hours preoperatively or a 1-g tranexamic acid intravenous bolus in the operating room prior to incision; 6 patients were eventually excluded for protocol deviations, leaving 83 patients available for study. The primary outcome was the reduction of hemoglobin concentration. Power analysis determined that 28 patients were required in each group with a ±1.0 g/dL hemoglobin equivalence margin between groups with an alpha of 5% and a power of 80%. Equivalence analysis was performed with a two one-sided test (TOST) in which a p value of <0.05 indicated equivalence between treatments. Forty-three patients received intravenous tranexamic acid, and 40 patients received oral tranexamic acid. Patient demographic characteristics were similar between groups, suggesting successful randomization. The mean reduction of hemoglobin was similar between oral and intravenous groups (3.67 g/dL compared with 3.53 g/dL; p = 0.0008, equivalence). Similarly, the mean total blood loss was equivalent between oral and intravenous administration (1,339 mL compared with 1,301 mL; p = 0.034, equivalence). Three patients (7.5%) in the oral group and one patient (2.3%) in the intravenous group were transfused, but the difference was not significant (p = 0.35). None of the patients in either group experienced a thromboembolic event. 
Oral tranexamic acid provides equivalent reductions in blood loss in the setting of primary total hip arthroplasty, at a greatly reduced cost, compared with the intravenous formulation. Therapeutic Level I. See Instructions for Authors for a complete description of levels of evidence.
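The two one-sided test (TOST) procedure used in the equivalence analysis can be sketched with a large-sample normal approximation. The standard deviations below (1.2 g/dL) are hypothetical, since the abstract reports only the means, the group sizes, and the ±1.0 g/dL margin.

```python
import math

def tost_equivalence(m1, m2, s1, s2, n1, n2, margin):
    """Two one-sided tests (TOST) for equivalence of two means within
    +/- margin, using a large-sample normal approximation with a Welch
    standard error. Returns the TOST p-value (the max of the two
    one-sided p-values); equivalence is declared when it is below alpha."""
    se = math.sqrt(s1 ** 2 / n1 + s2 ** 2 / n2)
    diff = m1 - m2
    phi = lambda z: 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))
    p_lower = 1.0 - phi((diff + margin) / se)  # H0: diff <= -margin
    p_upper = phi((diff - margin) / se)        # H0: diff >= +margin
    return max(p_lower, p_upper)

# Hemoglobin-reduction means and group sizes from the abstract;
# the SDs are hypothetical stand-ins.
p = tost_equivalence(3.67, 3.53, 1.2, 1.2, 40, 43, margin=1.0)
```

With a small observed difference relative to the margin, the TOST p-value falls well below 0.05, i.e., equivalence is declared; a difference as large as the margin itself would not pass.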
Two-dimensional Magnetism in Arrays of Superconducting Rings
NASA Astrophysics Data System (ADS)
Reich, Daniel H.
1996-03-01
An array of superconducting rings in an applied field corresponding to a flux of Φ0 /2 per ring behaves like a 2D Ising antiferromagnet. Each ring has two energetically equivalent states with equal and opposite magnetic moments due to fluxoid quantization, and the dipolar coupling between rings favors antiparallel alignment of the moments. Using SQUID magnetometry and scanning Hall probe microscopy, we have studied the dynamics and magnetic configurations of micron-size aluminum rings on square, triangular, honeycomb, and kagomé lattices. We have found that there are significant antiferromagnetic correlations between rings, and that effects of geometrical frustration can be observed on the triangular and kagomé lattices. Long range correlations on the other lattices are suppressed by the analog of spin freezing that locks the rings in metastable states at low temperatures, and by quenched disorder due to imperfections in the fabrication. This disorder produces a roughly 1% variation in the rings' areas, which translates into an effective random field on the spins. The ring arrays are thus an extremely good realization of the 2D random-field Ising model. (Performed in collaboration with D. Davidović, S. Kumar, J. Siegel, S. B. Field, R. C. Tiberio, R. Hey, and K. Ploog.) (Supported by NSF grants DMR-9222541, and DMR-9357518, and by the David and Lucile Packard Foundation.)
Mean-field equations for neuronal networks with arbitrary degree distributions.
Nykamp, Duane Q; Friedman, Daniel; Shaker, Sammy; Shinn, Maxwell; Vella, Michael; Compte, Albert; Roxin, Alex
2017-04-01
The emergent dynamics in networks of recurrently coupled spiking neurons depends on the interplay between single-cell dynamics and network topology. Most theoretical studies on network dynamics have assumed simple topologies, such as connections that are made randomly and independently with a fixed probability (Erdös-Rényi network) (ER) or all-to-all connected networks. However, recent findings from slice experiments suggest that the actual patterns of connectivity between cortical neurons are more structured than in the ER random network. Here we explore how introducing additional higher-order statistical structure into the connectivity can affect the dynamics in neuronal networks. Specifically, we consider networks in which the number of presynaptic and postsynaptic contacts for each neuron, the degrees, are drawn from a joint degree distribution. We derive mean-field equations for a single population of homogeneous neurons and for a network of excitatory and inhibitory neurons, where the neurons can have arbitrary degree distributions. Through analysis of the mean-field equations and simulation of networks of integrate-and-fire neurons, we show that such networks have potentially much richer dynamics than an equivalent ER network. Finally, we relate the degree distributions to so-called cortical motifs.
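A minimal sketch of the kind of degree-based mean-field calculation the paper describes, for a single homogeneous population: a neuron with in-degree k receives drive proportional to k times the out-degree-weighted mean rate of the population. The sigmoidal transfer function, the coupling J, and the degree distributions below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

def mean_field_rates(k_in, k_out, J=0.1, iters=300, damping=0.5):
    """Damped fixed-point iteration of degree-based mean-field equations.
    U is the out-degree-weighted mean rate of the presynaptic pool;
    f is a sigmoid with threshold 1 (an assumption)."""
    f = lambda x: 1.0 / (1.0 + np.exp(-(x - 1.0)))
    mean_k = k_in.mean()
    U = 0.5
    for _ in range(iters):
        r = f(J * k_in * U)
        U_new = (k_out * r).mean() / mean_k
        U = damping * U + (1.0 - damping) * U_new
    return f(J * k_in * U)

N, mean_deg = 2000, 20
# ER-like: independent Poisson in- and out-degrees.
r_er = mean_field_rates(rng.poisson(mean_deg, N), rng.poisson(mean_deg, N))
# Broad, positively correlated degrees (k_in == k_out) with a similar mean.
k = rng.gamma(shape=1.0, scale=mean_deg - 1, size=N).astype(int) + 1
r_broad = mean_field_rates(k, k)
```

Even at this crude level, the broad correlated degree distribution spreads the firing rates far more than the equivalent ER network, echoing the paper's point that networks with structured degrees can behave very differently from an ER network with the same mean connectivity.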
Ono, Kaoru; Endo, Satoru; Tanaka, Kenichi; Hoshi, Masaharu; Hirokawa, Yutaka
2010-01-01
Purpose: In this study, the authors evaluated the accuracy of dose calculations performed by the convolution/superposition-based anisotropic analytical algorithm (AAA) in lung equivalent heterogeneities with and without bone equivalent heterogeneities. Methods: Calculations of PDDs using the AAA and Monte Carlo simulations (MCNP4C) were compared to ionization chamber measurements with a heterogeneous phantom consisting of lung equivalent and bone equivalent materials. Both 6 and 10 MV photon beams of 4×4 and 10×10 cm2 field sizes were used for the simulations. Furthermore, changes of the energy spectrum with depth in the heterogeneous phantom were calculated using MCNP. Results: The ionization chamber measurements and MCNP calculations in a lung equivalent phantom were in good agreement, having an average deviation of only 0.64±0.45%. For both 6 and 10 MV beams, the average deviation was less than 2% for the 4×4 and 10×10 cm2 fields in the water-lung equivalent phantom and the 4×4 cm2 field in the water-lung-bone equivalent phantom. Maximum deviations for the 10×10 cm2 field in the lung equivalent phantom before and after the bone slab were 5.0% and 4.1%, respectively. The Monte Carlo simulation demonstrated an increase of the low-energy photon component in these regions, more for the 10×10 cm2 field compared to the 4×4 cm2 field. Conclusions: The Monte Carlo simulations show that the low-energy photon component increases sharply in larger fields when there is a significant presence of bone equivalent heterogeneities. This leads to great changes in the build-up and build-down at the interfaces of different density materials. The AAA calculation modeling of the effect is not deemed to be sufficiently accurate. PMID:20879604
Classes of Split-Plot Response Surface Designs for Equivalent Estimation
NASA Technical Reports Server (NTRS)
Parker, Peter A.; Kowalski, Scott M.; Vining, G. Geoffrey
2006-01-01
When planning an experimental investigation, we are frequently faced with factors that are difficult or time consuming to manipulate, thereby making complete randomization impractical. A split-plot structure differentiates between the experimental units associated with these hard-to-change factors and others that are relatively easy-to-change and provides an efficient strategy that integrates the restrictions imposed by the experimental apparatus. Several industrial and scientific examples are presented to illustrate design considerations encountered in the restricted randomization context. In this paper, we propose classes of split-plot response surface designs that provide an intuitive and natural extension from the completely randomized context. For these designs, the ordinary least squares estimates of the model are equivalent to the generalized least squares estimates. This property provides best linear unbiased estimators and simplifies model estimation. The design conditions that allow for equivalent estimation are presented, enabling design construction strategies to transform completely randomized Box-Behnken, equiradial, and small composite designs into a split-plot structure.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wroe, Andrew; Centre for Medical Radiation Physics, University of Wollongong, Wollongong; Clasie, Ben
2009-01-01
Purpose: Microdosimetric measurements were performed at Massachusetts General Hospital, Boston, MA, to assess the dose equivalent external to passively delivered proton fields for various clinical treatment scenarios. Methods and Materials: Treatment fields evaluated included a prostate cancer field, cranial and spinal medulloblastoma fields, ocular melanoma field, and a field for an intracranial stereotactic treatment. Measurements were completed with patient-specific configurations of clinically relevant treatment settings using a silicon-on-insulator microdosimeter placed on the surface of and at various depths within a homogeneous Lucite phantom. The dose equivalent and average quality factor were assessed as a function of both lateral displacement from the treatment field edge and distance downstream of the beam's distal edge. Results: Dose-equivalent value range was 8.3-0.3 mSv/Gy (2.5-60-cm lateral displacement) for a typical prostate cancer field, 10.8-0.58 mSv/Gy (2.5-40-cm lateral displacement) for the cranial medulloblastoma field, 2.5-0.58 mSv/Gy (5-20-cm lateral displacement) for the spinal medulloblastoma field, and 0.5-0.08 mSv/Gy (2.5-10-cm lateral displacement) for the ocular melanoma field. Measurements of external field dose equivalent for the stereotactic field case showed differences as high as 50% depending on the modality of beam collimation. Average quality factors derived from this work ranged from 2-7, with the value dependent on the position within the phantom in relation to the primary beam. Conclusions: This work provides a valuable and clinically relevant comparison of the external field dose equivalents for various passively scattered proton treatment fields.
Olsho, Lauren Ew; Klerman, Jacob A; Wilde, Parke E; Bartlett, Susan
2016-08-01
US fruit and vegetable (FV) intake remains below recommendations, particularly for low-income populations. Evidence on effectiveness of rebates in addressing this shortfall is limited. This study evaluated the USDA Healthy Incentives Pilot (HIP), which offered rebates to Supplemental Nutrition Assistance Program (SNAP) participants for purchasing targeted FVs (TFVs). As part of a randomized controlled trial in Hampden County, Massachusetts, 7500 randomly selected SNAP households received a 30% rebate on TFVs purchased with SNAP benefits. The remaining 47,595 SNAP households in the county received usual benefits. Adults in 5076 HIP and non-HIP households were randomly sampled for telephone surveys, including 24-h dietary recall interviews. Surveys were conducted at baseline (1-3 mo before implementation) and in 2 follow-up rounds (4-6 mo and 9-11 mo after implementation). 2784 adults (1388 HIP, 1396 non-HIP) completed baseline interviews; data were analyzed for 2009 adults (72%) who also completed ≥1 follow-up interview. Regression-adjusted mean TFV intake at follow-up was 0.24 cup-equivalents/d (95% CI: 0.13, 0.34 cup-equivalents/d) higher among HIP participants. Across all fruit and vegetables (AFVs), regression-adjusted mean intake was 0.32 cup-equivalents/d (95% CI: 0.17, 0.48 cup-equivalents/d) higher among HIP participants. The AFV-TFV difference was explained by greater intake of 100% fruit juice (0.10 cup-equivalents/d; 95% CI: 0.02, 0.17 cup-equivalents/d); juice purchases did not earn the HIP rebate. Refined grain intake was 0.43 ounce-equivalents/d lower (95% CI: -0.69, -0.16 ounce-equivalents/d) among HIP participants, possibly indicating substitution effects. Increased AFV intake and decreased refined grain intake contributed to higher Healthy Eating Index-2010 scores among HIP participants (4.7 points; 95% CI: 2.4, 7.1 points). 
The HIP significantly increased FV intake among SNAP participants, closing ∼20% of the gap relative to recommendations and increasing dietary quality. More research on mechanisms of action is warranted. The HIP trial was registered at clinicaltrials.gov as NCT02651064. © 2016 American Society for Nutrition.
Flacco, Maria Elena; Manzoli, Lamberto; Boccia, Stefania; Capasso, Lorenzo; Aleksovska, Katina; Rosso, Annalisa; Scaioli, Giacomo; De Vito, Corrado; Siliquini, Roberta; Villari, Paolo; Ioannidis, John P A
2015-07-01
To map the current status of head-to-head comparative randomized evidence and to assess whether funding may impact on trial design and results. From a 50% random sample of the randomized controlled trials (RCTs) published in journals indexed in PubMed during 2011, we selected the trials with ≥ 100 participants, evaluating the efficacy and safety of drugs, biologics, and medical devices through a head-to-head comparison. We analyzed 319 trials. Overall, 238,386 of the 289,718 randomized subjects (82.3%) were included in the 182 trials funded by companies. Of the 182 industry-sponsored trials, only 23 had two industry sponsors and only three involved truly antagonistic comparisons. Industry-sponsored trials were larger, more commonly registered, used more frequently noninferiority/equivalence designs, had higher citation impact, and were more likely to have "favorable" results (superiority or noninferiority/equivalence for the experimental treatment) than nonindustry-sponsored trials. Industry funding [odds ratio (OR) 2.8; 95% confidence interval (CI): 1.6, 4.7] and noninferiority/equivalence designs (OR 3.2; 95% CI: 1.5, 6.6), but not sample size, were strongly associated with "favorable" findings. Fifty-five of the 57 (96.5%) industry-funded noninferiority/equivalence trials got desirable "favorable" results. The literature of head-to-head RCTs is dominated by the industry. Industry-sponsored comparative assessments systematically yield favorable results for the sponsors, even more so when noninferiority designs are involved. Copyright © 2015 The Authors. Published by Elsevier Inc. All rights reserved.
Long-run growth rate in a random multiplicative model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pirjol, Dan
2014-08-01
We consider the long-run growth rate of the average value of a random multiplicative process x_{i+1} = a_i x_i, where the multipliers a_i = 1 + ρ exp(σW_i − ½σ²t_i) have Markovian dependence given by the exponential of a standard Brownian motion W_i. The average value ⟨x_n⟩ is given by the grand partition function of a one-dimensional lattice gas with two-body linear attractive interactions placed in a uniform field. We study the Lyapunov exponent λ = lim_{n→∞} (1/n) log⟨x_n⟩, at fixed β = ½σ²t_n n, and show that it is given by the equation of state of the lattice gas in thermodynamical equilibrium. The Lyapunov exponent has discontinuous partial derivatives along a curve in the (ρ, β) plane ending at a critical point (ρ_C, β_C) which is related to a phase transition in the equivalent lattice gas. Using the equivalence of the lattice gas with a bosonic system, we obtain the exact solution for the equation of state in the thermodynamic limit n → ∞.
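The growth rate of the average value can be estimated directly by Monte Carlo under the abstract's definitions; the parameter values below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(7)

def average_growth_rate(rho, sigma, n=200, dt=1.0, paths=20000):
    """Monte Carlo estimate of (1/n) log<x_n> for x_{i+1} = a_i x_i with
    a_i = 1 + rho * exp(sigma*W_i - 0.5*sigma^2*t_i), W a standard
    Brownian motion sampled at t_i = i*dt."""
    t = np.arange(n) * dt
    dW = rng.normal(0.0, np.sqrt(dt), size=(paths, n))
    W = np.cumsum(dW, axis=1) - dW  # Brownian path at each t_i, W_0 = 0
    log_x = np.log(1.0 + rho * np.exp(sigma * W - 0.5 * sigma ** 2 * t)).sum(axis=1)
    m = log_x.max()  # max-shift so the sample mean of exp() is stable
    return (m + np.log(np.mean(np.exp(log_x - m)))) / n
```

For sigma = 0 each multiplier is exactly 1 + rho, so the estimate reduces to log(1 + rho); for sigma > 0 the positive correlations among the mean-one lognormal factors push ⟨x_n⟩ above (1 + ρ)^n, which is the regime the lattice-gas analysis characterizes.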
Vehicular traffic noise prediction using soft computing approach.
Singh, Daljeet; Nigam, S P; Agrawal, V P; Kumar, Maneek
2016-12-01
A new approach for the development of vehicular traffic noise prediction models is presented. Four different soft computing methods, namely, Generalized Linear Model, Decision Trees, Random Forests and Neural Networks, have been used to develop models to predict the hourly equivalent continuous sound pressure level, Leq, at different locations in Patiala city, India. The input variables include the traffic volume per hour, percentage of heavy vehicles and average speed of vehicles. The performance of the four models is compared on the basis of the performance criteria of coefficient of determination, mean square error and accuracy. Ten-fold cross-validation is done to check the stability of the Random Forest model, which gave the best results. A t-test is performed to check the fit of the model with the field data. Copyright © 2016 Elsevier Ltd. All rights reserved.
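Leq, the quantity being modeled above, is the energy-equivalent average of short-interval sound levels. A minimal sketch of that definition (our helper, not the authors' code):

```python
import math

def leq(levels_db):
    """Equivalent continuous sound level: Leq = 10*log10(mean(10**(L/10)))
    over equal-duration interval readings L (in dB)."""
    mean_energy = sum(10.0 ** (l / 10.0) for l in levels_db) / len(levels_db)
    return 10.0 * math.log10(mean_energy)
```

Because the average is taken on the energy scale, a single loud interval dominates: mixing 60 dB and 70 dB intervals gives roughly 67.4 dB, not 65 dB.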
Black holes as quantum gravity condensates
NASA Astrophysics Data System (ADS)
Oriti, Daniele; Pranzetti, Daniele; Sindoni, Lorenzo
2018-03-01
We model spherically symmetric black holes within the group field theory formalism for quantum gravity via generalized condensate states, involving sums over arbitrarily refined graphs (dual to three-dimensional triangulations). The construction relies heavily on both the combinatorial tools of random tensor models and the quantum geometric data of loop quantum gravity, both part of the group field theory formalism. Armed with the detailed microscopic structure, we compute the entropy associated with the black hole horizon, which turns out to be equivalently the Boltzmann entropy of its microscopic degrees of freedom and the entanglement entropy between the inside and outside regions. We recover the area law under very general conditions, as well as the Bekenstein-Hawking formula. The result is also shown to be generically independent of any specific value of the Immirzi parameter.
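The Bekenstein-Hawking formula recovered above is S = k_B c³ A / (4Għ). A quick numerical sketch for a Schwarzschild horizon (constants rounded; the helper name is ours):

```python
import math

# Physical constants (SI, rounded): Newton's G, speed of light, hbar, Boltzmann.
G, c, hbar, kB = 6.674e-11, 2.998e8, 1.055e-34, 1.381e-23

def bh_entropy(mass_kg):
    """Bekenstein-Hawking entropy S = kB*c^3*A/(4*G*hbar) for a
    Schwarzschild horizon of area A = 16*pi*(G*M)^2/c^4."""
    area = 16.0 * math.pi * (G * mass_kg) ** 2 / c ** 4
    return kB * c ** 3 * area / (4.0 * G * hbar)
```

For a solar-mass black hole (about 2e30 kg) this gives on the order of 1e54 J/K, and the area law makes S scale quadratically with mass.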
Bayesian estimation of Karhunen–Loève expansions: A random subspace approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chowdhary, Kenny; Najm, Habib N.
One of the most widely used statistical procedures for dimensionality reduction of high-dimensional random fields is Principal Component Analysis (PCA), which is based on the Karhunen–Loève expansion (KLE) of a stochastic process with finite variance. The KLE is analogous to a Fourier series expansion for a random process, where the goal is to find an orthogonal transformation for the data such that the projection of the data onto this orthogonal subspace is optimal in the L² sense, i.e., it minimizes the mean square error. In practice, this orthogonal transformation is determined by performing an SVD (Singular Value Decomposition) on the sample covariance matrix or on the data matrix itself. Sampling error is typically ignored when quantifying the principal components or, equivalently, the basis functions of the KLE, and this error is exacerbated when the sample size is much smaller than the dimension of the random field. In this paper, we introduce a Bayesian KLE procedure, allowing one to obtain a probabilistic model on the principal components, which can account for inaccuracies due to limited sample size. The probabilistic model is built via Bayesian inference, from which the posterior becomes the matrix Bingham density over the space of orthonormal matrices. We use a modified Gibbs sampling procedure to sample on this space and then build probabilistic Karhunen–Loève expansions over random subspaces to obtain a set of low-dimensional surrogates of the stochastic process. We illustrate this probabilistic procedure with a finite-dimensional stochastic process inspired by Brownian motion.
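The eigendecomposition step described above can be written in closed form for a 2-D field, where the sample covariance is 2x2 (a sketch of the classical PCA/KLE step, not the paper's Bayesian procedure):

```python
import math

def pca_2d(data):
    """Principal axes of 2-D samples via the closed-form eigendecomposition
    of the 2x2 sample covariance: the discrete analogue of a truncated KLE.
    Returns ((l1, l2), theta): eigenvalues with l1 >= l2, and the angle of
    the dominant principal axis."""
    n = len(data)
    mx = sum(x for x, _ in data) / n
    my = sum(y for _, y in data) / n
    sxx = sum((x - mx) ** 2 for x, _ in data) / (n - 1)
    syy = sum((y - my) ** 2 for _, y in data) / (n - 1)
    sxy = sum((x - mx) * (y - my) for x, y in data) / (n - 1)
    tr, det = sxx + syy, sxx * syy - sxy ** 2
    disc = math.sqrt(max(tr * tr / 4.0 - det, 0.0))
    theta = 0.5 * math.atan2(2.0 * sxy, sxx - syy)
    return (tr / 2.0 + disc, tr / 2.0 - disc), theta
```

For perfectly correlated samples along y = x, the second eigenvalue vanishes and the dominant axis sits at 45 degrees, which is the kind of point estimate whose sampling uncertainty the Bayesian procedure quantifies.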
NASA Astrophysics Data System (ADS)
Voronin, A. A.; Panchenko, V. Ya; Zheltikov, A. M.
2016-06-01
High-intensity ultrashort laser pulses propagating in gas media or in condensed matter undergo complex nonlinear spatiotemporal evolution where temporal transformations of optical field waveforms are strongly coupled to intricate beam dynamics and ultrafast field-induced ionization processes. At laser peak powers orders of magnitude above the critical power of self-focusing, the beam exhibits modulation instabilities, producing random field hot spots and breaking up into multiple noise-seeded filaments. This problem is described by a (3 + 1)-dimensional nonlinear field evolution equation, which needs to be solved jointly with the equation for ultrafast ionization of the medium. Analysis of this problem, which is equivalent to solving a billion-dimensional evolution problem, is only possible by means of supercomputer simulations augmented with coordinated big-data processing of large volumes of information acquired through theory-guiding experiments and supercomputations. Here, we review the main challenges of supercomputations and big-data processing encountered in strong-field ultrafast optical physics and discuss strategies to confront these challenges.
Schwartz, Seth J; Benet-Martínez, Verónica; Knight, George P; Unger, Jennifer B; Zamboanga, Byron L; Des Rosiers, Sabrina E; Stephens, Dionne P; Huang, Shi; Szapocznik, José
2014-03-01
The present study used a randomized design, with fully bilingual Hispanic participants from the Miami area, to investigate two sets of research questions. First, we sought to ascertain the extent to which measures of acculturation (Hispanic and U.S. practices, values, and identifications) satisfied criteria for linguistic measurement equivalence. Second, we sought to examine whether cultural frame switching would emerge; that is, whether latent acculturation mean scores for U.S. acculturation would be higher among participants randomized to complete measures in English and whether latent acculturation mean scores for Hispanic acculturation would be higher among participants randomized to complete measures in Spanish. A sample of 722 Hispanic students from a Hispanic-serving university participated in the study. Participants were first asked to complete translation tasks to verify that they were fully bilingual. Based on ratings from two independent coders, 574 participants (79.5% of the sample) qualified as fully bilingual and were randomized to complete the acculturation measures in either English or Spanish. Theoretically relevant criterion measures (self-esteem, depressive symptoms, and personal identity) were also administered in the randomized language. Measurement equivalence analyses indicated that all of the acculturation measures (Hispanic and U.S. practices, values, and identifications) met criteria for configural, weak/metric, strong/scalar, and convergent validity equivalence. These findings indicate that data generated using acculturation measures can, at least under some conditions, be combined or compared across languages of administration. Few latent mean differences emerged. These results are discussed in terms of the measurement of acculturation in linguistically diverse populations.
Apipunyasopon, Lukkana; Srisatit, Somyot; Phaisangittisakul, Nakorn
2013-09-06
The purpose of the study was to investigate the use of the equivalent square formula for determining the surface dose from a rectangular photon beam. A 6 MV therapeutic photon beam delivered from a Varian Clinac 23EX medical linear accelerator was modeled using the EGS4nrc Monte Carlo simulation package. It was then used to calculate the dose in the build-up region from both square and rectangular fields. The field patterns were defined by various settings of the X- and Y-collimator jaws ranging from 5 to 20 cm. Dose measurements were performed using a thermoluminescence dosimeter and a Markus parallel-plate ionization chamber on four square fields (5 × 5, 10 × 10, 15 × 15, and 20 × 20 cm2). The surface dose was acquired by extrapolating the build-up doses to the surface. An equivalent square for a rectangular field was determined using the area-to-perimeter formula, and the surface dose of the equivalent square was estimated using the square-field data. The surface dose of the square field increased linearly from approximately 10% to 28% as the side of the square field increased from 5 to 20 cm. The influence of collimator exchange on the surface dose was found to be not significant. The difference in the percentage surface dose of the rectangular field compared to that of the relevant equivalent square was insignificant and can be clinically neglected. The use of the area-to-perimeter formula for an equivalent square field can provide a clinically acceptable surface dose estimation for a rectangular field from a 6 MV therapeutic photon beam.
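The area-to-perimeter rule used above reduces, for an x-by-y rectangle, to side = 4A/P = 2xy/(x + y). A sketch, paired with a linear interpolation of the reported square-field surface-dose trend (the linear fit is our simplification, not the published fit):

```python
def equivalent_square_side(x, y):
    """Area-to-perimeter equivalent square of an x-by-y rectangular field:
    side = 4*A/P = 2*x*y/(x + y)."""
    return 2.0 * x * y / (x + y)

def surface_dose_percent(side_cm):
    """Linear interpolation of the reported square-field trend
    (about 10% at a 5 cm side, rising to about 28% at 20 cm)."""
    return 10.0 + 18.0 * (side_cm - 5.0) / 15.0
```

For example, a 5 x 20 cm field maps to an 8 cm equivalent square, whose surface dose would then be read off the square-field curve.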
NASA Astrophysics Data System (ADS)
Zacharatou Jarlskog, Christina; Lee, Choonik; Bolch, Wesley E.; Xu, X. George; Paganetti, Harald
2008-02-01
Proton beams used for radiotherapy will produce neutrons when interacting with matter. The purpose of this study was to quantify the equivalent dose to tissue due to secondary neutrons in pediatric and adult patients treated by proton therapy for brain lesions. Assessment of the equivalent dose to organs away from the target requires whole-body geometrical information. Furthermore, because the patient geometry depends on age at exposure, age-dependent representations are also needed. We implemented age-dependent phantoms into our proton Monte Carlo dose calculation environment. We considered eight typical radiation fields, two of which had been previously used to treat pediatric patients. The other six fields were additionally considered to allow a systematic study of equivalent doses as a function of field parameters. For all phantoms and all fields, we simulated organ-specific equivalent neutron doses and analyzed for each organ (1) the equivalent dose due to neutrons as a function of distance to the target; (2) the equivalent dose due to neutrons as a function of patient age; (3) the equivalent dose due to neutrons as a function of field parameters; and (4) the ratio of contributions to secondary dose from the treatment head versus the contribution from the patient's body tissues. This work reports organ-specific equivalent neutron doses for up to 48 organs in a patient. We demonstrate quantitatively how organ equivalent doses for adult and pediatric patients vary as a function of patient's age, organ and field parameters. Neutron doses increase with increasing range and modulation width but decrease with field size (as defined by the aperture). We analyzed the ratio of neutron dose contributions from the patient and from the treatment head, and found that neutron-equivalent doses fall off rapidly as a function of distance from the target, in agreement with experimental data. 
It appears that for the fields used in this study, the neutron dose lateral to the field is smaller than the reported scattered photon doses in a typical intensity-modulated photon treatment. Most importantly, our study shows that neutron doses to specific organs depend considerably on the patient's age and body stature. The younger the patient, the higher the dose deposited due to neutrons. Given the fact that the risk also increases with decreasing patient age, this factor needs to be taken into account when treating pediatric patients of very young ages and/or of small body size. The neutron dose from a course of proton therapy treatment (assuming 70 Gy in 30 fractions) could potentially (depending on patient's age, organ, treatment site and area of CT scan) be equivalent to up to ~30 CT scans.
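The closing comparison above ("up to ~30 CT scans") is simple arithmetic on the per-Gy neutron equivalent dose over a 70 Gy course. A hedged sketch (the per-CT dose and the radiation weighting factor are assumptions for illustration, not values from the paper):

```python
def equivalent_dose_mSv(absorbed_dose_mGy, w_R):
    """Equivalent dose H = w_R * D; for neutrons w_R is energy dependent
    (roughly 5 to 20 in ICRP recommendations), so the caller supplies it."""
    return w_R * absorbed_dose_mGy

def ct_equivalents(neutron_mSv_per_Gy, prescribed_Gy=70.0, ct_mSv=10.0):
    """Course-level neutron burden expressed in CT-scan equivalents,
    assuming roughly 10 mSv per CT scan (an illustrative figure)."""
    return neutron_mSv_per_Gy * prescribed_Gy / ct_mSv
```

Under these assumptions, an organ receiving ~4.3 mSv per prescribed Gy accumulates the equivalent of about 30 CT scans over the course.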
DOE Office of Scientific and Technical Information (OSTI.GOV)
Demez, N; Lee, T; Keppel, Cynthia
Purpose: To verify calculated water equivalent thickness (WET) and water equivalent spreadness (WES) in various tissue-equivalent media for proton therapy. Methods: Water equivalent thicknesses (WET) of tissue-equivalent materials were calculated using the Bragg-Kleeman rule. Lateral spreadness and fluence reduction of proton beams in those media were calculated using the proton loss model (PLM) algorithm. In addition, we calculated lateral spreadness ratios with respect to that in water at the same WET depth, thereby defining the WES. The WETs of those media for different proton beam energies were measured using a multi-layered ionization chamber (MLIC). Fluence and field sizes in those materials at various thicknesses were also measured with ionization chambers and films. Results: Calculated WETs are in agreement with measured WETs within 0.5%. We found that the water equivalent spreadness (WES) is constant, and the fluence and field-size measurements verify that fluence can be estimated using the concept of WES. Conclusions: Calculation of WET based on the Bragg-Kleeman rule, together with the constant WES of proton beams in tissue-equivalent phantoms, can be used to predict fluence and field sizes accurately at the depths of interest in tissue-equivalent media for clinically available proton energies.
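The Bragg-Kleeman rule referenced above gives the proton range as R = αE^p, and WET then scales a slab's physical thickness by its stopping power relative to water. A sketch (α and p are commonly quoted water values, treated here as assumptions; the PLM details are not reproduced):

```python
def bragg_kleeman_range_cm(E_MeV, alpha=0.0022, p=1.77):
    """Proton range in water from the Bragg-Kleeman rule R = alpha * E^p;
    alpha (cm/MeV^p) and p are typical water-fit values, assumed here."""
    return alpha * E_MeV ** p

def water_equivalent_thickness(t_cm, rel_stopping_power):
    """WET of a slab: physical thickness scaled by its stopping power
    relative to water (a simplification of the full calculation above)."""
    return t_cm * rel_stopping_power
```

With these fit constants, a 100 MeV proton has a range of roughly 7.6 cm in water, in line with tabulated values.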
NASA Astrophysics Data System (ADS)
Frič, Roman; Papčo, Martin
2017-12-01
Stressing a categorical approach, we continue our study of fuzzified domains of probability, in which classical random events are replaced by measurable fuzzy random events. In operational probability theory (S. Bugajski) classical random variables are replaced by statistical maps (generalized distribution maps induced by random variables) and in fuzzy probability theory (S. Gudder) the central role is played by observables (maps between probability domains). We show that to each of the two generalized probability theories there corresponds a suitable category and the two resulting categories are dually equivalent. Statistical maps and observables become morphisms. A statistical map can send a degenerated (pure) state to a non-degenerated one (a quantum phenomenon) and, dually, an observable can map a crisp random event to a genuine fuzzy random event (a fuzzy phenomenon). The dual equivalence means that the operational probability theory and the fuzzy probability theory coincide and the resulting generalized probability theory has two dual aspects: quantum and fuzzy. We close with some notes on products and coproducts in the dual categories.
Precise SAR measurements in the near-field of RF antenna systems
NASA Astrophysics Data System (ADS)
Hakim, Bandar M.
Wireless devices must meet specific safety radiation limits, and in order to assess the health effects of such devices, standard procedures are used in which standard phantoms, tissue-equivalent liquids, and miniature electric-field probes are employed. The accuracy of such measurements depends on the precision in measuring the dielectric properties of the tissue-equivalent liquids and the associated calibrations of the electric-field probes. This thesis describes work on the theoretical modeling and experimental measurement of the complex permittivity of tissue-equivalent liquids, and associated calibration of miniature electric-field probes. The measurement method is based on measurements of the field attenuation factor and power reflection coefficient of a tissue-equivalent sample. A novel method, to the best of the author's knowledge, for determining the dielectric properties and probe calibration factors is described and validated. The measurement system is validated using saline at different concentrations, and measurements of complex permittivity and calibration factors have been made on tissue-equivalent liquids at 900 MHz and 1800 MHz. Uncertainty analysis has been conducted to study the measurement system's sensitivity. Using the same waveguide to measure tissue-equivalent permittivity and calibrate e-field probes eliminates a source of uncertainty associated with using two different measurement systems. The measurement system is used to test GSM cell phones at 900 MHz and 1800 MHz for Specific Absorption Rate (SAR) compliance using a Specific Anthropomorphic Mannequin (SAM) phantom.
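SAR itself is defined pointwise as σ|E|²/ρ, which is what the calibrated field probes ultimately yield. A minimal sketch of that standard definition (the numeric values below are illustrative, not tissue data from the thesis):

```python
def sar_w_per_kg(sigma_S_per_m, E_rms_V_per_m, rho_kg_per_m3):
    """Point SAR = sigma * |E|^2 / rho, for tissue conductivity sigma (S/m),
    RMS electric field E (V/m), and mass density rho (kg/m^3)."""
    return sigma_S_per_m * E_rms_V_per_m ** 2 / rho_kg_per_m3
```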
NASA Technical Reports Server (NTRS)
Rizzi, Stephen A.; Muravyov, Alexander A.
2002-01-01
Two new equivalent linearization implementations for geometrically nonlinear random vibrations are presented. Both implementations are based upon a novel approach for evaluating the nonlinear stiffness within commercial finite element codes and are suitable for use with any finite element code having geometrically nonlinear static analysis capabilities. The formulation includes a traditional force-error minimization approach and a relatively new version of a potential energy-error minimization approach, which has been generalized for multiple degree-of-freedom systems. Results for a simply supported plate under random acoustic excitation are presented and comparisons of the displacement root-mean-square values and power spectral densities are made with results from a nonlinear time domain numerical simulation.
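The force-error-minimization approach mentioned above can be illustrated on a Duffing-type spring, for which the equivalent linear stiffness under a zero-mean Gaussian response is k_eq = k(1 + 3ε σ²). A sketch (the variance closure below is a stand-in for a real structural analysis, not the paper's finite-element formulation):

```python
def equivalent_stiffness(k, eps, var0, response_var, tol=1e-10, itmax=200):
    """Iterate the force-error-minimizing linearization of a Duffing spring
    f(x) = k*x + eps*k*x**3: for a zero-mean Gaussian response of variance
    sigma^2, k_eq = k*(1 + 3*eps*sigma^2). response_var maps a trial
    stiffness to a response variance and closes the fixed-point loop."""
    var = var0
    for _ in range(itmax):
        k_eq = k * (1.0 + 3.0 * eps * var)
        new_var = response_var(k_eq)
        if abs(new_var - var) < tol:
            break
        var = new_var
    return k_eq, var

# Illustrative closure: response variance inversely proportional to stiffness,
# with D lumping the excitation level (an assumption for this demo).
D = 2.0
k_eq, var = equivalent_stiffness(1.0, 0.1, 1.0, lambda ke: D / ke)
```

The iteration converges to the self-consistent variance satisfying var = D / (k(1 + 3ε·var)), the basic structure shared by both error-minimization variants described in the abstract.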
Formation, dissolution and properties of surface nanobubbles
NASA Astrophysics Data System (ADS)
Che, Zhizhao; Theodorakis, Panagiotis E.
2017-02-01
Surface nanobubbles are stable gaseous phases in liquids that form on solid substrates. While their existence has been confirmed, there are many open questions related to their formation and dissolution processes along with their structures and properties, which are difficult to investigate experimentally. To address these issues, we carried out molecular dynamics simulations based on atomistic force fields for systems composed of water, air (N2 and O2), and a Highly Oriented Pyrolytic Graphite (HOPG) substrate. Our results provide insights into the formation/dissolution mechanisms of nanobubbles and estimates for their density, contact angle, and surface tension. We found that the formation of nanobubbles is driven by an initial nucleation process of air molecules and the subsequent coalescence of the formed air clusters. The clusters form favorably on the substrate, which provides an enhanced stability to the clusters. In contrast, nanobubbles formed in the bulk either move randomly to the substrate and spread or move to the water-air interface and pop immediately. Moreover, nanobubbles consist of a condensed gaseous phase with a surface tension smaller than that of an equivalent system under atmospheric conditions, and contact angles larger than those in the equivalent nanodroplet case. We anticipate that this study will provide useful insights into the physics of nanobubbles and will stimulate further research in the field by using all-atom simulations.
Implementation issues of the nearfield equivalent source imaging microphone array
NASA Astrophysics Data System (ADS)
Bai, Mingsian R.; Lin, Jia-Hong; Tseng, Chih-Wen
2011-01-01
This paper revisits a nearfield microphone array technique termed nearfield equivalent source imaging (NESI) proposed previously. In particular, various issues concerning the implementation of the NESI algorithm are examined. The NESI can be implemented in both the time domain and the frequency domain. Acoustical variables including sound pressure, particle velocity, active intensity and sound power are calculated by using multichannel inverse filters. Issues concerning sensor deployment are also investigated for the nearfield array. The uniform array outperformed a random array previously optimized for far-field imaging, which contradicts the conventional wisdom in far-field arrays. For applications in which only a patch array with few sensors is available, a virtual microphone approach is employed to ameliorate edge effects using extrapolation and to improve imaging resolution using interpolation. To enhance the processing efficiency of the time-domain NESI, an eigensystem realization algorithm (ERA) is developed. Several filtering methods are compared in terms of computational complexity. Significant savings in computation can be achieved using ERA and the frequency-domain NESI, as compared to the traditional method. The NESI technique was also experimentally validated using practical sources including a 125 cc scooter and a wooden box model with a loudspeaker fitted inside. The NESI technique proved effective in identifying the broadband and non-stationary signals produced by these sources.
Quantum currents and pair correlation of electrons in a chain of localized dots
NASA Astrophysics Data System (ADS)
Morawetz, Klaus
2017-03-01
The quantum transport of electrons in a wire of localized dots by hopping, interaction and dissipation is calculated, and a representation by an equivalent RCL circuit is found. The exact solution for the electric-field-induced currents allows us to discuss the role of virtual currents in the decay of initial correlations and in Bloch oscillations. The dynamical response function in the random phase approximation (RPA) is calculated analytically, with the help of which the static structure function and pair correlation function are determined. The pair correlation function contains a form factor from the Brillouin zone and a structure factor caused by the localized dots in the wire.
Kiluk, Brian D.; Sugarman, Dawn E.; Nich, Charla; Gibbons, Carly J.; Martino, Steve; Rounsaville, Bruce J.; Carroll, Kathleen M.
2013-01-01
Objective: Computer-assisted therapies offer a novel, cost-effective strategy for providing evidence-based therapies to a broad range of individuals with psychiatric disorders. However, the extent to which the growing body of randomized trials evaluating computer-assisted therapies meets current standards of methodological rigor for evidence-based interventions is not clear. Method: A methodological analysis of randomized clinical trials of computer-assisted therapies for adult psychiatric disorders, published between January 1990 and January 2010, was conducted. Seventy-five studies that examined computer-assisted therapies for a range of Axis I disorders were evaluated using a 14-item methodological quality index. Results: Results indicated marked heterogeneity in study quality. No study met all 14 basic quality standards, and three met 13 criteria. Consistent weaknesses were noted in evaluation of treatment exposure and adherence, rates of follow-up assessment, and conformity to intention-to-treat principles. Studies utilizing weaker comparison conditions (e.g., wait-list controls) had poorer methodological quality scores and were more likely to report effects favoring the computer-assisted condition. Conclusions: While several well-conducted studies have indicated promising results for computer-assisted therapies, this emerging field has not yet achieved a level of methodological quality equivalent to that required for other evidence-based behavioral therapies or pharmacotherapies. Adoption of more consistent standards for methodological quality in this field, with greater attention to potential adverse events, is needed before computer-assisted therapies are widely disseminated or marketed as evidence based. PMID:21536689
Lyapounov Functions of Closed Cone Fields: From Conley Theory to Time Functions
NASA Astrophysics Data System (ADS)
Bernard, Patrick; Suhr, Stefan
2018-03-01
We propose a theory "à la Conley" for cone fields using a notion of relaxed orbits based on cone enlargements, in the spirit of space time geometry. We work in the setting of closed (or equivalently semi-continuous) cone fields with singularities. This setting contains (for questions which are parametrization independent such as the existence of Lyapounov functions) the case of continuous vector-fields on manifolds, of differential inclusions, of Lorentzian metrics, and of continuous cone fields. We generalize to this setting the equivalence between stable causality and the existence of temporal functions. We also generalize the equivalence between global hyperbolicity and the existence of a steep temporal function.
NASA Astrophysics Data System (ADS)
Athar, Basit S.; Paganetti, Harald
2009-08-01
In this work we have simulated the absorbed equivalent doses to various organs distant to the field edge assuming proton therapy treatments of brain or spine lesions. We have used computational whole-body (gender-specific and age-dependent) voxel phantoms and considered six treatment fields with varying treatment volumes and depths. The maximum neutron equivalent dose to organs near the field edge was found to be approximately 8 mSv Gy-1. We were able to clearly demonstrate that organ-specific neutron equivalent doses are age (stature) dependent. For example, assuming an 8-year-old patient, the dose to brain from the spinal fields ranged from 0.04 to 0.10 mSv Gy-1, whereas the dose to the brain assuming a 9-month-old patient ranged from 0.5 to 1.0 mSv Gy-1. Further, as the field aperture opening increases, the secondary neutron equivalent dose caused by the treatment head decreases, while the secondary neutron equivalent dose caused by the patient itself increases. To interpret the dosimetric data, we analyzed second cancer incidence risks for various organs as a function of patient age and field size based on two risk models. The results show that, for example, in an 8-year-old female patient treated with a spinal proton therapy field, breasts, lungs and rectum have the highest radiation-induced lifetime cancer incidence risks. These are estimated to be 0.71%, 1.05% and 0.60%, respectively. For an 11-year-old male patient treated with a spinal field, bronchi and rectum show the highest risks of 0.32% and 0.43%, respectively. Risks for male and female patients increase as their age at treatment time decreases.
Asymptotic Equivalence of Probability Measures and Stochastic Processes
NASA Astrophysics Data System (ADS)
Touchette, Hugo
2018-03-01
Let P_n and Q_n be two probability measures representing two different probabilistic models of some system (e.g., an n-particle equilibrium system, a set of random graphs with n vertices, or a stochastic process evolving over a time n) and let M_n be a random variable representing a "macrostate" or "global observable" of that system. We provide sufficient conditions, based on the Radon-Nikodym derivative of P_n and Q_n, for the set of typical values of M_n obtained relative to P_n to be the same as the set of typical values obtained relative to Q_n in the limit n→ ∞. This extends to general probability measures and stochastic processes the well-known thermodynamic-limit equivalence of the microcanonical and canonical ensembles, related mathematically to the asymptotic equivalence of conditional and exponentially-tilted measures. In this more general sense, two probability measures that are asymptotically equivalent predict the same typical or macroscopic properties of the system they are meant to model.
NASA Astrophysics Data System (ADS)
Ni, Yong; He, Linghui; Khachaturyan, Armen G.
2010-07-01
A phase field method is proposed to determine the equilibrium fields of a magnetoelectroelastic multiferroic with arbitrarily distributed constitutive constants under applied loadings. This method is based on a developed generalized Eshelby's equivalency principle, in which the elastic strain, electrostatic, and magnetostatic fields at equilibrium in the original heterogeneous system are exactly the same as those in an equivalent homogeneous magnetoelectroelastic coupled or uncoupled system with properly chosen distributed effective eigenstrain, polarization, and magnetization fields. Finding these effective fields fully solves the equilibrium elasticity, electrostatics, and magnetostatics in the original heterogeneous multiferroic. The paper formulates a variational principle proving that the effective fields are minimizers of an appropriate closed-form energy functional. The proposed phase field approach produces the energy-minimizing effective fields (and thus solves the general multiferroic problem) as a result of an artificial relaxation process described by the Ginzburg-Landau-Khalatnikov kinetic equations.
Coherent states field theory in supramolecular polymer physics
NASA Astrophysics Data System (ADS)
Fredrickson, Glenn H.; Delaney, Kris T.
2018-05-01
In 1970, Edwards and Freed presented an elegant representation of interacting branched polymers that resembles the coherent states (CS) formulation of second-quantized field theory. This CS polymer field theory has been largely overlooked during the intervening period in favor of more conventional "auxiliary field" (AF) interacting polymer representations that form the basis of modern self-consistent field theory (SCFT) and field-theoretic simulation approaches. Here we argue that the CS representation provides a simpler and computationally more efficient framework than the AF approach for broad classes of reversibly bonding polymers encountered in supramolecular polymer science. The CS formalism is reviewed, initially for a simple homopolymer solution, and then extended to supramolecular polymers capable of forming reversible linkages and networks. In the context of the Edwards model of a non-reacting homopolymer solution and one and two-component models of telechelic reacting polymers, we discuss the structure of CS mean-field theory, including the equivalence to SCFT, and show how weak-amplitude expansions (random phase approximations) can be readily developed without explicit enumeration of all reaction products in a mixture. We further illustrate how to analyze CS field theories beyond SCFT at the level of Gaussian field fluctuations and provide a perspective on direct numerical simulations using a recently developed complex Langevin technique.
NASA Astrophysics Data System (ADS)
Bi, Chuan-Xing; Geng, Lin; Zhang, Xiao-Zheng
2016-05-01
In a sound field with multiple non-stationary sources, the measured pressure is the sum of the pressures generated by all sources, and thus cannot be used directly for studying the vibration and sound radiation characteristics of any single source. This paper proposes a separation model based on the interpolated time-domain equivalent source method (ITDESM) to separate the pressure field belonging to each source from the non-stationary multi-source sound field. In the proposed method, ITDESM is first extended to establish the relationship between the mixed time-dependent pressure and all the equivalent sources distributed on every source with known location and geometry information, and all the equivalent source strengths at each time step are solved by an iterative process; then, the equivalent source strengths corresponding to one source of interest are used to calculate the pressure field generated by that source alone. Numerical simulation of two baffled circular pistons demonstrates that the proposed method can effectively separate the non-stationary pressure generated by each source in both the time and space domains. An experiment with two speakers in a semi-anechoic chamber further demonstrates the effectiveness of the proposed method.
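The core separation idea, solving for all equivalent source strengths jointly from the mixed measurement and then re-radiating one source alone, can be sketched in a discrete linear-algebra form. The transfer matrices below are random placeholders, not the time-domain Green's functions an actual ITDESM implementation would build from source geometry:

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical discrete picture: the measured pressures are a linear mix of
# the equivalent source strengths through propagation (transfer) matrices.
n_mics, n_src = 40, 10
G1 = rng.normal(size=(n_mics, n_src))   # placeholder transfer matrix, source 1 -> mics
G2 = rng.normal(size=(n_mics, n_src))   # placeholder transfer matrix, source 2 -> mics
q1 = rng.normal(size=n_src)             # true strengths on source 1
q2 = rng.normal(size=n_src)             # true strengths on source 2
p_mixed = G1 @ q1 + G2 @ q2             # measured mixed pressure field

# Solve for all equivalent source strengths jointly, then reconstruct the
# pressure generated by source 1 alone from its own strengths.
G = np.hstack([G1, G2])
q_est, *_ = np.linalg.lstsq(G, p_mixed, rcond=None)
p1_est = G1 @ q_est[:n_src]
```

In the noise-free, overdetermined case the least-squares solve recovers the strengths exactly, so the re-radiated field of source 1 matches its true contribution.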
Sirriyeh, Reema; Lawton, Rebecca; Ward, Jane
2010-11-01
The present study develops a novel intervention using affective messages as a strategy to increase physical activity (PA) levels in adolescents, and pilots its feasibility and efficacy. Design An exploratory pilot randomized controlled trial was used to compare behaviour change over 2 weeks. A modified form of the International Physical Activity Questionnaire was used to assess PA behaviour. A total of 120 adolescents (16-19 years) from 4 sixth forms in West Yorkshire completed the field-based study. Participants were randomly assigned to one of three experimental conditions, or the control condition (N=28). Participants in experimental conditions received 1 short messaging service (SMS) text message per day over the 2 weeks, which included manipulations of either affective beliefs (enjoyable/unenjoyable; N=31), instrumental beliefs (beneficial/harmful; N=30), or a combination of these (N=31). Control participants received one SMS text message per week. Outcomes were measured at baseline and at the end of the 2-week intervention. PA levels increased by the equivalent of 31.5 minutes of moderate (four metabolic equivalents) activity per week during the study. Main effects of condition (p=.049) and current physical activity level (p=.002) were identified, along with a significant interaction between condition and current activity level (p=.006). However, when the sample was split at baseline into active and inactive participants, a main effect of condition remained for inactive participants only (p=.001). Post hoc analysis revealed that inactive participants who received messages targeting affective beliefs increased their activity levels significantly more than the instrumental (p=.012), combined (p=.002), and control groups (p=.018). Strategies based on affective associations may be more effective for increasing PA levels in inactive individuals.
10 CFR 431.383 - Enforcement process for electric motors.
Code of Federal Regulations, 2014 CFR
2014-01-01
... general purpose electric motor of equivalent electrical design and enclosure rather than replacing the... equivalent electrical design and enclosure rather than machining and attaching an endshield. ... sample of up to 20 units will then be randomly selected from one or more subdivided groups within the...
Do Arthroscopic Fluid Pumps Display True Surgical Site Pressure During Hip Arthroscopy?
Ross, Jeremy A; Marland, Jennifer D; Payne, Brayden; Whiting, Daniel R; West, Hugh S
2018-01-01
To report on the accuracy of 5 commercially available arthroscopic fluid pumps to measure fluid pressure at the surgical site during hip arthroscopy. Patients undergoing hip arthroscopy for femoroacetabular impingement were block randomized to the use of 1 of 5 arthroscopic fluid pumps. A spinal needle inserted into the operative field was used to measure surgical site pressure. Displayed pump pressures and surgical site pressures were recorded at 30-second intervals for the duration of the case. Mean differences between displayed pump pressures and surgical site pressures were obtained for each pump group. Of the 5 pumps studied, 3 (Crossflow, 24K, and Continuous Wave III) reflected the operative field fluid pressure within 11 mm Hg of the pressure readout. In contrast, 2 of the 5 pumps (Double Pump RF and FMS/DUO+) showed a difference of greater than 59 mm Hg between the operative field fluid pressure and the pressure readout. Joint-calibrated pumps more closely reflect true surgical site pressure than gravity-equivalent pumps. With a basic understanding of pump design, either type of pump can be used safely and efficiently. The risk of unfamiliarity with these differences is, on one end, the possibility of pump underperformance and, on the other, potentially dangerously high operating pressures. Level II, prospective block-randomized study. Copyright © 2017. Published by Elsevier Inc.
Generalization of one-dimensional solute transport: A stochastic-convective flow conceptualization
NASA Astrophysics Data System (ADS)
Simmons, C. S.
1986-04-01
A stochastic-convective representation of one-dimensional solute transport is derived. It is shown to conceptually encompass solutions of the conventional convection-dispersion equation. This stochastic approach, however, does not rely on the assumption that dispersive flux satisfies Fick's diffusion law. Observable values of solute concentration and flux, which together satisfy a conservation equation, are expressed as expectations over a flow velocity ensemble, representing the inherent random processes that govern dispersion. Solute concentration is determined by a Lagrangian pdf for random spatial displacements, while flux is determined by an equivalent Eulerian pdf for random travel times. A condition for such equivalence is derived for steady nonuniform flow, and it is proven that both Lagrangian and Eulerian pdfs are required to account for specified initial and boundary conditions on a global scale. Furthermore, simplified modeling of transport is justified by proving that an ensemble of effectively constant velocities always exists that constitutes an equivalent representation. An example of how a two-dimensional transport problem can be reduced to a single-dimensional stochastic viewpoint is also presented to further clarify concepts.
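A minimal numerical rendering of the stochastic-convective viewpoint: each ensemble member convects at a constant random velocity, and the observable concentration is an expectation over that velocity ensemble. The lognormal velocity distribution and all parameters here are arbitrary choices for illustration, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def ensemble_concentration(x_edges, t, velocities):
    """Ensemble-average concentration of a unit pulse released at x = 0:
    each realization is pure convection, x(t) = v * t, and the observable
    concentration is the expectation over the velocity ensemble, estimated
    here as a normalized histogram of particle positions."""
    positions = velocities * t
    c, _ = np.histogram(positions, bins=x_edges, density=True)
    return c

v = rng.lognormal(mean=0.0, sigma=0.5, size=100_000)  # illustrative ensemble
x_edges = np.linspace(0.0, 10.0, 101)
c = ensemble_concentration(x_edges, t=2.0, velocities=v)
```

The spreading of the pulse comes entirely from velocity variability across the ensemble, with no Fickian dispersive flux assumed, which is the point of the stochastic-convective representation.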
Eley, John; Newhauser, Wayne; Homann, Kenneth; Howell, Rebecca; Schneider, Christopher; Durante, Marco; Bert, Christoph
2015-01-01
Equivalent dose from neutrons produced during proton radiotherapy increases the predicted risk of radiogenic late effects. However, out-of-field neutron dose is not taken into account by commercial proton radiotherapy treatment planning systems. The purpose of this study was to demonstrate the feasibility of implementing an analytical model to calculate leakage neutron equivalent dose in a treatment planning system. Passive scattering proton treatment plans were created for a water phantom and for a patient. For both the phantom and patient, the neutron equivalent doses were small but non-negligible and extended far beyond the therapeutic field. The time required for neutron equivalent dose calculation was 1.6 times longer than that required for proton dose calculation, with a total calculation time of less than 1 h on one processor for both treatment plans. Our results demonstrate that it is feasible to predict neutron equivalent dose distributions using an analytical dose algorithm for individual patients with irregular surfaces and internal tissue heterogeneities. Eventually, personalized estimates of neutron equivalent dose to organs far from the treatment field may guide clinicians to create treatment plans that reduce the risk of late effects. PMID:25768061
A stochastic-dynamic model for global atmospheric mass field statistics
NASA Technical Reports Server (NTRS)
Ghil, M.; Balgovind, R.; Kalnay-Rivas, E.
1981-01-01
A model that yields the spatial correlation structure of atmospheric mass field forecast errors was developed. The model is governed by the potential vorticity equation forced by random noise. In the first solution method, the equation was expanded in spherical harmonics and the correlation function was computed analytically from the expansion coefficients. In the second, the finite-difference equivalent was solved using a fast Poisson solver and the correlation function was computed by stratified sampling of individual realizations of F(omega), and hence of phi(omega). In the third, a higher-order equation for gamma was derived and solved directly in finite differences by two successive applications of the fast Poisson solver. The methods were compared for accuracy and efficiency, and the third method was chosen as clearly superior. The results agree well with the latitude dependence of observed atmospheric correlation data. The value of the parameter c sub o which gives the best fit to the data is close to the value expected from dynamical considerations.
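The "fast Poisson solver" ingredient can be sketched in its simplest setting, a periodic 1-D grid, where the finite-difference Laplacian is diagonalized by the FFT. This is only a minimal analogue of the solver used in the paper, which works on the sphere:

```python
import numpy as np

def poisson_fft_periodic(f, dx):
    """Fast Poisson solver on a periodic 1-D grid: solve u'' = f by dividing
    by the eigenvalues (symbol) of the 3-point finite-difference Laplacian
    in Fourier space. The zero mode is set to zero (mean-zero solution)."""
    n = len(f)
    k = np.fft.fftfreq(n, d=dx)
    lam = -4.0 * np.sin(np.pi * k * dx) ** 2 / dx**2  # FD Laplacian eigenvalues
    f_hat = np.fft.fft(f)
    u_hat = np.zeros_like(f_hat)
    u_hat[1:] = f_hat[1:] / lam[1:]
    return np.real(np.fft.ifft(u_hat))

# Verify against a known solution: u = sin(x) satisfies u'' = -sin(x).
n = 256
x = np.arange(n) * (2 * np.pi / n)
u = poisson_fft_periodic(-np.sin(x), 2 * np.pi / n)
```

One forward and one inverse FFT per solve is what makes repeated applications (as in the third method above) cheap.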
Rayleigh's hypothesis and the geometrical optics limit.
Elfouhaily, Tanos; Hahn, Thomas
2006-09-22
The Rayleigh hypothesis (RH) is often invoked in the theoretical and numerical treatment of rough surface scattering in order to decouple the analytical form of the scattered field. The hypothesis stipulates that the scattered field away from the surface can be extended down onto the rough surface even though it is formed by solely up-going waves. Traditionally this hypothesis is systematically used to derive the Volterra series under the small perturbation method, which is equivalent to the low-frequency limit. In this Letter we demonstrate that the RH also carries the high-frequency or the geometrical optics limit, at least to first order. This finding has never been explicitly derived in the literature. Our result supports the idea that the RH might be an exact solution under some constraints in the general case of random rough surfaces and not only in the case of small-slope deterministic periodic gratings.
Theory of Stochastic Laplacian Growth
NASA Astrophysics Data System (ADS)
Alekseev, Oleg; Mineev-Weinstein, Mark
2017-07-01
We generalize diffusion-limited aggregation by issuing many randomly walking particles, which stick to a cluster at discrete time steps, driving its growth. Using simple combinatorial arguments we determine the probabilities of different growth scenarios and prove that the most probable evolution is governed by the deterministic Laplacian growth equation. A potential-theoretical analysis of the growth probabilities reveals connections with the tau-function of the integrable dispersionless limit of the two-dimensional Toda hierarchy, normal matrix ensembles, and the two-dimensional Dyson gas confined in a non-uniform magnetic field. We introduce a time-dependent Hamiltonian, which generates transitions between different equivalence classes of closed curves, and prove the Hamiltonian structure of the interface dynamics. Finally, we propose a relation between the probabilities of growth scenarios and the semi-classical limit of certain correlation functions of "light" exponential operators in the Liouville conformal field theory on a pseudosphere.
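The process being generalized, classic diffusion-limited aggregation, can be sketched with a small lattice toy model: walkers launched from the box edge stick to the cluster on first contact. Grid size, particle count, and launch rule are all illustrative simplifications (real DLA studies launch from a circle far from the cluster):

```python
import numpy as np

rng = np.random.default_rng(1)
STEPS = ((1, 0), (-1, 0), (0, 1), (0, -1))

def touches_cluster(grid, x, y):
    size = grid.shape[0]
    return any(0 <= x + dx < size and 0 <= y + dy < size and grid[x + dx, y + dy]
               for dx, dy in STEPS)

def grow_dla(n_particles=30, size=25):
    """Minimal lattice DLA: random walkers launched from the box edge stick
    to the cluster (seeded at the center) when they first touch it; walkers
    that leave the box are relaunched."""
    grid = np.zeros((size, size), dtype=bool)
    grid[size // 2, size // 2] = True
    for _ in range(n_particles):
        stuck = False
        while not stuck:
            side, pos = rng.integers(4), int(rng.integers(size))
            x, y = ((0, pos), (size - 1, pos), (pos, 0), (pos, size - 1))[side]
            while True:
                if touches_cluster(grid, x, y):
                    grid[x, y] = True
                    stuck = True
                    break
                dx, dy = STEPS[rng.integers(4)]
                x, y = x + dx, y + dy
                if not (0 <= x < size and 0 <= y < size):
                    break  # walker left the box; relaunch
    return grid

cluster = grow_dla()
```

The paper's generalization issues many such walkers per time step and studies the resulting distribution over growth scenarios rather than a single trajectory.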
Oliver E. Buckley Condensed Matter Prize: Emergent gravity from interacting Majorana modes
NASA Astrophysics Data System (ADS)
Kitaev, Alexei
I will describe a concrete many-body Hamiltonian that exhibits some features of a quantum black hole. The Sachdev-Ye-Kitaev model is a system of N >> 1 Majorana modes that are all coupled by random 4-th order terms. The problem admits an approximate dynamic mean field solution. At low temperatures, there is a fluctuating collective mode that corresponds to reparametrization of time. The effective action for this mode is equivalent to dilaton gravity in two space-time dimensions. Some important questions are how to quantize the reparametrization mode in Lorentzian time, include dissipative effects, and understand this system from the quantum information perspective. Supported by the Simons Foundation, Award Number 376205.
Spherical earth gravity and magnetic anomaly analysis by equivalent point source inversion
NASA Technical Reports Server (NTRS)
Von Frese, R. R. B.; Hinze, W. J.; Braile, L. W.
1981-01-01
To facilitate geologic interpretation of satellite elevation potential field data, analysis techniques are developed and verified in the spherical domain that are commensurate with conventional flat earth methods of potential field interpretation. A powerful approach to the spherical earth problem relates potential field anomalies to a distribution of equivalent point sources by least squares matrix inversion. Linear transformations of the equivalent source field lead to corresponding geoidal anomalies, pseudo-anomalies, vector anomaly components, spatial derivatives, continuations, and differential magnetic pole reductions. A number of examples using 1 deg-averaged surface free-air gravity anomalies of POGO satellite magnetometer data for the United States, Mexico, and Central America illustrate the capabilities of the method.
NASA Astrophysics Data System (ADS)
Kry, Stephen
Introduction. External beam photon radiotherapy is a common treatment for many malignancies, but results in the exposure of the patient to radiation away from the treatment site. This out-of-field radiation irradiates healthy tissue and may lead to the induction of secondary malignancies. Out-of-field radiation is composed of photons and, at high treatment energies, neutrons. Measurement of this out-of-field dose is time consuming, often difficult, and is specific to the conditions of the measurements. Monte Carlo simulations may be a viable approach to determining the out-of-field dose quickly, accurately, and for arbitrary irradiation conditions. Methods. An accelerator head, gantry, and treatment vault were modeled with MCNPX and 6 MV and 18 MV beams were simulated. Photon doses were calculated in-field and compared to measurements made with an ion chamber in a water tank. Photon doses were also calculated out-of-field from static fields and compared to measurements made with thermoluminescent dosimeters in acrylic. Neutron fluences were calculated and compared to measurements made with gold foils. Finally, photon and neutron dose equivalents were calculated in an anthropomorphic phantom following intensity-modulated radiation therapy and compared to previously published dose equivalents. Results. The Monte Carlo model was able to accurately calculate the in-field dose. From static treatment fields, the model was also able to calculate the out-of-field photon dose within 16% at 6 MV and 17% at 18 MV and the neutron fluence within 19% on average. From the simulated IMRT treatments, the calculated out-of-field photon dose was within 14% of measurement at 6 MV and 13% at 18 MV on average. The calculated neutron dose equivalent was much lower than the measured value but is likely accurate because the measured neutron dose equivalent was based on an overestimated neutron energy. 
Based on the calculated out-of-field doses generated by the Monte Carlo model, it was possible to estimate the risk of fatal secondary malignancy, which was consistent with previous estimates except for the neutron discrepancy. Conclusions. The Monte Carlo model developed here is well suited to studying the out-of-field dose equivalent from photons and neutrons under a variety of irradiation configurations, including complex treatments on complex phantoms. Based on the calculated dose equivalents, it is possible to estimate the risk of secondary malignancy associated with out-of-field doses. The Monte Carlo model should be used to study, quantify, and minimize the out-of-field dose equivalent and associated risks received by patients undergoing radiation therapy.
Han, Jijun; Yang, Deqiang; Sun, Houjun; Xin, Sherman Xuegang
2017-01-01
The inverse method is inherently suitable for calculating the distribution of source current density associated with an irregularly structured electromagnetic target field. However, the present form of the inverse method cannot account for complex field-tissue interactions. A novel hybrid inverse/finite-difference time domain (FDTD) method that can calculate these complex field-tissue interactions for the inverse design of the source current density associated with an irregularly structured electromagnetic target field is proposed. A Huygens' equivalent surface is established as a bridge to combine the inverse and FDTD methods. The distribution of the radiofrequency (RF) magnetic field on the Huygens' equivalent surface is obtained using the FDTD method, taking into account the complex field-tissue interactions within the human body model. The magnetic field distributed on the Huygens' equivalent surface is then regarded as the new target. The current density on the designated source surface is derived using the inverse method. The homogeneity of the target magnetic field and the specific energy absorption rate are calculated to verify the proposed method.
Poletti, Mark A; Betlehem, Terence; Abhayapala, Thushara D
2014-07-01
Higher order sound sources of Nth order can radiate sound with 2N + 1 orthogonal radiation patterns, which can be represented as phase modes or, equivalently, amplitude modes. This paper shows that each phase mode response produces a spiral wave front with a different spiral rate, and therefore a different direction of arrival of sound. Hence, for a given receiver position a higher order source is equivalent to a linear array of 2N + 1 monopole sources. This interpretation suggests that performance similar to that of a circular array of higher order sources can be produced by an array of sources, each of which consists of a line array having monopoles at the apparent source locations of the corresponding phase modes. Simulations of higher order arrays and arrays of equivalent line sources are presented. It is shown that the interior fields produced by the two arrays are essentially the same, but that the exterior fields differ because the higher order sources produce different equivalent source locations for field positions outside the array. This work provides an explanation of the fact that an array of L Nth-order sources can reproduce sound fields whose accuracy approaches the performance of (2N + 1)L monopoles.
Neither fixed nor random: weighted least squares meta-regression.
Stanley, T D; Doucouliagos, Hristos
2017-03-01
Our study revisits and challenges two core conventional meta-regression estimators: the prevalent use of 'mixed-effects' or random-effects meta-regression analysis and the correction of standard errors that defines fixed-effects meta-regression analysis (FE-MRA). We show how and explain why an unrestricted weighted least squares MRA (WLS-MRA) estimator is superior to conventional random-effects (or mixed-effects) meta-regression when there is publication (or small-sample) bias, is as good as FE-MRA in all cases, and is better than fixed effects in most practical applications. Simulations and statistical theory show that WLS-MRA provides satisfactory estimates of meta-regression coefficients that are practically equivalent to mixed effects or random effects when there is no publication bias. When there is publication selection bias, WLS-MRA always has smaller bias than mixed effects or random effects. In practical applications, an unrestricted WLS meta-regression is likely to give practically equivalent or superior estimates to fixed-effects, random-effects, and mixed-effects meta-regression approaches. However, random-effects meta-regression remains viable and perhaps somewhat preferable if selection for statistical significance (publication bias) can be ruled out and when random, additive normal heterogeneity is known to directly affect the 'true' regression coefficient. Copyright © 2016 John Wiley & Sons, Ltd.
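The unrestricted WLS estimator differs from fixed-effects MRA only in not forcing the error variance to one: it keeps the inverse-variance weights but estimates a multiplicative dispersion from the residuals. A minimal sketch on synthetic data (the data-generating numbers are invented for illustration):

```python
import numpy as np

def wls_mra(effects, se, X):
    """Unrestricted WLS meta-regression: weight each estimate by its inverse
    variance 1/se^2, then scale the coefficient covariance by an estimated
    multiplicative dispersion phi (FE-MRA instead fixes phi = 1)."""
    w = 1.0 / se**2
    W = np.diag(w)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ effects)
    resid = effects - X @ beta
    k, p = X.shape
    phi = (w * resid**2).sum() / (k - p)       # estimated dispersion
    cov = phi * np.linalg.inv(X.T @ W @ X)
    return beta, np.sqrt(np.diag(cov))

rng = np.random.default_rng(2)
k = 200
se = rng.uniform(0.1, 0.5, k)                  # reported standard errors
x = rng.normal(size=k)                         # one moderator variable
effects = 0.5 + 0.3 * x + rng.normal(scale=se) # true intercept 0.5, slope 0.3
X = np.column_stack([np.ones(k), x])
beta, beta_se = wls_mra(effects, se, X)
```

With no publication bias and no extra heterogeneity, as here, the WLS point estimates coincide with fixed-effects MRA and closely track random effects, which is the "practically equivalent" case described in the abstract.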
Formation, dissolution and properties of surface nanobubbles.
Che, Zhizhao; Theodorakis, Panagiotis E
2017-02-01
Surface nanobubbles are stable gaseous phases in liquids that form on solid substrates. While their existence has been confirmed, there are many open questions related to their formation and dissolution processes along with their structures and properties, which are difficult to investigate experimentally. To address these issues, we carried out molecular dynamics simulations based on atomistic force fields for systems comprised of water, air (N2 and O2), and a Highly Oriented Pyrolytic Graphite (HOPG) substrate. Our results provide insights into the formation/dissolution mechanisms of nanobubbles and estimates for their density, contact angle, and surface tension. We found that the formation of nanobubbles is driven by an initial nucleation process of air molecules and the subsequent coalescence of the formed air clusters. The clusters form favorably on the substrate, which provides an enhanced stability to the clusters. In contrast, nanobubbles formed in the bulk either move randomly to the substrate and spread or move to the water-air surface and pop immediately. Moreover, nanobubbles consist of a condensed gaseous phase with a surface tension smaller than that of an equivalent system under atmospheric conditions, and contact angles larger than those in the equivalent nanodroplet case. We anticipate that this study will provide useful insights into the physics of nanobubbles and will stimulate further research in the field by using all-atom simulations. Copyright © 2016 Elsevier Inc. All rights reserved.
Arithmetic Practice Can Be Modified to Promote Understanding of Mathematical Equivalence
ERIC Educational Resources Information Center
McNeil, Nicole M.; Fyfe, Emily R.; Dunwiddie, April E.
2015-01-01
This experiment tested if a modified version of arithmetic practice facilitates understanding of math equivalence. Children within 2nd-grade classrooms (N = 166) were randomly assigned to practice single-digit addition facts using 1 of 2 workbooks. In the control workbook, problems were presented in the traditional "operations = answer"…
NASA Astrophysics Data System (ADS)
Campbell, J.; Dean, J.; Clyne, T. W.
2017-02-01
This study concerns a commonly-used procedure for evaluating the steady state creep stress exponent, n, from indentation data. The procedure involves monitoring the indenter displacement history under constant load and making the assumption that, once its velocity has stabilised, the system is in a quasi-steady state, with stage II creep dominating the behaviour. The stress and strain fields under the indenter are represented by "equivalent stress" and "equivalent strain rate" values. The estimate of n is then obtained as the gradient of a plot of the logarithm of the equivalent strain rate against the logarithm of the equivalent stress. Concerns have, however, been expressed about the reliability of this procedure, and indeed it has already been shown to be fundamentally flawed. In the present paper, it is demonstrated, using a very simple analysis, that, for a genuinely stable velocity, the procedure always leads to the same, constant value for n (either 1.0 or 0.5, depending on whether the tip shape is spherical or self-similar). This occurs irrespective of the value of the measured velocity, or indeed of any creep characteristic of the material. It is now clear that previously-measured values of n, obtained using this procedure, have varied in a more or less random fashion, depending on the functional form chosen to represent the displacement-time history and the experimental variables (tip shape and size, penetration depth, etc.), with little or no sensitivity to the true value of n.
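The degeneracy demonstrated in this abstract is easy to reproduce numerically: with a genuinely constant indenter velocity, the log-log gradient of equivalent strain rate against equivalent stress is fixed by tip geometry alone. The contact-area relations below are the usual small-depth approximations, used here only for illustration of the flawed procedure:

```python
import numpy as np

def apparent_n(h, t, P=1.0, tip="self-similar"):
    """The flawed procedure: form 'equivalent' stress and strain rate from a
    constant-load indentation history and take n as the slope of
    log(strain rate) vs log(stress)."""
    hdot = np.gradient(h, t)
    strain_rate = hdot / h                     # equivalent strain rate ~ hdot/h
    # contact area ~ h^2 for a self-similar tip, ~ h for a sphere (small depth)
    area = h**2 if tip == "self-similar" else h
    stress = P / area                          # equivalent stress ~ P/area
    return np.polyfit(np.log(stress), np.log(strain_rate), 1)[0]

t = np.linspace(1.0, 10.0, 200)
h = 0.1 * t  # stabilized, constant indenter velocity (any material!)
n_ss = apparent_n(h, t, tip="self-similar")   # -> 0.5 regardless of material
n_sph = apparent_n(h, t, tip="spherical")     # -> 1.0 regardless of material
```

With h = v t, the strain rate is 1/t and the stress scales as t^-2 (self-similar) or t^-1 (spherical), so the slope is pinned at 0.5 or 1.0 with no sensitivity to any creep property, exactly as the abstract argues.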
Isacsson, Göran; Nohlert, Eva; Fransson, Anette M C; Bornefalk-Hermansson, Anna; Wiman Eriksson, Eva; Ortlieb, Eva; Trepp, Livia; Avdelius, Anna; Sturebrand, Magnus; Fodor, Clara; List, Thomas; Schumann, Mohamad; Tegelberg, Åke
2018-05-16
The clinical benefit of bibloc over monobloc appliances in treating obstructive sleep apnoea (OSA) has not been evaluated in randomized trials. We hypothesized that the two types of appliances are equally effective in treating OSA. To compare the efficacy of monobloc versus bibloc appliances in a short-term perspective. In this multicentre, randomized, blinded, controlled, parallel-group equivalence trial, patients with OSA were randomly assigned to use either a bibloc or a monobloc appliance. One-night respiratory polygraphy without respiratory support was performed at baseline, and participants were re-examined with the appliance in place at short-term follow-up. The primary outcome was the change in the apnoea-hypopnoea index (AHI). An independent person prepared a randomization list and sealed envelopes. The evaluating dentist and the biomedical analysts who evaluated the polygraphy were blinded to the choice of therapy. Of 302 patients, 146 were randomly assigned to use the bibloc and 156 the monobloc device; 123 and 139 patients, respectively, were analysed per protocol. The mean changes in AHI were -13.8 (95% confidence interval -16.1 to -11.5) in the bibloc group and -12.5 (-14.8 to -10.3) in the monobloc group. The difference of -1.3 (-4.5 to 1.9) was significant within the equivalence interval (P = 0.011; the greater of the two P values) and was confirmed by the intention-to-treat analysis (P = 0.001). The adverse events were of mild character and were experienced by similar percentages of patients in both groups (39 and 40 per cent for the bibloc and monobloc groups, respectively). The study reports short-term results, with a median time from commencing treatment to the evaluation visit of 56 days; long-term data on efficacy and harm are needed to be fully conclusive. In a short-term perspective, both appliances were equivalent in terms of their positive effects for treating OSA and caused adverse events of similar magnitude.
Registered with ClinicalTrials.gov (#NCT02148510).
NASA Astrophysics Data System (ADS)
Angeli, Andrea; Cornelis, Bram; Troncossi, Marco
2018-03-01
In many real-life environments, mechanical and electronic systems are subjected to vibrations that may induce dynamic loads and potentially lead to early failure due to fatigue damage. Thus, qualification tests by means of shakers are advisable for the most critical components in order to verify their durability throughout the entire life cycle. Nowadays the trend is to tailor the qualification tests to the specific application of the tested component, considering measured field data as the reference for setting up the experimental campaign, for example through the so-called "Mission Synthesis" methodology. One of the main issues is to define the excitation profiles for the tests, which must reproduce not only the (potentially scaled) frequency content but also the damage potential of the field data, despite being applied for a limited duration. With this target, the current procedures generally provide the test profile as a stationary random vibration specified by a Power Spectral Density (PSD). In certain applications this output may prove inadequate to represent the nature of the reference signal, and the procedure could result in an unrealistic qualification test. For instance, when a rotating part is present in the system, the component under analysis may be subjected to Sine-on-Random (SoR) vibrations, namely excitations composed of sinusoidal contributions superimposed on random vibrations. In this case, the synthesized test profile should preserve not only the induced fatigue damage but also the deterministic components of the environmental vibration. In this work, the potential advantages of a novel procedure that synthesizes SoR profiles instead of PSDs for qualification tests are presented and supported by the results of an experimental campaign.
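A Sine-on-Random profile in its simplest form is Gaussian random vibration, specified here by a flat PSD, with deterministic tones superimposed. All levels and frequencies below are invented for illustration; a real Mission Synthesis procedure would derive them from fatigue-damage and shock-response spectra of the measured field data:

```python
import numpy as np

rng = np.random.default_rng(3)

def sine_on_random(duration, fs, psd_level, tones, rng):
    """Generate a Sine-on-Random time history: band-limited Gaussian noise
    whose one-sided PSD is flat at psd_level up to fs/2 [unit^2/Hz], plus
    deterministic sine tones given as (frequency_Hz, amplitude) pairs."""
    n = int(duration * fs)
    t = np.arange(n) / fs
    # white noise variance = (one-sided PSD level) * (bandwidth fs/2)
    x = rng.normal(scale=np.sqrt(psd_level * fs / 2.0), size=n)
    for freq, amp in tones:
        x += amp * np.sin(2 * np.pi * freq * t)
    return t, x

# illustrative profile: 0.01 g^2/Hz broadband + tones at 50 Hz and 120 Hz
t, x = sine_on_random(10.0, 2048.0, psd_level=0.01,
                      tones=[(50.0, 2.0), (120.0, 1.0)], rng=rng)
```

The total variance is the broadband part (PSD level times bandwidth) plus amplitude squared over two for each tone, which is the property a damage-equivalent SoR synthesis must track separately for the random and deterministic parts.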
High School Equivalency Assessment and Recognition in the United States: An Eyewitness Account
ERIC Educational Resources Information Center
McLendon, Lennox
2017-01-01
This chapter on high school equivalency describes recent events involved in updating the adult education high school equivalency assessment services and the entrance of additional assessments into the field.
NASA Astrophysics Data System (ADS)
Gjetvaj, Filip; Russian, Anna; Gouze, Philippe; Dentz, Marco
2015-10-01
Both flow field heterogeneity and mass transfer between mobile and immobile domains have been studied separately for explaining observed anomalous transport. Here we investigate non-Fickian transport using high-resolution 3-D X-ray microtomographic images of Berea sandstone containing microporous cement with pore size below the setup resolution. Transport is computed for a set of representative elementary volumes and results from advection and diffusion in the resolved macroporosity (mobile domain) and diffusion in the microporous phase (immobile domain), where the effective diffusion coefficient is calculated from the measured local porosity using a phenomenological model that includes a porosity threshold (ϕθ) below which diffusion is null and an exponent n that characterizes the tortuosity-porosity power-law relationship. We show that both flow field heterogeneity and microporosity trigger anomalous transport. Breakthrough curve (BTC) tailing is positively correlated with microporosity volume and mobile-immobile interface area. The sensitivity analysis showed that BTC tailing increases with the value of ϕθ, due to the increase of the diffusion path tortuosity, until the volume of the microporosity becomes negligible. Furthermore, increasing the value of n leads to an increase in the standard deviation of the distribution of effective diffusion coefficients, which in turn results in an increase of the BTC tailing. Finally, we propose a continuous time random walk upscaled model in which the transition time is the sum of independently distributed random variables characterized by specific distributions. It allows modeling of 1-D equivalent macroscopic transport honoring both the control of the flow field heterogeneity and the multirate mass transfer between mobile and immobile domains.
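The proposed upscaling, transition times that are sums of independently distributed contributions, can be sketched as a 1-D continuous time random walk: each spatial step costs an advective time plus a heavy-tailed immobile (trapping) time. The distributions and parameters below are placeholders for illustration, not the ones fitted in the paper:

```python
import numpy as np

rng = np.random.default_rng(4)

def breakthrough_times(n_particles, n_steps):
    """1-D CTRW sketch: a particle needs n_steps fixed-length transitions to
    cross the sample; each transition time is the sum of an advective part
    (exponential, the mobile domain) and a trapping part (Pareto, heavy
    tailed, the immobile domain), so late-time BTC tailing emerges."""
    t_mobile = rng.exponential(1.0, size=(n_particles, n_steps))
    t_trapped = rng.pareto(1.5, size=(n_particles, n_steps))
    return (t_mobile + t_trapped).sum(axis=1)

arrivals = breakthrough_times(50_000, 20)
# the heavy trapping tail skews the arrival-time distribution far to the right
```

The histogram of `arrivals` is the breakthrough curve; the Pareto trapping component is what produces the power-law tailing that a purely advective-diffusive model cannot reproduce.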
Continuous time quantum random walks in free space
NASA Astrophysics Data System (ADS)
Eichelkraut, Toni; Vetter, Christian; Perez-Leija, Armando; Christodoulides, Demetrios; Szameit, Alexander
2014-05-01
We show theoretically and experimentally that two-dimensional continuous time coherent random walks are possible in free space, that is, in the absence of any external potential, by properly tailoring the associated initial wave function. These effects are demonstrated experimentally using classical paraxial light. The use of classical beams to explore the dynamics of point-like quantum particles is possible because the two phenomena are mathematically equivalent. This in turn makes our approach suitable for the realization of random walks using different quantum particles, including electrons and photons. To study the spatial evolution of a wave function theoretically, we consider the one-dimensional paraxial wave equation (i∂_z + (1/2)∂_x²)Ψ = 0. Starting with the initially localized wave function Ψ(x, 0) = exp[-x²/(2σ²)] J₀(αx), one can show that the evolution of such Gaussian-apodized Bessel envelopes within a region of validity resembles the probability pattern of a quantum walker traversing a uniform lattice. To generate the desired input field in our experimental setting, we shape the amplitude and phase of a collimated light beam from a classical HeNe laser (633 nm) using a spatial light modulator.
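The quoted paraxial evolution can be checked numerically: in free space, (i∂_z + (1/2)∂_x²)Ψ = 0 is solved exactly by a single Fourier step, Ψ̂(k, z) = Ψ̂(k, 0) exp(-ik²z/2). A sketch with illustrative (not the experiment's) grid and beam parameters:

```python
import numpy as np
from scipy.special import j0  # Bessel function of the first kind, order 0

def free_space_propagate(psi0, x, z):
    """Exact free-space step for (i d/dz + (1/2) d^2/dx^2) psi = 0:
    multiply the spatrum in k-space by exp(-i k^2 z / 2)."""
    k = 2 * np.pi * np.fft.fftfreq(x.size, d=x[1] - x[0])
    return np.fft.ifft(np.fft.fft(psi0) * np.exp(-0.5j * k**2 * z))

# Gaussian-apodized Bessel input; sigma and alpha are illustrative values.
x = np.linspace(-60.0, 60.0, 4096)
sigma, alpha = 20.0, 2.0
psi0 = np.exp(-x**2 / (2 * sigma**2)) * j0(alpha * x)
psi = free_space_propagate(psi0, x, z=5.0)
# The spectral phase factor has unit modulus, so the L2 norm is conserved.
```

Plotting |psi|² for increasing z reveals the ballistic, two-lobed spreading characteristic of a continuous-time quantum walk rather than diffusive Gaussian spreading.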
ERIC Educational Resources Information Center
Hunt, Jessica H.
2014-01-01
The purpose of this study was to examine the effects of a Tier 2 supplemental intervention focused on rational number equivalency concepts and applications on the mathematics performance of third-grade students with and without mathematics difficulties. The researcher used a pretest-posttest control group design and random assignment of 19…
ERIC Educational Resources Information Center
Kariuki, Patrick; Gentry, Christi
2010-01-01
The purpose of this study was to examine the effects of Accelerated Math utilization on students' grade equivalency scores. Twelve students for both experimental and control groups were randomly selected from 37 students enrolled in math in grades four through six. The experimental group consisted of the students who actively participated in…
Section Preequating under the Equivalent Groups Design without IRT
ERIC Educational Resources Information Center
Guo, Hongwen; Puhan, Gautam
2014-01-01
In this article, we introduce a section preequating (SPE) method (linear and nonlinear) under the randomly equivalent groups design. In this equating design, sections of Test X (a future new form) and another existing Test Y (an old form already on scale) are administered. The sections of Test X are equated to Test Y, after adjusting for the…
Hakim, B M; Beard, B B; Davis, C C
2018-01-01
Specific absorption rate (SAR) measurements require accurate calculations of the dielectric properties of tissue-equivalent liquids and associated calibration of E-field probes. We developed a precise tissue-equivalent dielectric measurement and E-field probe calibration system. The system consists of a rectangular waveguide, electric field probe, and data control and acquisition system. Dielectric properties are calculated using the field attenuation factor inside the tissue-equivalent liquid and power reflectance inside the waveguide at the air/dielectric-slab interface. Calibration factors were calculated using isotropicity measurements of the E-field probe. The frequencies used are 900 MHz and 1800 MHz. The uncertainties of the measured values are within ±3%, at the 95% confidence level. Using the same waveguide for dielectric measurements as well as calibrating E-field probes used in SAR assessments eliminates a source of uncertainty. Moreover, we clearly identified the system parameters that affect the overall uncertainty of the measurement system. PMID:29520129
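The two quantities the system measures, field attenuation inside the liquid and power reflectance at the air/dielectric interface, both follow from the complex permittivity. A sketch for a normal-incidence plane wave (the paper uses waveguide modes, so the exact expressions differ; the tissue parameters below are approximate illustrative values for a 900 MHz head-tissue-equivalent liquid):

```python
import numpy as np

eps0 = 8.8541878128e-12  # vacuum permittivity, F/m
c = 299792458.0          # speed of light, m/s

def attenuation_and_reflectance(f, eps_r, sigma):
    """Plane-wave sketch: attenuation constant (Np/m) inside a lossy
    dielectric half-space and power reflectance at the air interface."""
    w = 2 * np.pi * f
    eps_c = eps_r - 1j * sigma / (w * eps0)  # complex relative permittivity
    n = np.sqrt(eps_c)                       # complex refractive index
    gamma = 1j * w / c * n                   # propagation constant
    alpha = gamma.real                       # attenuation, Np/m
    R = abs((1 - n) / (1 + n)) ** 2          # power reflectance from air side
    return alpha, R

# Approximate head-tissue-equivalent liquid values at 900 MHz (illustrative).
alpha, R = attenuation_and_reflectance(900e6, eps_r=41.5, sigma=0.97)
```

In the paper's setup these two quantities are instead extracted from measurements inside the waveguide, which is what ties the dielectric characterization and the probe calibration to the same hardware.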
A Note on Equivalence Among Various Scalar Field Models of Dark Energies
NASA Astrophysics Data System (ADS)
Mandal, Jyotirmay Das; Debnath, Ujjal
2017-08-01
In this work, we have tried to find out similarities between various available models of scalar field dark energies (e.g., quintessence, k-essence, tachyon, phantom, quintom, dilatonic dark energy, etc). We have defined an equivalence relation from elementary set theory between scalar field models of dark energies and used fundamental ideas from linear algebra to set up our model. Consequently, we have obtained mutually disjoint subsets of scalar field dark energies with similar properties and discussed our observation.
Zhang, Xiao-Zheng; Bi, Chuan-Xing; Zhang, Yong-Bin; Xu, Liang
2015-05-01
Planar near-field acoustic holography has been successfully extended to reconstruct the sound field in a moving medium; however, the reconstructed field still contains the convection effect, which might lead to wrong identification of sound sources. In order to accurately identify sound sources in a moving medium, a time-domain equivalent source method is developed. In this method, the real source is replaced by a series of time-domain equivalent sources whose strengths are solved iteratively by utilizing the measured pressure and the known convective time-domain Green's function; time averaging is used to reduce instability in the iterative solving process. Since the solved equivalent source strengths are independent of the convection effect, they can be used not only to identify sound sources but also to model sound radiation in both moving and static media. Numerical simulations are performed to investigate the influence of noise on the solved equivalent source strengths and the effect of time averaging on reducing instability, and to demonstrate the advantages of the proposed method for source identification and sound radiation modeling.
Neutron scattered dose equivalent to a fetus from proton radiotherapy of the mother.
Mesoloras, Geraldine; Sandison, George A; Stewart, Robert D; Farr, Jonathan B; Hsi, Wen C
2006-07-01
Scattered neutron dose equivalent to a representative point for a fetus is evaluated in an anthropomorphic phantom of the mother undergoing proton radiotherapy. The effect on scattered neutron dose equivalent to the fetus of changing the incident proton beam energy, aperture size, beam location, and air gap between the beam delivery snout and skin was studied for both a small field snout and a large field snout. Measurements of the fetus scattered neutron dose equivalent were made by placing a neutron bubble detector 10 cm below the umbilicus of an anthropomorphic Rando phantom enhanced by a wax bolus to simulate a second trimester pregnancy. The neutron dose equivalent in milliSieverts (mSv) per proton treatment Gray increased with incident proton energy and decreased with aperture size, distance of the fetus representative point from the field edge, and increasing air gap. Neutron dose equivalent to the fetus varied from 0.025 to 0.450 mSv per proton Gray for the small field snout and from 0.097 to 0.871 mSv per proton Gray for the large field snout. There is likely to be no excess risk to the fetus of severe mental retardation for a typical proton treatment of 80 Gray to the mother since the scattered neutron dose to the fetus of 69.7 mSv is well below the lower confidence limit for the threshold of 300 mGy observed for the occurrence of severe mental retardation in prenatally exposed Japanese atomic bomb survivors. However, based on the linear no threshold hypothesis, and this same typical treatment for the mother, the excess risk to the fetus of radiation induced cancer death in the first 10 years of life is 17.4 per 10,000 children.
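The 69.7 mSv figure quoted above is simply the worst-case large-snout coefficient scaled by the 80 Gy prescription:

```python
# Worst-case fetal neutron dose for an 80 Gy proton treatment,
# using the large-field-snout maximum reported above (0.871 mSv/Gy).
dose_per_gray = 0.871   # mSv per treatment Gy (large snout, worst case)
prescription = 80.0     # Gy delivered to the mother
fetal_dose = dose_per_gray * prescription   # ~69.7 mSv
# Well below the 300 mGy threshold cited for severe mental retardation.
```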
ERIC Educational Resources Information Center
Menold, Natalja; Tausch, Anja
2016-01-01
Effects of rating scale forms on cross-sectional reliability and measurement equivalence were investigated. A randomized experimental design was implemented, varying category labels and number of categories. The participants were 800 students at two German universities. In contrast to previous research, reliability assessment method was used,…
NASA Astrophysics Data System (ADS)
Grayver, Alexander V.; Kuvshinov, Alexey V.
2016-05-01
This paper presents a methodology to sample the equivalence domain (ED) in nonlinear partial differential equation (PDE)-constrained inverse problems. For this purpose, we first applied a state-of-the-art stochastic optimization algorithm, the Covariance Matrix Adaptation Evolution Strategy (CMAES), to identify low-misfit regions of the model space. These regions were then randomly sampled to create an ensemble of equivalent models and quantify uncertainty. CMAES is aimed at exploring model space globally and is robust on very ill-conditioned problems. We show that the number of iterations required to converge grows at a moderate rate with respect to the number of unknowns and that the algorithm is embarrassingly parallel. We formulated the problem using the generalized Gaussian distribution, which enabled us to seamlessly use arbitrary norms for the residual and regularization terms. We show that various regularization norms facilitate studying different classes of equivalent solutions. We further show how the performance of the standard Metropolis-Hastings Markov chain Monte Carlo algorithm can be substantially improved by using the information CMAES provides. This methodology was tested using individual and joint inversions of magnetotelluric, controlled-source electromagnetic (EM), and global EM induction data.
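The second stage described above, sampling equivalent models once low-misfit regions are known, can be sketched with a plain random-walk Metropolis-Hastings chain. Everything below is a toy stand-in: the PDE-constrained forward solver is replaced by a linear operator G, and the isotropic Gaussian proposal is a placeholder for the CMAES-informed covariance the paper advocates:

```python
import numpy as np

rng = np.random.default_rng(1)

def misfit(m, d_obs, G):
    """Toy least-squares misfit; a linear forward operator G stands in
    for the paper's PDE-constrained forward solver."""
    return np.sum(np.abs(G @ m - d_obs) ** 2)

def metropolis_hastings(m0, d_obs, G, n=5000, step=0.1, temp=1.0):
    """Random-walk Metropolis sampler targeting exp(-misfit/temp)."""
    m, f = m0.copy(), misfit(m0, d_obs, G)
    samples = []
    for _ in range(n):
        m_new = m + step * rng.standard_normal(m.size)
        f_new = misfit(m_new, d_obs, G)
        if np.log(rng.random()) < (f - f_new) / temp:  # accept/reject
            m, f = m_new, f_new
        samples.append(m.copy())
    return np.array(samples)

G = rng.standard_normal((8, 3))
m_true = np.array([1.0, -2.0, 0.5])
d_obs = G @ m_true
samples = metropolis_hastings(np.zeros(3), d_obs, G)
```

The scatter of the post-burn-in samples around the low-misfit region is what constitutes the equivalence-domain ensemble; a covariance-informed proposal would simply mix faster.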
Vacuum Microelectronic Field Emission Array Devices for Microwave Amplification.
NASA Astrophysics Data System (ADS)
Mancusi, Joseph Edward
This dissertation presents the design, analysis, and measurement of vacuum microelectronic devices which use field emission to extract an electron current from arrays of silicon cones. The arrays of regularly-spaced silicon cones, the field emission cathodes or emitters, are fabricated with an integrated gate electrode which controls the electric field at the tip of the cone, and thus the electron current. An anode or collector electrode is placed above the array to collect the emission current. These arrays, which are fabricated in a standard silicon processing facility, are developed for use as high power microwave amplifiers. Field emission has been studied extensively since it was first characterized in 1928; however, due to the large electric fields required, practical field emission devices are difficult to make. With the development of the semiconductor industry came fabrication equipment and techniques which allow for the manufacture of the precision micron-scale structures necessary for practical field emission devices. The active region of a field emission device is a vacuum; therefore, electron travel is ballistic. This analysis of field emission devices includes electric field and electron emission modeling, development of a device equivalent circuit, analysis of the parameters in the equivalent circuit, and device testing. Variations in device structure are taken into account using a statistical model based upon device measurements. Measurements of silicon field emitter arrays at DC and RF are presented and analyzed. In this dissertation, the equivalent circuit is developed from analysis of the device structure. The circuit parameters are calculated from geometrical considerations and material properties, or are determined from device measurements. It is necessary to include the emitter resistance in the equivalent circuit model since relatively high-resistivity silicon wafers are used.
As is demonstrated, the circuit model accurately predicts the magnitude of the emission current at a number of typical bias current levels when the device is operating at frequencies within the range of 10 MHz to 1 GHz. At low frequencies and at high frequencies within this range, certain parameters are negligible, and simplifications may be made in the equivalent circuit model.
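The 1928 characterization referred to above is the Fowler-Nordheim law, which makes the emitted current density an extremely steep function of the local field; this steepness is why nm-scale tip radii and an integrated gate are needed. A sketch of the elementary form (no image-charge correction; the work function value is illustrative, not taken from the dissertation):

```python
import numpy as np

# Elementary Fowler-Nordheim law (no image-charge correction):
#   J = (A_FN / phi) * E^2 * exp(-B_FN * phi^1.5 / E)
A_FN = 1.541434e-6   # A eV V^-2   (first Fowler-Nordheim constant)
B_FN = 6.830890e9    # eV^-1.5 V/m (second Fowler-Nordheim constant)

def fn_current_density(E, phi=4.5):
    """Emission current density (A/m^2) for local surface field E (V/m)
    and work function phi (eV); phi = 4.5 eV is an illustrative value."""
    return (A_FN / phi) * E**2 * np.exp(-B_FN * phi**1.5 / E)

# Current rises by orders of magnitude for a modest increase in field,
# which is the behavior the gate electrode exploits.
J_low, J_high = fn_current_density(3e9), fn_current_density(5e9)
```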
Comment on ''Equivalence between the Thirring model and a derivative-coupling model''
DOE Office of Scientific and Technical Information (OSTI.GOV)
Banerjee, R.
1988-06-15
An operator equivalence between the Thirring model and the fermionic sector of a Dirac field interacting via derivative coupling with two scalar fields is established in the path-integral framework. Relations between the coupling parameters of the two models, as found by Gomes and da Silva, can be reproduced.
Free Fall and the Equivalence Principle Revisited
ERIC Educational Resources Information Center
Pendrill, Ann-Marie
2017-01-01
Free fall is commonly discussed as an example of the equivalence principle, in the context of a homogeneous gravitational field, which is a reasonable approximation for small test masses falling moderate distances. Newton's law of gravity provides a generalisation to larger distances, and also brings in an inhomogeneity in the gravitational field.…
Spatial-Temporal Data Collection with Compressive Sensing in Mobile Sensor Networks
Li, Jiayin; Guo, Wenzhong; Chen, Zhonghui; Xiong, Neal
2017-01-01
Compressive sensing (CS) provides an energy-efficient paradigm for data gathering in wireless sensor networks (WSNs). However, the existing work on spatial-temporal data gathering using compressive sensing considers only multi-hop relaying based or multiple random walks based approaches. In this paper, we exploit the mobility pattern for spatial-temporal data collection and propose a novel mobile data gathering scheme by employing the Metropolis-Hastings algorithm with delayed acceptance, an improved random walk algorithm for a mobile collector to collect data from a sensing field. The proposed scheme exploits Kronecker compressive sensing (KCS) for the spatial-temporal correlation of sensory data by allowing the mobile collector to gather temporal compressive measurements from a small subset of randomly selected nodes along a random routing path. More importantly, from the theoretical perspective we prove that the equivalent sensing matrix constructed from the proposed scheme for spatial-temporal compressible signals can satisfy the property of KCS models. The simulation results demonstrate that the proposed scheme can not only significantly reduce communication cost but also improve recovery accuracy for mobile data gathering compared with other existing schemes. In particular, we also show that the proposed scheme is robust in unreliable wireless environments under various packet losses. All this indicates that the proposed scheme can be an efficient alternative for data gathering applications in WSNs. PMID:29117152
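The "equivalent sensing matrix" in KCS is the Kronecker product of a spatial node-selection matrix with a temporal measurement matrix. A toy sketch verifying the defining identity (A ⊗ B) vec(X) = vec(A X Bᵀ) under row-major vectorization, with a random node subset playing the role of the mobile collector's route (all dimensions are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

n_nodes, n_times = 12, 8   # spatio-temporal signal size
m_nodes, m_times = 5, 4    # compressed measurement size

# A: spatial sensing (random subset of node rows); B: temporal sensing.
A = np.eye(n_nodes)[rng.choice(n_nodes, m_nodes, replace=False)]
B = rng.standard_normal((m_times, n_times))

X = rng.standard_normal((n_nodes, n_times))   # spatio-temporal field

# Kronecker identity with row-major vec: (A kron B) vec(X) = vec(A X B^T)
y_kron = np.kron(A, B) @ X.ravel()
y_direct = (A @ X @ B.T).ravel()
```

The identity is what lets the scheme take cheap per-node temporal measurements yet recover the full field against the single equivalent matrix `np.kron(A, B)`.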
Inertia and Double Bending of Light from Equivalence
NASA Technical Reports Server (NTRS)
Shuler, Robert L., Jr.
2010-01-01
Careful examination of light paths in an accelerated reference frame, with use of Special Relativity, can account fully for the observed bending of light in a gravitational field, not just half of it as reported in 1911. This analysis also leads to a Machian formulation of inertia similar to the one proposed by Einstein in 1912 and later derived from gravitational field equations in Minkowski space by Sciama in 1953. There is a clear inference from equivalence that there is some type of inertial mass increase in a gravitational field. It is the purpose of the current paper to suggest that equivalence provides a more complete picture of gravitational effects than previously thought, correctly predicting full light bending, and that since the theory of inertia is derivable from equivalence, any theory based on equivalence must take account of it. Einstein himself clearly was not satisfied with the status of inertia in GRT, as our quotes have shown. Many have tried to account for inertia and met with less than success, for example Davidson's integration of Sciama's inertia into GRT, but only for a steady-state cosmology [10], and the Machian gravity theory of Brans and Dicke [11]. Yet Mach's idea hasn't gone away, and now it seems that it cannot go away without also disposing of equivalence.
NASA Astrophysics Data System (ADS)
Belloni, Diogo; Kroupa, Pavel; Rocha-Pinto, Helio J.; Giersz, Mirek
2018-03-01
In order to allow a better understanding of the origin of Galactic field populations, dynamical equivalence of stellar-dynamical systems has been postulated by Kroupa and Belloni et al. to allow mapping of solutions of the initial conditions of embedded clusters such that they yield, after a period of dynamical processing, the Galactic field population. Dynamically equivalent systems are defined to initially and finally have the same distribution functions of periods, mass ratios, and eccentricities of binary stars. Here, we search for dynamically equivalent clusters using the MOCCA code. The simulations confirm that dynamically equivalent solutions indeed exist. The result is that the solution space is next to identical to the radius-mass relation of Marks & Kroupa, r_h/pc = 0.1^{+0.07}_{-0.04} (M_ecl/M_⊙)^{0.13 ± 0.04}. This relation is in good agreement with the observed density of molecular cloud clumps. According to the solutions, the time-scale to reach dynamical equivalence is about 0.5 Myr, which is, interestingly, consistent with the lifetime of ultra-compact H II regions and the time-scale needed for gas expulsion to be active in observed very young clusters as based on their dynamical modelling.
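The quoted radius-mass relation is a simple power law and easy to evaluate; a one-liner using the central values (ignoring the quoted uncertainties):

```python
def half_mass_radius_pc(m_ecl_msun):
    """Initial half-mass radius (pc) of a dynamically equivalent embedded
    cluster from the Marks & Kroupa relation quoted above (central values):
    r_h/pc = 0.1 * (M_ecl / M_sun) ** 0.13."""
    return 0.1 * m_ecl_msun ** 0.13

r = half_mass_radius_pc(1.0e4)   # a 10^4 M_sun embedded cluster, ~0.33 pc
```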
Brown, David M; Ou, William C; Wong, Tien P; Kim, Rosa Y; Croft, Daniel E; Wykoff, Charles C
2018-05-01
To evaluate the effect of targeted retinal photocoagulation (TRP) on visual and anatomic outcomes and treatment burden in eyes with diabetic macular edema (DME). Phase I/II prospective, randomized, controlled clinical trial. Forty eyes of 29 patients with center-involved macular edema secondary to diabetes mellitus. Eyes with center-involved DME and Early Treatment Diabetic Retinopathy Study (ETDRS) best-corrected visual acuity (BCVA) between 20/32 and 20/320 (Snellen equivalent) were randomized 1:1 to monotherapy with 0.3 mg ranibizumab (Lucentis, Genentech, South San Francisco, CA) or combination therapy with 0.3 mg ranibizumab and TRP guided by widefield fluorescein angiography. All eyes received 4 monthly ranibizumab injections followed by monthly examinations and pro re nata (PRN) re-treatment through 36 months. Targeted retinal photocoagulation was administered outside the macula to areas of retinal capillary nonperfusion plus a 1-disc area margin in the combination therapy arm at week 1, with re-treatment at months 6, 18, and 25, if indicated. Mean change in ETDRS BCVA from baseline and number of intravitreal injections administered. At baseline, mean age was 55 years, mean BCVA was 20/63 (Snellen equivalent), and mean central retinal subfield thickness (CRT) was 530 μm. Thirty-four eyes (85%) completed month 36, at which point mean BCVA improved 13.9 and 8.2 letters (P = 0.20) and mean CRT improved 302 and 152 μm (P = 0.03) in the monotherapy and combination therapy arms, respectively. The mean number of injections administered through month 36 was 24.4 (range, 10-34) and 27.1 (range, 12-36), with 73% (362/496) and 80% (433/538) of PRN injections administered (P = 0.004) in the monotherapy and combination therapy arms, respectively. Goldmann visual field isopter III-4e area decreased by 2% and 18% in the monotherapy and combination therapy arms, respectively (P = 0.30). 
In this 3-year randomized trial of 40 eyes with DME, there was no evidence that combination therapy with ranibizumab and TRP improved visual outcomes or reduced treatment burden compared with ranibizumab alone. Copyright © 2018 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.
Sharma, Namrata; Goel, Manik; Bansal, Shubha; Agarwal, Prakashchand; Titiyal, Jeewan S; Upadhyaya, Ashish D; Vajpayee, Rasik B
2013-06-01
To compare the equivalence of moxifloxacin 0.5% with a combination of fortified cefazolin sodium 5% and tobramycin sulfate 1.3% eye drops in the treatment of moderate bacterial corneal ulcers. Randomized, controlled, equivalence clinical trial. Microbiologically proven cases of bacterial corneal ulcers were enrolled in the study and allocated randomly to 1 of the 2 treatment groups. Group A was given combination therapy (fortified cefazolin sodium 5% and tobramycin sulfate) and group B was given monotherapy (moxifloxacin 0.5%). The primary outcome variable for the study was the percentage of ulcers healed at 3 months. The secondary outcome variables were best-corrected visual acuity and resolution of infiltrates. Of a total of 224 patients with bacterial keratitis, 114 patients were randomized to group A and 110 patients to group B. The mean ± standard deviation ulcer sizes in groups A and B were 4.2 ± 2 and 4.41 ± 1.5 mm, respectively. The prevalence of coagulase-negative Staphylococcus (40.9% in group A and 48.2% in group B) was similar in both study groups. A complete resolution of keratitis and healing of ulcers occurred in 90 patients (81.8%) in group A and 88 patients (81.4%) in group B at 3 months. The observed difference in healing percentages at 3 months was less than the equivalence margin of 20%. Worsening of the ulcer was seen in 18.2% of cases in group A and in 18.5% of cases in group B. Mean time to epithelialization was similar, with no significant difference between the 2 groups (P = 0.065). No serious events attributable to therapy were reported. Corneal healing using 0.5% moxifloxacin monotherapy is equivalent to that of combination therapy using fortified cefazolin and tobramycin in the treatment of moderate bacterial corneal ulcers. The author(s) have no proprietary or commercial interest in any materials discussed in this article. Copyright © 2013 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Bergé, Joel; Brax, Philippe; Métris, Gilles; Pernot-Borràs, Martin; Touboul, Pierre; Uzan, Jean-Philippe
2018-04-01
The existence of a light or massive scalar field with a coupling to matter weaker than gravitational strength is a possible source of violation of the weak equivalence principle. We use the first results on the Eötvös parameter by the MICROSCOPE experiment to set new constraints on such scalar fields. For a massive scalar field of mass smaller than 10^-12 eV (i.e., range larger than a few 10^5 m), we improve existing constraints by one order of magnitude to |α| < 10^-11 if the scalar field couples to the baryon number and to |α| < 10^-12 if the scalar field couples to the difference between the baryon and the lepton numbers. We also consider a model describing the coupling of a generic dilaton to the standard matter fields with five parameters, for a light field: we find that, for masses smaller than 10^-12 eV, the constraints on the dilaton coupling parameters are improved by one order of magnitude compared to previous equivalence principle tests.
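The quoted range follows from the reduced Compton wavelength of the scalar, λ = ħc/(mc²); for mc² = 10^-12 eV this is about 2 × 10^5 m, consistent with the "few 10^5 m" above:

```python
# Reduced Compton wavelength lambda = hbar*c / (m c^2) for a scalar field
# of mass m c^2 = 1e-12 eV. hbar*c = 197.327 MeV fm = 1.97327e-7 eV m.
hbar_c = 1.9732698e-7   # eV * m
m_c2 = 1.0e-12          # eV
lam = hbar_c / m_c2     # ~1.97e5 m, the range of the mediated force
```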
Ophthalmic randomized controlled trials reports: the statement of the hypothesis.
Lee, Chun Fan; Cheng, Andy Chi On; Fong, Daniel Yee Tak
2014-01-01
To evaluate whether the ophthalmic randomized controlled trials (RCTs) were designed properly, their hypotheses stated clearly, and their conclusions drawn correctly. A systematic review of 206 ophthalmic RCTs. The objective statement, methods, and results sections and the conclusions of RCTs published in 4 major general clinical ophthalmology journals from 2009 through 2011 were assessed. The clinical objective and specific hypothesis were the main outcome measures. The clinical objective of the trial was presented in 199 (96.6%) studies and the hypothesis was specified explicitly in 56 (27.2%) studies. One hundred ninety (92.2%) studies tested superiority. Among them, 17 (8.3%) studies comparing 2 or more active treatments concluded equal or similar effectiveness between the 2 arms after obtaining insignificant results. There were 5 noninferiority studies and 4 equivalence studies. How the treatments were compared was not mentioned in 1 of the noninferiority studies. Two of the equivalence studies did not specify the equivalence margin and used tests for detecting difference rather than confirming equivalence. The clinical objective commonly was stated, but the prospectively defined hypothesis tended to be understated in ophthalmic RCTs. Superiority was the most common type of comparison. Conclusions made in some of them with negative results were not consistent with the hypothesis, indicating that noninferiority or equivalence may be a more appropriate design. Flaws were common in the noninferiority and equivalence studies. Future ophthalmic researchers should choose the type of comparison carefully, specify the hypothesis clearly, and draw conclusions that are consistent with the hypothesis. Copyright © 2014 Elsevier Inc. All rights reserved.
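The methodological point in the last sentences, that equivalence must be confirmed with an equivalence procedure rather than inferred from a nonsignificant difference test, is captured by the two one-sided tests (TOST) procedure. A sketch from summary statistics (all numbers below are invented for illustration):

```python
import numpy as np
from scipy import stats

def tost_two_sample(m1, s1, n1, m2, s2, n2, margin):
    """Two one-sided tests (TOST) for equivalence of two means within
    +/- margin, using a pooled-variance two-sample t statistic.
    Equivalence is declared when the larger one-sided p-value < alpha."""
    sp = np.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    se = sp * np.sqrt(1 / n1 + 1 / n2)
    df = n1 + n2 - 2
    d = m1 - m2
    p_lower = stats.t.sf((d + margin) / se, df)   # H0: d <= -margin
    p_upper = stats.t.cdf((d - margin) / se, df)  # H0: d >= +margin
    return max(p_lower, p_upper)

# Hypothetical trial: two arms with similar mean outcome, margin of 2 units.
p = tost_two_sample(5.0, 4.0, 60, 5.5, 4.2, 60, margin=2.0)
```

Note the logic is inverted relative to a difference test: here a small p-value supports equivalence, whereas a nonsignificant difference test supports nothing.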
Key-Generation Algorithms for Linear Piece In Hand Matrix Method
NASA Astrophysics Data System (ADS)
Tadaki, Kohtaro; Tsujii, Shigeo
The linear Piece In Hand (PH, for short) matrix method with random variables was proposed in our former work. It is a general prescription which can be applied to any type of multivariate public-key cryptosystem (MPKC) for the purpose of enhancing its security. Actually, we showed, in an experimental manner, that the linear PH matrix method with random variables can certainly enhance the security of HFE against the Gröbner basis attack, where HFE is one of the major variants of multivariate public-key cryptosystems. In 1998 Patarin, Goubin, and Courtois introduced the plus method as a general prescription which aims to enhance the security of any given MPKC, just like the linear PH matrix method with random variables. In this paper we prove the equivalence between the plus method and the primitive linear PH matrix method, which was introduced in our previous work to explain the notion of the PH matrix method in general in an illustrative manner and not for practical use to enhance the security of any given MPKC. Based on this equivalence, we show that the linear PH matrix method with random variables has a substantial advantage over the plus method with respect to security enhancement. In the linear PH matrix method with random variables, three matrices, including the PH matrix, play a central role in the secret key and public key. In this paper, we clarify how to generate these matrices and present two probabilistic polynomial-time algorithms to generate them. In particular, the second one has a concise form and is obtained as a byproduct of the proof of the equivalence between the plus method and the primitive linear PH matrix method.
Reft, Chester S; Runkel-Muller, Renate; Myrianthopoulos, Leon
2006-10-01
For intensity modulated radiation therapy (IMRT) treatments, 6 MV photons are typically used; however, for deep-seated tumors in the pelvic region, higher photon energies are increasingly being employed. IMRT treatments require more monitor units (MU) to deliver the same dose as conformal treatments, causing increased secondary radiation to tissues outside the treated area from leakage and scatter, as well as a possible increase in the neutron dose from photon interactions in the machine head. Here we provide in vivo patient and phantom measurements of the secondary out-of-field photon radiation and the neutron dose equivalent for 18 MV IMRT treatments. The patients were treated for prostate cancer with 18 MV IMRT at institutions using different therapy machines and treatment planning systems. Phantom exposures at the different facilities were used to compare the secondary photon and neutron dose equivalent between typical IMRT delivered treatment plans and a six-field three-dimensional conformal radiotherapy (3DCRT) plan. For the in vivo measurements, LiF thermoluminescent detectors (TLDs) and Al2O3 detectors using optically stimulated radiation were used to obtain the photon dose, and CR-39 track etch detectors were used to obtain the neutron dose equivalent. For the phantom measurements, a Bonner sphere (25.4 cm diameter) containing two types of TLDs (TLD-600 and TLD-700) having different thermal neutron sensitivities was used to obtain the out-of-field neutron dose equivalent. Our results showed that for patients treated with 18 MV IMRT, the photon dose equivalent is greater than the neutron dose equivalent measured outside the treatment field, and the neutron dose equivalent normalized to the prescription dose varied from 2 to 6 mSv/Gy among the therapy machines. The Bonner sphere results showed that the ratio of neutron equivalent doses for the 18 MV IMRT and 3DCRT prostate treatments scaled as the ratio of delivered MUs.
We also observed differences in the measured neutron dose equivalent among the three therapy machines for both the in vivo and phantom exposures.
Hooten, W Michael; Qu, Wenchun; Townsend, Cynthia O; Judd, Jeffrey W
2012-04-01
Strength training and aerobic exercise have beneficial effects on pain in adults with fibromyalgia. However, the equivalence of strengthening and aerobic exercise has not been reported. The primary aim of this randomized equivalence trial involving patients with fibromyalgia admitted to an interdisciplinary pain treatment program was to test the hypothesis that strengthening (n=36) and aerobic (n=36) exercise have equivalent effects (95% confidence interval within an equivalence margin of ±8) on pain, as measured by the pain severity subscale of the Multidimensional Pain Inventory. Secondary aims included determining the effects of strengthening and aerobic exercise on peak Vo(2) uptake, leg strength, and pressure pain thresholds. In an intent-to-treat analysis, the mean (± standard deviation) pain severity scores for the strength and aerobic groups at study completion were 34.4 ± 11.5 and 37.6 ± 11.9, respectively. The group difference was -3.2 (95% confidence interval, -8.7 to 2.3), which was within the equivalence margin of Δ = 8. Significant improvements in pain severity (P<.001), peak Vo(2) (P<.001), strength (P<.001), and pain thresholds (P<.001) were observed from baseline to week 3 in the intent-to-treat analysis; however, patients in the aerobic group (mean change 2.0 ± 2.6 mL/kg/min) experienced greater gains (P<.013) in peak Vo(2) compared with the strength group (mean change 0.4 ± 2.6 mL/kg/min). Knowledge of the equivalence and physiological effects of exercise has important clinical implications that could allow practitioners to target exercise recommendations on the basis of comorbid medical conditions or patient preference for a particular type of exercise. This study found that strength and aerobic exercise had equivalent effects on reducing pain severity among patients with fibromyalgia. Copyright © 2012 International Association for the Study of Pain. Published by Elsevier B.V. All rights reserved.
Comparing Web, Group and Telehealth Formats of a Military Parenting Program
2017-06-01
directed approaches. Comparative effectiveness will be tested by specifying a non-equivalence hypothesis for group-based and web-facilitated relative...Comparative effectiveness will be tested by specifying a non-equivalence hypothesis for group-based and individualized-facilitated relative to self-directed...documents for review and approval. 1a. Finalize human subjects protocol and consent documents for pilot group (N=5 families), and randomized controlled
Extremely metal-deficient red giants. IV - Equivalent widths for 36 halo giants
NASA Technical Reports Server (NTRS)
Luck, R. E.; Bond, H. E.
1985-01-01
Further work on a study of 36 metal-poor field red giants is reported. Chemical abundances previously determined were based on model stellar atmosphere analyses of equivalent widths from photographic image-tube echelle spectrograms obtained with 4-m reflectors at Kitt Peak and Cerro Tololo. A tabulation of the equivalent-width data (a total of 18,275 equivalent widths) is presented.
A Semantic Differential Evaluation of Attitudinal Outcomes of Introductory Physical Science.
ERIC Educational Resources Information Center
Hecht, Alfred Roland
This study was designed to assess the attitudinal outcomes of Introductory Physical Science (IPS) curriculum materials used in schools. Random samples of 240 students receiving IPS instruction and 240 non-science students were assigned to separate Solomon four-group designs with non-equivalent control groups. Random samples of 60 traditional…
Chazot, Charles; Terrat, Jean Claude; Dumoulin, Alexandre; Ang, Kim-Seng; Gassia, Jean Paul; Chedid, Khalil; Maurice, Francois; Canaud, Bernard
2009-02-01
Darbepoetin alfa is an erythropoiesis-stimulating agent (ESA) used either intravenously or subcutaneously with no dose penalty; however, the direct switch from subcutaneous recombinant human erythropoietin (rHuEPO) to intravenous darbepoetin has barely been studied. To establish the equivalence of a direct switch from subcutaneous rHuEPO to intravenous darbepoetin versus an indirect switch from subcutaneous rHuEPO to intravenous darbepoetin after 2 months of subcutaneous darbepoetin in patients undergoing hemodialysis. In this open, randomized, 6-month, prospective study, patients with end-stage kidney disease who were on hemodialysis were randomized into 2 groups: direct switch from subcutaneous rHuEPO to intravenous darbepoetin (group 1) and indirect switch from subcutaneous rHuEPO to intravenous darbepoetin after 2 months of subcutaneous darbepoetin (group 2). A third, nonrandomized group (control), consisting of patients treated with intravenous rHuEPO who were switched to intravenous darbepoetin, was also studied to reflect possible variations of hemoglobin (Hb) levels due to change from one type of ESA to the other. The primary outcome was the proportion of patients with stable Hb levels at month 6. Secondary endpoints included Hb stability at month 3, dosage requirements for darbepoetin, and safety of the administration route. Among 154 randomized patients, the percentages with stable Hb levels were equivalent in groups 1 and 2, respectively, at month 3 (86.0% vs 91.3%) and month 6 (82.1% vs 81.6%; difference -0.5 [90% CI -12.8 to 11.8]). Mean Hb levels between baseline and month 6 remained stable in both groups, with no variation in mean darbepoetin dose. Mean ferritin levels remained above 100 microg/L in the 3 groups during the whole study, and darbepoetin was well tolerated. This study has shown equivalent efficacy on Hb stability without the need for dosage increase in patients switched directly from subcutaneous rHuEPO to intravenous darbepoetin.
An equivalent source model of the satellite-altitude magnetic anomaly field over Australia
NASA Technical Reports Server (NTRS)
Mayhew, M. A.; Johnson, B. D.; Langel, R. A.
1980-01-01
The low-amplitude, long-wavelength magnetic anomaly field measured between 400 and 700 km elevation over Australia by the POGO satellites is modeled by means of the equivalent source technique. Magnetic dipole moments are computed for a latitude-longitude array of dipole sources on the earth's surface such that the dipoles collectively give rise to a field which makes a least-squares best fit to that observed. The distribution of magnetic moments is converted to a model of apparent magnetization contrast in a layer of constant (40 km) thickness, which contains information equivalent to the lateral variation in the vertical integral of magnetization down to the Curie isotherm and can be transformed to a model of variable thickness magnetization. It is noted that the closest equivalent source spacing giving a stable solution is about 2.5 deg, corresponding to about half the mean data elevation, and that the magnetization distribution correlates well with some of the principal tectonic elements of Australia.
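The core equivalent source step, fitting dipole moments by least squares so the modeled field best matches the observations, can be sketched as follows. The Green's-function matrix here is a random stand-in, not the actual dipole field kernels, and all sizes are illustrative:

```python
import numpy as np

# Hypothetical stand-in for the dipole Green's functions: G[i, j] maps a
# unit dipole moment at source j to the anomaly field at observation i.
rng = np.random.default_rng(0)
n_obs, n_dipoles = 200, 25
G = rng.normal(size=(n_obs, n_dipoles))
true_m = rng.normal(size=n_dipoles)
field = G @ true_m + 0.01 * rng.normal(size=n_obs)   # observed field + noise

# Dipole moments chosen so the modeled field is a least-squares best fit
m, *_ = np.linalg.lstsq(G, field, rcond=None)
```

In the paper's setting, stability of this inversion is what limits the source spacing to about half the mean data elevation.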
DOE Office of Scientific and Technical Information (OSTI.GOV)
Choi, Minseok; Sapsis, Themistoklis P.; Karniadakis, George Em, E-mail: george_karniadakis@brown.edu
2014-08-01
The Karhunen–Loève (KL) decomposition provides a low-dimensional representation for random fields as it is optimal in the mean square sense. Although for many stochastic systems of practical interest, described by stochastic partial differential equations (SPDEs), solutions possess this low-dimensional character, they also have a strongly time-dependent form, and to this end a fixed-in-time basis may not describe the solution in an efficient way. Motivated by this limitation of the standard KL expansion, Sapsis and Lermusiaux (2009) [26] developed the dynamically orthogonal (DO) field equations, which allow for the simultaneous evolution of both the spatial basis where uncertainty ‘lives’ and the stochastic characteristics of uncertainty. Recently, Cheng et al. (2013) [28] introduced an alternative approach, the bi-orthogonal (BO) method, which performs the exact same tasks, i.e. it evolves the spatial basis and the stochastic characteristics of uncertainty. In the current work we examine the relation of the two approaches and we prove theoretically and illustrate numerically their equivalence, in the sense that one method is an exact reformulation of the other. We show this by deriving a linear and invertible transformation matrix, described by a matrix differential equation, that connects the BO and the DO solutions. We also examine a pathology of the BO equations that occurs when two eigenvalues of the solution cross, resulting in an instantaneous, infinite-speed, internal rotation of the computed spatial basis. We demonstrate that, despite the instantaneous duration of the singularity, this has important implications for the numerical performance of the BO approach. On the other hand, it is observed that the BO is more stable in nonlinear problems involving a relatively large number of modes. Several examples, linear and nonlinear, are presented to illustrate the DO and BO methods as well as their equivalence.
On the equivalence among stress tensors in a gauge-fluid system
NASA Astrophysics Data System (ADS)
Mitra, Arpan Krishna; Banerjee, Rabin; Ghosh, Subir
2017-12-01
In this paper, we bring out the subtleties involved in the study of a first-order relativistic field theory with auxiliary field variables playing an essential role. In particular, we discuss the nonisentropic Eulerian (or Hamiltonian) fluid model. Interactions are introduced by coupling the fluid to a dynamical Maxwell (U(1)) gauge field. This dynamical nature of the gauge field is crucial in showing the equivalence, on the physical subspace, of the stress tensor derived from two definitions, i.e. the canonical (Noether) one and the symmetric one. In the conventional equal-time formalism, we have shown that the generators of the space-time transformations obtained from these two definitions agree modulo the Gauss constraint. This equivalence in the physical sector has been achieved only because of the dynamical nature of the gauge fields. Subsequently, we have explicitly demonstrated the validity of the Schwinger condition. A detailed analysis of the model in lightcone formalism has also been done where several interesting features are revealed.
Staggered chiral random matrix theory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Osborn, James C.
2011-02-01
We present a random matrix theory for the staggered lattice QCD Dirac operator. The staggered random matrix theory is equivalent to the zero-momentum limit of the staggered chiral Lagrangian and includes all taste breaking terms at their leading order. This is an extension of previous work which only included some of the taste breaking terms. We will also present some results for the taste breaking contributions to the partition function and the Dirac eigenvalues.
Pseudo-random tool paths for CNC sub-aperture polishing and other applications.
Dunn, Christina R; Walker, David D
2008-11-10
In this paper we first contrast classical and CNC polishing techniques in regard to the repetitiveness of the machine motions. We then present a pseudo-random tool path for use with CNC sub-aperture polishing techniques and report polishing results from equivalent random and raster tool-paths. The random tool-path used - the unicursal random tool-path - employs a random seed to generate a pattern which never crosses itself. Because of this property, this tool-path is directly compatible with dwell time maps for corrective polishing. The tool-path can be used to polish any continuous area of any boundary shape, including surfaces with interior perforations.
Performance Enhancement of the RatCAP Awake Rat Brain PET System
DOE Office of Scientific and Technical Information (OSTI.GOV)
Vaska, P.; Woody, C.
The first full prototype of the RatCAP PET system, designed to image the brain of a rat while conscious, has been completed. Initial results demonstrated excellent spatial resolution, 1.8 mm FWHM with filtered backprojection and <1.5 mm FWHM with a Monte Carlo-based MLEM method. However, noise equivalent count rate studies indicated the need for better timing to mitigate the effect of randoms. Thus, the front-end ASIC has been redesigned to minimize time walk, an accurate coincidence time alignment method has been implemented, and a variance reduction technique for the randoms is being developed. To maximize the quantitative capabilities required for neuroscience, corrections are being implemented and validated for positron range and photon noncollinearity, scatter (including outside the field of view), attenuation, randoms, and detector efficiency (deadtime is negligible). In addition, a more robust and compact PCI-based optical data acquisition system has been built to replace the original VME-based system while retaining the Linux-based data processing and image reconstruction codes. Finally, a number of new animal imaging experiments have been carried out to demonstrate the performance of the RatCAP in real imaging situations, including an F-18 fluoride bone scan, a C-11 raclopride scan, and a dynamic C-11 methamphetamine scan.
Standard, Random, and Optimum Array conversions from Two-Pole resistance data
Rucker, D. F.; Glaser, Danney R.
2014-09-01
We present an array evaluation of standard and nonstandard arrays over a hydrogeological target. We develop the arrays by linearly combining data from the pole-pole (or 2-pole) array. The first test shows that reconstructed resistances for the standard Schlumberger and dipole-dipole arrays are equivalent or superior to the measured arrays in terms of noise, especially at large geometric factors. The inverse models for the standard arrays also confirm what others have presented in terms of target resolvability, namely that the dipole-dipole array has the highest resolution. In the second test, we reconstruct random electrode combinations from the 2-pole data segregated into inner, outer, and overlapping dipoles. The resistance data and inverse models from these randomized arrays show those with inner dipoles to be superior in terms of noise and resolution and that overlapping dipoles can cause model instability and low resolution. Finally, we use the 2-pole data to create an optimized array that maximizes the model resolution matrix for a given electrode geometry. The optimized array produces the highest resolution and target detail. Thus, the tests demonstrate that high quality data and high model resolution can be achieved by acquiring field data from the pole-pole array.
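The linear-superposition step behind these reconstructions can be sketched for a homogeneous half-space: any four-electrode transfer resistance is a signed sum of four pole-pole measurements. The resistivity and electrode positions below are hypothetical:

```python
import math

rho = 100.0   # half-space resistivity in ohm-m (hypothetical)

def pole_pole(a, m):
    """Pole-pole transfer resistance: current pole at a, potential pole at m,
    for a homogeneous half-space."""
    return rho / (2.0 * math.pi * abs(m - a))

def four_electrode(a, b, m, n):
    """Superposition: R(AB, MN) = R(AM) - R(AN) - R(BM) + R(BN)."""
    return (pole_pole(a, m) - pole_pole(a, n)
            - pole_pole(b, m) + pole_pole(b, n))

# Reconstruct a dipole-dipole reading from 2-pole data,
# electrodes at surface positions 0, 1, 3, 4 (metres)
r_dd = four_electrode(0.0, 1.0, 3.0, 4.0)
```

Multiplying the reconstructed resistance by the array's geometric factor recovers the half-space resistivity, which is the consistency check the superposition must satisfy.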
[On the present situation in psychotherapy and its implications - A critical analysis of the facts].
Tschuschke, Volker; Freyberger, Harald J
2015-01-01
The currently dominant research paradigm in evidence-based medicine is expounded and discussed with regard to the problems arising from so-called empirically supported treatments (EST) in psychology and psychotherapy. Prevalent political, economic and ideological backgrounds influence the present dominance of the medical model in psychotherapy by implementing the randomized controlled research design as the standard in the field. It has been demonstrated that randomized controlled trials (RCTs) are inadequate in psychotherapy research, not least because of the high complexity of psychotherapy and the relatively weak role of the treatment concept in the change process itself. All major meta-analyses show that the Dodo bird verdict is still alive, thereby demonstrating that the medical model in psychotherapy with its RCT paradigm cannot explain the equivalence paradox. The medical model is inappropriate, so the contextual model is proposed as an alternative. Extensive process-outcome research is suggested as the only viable and reasonable way to identify the highly complex interactions between the many factors regularly involved in change processes in psychotherapy.
Neurocognitive sparing of desktop microbeam irradiation.
Bazyar, Soha; Inscoe, Christina R; Benefield, Thad; Zhang, Lei; Lu, Jianping; Zhou, Otto; Lee, Yueh Z
2017-08-11
Normal tissue toxicity is the dose-limiting side effect of radiotherapy. Spatial fractionation irradiation techniques, like microbeam radiotherapy (MRT), have shown promising results in sparing normal brain tissue. Most MRT studies have been conducted at synchrotron facilities. With the aim of making this promising treatment more available, we have built the first desktop image-guided MRT device based on carbon nanotube x-ray technology. In the current study, our purpose was to evaluate the effects of MRT on rodent normal brain tissue using our device and compare them with the effect of the integrated equivalent homogeneous dose. Twenty-four 8-week-old male C57BL/6 J mice were randomly assigned to three groups: MRT, broad-beam (BB) and sham. The hippocampal region was irradiated with two parallel microbeams in the MRT group (beam width = 300 μm, center-to-center = 900 μm, 160 kVp). The BB group received the equivalent integral dose in the same area of the brain. Rotarod, marble-burying and open-field activity tests were done pre-irradiation and every month post-irradiation up until 8 months to evaluate cognitive changes and potential irradiation side effects on normal brain tissue. The open-field activity test was substituted by the Barnes maze test at the 8th month. A multilevel model, random-coefficients approach was used to evaluate the longitudinal and temporal differences among treatment groups. We found significant differences between the BB group and the microbeam-treated and sham mice in the number of buried marbles and the duration of locomotion around the open-field arena. The Barnes maze revealed that BB mice had a lower capacity for spatial learning than MRT mice and shams. Mice in the BB group tended to gain weight at a slower pace than shams. No meaningful differences were found between the MRT and sham groups up until the 8-month follow-up using our measurements.
Applying MRT with our newly developed prototype compact CNT-based image-guided MRT system and the current irradiation protocol can better preserve the integrity of normal brain tissue. Consequently, it enables applying a higher irradiation dose, which promises better tumor control. Further studies are required to evaluate the full extent of the effects of this novel modality.
Luszik-Bhadra, M; Lacoste, V; Reginatto, M; Zimbal, A
2007-01-01
Workplace neutron spectra from nuclear facilities obtained within the European project EVIDOS are compared with those of the simulated workplace fields CANEL and SIGMA and fields set up with radionuclide sources at the PTB. Contributions of neutrons to ambient dose equivalent and personal dose equivalent are given in three energy intervals (for thermal, intermediate and fast neutrons), together with the corresponding direction distributions, characterised by three different types of distributions (isotropic, weakly directed and directed). The comparison shows that none of the simulated workplace fields investigated here can model all the characteristics of the fields observed at power reactors.
Graded-index fibers, Wigner-distribution functions, and the fractional Fourier transform.
Mendlovic, D; Ozaktas, H M; Lohmann, A W
1994-09-10
Two definitions of a fractional Fourier transform have been proposed previously. One is based on the propagation of a wave field through a graded-index medium, and the other is based on rotating a function's Wigner distribution. It is shown that both definitions are equivalent. An important result of this equivalency is that the Wigner distribution of a wave field rotates as the wave field propagates through a quadratic graded-index medium. The relation with ray-optics phase space is discussed.
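A discrete analogue of the order-additivity underlying this equivalence can be sketched by taking fractional powers of the unitary DFT matrix. This is one simple discretization for illustration, not the graded-index or Wigner-rotation definition discussed in the paper:

```python
import numpy as np

def frft_matrix(n, a):
    """Order-a fractional power of the unitary DFT matrix: a=0 gives the
    identity, a=1 the ordinary DFT, and orders add (F^a F^b = F^(a+b))."""
    F = np.fft.fft(np.eye(n), norm="ortho")   # unitary DFT matrix
    w, V = np.linalg.eig(F)                   # unitary => diagonalizable
    return V @ np.diag(w ** a) @ np.linalg.inv(V)
```

The additivity of orders is the discrete counterpart of the continuous result that propagating through a quadratic graded-index medium rotates the Wigner distribution by an angle proportional to the propagation distance.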
Porous medium acoustics of wave-induced vorticity diffusion
NASA Astrophysics Data System (ADS)
Müller, T. M.; Sahay, P. N.
2011-02-01
A theory for attenuation and dispersion of elastic waves due to wave-induced generation of vorticity at pore-scale heterogeneities in a macroscopically homogeneous porous medium is developed. The diffusive part of the vorticity field associated with a viscous wave in the pore space—the so-called slow shear wave—is linked to the porous medium acoustics through incorporation of the fluid strain rate tensor of a Newtonian fluid in the poroelastic constitutive relations. The method of statistical smoothing is then used to derive dynamic-equivalent elastic wave velocities accounting for the conversion scattering process into the diffusive slow shear wave in the presence of randomly distributed pore-scale heterogeneities. The result is a simple model for wave attenuation and dispersion associated with the transition from viscosity- to inertia-dominated flow regime.
Kodama, Sayuri; Fujii, Nobuya; Furuhata, Tadashi; Sakurai, Naoko; Fujiwara, Yoshinori; Hoshi, Tanji
2015-01-01
Although dietary quality in middle age, the prime of a person's work career, might be determined by positive emotional well-being based on socioeconomic status (SES), causation among determinants of dietary quality still remains unclear. Our purpose was to elucidate the structural relationships among five-year-prior dietary quality, equivalent income, emotional well-being, and five-year subjective health, by sex and age group separately. In 2003, 10,000 middle-aged urban dwellers aged 40-64 years, who lived in ward A in the Tokyo metropolitan area, were randomly selected and a questionnaire survey was conducted by mail. In 2008, we conducted a follow-up survey of the dwellers and were able to gather their survival status. A total of 2507 middle-aged men (n = 1112) and women (n = 1395) were examined at baseline. We created three latent variables for structural equation modeling (SEM): five-year subjective health reported in 2003 and in 2008; dietary quality, constructed from diversity of principal food groups and eating behavior in 2003; and emotional well-being, constructed from enjoyment & ikigai (meaning of life) and from close people in 2003. Equivalent income in 2003 was calculated as the SES indicator. In the SEM analysis of both men and women, there was a significant indirect effect of equivalent income on dietary quality and on five-year subjective health, via emotional well-being explained by ikigai and having comforting people close to the individual. There tended to be a larger direct effect of emotional well-being on dietary quality in men than in women, and also a larger effect with aging. In women, there was a larger direct effect of equivalent income on dietary quality than in men. When examined comprehensively, there appeared to be a larger effect of five-year-prior equivalent income on subjective health over five years in men than in women.
This study suggests that it is necessary to support the improvement of dietary quality in middle age by considering the characteristics of sex and age group, and by providing a supportive environment that enhances emotional well-being based on equivalent income, with professionals from different fields cooperating to provide services such as employment or community support programs.
Obfuscation Framework Based on Functionally Equivalent Combinatorial Logic Families
2008-03-01
of Defense, or the United States Government. AFIT/GCS/ENG/08-12 Obfuscation Framework Based on Functionally Equivalent Combinatorial Logic Families...time, United States policy strongly encourages the sale and transfer of some military equipment to foreign governments and makes it easier for...Proceedings of the International Conference on Availability, Reliability and Security, 2007. 14. McDonald, J. Todd and Alec Yasinsac. “Of unicorns and random
A randomized trial of teaching clinical skills using virtual and live standardized patients.
Triola, M; Feldman, H; Kalet, A L; Zabar, S; Kachur, E K; Gillespie, C; Anderson, M; Griesser, C; Lipkin, M
2006-05-01
We developed computer-based virtual patient (VP) cases to complement an interactive continuing medical education (CME) course that emphasizes skills practice using standardized patients (SP). Virtual patient simulations have the significant advantages of requiring fewer personnel and resources, being accessible at any time, and being highly standardized. Little is known about the educational effectiveness of these new resources. We conducted a randomized trial to assess the educational effectiveness of VPs and SPs in teaching clinical skills. To determine the effectiveness of VP cases when compared with live SP cases in improving clinical skills and knowledge. Randomized trial. Fifty-five health care providers (registered nurses 45%, physicians 15%, other provider types 40%) who attended a CME program. Participants were randomized to receive either 4 live cases (n=32) or 2 live and 2 virtual cases (n=23). Other aspects of the course were identical for both groups. Participants in both groups were equivalent with respect to pre-post workshop improvement in comfort level (P=.66) and preparedness to respond (P=.61), to screen (P=.79), and to care (P=.055) for patients using the skills taught. There was no difference in subjective ratings of effectiveness of the VPs and SPs by participants who experienced both (P=.79). Improvements in diagnostic abilities were equivalent in groups who experienced cases either live or virtually. Improvements in performance and diagnostic ability were equivalent between the groups and participants rated VP and SP cases equally. Including well-designed VPs has a potentially powerful and efficient place in clinical skills training for practicing health care workers.
Equivalence of Szegedy's and coined quantum walks
NASA Astrophysics Data System (ADS)
Wong, Thomas G.
2017-09-01
Szegedy's quantum walk is a quantization of a classical random walk or Markov chain, where the walk occurs on the edges of the bipartite double cover of the original graph. To search, one can simply quantize a Markov chain with absorbing vertices. Recently, Santos proposed two alternative search algorithms that instead utilize the sign-flip oracle in Grover's algorithm rather than absorbing vertices. In this paper, we show that these two algorithms are exactly equivalent to two algorithms involving coined quantum walks, which are walks on the vertices of the original graph with an internal degree of freedom. The first scheme is equivalent to a coined quantum walk with one walk step per query of Grover's oracle, and the second is equivalent to a coined quantum walk with two walk steps per query of Grover's oracle. These equivalences lie outside the previously known equivalence of Szegedy's quantum walk with absorbing vertices and the coined quantum walk with the negative identity operator as the coin for marked vertices, whose precise relationships we also investigate.
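A coined quantum walk of the kind referred to here, a walk on the vertices of a graph with an internal coin degree of freedom, can be sketched on a cycle. The Hadamard coin and graph size below are illustrative choices, not the specific coins analyzed in the paper:

```python
import numpy as np

N = 8                                            # cycle with N vertices
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)   # coin operator

def step(psi):
    """One coined-walk step: coin flip, then coin-conditioned shift.
    psi has shape (N, 2): one amplitude per (vertex, coin state)."""
    psi = psi @ H.T                              # apply coin to internal dof
    out = np.empty_like(psi)
    out[:, 0] = np.roll(psi[:, 0], 1)            # coin 0: move clockwise
    out[:, 1] = np.roll(psi[:, 1], -1)           # coin 1: move counterclockwise
    return out

psi = np.zeros((N, 2), dtype=complex)
psi[0, 0] = 1.0                                  # walker localized at vertex 0
for _ in range(10):
    psi = step(psi)
```

Szegedy's walk instead lives on the edges of the bipartite double cover; the equivalences established in the paper identify the two pictures step-for-step, with one or two coined steps per oracle query.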
NASA Astrophysics Data System (ADS)
Poormohammadi, Jaber; Rezagholizadeh, Hessam
The idea that action propagates instantaneously was in physicists' minds from the beginning, until Faraday raised the idea of delayed propagation. Using this idea and the delayed theory of fields, we face consequences that can be interesting for anyone who has studied physics: non-equivalence between stationary and moving frames, dependence of the field on the medium, different velocity barriers for different media, and non-equivalence of inertial reference frames. By designing an experiment we can test this theory and its consequences. All of these topics are treated in the article titled ''The delayed theory of fields''.
Higher-order gravity and the classical equivalence principle
NASA Astrophysics Data System (ADS)
Accioly, Antonio; Herdy, Wallace
2017-11-01
As is well known, the deflection of any particle by a gravitational field within the context of Einstein’s general relativity — which is a geometrical theory — is, of course, nondispersive. Nevertheless, as we shall show in this paper, the mentioned result will change totally if the bending is analyzed — at the tree level — in the framework of higher-order gravity. Indeed, to first order, the deflection angle corresponding to the scattering of different quantum particles by the gravitational field mentioned above is not only spin dependent, it is also dispersive (energy-dependent). Consequently, it violates the classical equivalence principle (universality of free fall, or equality of inertial and gravitational masses) which is a nonlocal principle. However, contrary to popular belief, it is in agreement with the weak equivalence principle which is nothing but a statement about purely local effects. It is worthy of note that the weak equivalence principle encompasses the classical equivalence principle locally. We also show that the claim that there exists an incompatibility between quantum mechanics and the weak equivalence principle, is incorrect.
Leuchter, Russia Ha-Vinh; Gui, Laura; Poncet, Antoine; Hagmann, Cornelia; Lodygensky, Gregory Anton; Martin, Ernst; Koller, Brigitte; Darqué, Alexandra; Bucher, Hans Ulrich; Hüppi, Petra Susan
2014-08-27
Premature infants are at risk of developing encephalopathy of prematurity, which is associated with long-term neurodevelopmental delay. Erythropoietin was shown to be neuroprotective in experimental and retrospective clinical studies. To determine if there is an association between early high-dose recombinant human erythropoietin treatment in preterm infants and biomarkers of encephalopathy of prematurity on magnetic resonance imaging (MRI) at term-equivalent age. A total of 495 infants were included in a randomized, double-blind, placebo-controlled study conducted in Switzerland between 2005 and 2012. In a nonrandomized subset of 165 infants (n=77 erythropoietin; n=88 placebo), brain abnormalities were evaluated on MRI acquired at term-equivalent age. Participants were randomly assigned to receive recombinant human erythropoietin (3000 IU/kg; n=256) or placebo (n=239) intravenously before 3 hours, at 12 to 18 hours, and at 36 to 42 hours after birth. The primary outcome of the trial, neurodevelopment at 24 months, has not yet been assessed. The secondary outcome, white matter disease of the preterm infant, was semiquantitatively assessed from MRI at term-equivalent age based on an established scoring method. The resulting white matter injury and gray matter injury scores were categorized as normal or abnormal according to thresholds established in the literature by correlation with neurodevelopmental outcome. At term-equivalent age, compared with untreated controls, fewer infants treated with recombinant human erythropoietin had abnormal scores for white matter injury (22% [17/77] vs 36% [32/88]; adjusted risk ratio [RR], 0.58; 95% CI, 0.35-0.96), white matter signal intensity (3% [2/77] vs 11% [10/88]; adjusted RR, 0.20; 95% CI, 0.05-0.90), periventricular white matter loss (18% [14/77] vs 33% [29/88]; adjusted RR, 0.53; 95% CI, 0.30-0.92), and gray matter injury (7% [5/77] vs 19% [17/88]; adjusted RR, 0.34; 95% CI, 0.13-0.89). 
In an analysis of secondary outcomes of a randomized clinical trial of preterm infants, high-dose erythropoietin treatment within 42 hours after birth was associated with a reduced risk of brain injury on MRI. These findings require assessment in a randomized trial designed primarily to assess this outcome as well as investigation of the association with neurodevelopmental outcomes. clinicaltrials.gov Identifier: NCT00413946.
Improved Equivalent Linearization Implementations Using Nonlinear Stiffness Evaluation
NASA Technical Reports Server (NTRS)
Rizzi, Stephen A.; Muravyov, Alexander A.
2001-01-01
This report documents two new implementations of equivalent linearization for solving geometrically nonlinear random vibration problems of complicated structures. The implementations are given the acronym ELSTEP, for "Equivalent Linearization using a STiffness Evaluation Procedure." Both implementations of ELSTEP are fundamentally the same in that they use a novel nonlinear stiffness evaluation procedure to numerically compute otherwise inaccessible nonlinear stiffness terms from commercial finite element programs. The commercial finite element program MSC/NASTRAN (NASTRAN) was chosen as the core of ELSTEP. The FORTRAN implementation calculates the nonlinear stiffness terms and performs the equivalent linearization analysis outside of NASTRAN. The Direct Matrix Abstraction Program (DMAP) implementation performs these operations within NASTRAN. Both provide nearly identical results. Within each implementation, two error minimization approaches for the equivalent linearization procedure are available - force and strain energy error minimization. Sample results for a simply supported rectangular plate are included to illustrate the analysis procedure.
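The equivalent linearization iteration itself (though not the ELSTEP stiffness-evaluation machinery, which relies on a commercial finite element program) can be sketched for a single-mode Duffing oscillator under white noise. All parameter values below are illustrative:

```python
import math

# Force-error-minimizing equivalent linearization for a Duffing oscillator
#   x'' + (c/m) x' + (k/m) (x + eps x^3) = w(t)/m,
# driven by white noise of two-sided PSD S0. Assuming a Gaussian response,
# E[x^4] = 3 E[x^2]^2, so the equivalent linear stiffness is
#   k_eq = k (1 + 3 eps E[x^2]),
# while the linearized system's stationary displacement variance is
#   E[x^2] = pi S0 / (c k_eq)   (independent of the mass).
# Iterating the pair converges to the self-consistent k_eq.
c, k = 0.2, 1.0        # damping and linear stiffness (illustrative)
eps, S0 = 0.5, 0.05    # cubic-stiffness ratio and excitation PSD

k_eq = k
for _ in range(200):
    var = math.pi * S0 / (c * k_eq)        # variance of the linearized system
    k_new = k * (1.0 + 3.0 * eps * var)    # updated equivalent stiffness
    if abs(k_new - k_eq) < 1e-12:
        k_eq = k_new
        break
    k_eq = k_new
```

ELSTEP performs the multi-degree-of-freedom analogue of this fixed-point loop, with the nonlinear stiffness terms evaluated numerically from NASTRAN rather than given in closed form.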
Carey, M E; Mandalia, P K; Daly, H; Gray, L J; Hale, R; Martin Stacey, L; Taub, N; Skinner, T C; Stone, M; Heller, S; Khunti, K; Davies, M J
2014-11-01
To develop and test a format of delivery of diabetes self-management education by paired professional and lay educators. We conducted an equivalence trial with non-randomized participant allocation to a Diabetes Education and Self Management for Ongoing and Newly Diagnosed Type 2 diabetes (DESMOND) course, delivered in the standard format by two trained healthcare professional educators (to the control group) or by one trained lay educator and one professional educator (to the intervention group). A total of 260 people with Type 2 diabetes diagnosed within the previous 12 months were referred for self-management education as part of routine care and attended either a control or intervention format DESMOND course. The primary outcome measure was change in illness coherence score (derived from the Diabetes Illness Perception Questionnaire-Revised) between baseline and 4 months after attending education sessions. Secondary outcome measures included change in HbA1c level. The trial was conducted in four primary care organizations across England and Scotland. The 95% CI for the between-group difference in positive change in coherence scores was within the pre-set limits of equivalence (difference = 0.22, 95% CI -1.07 to 1.52). Equivalent changes related to secondary outcome measures were also observed, including equivalent reductions in HbA1c levels. Diabetes education delivered jointly by a trained lay person and a healthcare professional educator with the same educator role can provide equivalent patient benefits. This could provide a method that increases capacity, maintains quality and is cost-effective, while increasing access to self-management education. © 2014 The Authors. Diabetic Medicine © 2014 Diabetes UK.
Meuldijk, D; Carlier, I V E; van Vliet, I M; van Veen, T; Wolterbeek, R; van Hemert, A M; Zitman, F G
2016-03-01
Depressive and anxiety disorders contribute to a high disease burden. This paper investigates whether concise formats of cognitive behavioral- and/or pharmacotherapy are equivalent to longer standard care in the treatment of depressive and/or anxiety disorders in secondary mental health care. A pragmatic randomized controlled equivalence trial was conducted at five Dutch outpatient Mental Healthcare Centers (MHCs) of the Regional Mental Health Provider (RMHP) 'Rivierduinen'. Patients (aged 18-65 years) with a mild to moderate anxiety and/or depressive disorder were randomly allocated to concise or standard care. Data were collected at baseline, 3, 6 and 12 months by Routine Outcome Monitoring (ROM). Primary outcomes were the Brief Symptom Inventory (BSI) and the Web Screening Questionnaire (WSQ). We used Generalized Estimating Equations (GEE) to assess outcomes. Between March 2010 and December 2012, 182 patients were enrolled (n=89 standard care; n=93 concise care). Both intention-to-treat and per-protocol analyses demonstrated equivalence of concise care and standard care at all time points. Severity of illness was reduced, and both treatments improved patients' general health status and subdomains of quality of life. Moreover, in concise care, the beneficial effects started earlier. Concise care has the potential to be a feasible and promising alternative to longer standard secondary mental health care in the treatment of outpatients with a mild to moderate depressive and/or anxiety disorder. For future research, we recommend adhering more strictly to the concise treatment protocols to further explore the beneficial effects of the concise treatment. The study is registered in the Netherlands Trial Register, number NTR2590. Clinicaltrials.gov identifier: NCT01643642. Copyright © 2015 Elsevier Inc. All rights reserved.
Paint-only is equivalent to scrub-and-paint in preoperative preparation of abdominal surgery sites.
Ellenhorn, Joshua D I; Smith, David D; Schwarz, Roderich E; Kawachi, Mark H; Wilson, Timothy G; McGonigle, Kathryn F; Wagman, Lawrence D; Paz, I Benjamin
2005-11-01
Antiseptic preoperative skin site preparation is used to prepare the operative site before making a surgical incision. The goal of this preparation is a reduction in postoperative wound infection. The most straightforward technique necessary to achieve this goal remains controversial. A prospective randomized trial was designed to prove equivalency for two commonly used techniques of surgical skin site preparation. Two hundred thirty-four patients undergoing nonlaparoscopic abdominal operations were consented for the trial. Exclusion criteria included presence of active infection at the time of operation, neutropenia, history of skin reaction to iodine, or anticipated insertion of prosthetic material at the time of operation. Patients were randomized to receive either a vigorous 5-minute scrub with povidone-iodine soap, followed by absorption with a sterile towel, and a paint with aqueous povidone-iodine or surgical site preparation with a povidone-iodine paint only. The primary end point of the study was wound infection rate at 30 days, defined as presence of clinical signs of infection requiring therapeutic intervention. Patients randomized to the scrub-and-paint arm (n = 115) and the paint-only arm (n = 119) matched at baseline with respect to age, comorbidity, wound classification, mean operative time, placement of drains, prophylactic antibiotic use, and surgical procedure (all p > 0.09). Wound infection occurred in 12 (10%) scrub-and-paint patients, and 12 (10%) paint-only patients. Based on our predefined equivalency parameters, we conclude equivalence of infection rates between the two preparations. Preoperative preparation of the abdomen with a scrub with povidone-iodine soap followed by a paint with aqueous povidone-iodine can be abandoned in favor of a paint with aqueous povidone-iodine alone. This change will result in reductions in operative times and costs.
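The trial's conclusion rests on comparing two infection proportions, 12/115 vs. 12/119. As an illustration only (a simple Wald interval, not the trial's predefined equivalency analysis), the risk difference and its 95% CI can be computed as:

```python
import math

def risk_difference_ci(x1, n1, x2, n2, z=1.96):
    """Wald 95% CI for the difference of two proportions
    (x1 events in n1 subjects vs. x2 events in n2 subjects)."""
    p1, p2 = x1 / n1, x2 / n2
    diff = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return diff, diff - z * se, diff + z * se

# Scrub-and-paint: 12/115 infected; paint-only: 12/119 infected.
diff, lo, hi = risk_difference_ci(12, 115, 12, 119)
```

With identical counts in arms of nearly equal size, the point estimate is close to zero and the CI is roughly symmetric about it; whether that CI fits inside the trial's equivalency parameters is the formal test.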
Response of a tissue equivalent proportional counter to neutrons
NASA Technical Reports Server (NTRS)
Badhwar, G. D.; Robbins, D. E.; Gibbons, F.; Braby, L. A.
2002-01-01
The absorbed dose as a function of lineal energy was measured at the CERN-EC Reference-field Facility (CERF) using a 512-channel tissue equivalent proportional counter (TEPC), and the neutron dose equivalent response was evaluated. Although there are some differences, the measured dose equivalent is in agreement with that measured by the 16-channel HANDI tissue equivalent counter. A comparison of TEPC measurements with those made by a silicon solid-state detector for low linear energy transfer particles produced by the same beam is presented. The measurements show that about 4% of the dose equivalent is delivered by particles heavier than protons generated in the conducting tissue equivalent plastic. © 2002 Elsevier Science Ltd. All rights reserved.
Low-order black-box models for control system design in large power systems
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kamwa, I.; Trudel, G.; Gerin-Lajoie, L.
1996-02-01
The paper studies two multi-input multi-output (MIMO) procedures for the identification of low-order state-space models of power systems, by probing the network in open loop with low-energy pulses or random signals. Although such data may result from actual measurements, the development assumes simulated responses from a transient stability program, hence benefiting from the existing large base of stability models. While pulse data is processed using the eigensystem realization algorithm, the analysis of random responses is done by means of subspace identification methods. On a prototype Hydro-Quebec power system, including SVCs, DC lines, series compensation, and more than 1,100 buses, it is verified that the two approaches are equivalent only when strict requirements are imposed on the pulse length and magnitude. The 10th-order equivalent models derived by random-signal probing allow for effective tuning of decentralized power system stabilizers (PSSs) able to damp both local and very slow inter-area modes.
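The eigensystem realization algorithm mentioned above builds a state-space model from impulse-response (Markov) parameters via an SVD of block-Hankel matrices. A minimal SISO sketch under assumed small Hankel sizes and a synthetic second-order system (the paper's MIMO implementation on stability-program responses is more elaborate):

```python
import numpy as np

def era(markov, order, rows=10, cols=10):
    """Eigensystem Realization Algorithm (SISO sketch): identify (A, B, C)
    from impulse-response (Markov) parameters y[1], y[2], ..., where
    markov[k] = C A^k B (i.e., markov[0] is y[1])."""
    # Block-Hankel matrices built from the impulse response.
    H0 = np.array([[markov[i + j] for j in range(cols)] for i in range(rows)])
    H1 = np.array([[markov[i + j + 1] for j in range(cols)] for i in range(rows)])
    U, s, Vt = np.linalg.svd(H0)
    sqrt_s = np.sqrt(s[:order])
    Ur, Vr = U[:, :order], Vt[:order, :].T
    S_half, S_half_inv = np.diag(sqrt_s), np.diag(1.0 / sqrt_s)
    # Balanced realization: H0 = O*Ctr with O = Ur*S_half, Ctr = S_half*Vr.T
    A = S_half_inv @ Ur.T @ H1 @ Vr @ S_half_inv
    B = (S_half @ Vr.T)[:, :1]   # first column of the controllability matrix
    C = (Ur @ S_half)[:1, :]     # first row of the observability matrix
    return A, B, C

# Verify on a known 2nd-order discrete system (a lightly damped mode).
A0 = np.array([[0.9, 0.2], [-0.2, 0.9]])
B0 = np.array([[1.0], [0.0]])
C0 = np.array([[1.0, 1.0]])
true_markov = [(C0 @ np.linalg.matrix_power(A0, k) @ B0).item() for k in range(25)]
A, B, C = era(true_markov, order=2)
est_markov = [(C @ np.linalg.matrix_power(A, k) @ B).item() for k in range(25)]
```

Truncating the SVD at the chosen model order is what makes the realization "low-order": weakly observable/controllable dynamics fall below the retained singular values.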
NASA Astrophysics Data System (ADS)
Yeung, Chuck
2018-06-01
The assumption that the local order parameter is related to an underlying spatially smooth auxiliary field, u(r⃗,t), is a common feature in theoretical approaches to non-conserved order parameter phase separation dynamics. In particular, the ansatz that u(r⃗,t) is a Gaussian random field leads to predictions for the decay of the autocorrelation function which are consistent with observations, but distinct from predictions using alternative theoretical approaches. In this paper, the auxiliary field is obtained directly from simulations of the time-dependent Ginzburg-Landau equation in two and three dimensions. The results show that u(r⃗,t) is equivalent to the distance to the nearest interface. In two dimensions, the probability distribution, P(u), is well approximated as Gaussian except for small values of u/L(t), where L(t) is the characteristic length scale of the patterns. The behavior of P(u) in three dimensions is more complicated; the non-Gaussian region for small u/L(t) is much larger than that in two dimensions, but the tails of P(u) begin to approach a Gaussian form at intermediate times. However, at later times, the tails of the probability distribution appear to decay faster than a Gaussian distribution.
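A minimal numerical sketch of the procedure described above: evolve the non-conserved (model A) TDGL equation on a small periodic grid and take the auxiliary field u as the signed distance to the nearest interface. The grid size, step count, and brute-force distance computation are illustrative choices, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)

def tdgl_step(phi, dt=0.05):
    """One explicit Euler step of the non-conserved (model A) TDGL equation
    d(phi)/dt = laplacian(phi) + phi - phi**3 on a periodic square grid."""
    lap = (np.roll(phi, 1, 0) + np.roll(phi, -1, 0) +
           np.roll(phi, 1, 1) + np.roll(phi, -1, 1) - 4.0 * phi)
    return phi + dt * (lap + phi - phi**3)

def distance_to_interface(phi):
    """Brute-force distance from each site to the nearest sign change,
    i.e. |u| in the auxiliary-field picture (small grids only)."""
    n = phi.shape[0]
    # sites whose order parameter changes sign relative to a neighbor
    iface = np.argwhere((phi * np.roll(phi, 1, 0) < 0) |
                        (phi * np.roll(phi, 1, 1) < 0))
    ii, jj = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    pts = np.stack([ii.ravel(), jj.ravel()], axis=1)
    # periodic minimum-image distance to every interface site
    d = np.abs(pts[:, None, :] - iface[None, :, :])
    d = np.minimum(d, n - d)
    return np.sqrt((d**2).sum(-1)).min(axis=1).reshape(n, n)

phi = rng.uniform(-0.1, 0.1, size=(32, 32))   # disordered initial condition
for _ in range(200):                          # coarsen to t = 10
    phi = tdgl_step(phi)
u = np.sign(phi) * distance_to_interface(phi)
```

Histogramming u/L(t) over many runs, with L(t) the characteristic domain size, is then what allows the Gaussianity of P(u) to be tested.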
McCann, Melinda C; Trujillo, William A; Riordan, Susan G; Sorbet, Roy; Bogdanova, Natalia N; Sidhu, Ravinder S
2007-05-16
The next generation of biotechnology-derived products with the combined benefit of herbicide tolerance and insect protection (MON 88017) was developed to withstand feeding damage caused by the coleopteran pest corn rootworm and over-the-top applications of glyphosate, the active ingredient in Roundup herbicides. As a part of a larger safety and characterization assessment, MON 88017 was grown under field conditions at geographically diverse locations within the United States and Argentina during the 2002 and 2003-2004 field seasons, respectively, along with a near-isogenic control and other conventional corn hybrids for compositional assessment. Field trials were conducted using a randomized complete block design with three replication blocks at each site. Corn forage samples were harvested at the late dough/early dent stage, ground, and analyzed for the concentration of proximate constituents, fibers, and minerals. Samples of mature grain were harvested, ground, and analyzed for the concentration of proximate constituents, fiber, minerals, amino acids, fatty acids, vitamins, antinutrients, and secondary metabolites. The results showed that the forage and grain from MON 88017 are compositionally equivalent to forage and grain from control and conventional corn hybrids.
Note on the equivalence of a barotropic perfect fluid with a k-essence scalar field
DOE Office of Scientific and Technical Information (OSTI.GOV)
Arroja, Frederico; Sasaki, Misao
In this brief report, we obtain the necessary and sufficient condition for a class of noncanonical single scalar field models to be exactly equivalent to barotropic perfect fluids, under the assumption of an irrotational fluid flow. An immediate consequence of this result is that the nonadiabatic pressure perturbation in this class of scalar field systems vanishes exactly at all orders in perturbation theory and on all scales. The Lagrangian for this general class of scalar field models depends on both the kinetic term and the value of the field. However, after a field redefinition, it can be effectively cast in the form of a purely kinetic k-essence model.
Yonai, Shunsuke; Matsufuji, Naruhiro; Akahane, Keiichi
2018-04-23
The aim of this work was to estimate typical dose equivalents to out-of-field organs during carbon-ion radiotherapy (CIRT) with a passive beam for prostate cancer treatment. Additionally, sensitivity analyses of organ doses for various beam parameters and phantom sizes were performed. Because the CIRT out-of-field dose depends on the beam parameters, the typical values of those parameters were determined from statistical data on the target properties of patients who received CIRT at the Heavy-Ion Medical Accelerator in Chiba (HIMAC). Using these typical beam-parameter values, out-of-field organ dose equivalents during CIRT for typical prostate treatment were estimated by Monte Carlo simulations using the Particle and Heavy-Ion Transport Code System (PHITS) and the ICRP reference phantom. The results showed that the dose decreased with distance from the target, ranging from 116 mSv in the testes to 7 mSv in the brain. The organ dose equivalents per treatment dose were lower than those either in 6-MV intensity-modulated radiotherapy or in brachytherapy with an Ir-192 source for organs within 40 cm of the target. Sensitivity analyses established that the differences from typical values were within ∼30% for all organs, except the sigmoid colon. The typical out-of-field organ dose equivalents during passive-beam CIRT were shown. The low sensitivity of the dose equivalent in organs farther than 20 cm from the target indicated that individual dose assessments required for retrospective epidemiological studies may be limited to organs around the target in cases of passive-beam CIRT for prostate cancer. Copyright © 2018 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
Fritz, Ann-Kristina; Amrein, Irmgard; Wolfer, David P
2017-09-01
Although most nervous system diseases affect women and men differentially, most behavioral studies using mouse models do not include subjects of both sexes. Many researchers worry that data of female mice may be unreliable due to the estrous cycle. Here, we retrospectively evaluated sex effects on coefficient of variation (CV) in 5,311 mice which had performed the same place navigation protocol in the water-maze and in 4,554 mice tested in the same open field arena. Confidence intervals for Cohen's d as measure of effect size were computed and tested for equivalence with 0.2 as equivalence margin. Despite the large sample size, only few behavioral parameters showed a significant sex effect on CV. Confidence intervals of effect size indicated that CV was either equivalent or showed a small sex difference at most, accounting for less than 2% of total group to group variation of CV. While female mice were potentially slightly more variable in water-maze acquisition and in the open field, males tended to perform less reliably in the water-maze probe trial. In addition to evaluating variability, we also directly compared mean performance of female and male mice and found them to be equivalent in both water-maze place navigation and open field exploration. Our data confirm and extend other large scale studies in demonstrating that including female mice in experiments does not cause a relevant increase of data variability. Our results make a strong case for including mice of both sexes whenever open field or water-maze are used in preclinical research. PMID: 28654717. © 2017 The Authors. American Journal of Medical Genetics Part C Published by Wiley Periodicals, Inc.
A Fock space representation for the quantum Lorentz gas
NASA Astrophysics Data System (ADS)
Maassen, H.; Tip, A.
1995-02-01
A Fock space representation is given for the quantum Lorentz gas, i.e., for random Schrödinger operators of the form H(ω) = p² + V_ω = p² + ∑ⱼ φ(x − xⱼ(ω)), acting in H = L²(ℝᵈ), with Poisson-distributed xⱼ's. An operator H is defined in K = H⊗P = H⊗L²(Ω, P(dω)) = L²(Ω, P(dω); H) by the action of H(ω) on its fibers in a direct integral decomposition. The stationarity of the Poisson process allows a unitarily equivalent description in terms of a new family {H(k) | k ∈ ℝᵈ}, where each H(k) acts in P [A. Tip, J. Math. Phys. 35, 113 (1994)]. The space P is then unitarily mapped upon the symmetric Fock space over L²(ℝᵈ, ρ dx), with ρ the intensity of the Poisson process (the average number of points xⱼ per unit volume; the scatterer density), and the equivalent of H(k) is determined. Averages now become vacuum expectation values, and a further unitary transformation (removing ρ in ρ dx) is made which leaves the former invariant. The resulting operator H_F(k) has an interesting structure: on the nth Fock layer we encounter a single particle moving in the field of n scatterers, and the randomness now appears in the coefficient √ρ in a coupling term connecting neighboring Fock layers. We also give a simple direct self-adjointness proof for H_F(k), based upon Nelson's commutator theorem. Restriction to a finite number of layers (a kind of low scatterer density approximation) still gives nontrivial results, as is demonstrated by considering an example.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gilson, Erik P.; Davidson, Ronald C.; Efthimion, Philip C.
Transverse dipole and quadrupole modes have been excited in a one-component cesium ion plasma trapped in the Paul Trap Simulator Experiment (PTSX) in order to characterize their properties and understand the effect of their excitation on equivalent long-distance beam propagation. The PTSX device is a compact laboratory Paul trap that simulates the transverse dynamics of a long, intense charge bunch propagating through an alternating-gradient transport system by putting the physicist in the beam's frame of reference. A pair of arbitrary function generators was used to apply trapping voltage waveform perturbations with a range of frequencies and, by changing which electrodes were driven with the perturbation, with either a dipole or quadrupole spatial structure. The results presented in this paper explore the dependence of the perturbation voltage's effect on the perturbation duration and amplitude. Perturbations were also applied that simulate the effect of random lattice errors that exist in an accelerator with quadrupole magnets that are misaligned or have variance in their field strength. The experimental results quantify the growth in the equivalent transverse beam emittance that occurs due to the applied noise and demonstrate that the random lattice errors interact with the trapped plasma through the plasma's internal collective modes. Coherent periodic perturbations were applied to simulate the effects of magnet errors in circular machines such as storage rings. The trapped one-component plasma is strongly affected when the perturbation frequency is commensurate with a plasma mode frequency. The experimental results, which help to understand the physics of quiescent intense beam propagation over large distances, are compared with analytic models.
A restricted proof that the weak equivalence principle implies the Einstein equivalence principle
NASA Technical Reports Server (NTRS)
Lightman, A. P.; Lee, D. L.
1973-01-01
Schiff has conjectured that the weak equivalence principle (WEP) implies the Einstein equivalence principle (EEP). A proof is presented of Schiff's conjecture, restricted to: (1) test bodies made of electromagnetically interacting point particles, that fall from rest in a static, spherically symmetric gravitational field; (2) theories of gravity within a certain broad class - a class that includes almost all complete relativistic theories that have been found in the literature, but with each theory truncated to contain only point particles plus electromagnetic and gravitational fields. The proof shows that every nonmetric theory in the class (every theory that violates EEP) must violate WEP. A formula is derived for the magnitude of the violation. It is shown that WEP is a powerful theoretical and experimental tool for constraining the manner in which gravity couples to electromagnetism in gravitation theories.
'Equivalence' and the translation and adaptation of health-related quality of life questionnaires.
Herdman, M; Fox-Rushby, J; Badia, X
1997-04-01
The increasing use of health-related quality of life (HRQOL) questionnaires in multinational studies has resulted in the translation of many existing measures. Guidelines for translation have been published, and there has been some discussion of how to achieve and assess equivalence between source and target questionnaires. Our reading in this area had led us, however, to the conclusion that different types of equivalence were not clearly defined, and that a theoretical framework for equivalence was lacking. To confirm this we reviewed definitions of equivalence in the HRQOL literature on the use of generic questionnaires in multicultural settings. The literature review revealed: definitions of 19 different types of equivalence; vague or conflicting definitions, particularly in the case of conceptual equivalence; and the use of many redundant terms. We discuss these findings in the light of a framework adapted from cross-cultural psychology for describing three different orientations to cross-cultural research: absolutism, universalism and relativism. We suggest that the HRQOL field has generally adopted an absolutist approach and that this may account for some of the confusion in this area. We conclude by suggesting that there is an urgent need for a standardized terminology within the HRQOL field, by offering a standard definition of conceptual equivalence, and by suggesting that the adoption of a universalist orientation would require substantial changes to guidelines and more empirical work on the conceptualization of HRQOL in different cultures.
A review on equivalent magnetic noise of magnetoelectric laminate sensors
Wang, Y. J.; Gao, J. Q.; Li, M. H.; Shen, Y.; Hasanyan, D.; Li, J. F.; Viehland, D.
2014-01-01
Since the turn of the millennium, multi-phase magnetoelectric (ME) composites have been subject to attention and development, and giant ME effects have been found in laminate composites of piezoelectric and magnetostrictive layers. From an application perspective, the practical usefulness of a magnetic sensor is determined not only by the output signal of the sensor in response to an incident magnetic field, but also by the equivalent magnetic noise generated in the absence of such an incident field. Here, a short review of developments in equivalent magnetic noise reduction for ME sensors is presented. This review focuses on internal noise, the analysis of the noise contributions and a summary of noise reduction strategies. Furthermore, external vibration noise is also discussed. The review concludes with an outlook on future possibilities and scientific challenges in the field of ME magnetic sensors. PMID:24421380
ERIC Educational Resources Information Center
Rinehart, Nicole J.; Bradshaw, John L.; Moss, Simon A.; Brereton, Avril V.; Tonge, Bruce J.
2006-01-01
The repetitive, stereotyped and obsessive behaviours, which are core diagnostic features of autism, are thought to be underpinned by executive dysfunction. This study examined executive impairment in individuals with autism and Asperger's disorder using a verbal equivalent of an established pseudo-random number generating task. Different patterns…
Equivalent source modeling of the core magnetic field using magsat data
NASA Technical Reports Server (NTRS)
Mayhew, M. A.; Estes, R. H.
1983-01-01
Experiments are carried out on fitting the main field using different numbers of equivalent sources arranged in equal area at fixed radii at and inside the core-mantle boundary. Fixing the radius for a given series of runs avoids the convergence problems that result from the extreme nonlinearity of the problem when dipole positions are allowed to vary. Results are presented from a comparison between this approach and the standard spherical harmonic approach for modeling the main field in terms of accuracy and computational efficiency. The modeling of the main field with an equivalent dipole representation is found to be comparable to the standard spherical harmonic approach in accuracy. The 32 deg dipole density (42 dipoles) corresponds approximately to an eleventh degree/order spherical harmonic expansion (143 parameters), whereas the 21 deg dipole density (92 dipoles) corresponds approximately to a seventeenth degree and order expansion (323 parameters). It is pointed out that fixing the dipole positions results in rapid convergence of the dipole solutions for single-epoch models.
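The parameter counts quoted above follow from simple bookkeeping: a spherical harmonic expansion to degree and order n_max carries (n_max + 1)² − 1 Gauss coefficients, while each equivalent dipole contributes three moment components when its position is held fixed (six if positions are also solved for). A quick check:

```python
def sh_parameters(n_max):
    """Number of Gauss coefficients in an internal spherical harmonic
    expansion to degree and order n_max: sum over n of (2n+1) = (n_max+1)^2 - 1."""
    return (n_max + 1)**2 - 1

def dipole_parameters(n_dipoles, fixed_positions=True):
    """Free parameters per equivalent dipole: 3 moment components with
    positions held fixed, 6 if positions are also estimated."""
    return n_dipoles * (3 if fixed_positions else 6)

counts = {n: sh_parameters(n) for n in (11, 17)}  # the two expansions quoted
```

The comparison also shows why fixing positions matters: 42 fixed dipoles need only 126 parameters to match an expansion requiring 143 coefficients, while freeing the positions would more than double the unknowns and reintroduce the nonlinearity.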
NASA Astrophysics Data System (ADS)
Fujibuchi, Toshioh; Kodaira, Satoshi; Sawaguchi, Fumiya; Abe, Yasuyuki; Obara, Satoshi; Yamaguchi, Masae; Kawashima, Hajime; Kitamura, Hisashi; Kurano, Mieko; Uchihori, Yukio; Yasuda, Nakahiro; Koguchi, Yasuhiro; Nakajima, Masaru; Kitamura, Nozomi; Sato, Tomoharu
2015-04-01
We measured the recoil charged particles from secondary neutrons produced by photonuclear reactions in a water phantom irradiated by a 10-MV photon beam from medical linacs. The absorbed dose and the dose equivalent were evaluated from the linear energy transfer (LET) spectrum of recoils using the CR-39 plastic nuclear track detector (PNTD), based on well-established methods in the field of space radiation dosimetry. The contributions and spatial distributions of the secondary neutron dose and neutron dose equivalent in the phantom during nominal photon exposures were thereby quantified. The neutron dose equivalent normalized to the photon-absorbed dose was 0.261 mSv/100 MU at a source-to-chamber distance of 90 cm. The dose equivalent was highest at the surface and was attenuated to less than 10% at 5 cm from the surface. The dose contribution of the high-LET component (⩾100 keV/μm) increased with the depth in water, resulting in an increase of the quality factor. The CR-39 PNTD is a powerful tool that can be used to systematically measure secondary neutron dose distributions in a water phantom from in-field to out-of-field high-intensity photon beams.
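The conversion from a measured LET spectrum to dose equivalent folds the absorbed dose in each LET bin with a quality factor Q(L). A sketch using the ICRP Publication 60 Q(L) relationship (the spectrum values below are illustrative, not the paper's measurements):

```python
import math

def icrp60_quality_factor(L):
    """ICRP Publication 60 quality factor Q(L), with L the unrestricted
    LET in water in keV/um."""
    if L < 10.0:
        return 1.0
    if L <= 100.0:
        return 0.32 * L - 2.2
    return 300.0 / math.sqrt(L)

def dose_equivalent(let_spectrum):
    """Fold an absorbed-dose-vs-LET spectrum [(L_i, D_i in Gy), ...] with
    Q(L) to obtain the dose equivalent in Sv: H = sum_i Q(L_i) * D_i."""
    return sum(icrp60_quality_factor(L) * D for L, D in let_spectrum)

# Toy spectrum: mostly low-LET dose plus small high-LET recoil components.
H = dose_equivalent([(0.5, 1.0e-3), (50.0, 1.0e-5), (150.0, 5.0e-6)])
```

This folding is also why the quality factor rises with depth in the abstract above: as the ⩾100 keV/μm component grows, each unit of absorbed dose carries a larger Q(L) weight.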
NASA Astrophysics Data System (ADS)
Petric, Martin Peter
This thesis describes the development and implementation of a novel method for the dosimetric verification of intensity modulated radiation therapy (IMRT) fields with several advantages over current techniques. Through the use of a tissue equivalent plastic scintillator sheet viewed by a charge-coupled device (CCD) camera, this method provides a truly tissue equivalent dosimetry system capable of efficiently and accurately performing field-by-field verification of IMRT plans. This work was motivated by an initial study comparing two IMRT treatment planning systems. The clinical functionality of BrainLAB's BrainSCAN and Varian's Helios IMRT treatment planning systems were compared in terms of implementation and commissioning, dose optimization, and plan assessment. Implementation and commissioning revealed differences in the beam data required to characterize the beam prior to use with the BrainSCAN system requiring higher resolution data compared to Helios. This difference was found to impact on the ability of the systems to accurately calculate dose for highly modulated fields, with BrainSCAN being more successful than Helios. The dose optimization and plan assessment comparisons revealed that while both systems use considerably different optimization algorithms and user-control interfaces, they are both capable of producing substantially equivalent dose plans. The extensive use of dosimetric verification techniques in the IMRT treatment planning comparison study motivated the development and implementation of a novel IMRT dosimetric verification system. The system consists of a water-filled phantom with a tissue equivalent plastic scintillator sheet built into the top surface. Scintillation light is reflected by a plastic mirror within the phantom towards a viewing window where it is captured using a CCD camera. Optical photon spread is removed using a micro-louvre optical collimator and by deconvolving a glare kernel from the raw images. 
Characterization of this new dosimetric verification system indicates excellent dose response and spatial linearity, high spatial resolution, and good signal uniformity and reproducibility. Dosimetric results from square fields, dynamic wedged fields, and a 7-field head and neck IMRT treatment plan indicate good agreement with film dosimetry distributions. Efficiency analysis of the system reveals a 50% reduction in time requirements for field-by-field verification of a 7-field IMRT treatment plan compared to film dosimetry.
Bays, Harold E; Chen, Erluo; Tomassini, Joanne E; McPeters, Gail; Polis, Adam B; Triscari, Joseph
2015-04-01
Co-administration of ezetimibe with atorvastatin is a generally well-tolerated treatment option that reduces LDL-C levels and improves other lipids with greater efficacy than doubling the atorvastatin dose. The objective of the study was to demonstrate the equivalent lipid-modifying efficacy of fixed-dose combination (FDC) ezetimibe/atorvastatin compared with the component agents co-administered individually in support of regulatory filing. Two randomized, 6-week, double-blind cross-over trials compared the lipid-modifying efficacy of ezetimibe/atorvastatin 10/20 mg (n = 353) or 10/40 mg (n = 280) vs. separate co-administration of ezetimibe 10 mg plus atorvastatin 20 mg (n = 346) or 40 mg (n = 280), respectively, in hypercholesterolemic patients. Percent changes from baseline in LDL-C (primary endpoint) and other lipids (secondary endpoints) were assessed by analysis of covariance; triglycerides were evaluated by longitudinal-data analysis. Expected differences between FDC and the corresponding co-administered doses were predicted from a dose-response relationship model; sample size was estimated given the expected difference and equivalence margins (±4%). LDL-C-lowering equivalence was based on 97.5% expanded confidence intervals (CI) for the difference contained within the margins; equivalence margins for other lipids were not prespecified. Ezetimibe/atorvastatin FDC 10/20 mg was equivalent to co-administered ezetimibe+atorvastatin 20 mg in reducing LDL-C levels (54.0% vs. 53.8%) as was FDC 10/40 mg and ezetimibe+atorvastatin 40 mg (58.9% vs. 58.7%), as predicted by the model. Changes in other lipids were consistent with equivalence (97.5% expanded CIs <±3%, included 0); triglyceride changes varied more. All treatments were generally well tolerated. Hypercholesterolemic patients administered ezetimibe/atorvastatin 10/20 and 10/40 mg FDC had equivalent LDL-C lowering. 
This FDC formulation proved to be an efficacious and generally well-tolerated lipid-lowering therapy. © 2014 Société Française de Pharmacologie et de Thérapeutique.
The Equivalence Principle Experiment for Spin-Polarized Bodies
NASA Astrophysics Data System (ADS)
Hsieh, Chang-Huain; Jen, Pin-Yun; Ko, Kai-Li; Li, Keh-Yann; Ni, Wei-Tou; Pan, Sheau-Shi; Shih, Yung-Hui; Tyan, Rong-Jung
We perform an equivalence principle experiment for a magnetically shielded spin-polarized body of Dy₆Fe₂₃. We use a single-pan mass comparator to compare the spin-polarized body with an unpolarized group of masses. The equivalence of spin-up and spin-down positions is good to (1.1 ± 7.8) × 10⁻⁹ in the Earth's gravitational field.
Herskind, Carsten; Griebel, Jürgen; Kraus-Tiefenbacher, Uta; Wenz, Frederik
2008-12-01
Accelerated partial breast radiotherapy with low-energy photons from a miniature X-ray machine is undergoing a randomized clinical trial (Targeted Intra-operative Radiation Therapy [TARGIT]) in a selected subgroup of patients treated with breast-conserving surgery. The steep radial dose gradient implies reduced tumor cell control with increasing depth in the tumor bed. The purpose was to compare the expected risk of local recurrence in this nonuniform radiation field with that after conventional external beam radiotherapy. The relative biologic effectiveness of low-energy photons was modeled using the linear-quadratic formalism, including repair of sublethal lesions during protracted irradiation. Doses of 50-kV X-rays (Intrabeam) were converted to equivalent fractionated doses, EQD2, as a function of depth in the tumor bed. The probability of local control was estimated using a logistic dose-response relationship fitted to clinical data from fractionated radiotherapy. The model calculations show that, for a cohort of patients, the increase in local control in the high-dose region near the applicator partly compensates for the reduction of local control at greater distances. Thus a "sphere of equivalence" exists within which the risk of recurrence is equal to that after external fractionated radiotherapy. The spatial distribution of recurrences inside this sphere will be different from that after conventional radiotherapy. A novel target volume concept is presented here. The incidence of recurrences arising in the tumor bed around the excised tumor will test the validity of this concept and the efficacy of the treatment. Recurrences elsewhere will have implications for the rationale of TARGIT.
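The linear-quadratic conversion to equivalent 2-Gy fractions underlying the "sphere of equivalence" can be sketched as below; the α/β value and the depth-dose pairs are illustrative assumptions, and the repair correction for protracted delivery used in the paper is omitted:

```python
def eqd2(dose_gy, alpha_beta=10.0):
    """Equivalent dose in 2-Gy fractions for a single acute dose, via the
    linear-quadratic model: EQD2 = D * (D + a/b) / (2 + a/b).  Repair of
    sublethal damage during protracted delivery is neglected here."""
    return dose_gy * (dose_gy + alpha_beta) / (2.0 + alpha_beta)

# illustrative fall-off of a 50-kV source with depth in the tumor bed
for depth_cm, dose in [(0.0, 20.0), (0.5, 10.0), (1.0, 5.0)]:
    print(f"depth {depth_cm} cm: {dose} Gy -> EQD2 {eqd2(dose):.1f} Gy")
```

The single large acute dose near the applicator maps to a much larger EQD2, which is why local control there can partly offset the fall-off at depth.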
NASA Astrophysics Data System (ADS)
Smith, J. Torquil; Morrison, H. Frank; Doolittle, Lawrence R.; Tseng, Hung-Wen
2007-03-01
Equivalent dipole polarizabilities are a succinct way to summarize the inductive response of an isolated conductive body at distances greater than the scale of the body. Their estimation requires measurement of secondary magnetic fields due to currents induced in the body by time-varying magnetic fields in at least three linearly independent (e.g., orthogonal) directions. Secondary fields due to an object are typically orders of magnitude smaller than the primary inducing fields near the primary field sources (transmitters). Receiver coils may be oriented orthogonal to primary fields from one or two transmitters, nulling their response to those fields, but simultaneously nulling to fields of additional transmitters is problematic. If transmitter coils are constructed symmetrically with respect to inversion in a point, their magnetic fields are symmetric with respect to that point. If receiver coils are operated in pairs symmetric with respect to inversion in the same point, then their differenced output is insensitive to the primary fields of any symmetrically constructed transmitters, allowing nulling to three (or more) transmitters. With a sufficient number of receiver pairs, object equivalent dipole polarizabilities can be estimated in situ from measurements at a single instrument siting, eliminating effects of inaccurate instrument location on polarizability estimates. The method is illustrated with data from a multi-transmitter multi-receiver system with primary field nulling through differenced receiver pairs, interpreted in terms of principal equivalent dipole polarizabilities as a function of time.
Prabhu, Malavika; Clapp, Mark A; McQuaid-Hanson, Emily; Ona, Samsiya; O'Donnell, Taylor; James, Kaitlyn; Bateman, Brian T; Wylie, Blair J; Barth, William H
2018-07-01
To evaluate whether a liposomal bupivacaine incisional block decreases postoperative pain and represents an opioid-minimizing strategy after scheduled cesarean delivery. In a single-blind, randomized controlled trial among opioid-naive women undergoing cesarean delivery, liposomal bupivacaine or placebo was infiltrated into the fascia and skin at the surgical site, before fascial closure. Using an 11-point numeric rating scale, the primary outcome was pain score with movement at 48 hours postoperatively. A sample size of 40 women per group was needed to detect a 1.5-point reduction in pain score in the intervention group. Pain scores and opioid consumption, in oral morphine milligram equivalents, at 48 hours postoperatively were summarized as medians (interquartile range) and compared using the Wilcoxon rank-sum test. Between March and September 2017, 249 women were screened, 103 women enrolled, and 80 women were randomized. One woman in the liposomal bupivacaine group was excluded after randomization as a result of a vertical skin incision, leaving 39 patients in the liposomal bupivacaine group and 40 in the placebo group. Baseline characteristics between groups were similar. The median (interquartile range) pain score with movement at 48 hours postoperatively was 4 (2-5) in the liposomal bupivacaine group and 3.5 (2-5.5) in the placebo group (P=.72). The median (interquartile range) opioid use was 37.5 (7.5-60) morphine milligram equivalents in the liposomal bupivacaine group and 37.5 (15-75) morphine milligram equivalents in the placebo group during the first 48 hours postoperatively (P=.44). Compared with placebo, a liposomal bupivacaine incisional block at the time of cesarean delivery resulted in similar postoperative pain scores in the first 48 hours postoperatively. ClinicalTrials.gov, NCT02959996.
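The primary comparison above (median pain scores compared via the Wilcoxon rank-sum test) can be reproduced in outline with SciPy, whose `mannwhitneyu` implements the unpaired two-sample rank-sum test; the scores below are hypothetical, not trial data:

```python
import numpy as np
from scipy.stats import mannwhitneyu

# hypothetical 48-h pain scores with movement on the 11-point NRS
bupivacaine = np.array([4, 2, 5, 3, 4, 2, 5, 4, 3, 5])
placebo = np.array([3, 5, 2, 4, 6, 3, 2, 5, 4, 3])

# Mann-Whitney U is the unpaired two-sample Wilcoxon rank-sum test
stat, p = mannwhitneyu(bupivacaine, placebo, alternative="two-sided")

def median_iqr(a):
    """Median and interquartile range, as summarized in the abstract."""
    return np.median(a), tuple(np.percentile(a, [25, 75]))
```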
Wik, Lars; Olsen, Jan-Aage; Persse, David; Sterz, Fritz; Lozano, Michael; Brouwer, Marc A; Westfall, Mark; Souders, Chris M; Malzer, Reinhard; van Grunsven, Pierre M; Travis, David T; Whitehead, Anne; Herken, Ulrich R; Lerner, E Brooke
2014-06-01
To compare integrated automated load-distributing band CPR (iA-CPR) with high-quality manual CPR (M-CPR) to determine equivalence, superiority, or inferiority in survival to hospital discharge. Between March 5, 2009 and January 11, 2011, a randomized, unblinded, controlled group sequential trial of adult out-of-hospital cardiac arrests of presumed cardiac origin was conducted at three US and two European sites. After EMS providers initiated manual compressions, patients were randomized to receive either iA-CPR or M-CPR. Patients were followed until all were discharged alive or died. The primary outcome, survival to hospital discharge, was analyzed adjusting for covariates (age, witnessed arrest, initial cardiac rhythm, enrollment site) and interim analyses. CPR quality and protocol adherence were monitored (CPR fraction) electronically throughout the trial. Of 4753 randomized patients, 522 (11.0%) met post-enrollment exclusion criteria. Therefore, 2099 (49.6%) received iA-CPR and 2132 (50.4%) M-CPR. Sustained ROSC (emergency department admittance), 24-h survival, and hospital discharge (unknown for 12 cases) for iA-CPR compared to M-CPR were 600 (28.6%) vs. 689 (32.3%), 456 (21.8%) vs. 532 (25.0%), and 196 (9.4%) vs. 233 (11.0%) patients, respectively. The adjusted odds ratio of survival to hospital discharge for iA-CPR compared to M-CPR was 1.06 (95% CI 0.83-1.37), meeting the criteria for equivalence. The 20-min CPR fraction was 80.4% for iA-CPR and 80.2% for M-CPR. Compared to high-quality M-CPR, iA-CPR resulted in statistically equivalent survival to hospital discharge. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.
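For orientation, a crude (unadjusted) odds ratio with a Wald interval can be computed from the discharge counts reported above; the trial's published figure of 1.06 (0.83-1.37) was adjusted for covariates and interim analyses, so this sketch will not reproduce it:

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Crude odds ratio for a 2x2 table [[a, b], [c, d]] with a Wald
    confidence interval on the log scale (no covariate adjustment)."""
    or_ = (a / b) / (c / d)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# discharged alive vs. not, from the counts in the abstract
# (the 12 patients with unknown discharge status are ignored here)
or_, lo, hi = odds_ratio_ci(196, 2099 - 196, 233, 2132 - 233)
```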
TRANSFER OF AVERSIVE RESPONDENT ELICITATION IN ACCORDANCE WITH EQUIVALENCE RELATIONS
Valverde, Miguel Rodríguez; Luciano, Carmen; Barnes-Holmes, Dermot
2009-01-01
The present study investigates the transfer of aversively conditioned respondent elicitation through equivalence classes, using skin conductance as the measure of conditioning. The first experiment is an attempt to replicate Experiment 1 in Dougher, Augustson, Markham, Greenway, and Wulfert (1994), with different temporal parameters in the aversive conditioning procedure employed. Match-to-sample procedures were used to teach 17 participants two 4-member equivalence classes. Then, one member of one class was paired with electric shock and one member of the other class was presented without shock. The remaining stimuli from each class were presented in transfer tests. Unlike the findings in the original study, transfer of conditioning was not achieved. In Experiment 2, similar procedures were used with 30 participants, although several modifications were introduced (formation of five-member classes, direct conditioning with several elements of each class, random sequences of stimulus presentation in transfer tests, reversal in aversive conditioning contingencies). More than 80% of participants who had shown differential conditioning also showed the transfer of function effect. Moreover, this effect was replicated within subjects for 3 participants. This is the first demonstration of the transfer of aversive respondent elicitation through stimulus equivalence classes with the presentation of transfer test trials in random order. The latter prevents the possibility that transfer effects are an artefact of transfer test presentation order. PMID:20119523
Conductivity of disordered 2d binodal Dirac electron gas: effect of internode scattering
NASA Astrophysics Data System (ADS)
Sinner, Andreas; Ziegler, Klaus
2018-07-01
We study the dc conductivity of a weakly disordered 2d Dirac electron gas with two bands and two spectral nodes, employing a field theoretical version of the Kubo-Greenwood conductivity formula. In this paper, we are concerned with how the internode scattering affects the conductivity. We use and compare two established techniques for treating the disorder scattering: perturbation theory, in which ladder and maximally crossed diagrams are summed up, and the functional integral approach. Both turn out to be entirely equivalent. For a large number of random potential configurations we have found only two different conductivity scenarios. Both scenarios appear independently of whether the disorder does or does not create internode scattering. In particular, we do not confirm the conjecture that internode scattering leads to Anderson localisation.
Combining remotely sensed and other measurements for hydrologic areal averages
NASA Technical Reports Server (NTRS)
Johnson, E. R.; Peck, E. L.; Keefer, T. N.
1982-01-01
A method is described for combining measurements of hydrologic variables of various sampling geometries and measurement accuracies to produce an estimated mean areal value over a watershed and a measure of the accuracy of the mean areal value. The method provides a means to integrate measurements from conventional hydrological networks and remote sensing. The resulting areal averages can be used to enhance a wide variety of hydrological applications including basin modeling. The correlation area method assigns weights to each available measurement (point, line, or areal) based on the area of the basin most accurately represented by the measurement. The statistical characteristics of the accuracy of the various measurement technologies and of the random fields of the hydrologic variables used in the study (water equivalent of the snow cover and soil moisture) required to implement the method are discussed.
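A simplified version of such a weighted combination, assuming independent measurement errors and weights proportional to the area each measurement represents (the correlation area method derives its weights from the correlation structure instead), might look like:

```python
import numpy as np

def areal_average(values, areas, error_sd):
    """Area-weighted mean of point/line/areal measurements and its error
    standard deviation, assuming independent measurement errors (a
    simplification of correlation-area weighting)."""
    w = np.asarray(areas, float)
    w /= w.sum()                                   # normalized area weights
    mean = np.dot(w, values)
    sd = np.sqrt(np.sum((w * np.asarray(error_sd, float)) ** 2))
    return mean, sd

# snow water equivalent (cm): gauge point, flight line, satellite areal estimate
mean, sd = areal_average(values=[12.0, 10.5, 11.2],
                         areas=[5.0, 20.0, 75.0],   # km^2 represented
                         error_sd=[0.5, 1.0, 2.0])
```

More accurate measurements that represent small areas get small weights, so the combined error reflects both sampling geometry and instrument accuracy.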
Testing Moderating Detection Systems with ²⁵²Cf-Based Reference Neutron Fields
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hertel, Nolan E.; Sweezy, Jeremy; Sauber, Jeremiah S.
Calibration measurements were carried out on a probe designed to measure ambient dose equivalent in accordance with ICRP Pub. 60 recommendations. It consists of a cylindrical ³He proportional counter surrounded by a 25-cm-diameter spherical polyethylene moderator. Its neutron response is optimized for dose-rate measurements of neutrons between thermal energies and 20 MeV. The instrument was used to measure the dose rate in four separate neutron fields: unmoderated ²⁵²Cf, D₂O-moderated ²⁵²Cf, polyethylene-moderated ²⁵²Cf, and a WEP neutron howitzer with ²⁵²Cf at its center. Dose equivalent measurements were performed at source-detector centerline distances from 50 to 200 cm. The ratios of air-scatter- and room-return-corrected ambient dose equivalent rates to ambient dose equivalent rates calculated with the code MCNP are tabulated.
Equivalence principle and quantum mechanics: quantum simulation with entangled photons.
Longhi, S
2018-01-15
Einstein's equivalence principle (EP) states the complete physical equivalence of a gravitational field and corresponding inertial field in an accelerated reference frame. However, to what extent the EP remains valid in non-relativistic quantum mechanics is a controversial issue. To avoid violation of the EP, Bargmann's superselection rule forbids a coherent superposition of states with different masses. Here we suggest a quantum simulation of non-relativistic Schrödinger particle dynamics in non-inertial reference frames, which is based on the propagation of polarization-entangled photon pairs in curved and birefringent optical waveguides and Hong-Ou-Mandel quantum interference measurement. The photonic simulator can emulate superposition of mass states, which would lead to violation of the EP.
A simple demonstration when studying the equivalence principle
NASA Astrophysics Data System (ADS)
Mayer, Valery; Varaksina, Ekaterina
2016-06-01
The paper proposes a lecture experiment that can be demonstrated when studying the equivalence principle formulated by Albert Einstein. The demonstration consists of creating stroboscopic photographs of a ball moving along a parabola in Earth's gravitational field. In the first experiment, a camera is stationary relative to Earth's surface. In the second, the camera falls freely downwards with the ball, allowing students to see that the ball moves uniformly and rectilinearly relative to the frame of reference of the freely falling camera. The equivalence principle explains this result, as it is always possible to propose an inertial frame of reference for a small region of a gravitational field, where space-time effects of curvature are negligible.
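The demonstration can be checked numerically: subtracting the free-falling camera's position from the ball's lab-frame parabola leaves uniform rectilinear motion. The launch parameters below are illustrative:

```python
import numpy as np

g = 9.8                              # m/s^2
t = np.linspace(0.0, 0.5, 6)         # strobe flash instants (s)
x = 2.0 * t                          # ball thrown horizontally at 2 m/s
y = 1.5 - 0.5 * g * t ** 2           # parabola in the Earth frame

# the camera is released together with the ball and falls freely
y_camera = 1.5 - 0.5 * g * t ** 2
y_rel = y - y_camera                 # ball's height in the camera frame

# relative to the falling camera the motion is uniform and rectilinear:
# y_rel stays constant while x advances by equal steps per flash
```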
Equivalent circuit simulation of HPEM-induced transient responses at nonlinear loads
NASA Astrophysics Data System (ADS)
Kotzev, Miroslav; Bi, Xiaotang; Kreitlow, Matthias; Gronwald, Frank
2017-09-01
In this paper the equivalent circuit modeling of a nonlinearly loaded loop antenna and its transient responses to HPEM field excitations are investigated. For the circuit modeling the general strategy to characterize the nonlinearly loaded antenna by a linear and a nonlinear circuit part is pursued. The linear circuit part can be determined by standard methods of antenna theory and numerical field computation. The modeling of the nonlinear circuit part requires realistic circuit models of the nonlinear loads that are given by Schottky diodes. Combining both parts, appropriate circuit models are obtained and analyzed by means of a standard SPICE circuit simulator. It is the main result that in this way full-wave simulation results can be reproduced. Furthermore it is clearly seen that the equivalent circuit modeling offers considerable advantages with respect to computation speed and also leads to improved physical insights regarding the coupling between HPEM field excitation and nonlinearly loaded loop antenna.
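The splitting into a linear antenna part and a nonlinear diode load can be illustrated at a single time point: reduce the linear part to a Thevenin equivalent and solve Kirchhoff's current law against a Shockley-model Schottky diode. The parameter values here are illustrative assumptions, and a SPICE transient analysis (as used in the paper) would repeat such a solve at every time step:

```python
import math
from scipy.optimize import brentq

I_S, N, VT = 1e-8, 1.05, 0.02585     # illustrative Schottky diode parameters

def diode_i(v):
    """Shockley model of the Schottky diode load."""
    return I_S * math.expm1(v / (N * VT))

def kcl(v, v_oc, r_th):
    """Current supplied through the Thevenin resistance minus the
    current drawn by the diode; zero at the operating point."""
    return (v_oc - v) / r_th - diode_i(v)

# linear antenna part reduced to a Thevenin equivalent at one instant
v_oc, r_th = 1.0, 50.0               # induced open-circuit voltage (V), ohms
v = brentq(kcl, 0.0, v_oc, args=(v_oc, r_th))
i = diode_i(v)                       # load current at this instant
```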
Can quantum probes satisfy the weak equivalence principle?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Seveso, Luigi, E-mail: luigi.seveso@unimi.it; Paris, Matteo G.A.; INFN, Sezione di Milano, I-20133 Milano
We address the question of whether quantum probes in a gravitational field can be considered as test particles obeying the weak equivalence principle (WEP). A formulation of the WEP is proposed which applies also in the quantum regime, while maintaining the physical content of its classical counterpart. Such a formulation requires the introduction of a gravitational field not to modify the Fisher information about the mass of a freely falling probe, extractable through measurements of its position. We discover that, while in a uniform field quantum probes satisfy our formulation of the WEP exactly, gravity gradients can encode nontrivial information about the particle's mass in its wavefunction, leading to violations of the WEP. Highlights: • Can quantum probes under gravity be approximated as test bodies? • A formulation of the weak equivalence principle for quantum probes is proposed. • Quantum probes are found to violate it as a matter of principle.
A sparse equivalent source method for near-field acoustic holography.
Fernandez-Grande, Efren; Xenaki, Angeliki; Gerstoft, Peter
2017-01-01
This study examines a near-field acoustic holography method consisting of a sparse formulation of the equivalent source method, based on the compressive sensing (CS) framework. The method, denoted Compressive-Equivalent Source Method (C-ESM), encourages spatially sparse solutions (based on the superposition of few waves) that are accurate when the acoustic sources are spatially localized. The importance of obtaining a non-redundant representation, i.e., a sensing matrix with low column coherence, and the inherent ill-conditioning of near-field reconstruction problems is addressed. Numerical and experimental results on a classical guitar and on a highly reactive dipole-like source are presented. C-ESM is valid beyond the conventional sampling limits, making wide-band reconstruction possible. Spatially extended sources can also be addressed with C-ESM, although in this case the obtained solution does not recover the spatial extent of the source.
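The compressive-sensing idea behind C-ESM, recovering a few active sources from limited measurements via an l1-regularized least-squares fit, can be sketched with a generic iterative soft-thresholding (ISTA) solver on synthetic data; this is not the authors' formulation or transfer matrix:

```python
import numpy as np

def ista(A, b, lam=0.05, n_iter=2000):
    """Iterative soft-thresholding for min_x 0.5*||Ax-b||^2 + lam*||x||_1,
    the l1-regularized fit that promotes spatially sparse solutions."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - b) / L        # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # shrinkage
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((30, 100))           # generic transfer matrix
x_true = np.zeros(100)
x_true[[7, 42, 90]] = [1.0, -2.0, 1.5]       # three localized sources
b = A @ x_true                               # noiseless field data
x_hat = ista(A, b)                           # sparse recovery, 30 samples
```

With far fewer measurements than unknowns, the l1 penalty still recovers the few active source coefficients, which is the sense in which C-ESM works beyond conventional sampling limits.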
Herbal medicine for low back pain: a Cochrane review.
Gagnier, Joel J; van Tulder, Maurits W; Berman, Brian; Bombardier, Claire
2007-01-01
A systematic review of randomized controlled trials. To determine the effectiveness of herbal medicine compared with placebo, no intervention, or "standard/accepted/conventional treatments" for nonspecific low back pain. Low back pain is a common condition and a substantial economic burden in industrialized societies. A large proportion of patients with chronic low back pain use complementary and alternative medicine (CAM) and/or visit CAM practitioners. Several herbal medicines have been purported for use in low back pain. The following databases were searched: Medline (1966 to April 2003), Embase (1980 to April 2003), Cochrane Controlled Trials Register (Issue 1, 2003), and Cochrane Complementary Medicine (CM) field Trials Register. Additionally, reference lists in review articles, guidelines, and in the retrieved trials were checked. Randomized controlled trials (RCTs), using adults (>18 years of age) suffering from acute, subacute, or chronic nonspecific low back pain. Types of interventions included herbal medicines defined as a plant that is used for medicinal purposes in any form. Primary outcome measures were pain and function. Two reviewers (J.J.G. and M.W.T.) conducted electronic searches in all databases. One reviewer (J.J.G.) contacted content experts and acquired relevant citations. Authors, title, subject headings, publication type, and abstract of the isolated studies were downloaded or a hard copy was retrieved. Methodologic quality and clinical relevance were assessed separately by two individuals (J.J.G. and M.W.T.). Disagreements were resolved by consensus. Ten trials were included in this review. Two high-quality trials utilizing Harpagophytum procumbens (Devil's claw) found strong evidence for short-term improvements in pain and rescue medication for daily doses standardized to 50 mg or 100 mg harpagoside with another high-quality trial demonstrating relative equivalence to 12.5 mg per day of rofecoxib. 
Two moderate-quality trials utilizing Salix alba (White willow bark) found moderate evidence for short-term improvements in pain and rescue medication for daily doses standardized to 120 mg or 240 mg salicin with an additional trial demonstrating relative equivalence to 12.5 mg per day of rofecoxib. Three low-quality trials using Capsicum frutescens (Cayenne) using various topical preparations found moderate evidence for favorable results against placebo and one trial found equivalence to a homeopathic ointment. Harpagophytum procumbens, Salix alba, and Capsicum frutescens seem to reduce pain more than placebo. Additional trials testing these herbal medicines against standard treatments will clarify their equivalence in terms of efficacy. The quality of reporting in these trials was generally poor; thus, trialists should refer to the CONSORT statement in reporting clinical trials of herbal medicines.
Effective or ineffective: attribute framing and the human papillomavirus (HPV) vaccine.
Bigman, Cabral A; Cappella, Joseph N; Hornik, Robert C
2010-12-01
To experimentally test whether presenting logically equivalent, but differently valenced effectiveness information (i.e. attribute framing) affects perceived effectiveness of the human papillomavirus (HPV) vaccine, vaccine-related intentions and policy opinions. A survey-based experiment (N=334) was fielded in August and September 2007 as part of a larger ongoing web-enabled monthly survey, the Annenberg National Health Communication Survey. Participants were randomly assigned to read a short passage about the HPV vaccine that framed vaccine effectiveness information in one of five ways. Afterward, they rated the vaccine and related opinion questions. Main statistical methods included ANOVA and t-tests. On average, respondents exposed to positive framing (70% effective) rated the HPV vaccine as more effective and were more supportive of vaccine mandate policy than those exposed to the negative frame (30% ineffective) or the control frame. Mixed valence frames showed some evidence for order effects; phrasing that ended by emphasizing vaccine ineffectiveness showed similar vaccine ratings to the negative frame. The experiment finds that logically equivalent information about vaccine effectiveness not only influences perceived effectiveness, but can in some cases influence support for policies mandating vaccine use. These framing effects should be considered when designing messages. Copyright © 2010 Elsevier Ireland Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Pavlichin, Dmitri S.; Mabuchi, Hideo
2014-06-01
Nanoscale integrated photonic devices and circuits offer a path to ultra-low power computation at the few-photon level. Here we propose an optical circuit that performs a ubiquitous operation: the controlled, random-access readout of a collection of stored memory phases or, equivalently, the computation of the inner product of a vector of phases with a binary "selector" vector, where the arithmetic is done modulo 2π and the result is encoded in the phase of a coherent field. This circuit, a collection of cascaded interferometers driven by a coherent input field, demonstrates the use of coherence as a computational resource, and the use of recently developed mathematical tools for modeling optical circuits with many coupled parts. The construction extends in a straightforward way to the computation of matrix-vector and matrix-matrix products, and, with the inclusion of an optical feedback loop, to the computation of a "weighted" readout of stored memory phases. We note some applications of these circuits for error correction and for computing tasks requiring fast vector inner products, e.g., statistical classification and some machine learning algorithms.
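The core operation is compact enough to state directly; a plain numerical model of the readout (ignoring the optics that implements it) is:

```python
import numpy as np

def phase_readout(phases, selector):
    """Inner product of stored phases with a binary selector vector,
    with arithmetic modulo 2*pi; the result would be carried by the
    phase of the output coherent field."""
    return float(np.dot(phases, selector) % (2.0 * np.pi))

phases = np.array([0.5, 1.0, 2.0, 3.0])      # stored memory phases (rad)
selector = np.array([1, 0, 1, 1])            # random-access selection
out = phase_readout(phases, selector)        # (0.5 + 2.0 + 3.0) mod 2*pi
```

Replacing the single selector with the rows of a binary matrix gives the matrix-vector extension mentioned above.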
Cardenas, Carlos E; Nitsch, Paige L; Kudchadker, Rajat J; Howell, Rebecca M; Kry, Stephen F
2016-07-08
Out-of-field doses from radiotherapy can cause harmful side effects or eventually lead to secondary cancers. Scattered doses outside the applicator field, neutron source strength values, and neutron dose equivalents have not been broadly investigated for high-energy electron beams. To better understand the extent of these exposures, we measured out-of-field dose characteristics of electron applicators for high-energy electron beams on two Varian 21iXs, a Varian TrueBeam, and an Elekta Versa HD operating at various energy levels. Out-of-field dose profiles and percent depth-dose curves were measured in a Wellhofer water phantom using a Farmer ion chamber. Neutron dose was assessed using a combination of moderator buckets and gold activation foils placed on the treatment couch at various locations in the patient plane on both the Varian 21iX and Elekta Versa HD linear accelerators. Our findings showed that out-of-field electron doses were highest for the highest electron energies. These doses typically decreased with increasing distance from the field edge but showed substantial increases over some distance ranges. The Elekta linear accelerator had higher electron out-of-field doses than the Varian units examined, and the Elekta dose profiles exhibited a second dose peak about 20 to 30 cm from the central axis, which was found to be higher than typical out-of-field doses from photon beams. Electron doses decreased sharply with depth before becoming nearly constant; the dose was found to decrease to a depth of approximately E(MeV)/4 in cm. With respect to neutron dosimetry, Q values and neutron dose equivalents increased with electron beam energy. Neutron contamination from electron beams was found to be much lower than that from photon beams. Even though the neutron dose equivalent for electron beams represented a small portion of neutron doses observed under photon beams, neutron doses from electron beams may need to be considered for special cases.
Assessment of out-of-field absorbed dose and equivalent dose in proton fields
DOE Office of Scientific and Technical Information (OSTI.GOV)
Clasie, Ben; Wroe, Andrew; Kooy, Hanne
2010-01-15
Purpose: In proton therapy, as in other forms of radiation therapy, scattered and secondary particles produce undesired dose outside the target volume that may increase the risk of radiation-induced secondary cancer and interact with electronic devices in the treatment room. The authors implement a Monte Carlo model of this dose deposited outside passively scattered fields and compare it to measurements, determine the out-of-field equivalent dose, and estimate the change in the dose if the same target volumes were treated with an active beam scanning technique. Methods: Measurements are done with a thimble ionization chamber and the Wellhofer MatriXX detector inside a Lucite phantom with field configurations based on the treatment of prostate cancer and medulloblastoma. The authors use a GEANT4 Monte Carlo simulation, demonstrated to agree well with measurements inside the primary field, to simulate fields delivered in the measurements. The partial contributions to the dose are separated in the simulation by particle type and origin. Results: The agreement between experiment and simulation in the out-of-field absorbed dose is within 30% at 10-20 cm from the field edge and 90% of the data agrees within 2 standard deviations. In passive scattering, the neutron contribution to the total dose dominates in the region downstream of the Bragg peak (65%-80% due to internally produced neutrons) and inside the phantom at distances more than 10-15 cm from the field edge. The equivalent doses using 10 for the neutron weighting factor at the entrance to the phantom and at 20 cm from the field edge are 2.2 and 2.6 mSv/Gy for the prostate cancer and cranial medulloblastoma fields, respectively. The equivalent dose at 15-20 cm from the field edge decreases with depth in passive scattering and increases with depth in active scanning.
Therefore, active scanning has smaller out-of-field equivalent dose by factors of 30-45 in the entrance region, and this factor decreases with depth. Conclusions: The dose deposited immediately downstream of the primary field, in these cases, is dominated by internally produced neutrons; therefore, scattered and scanned fields may have similar risk of second cancer in this region. The authors confirm that there is a reduction in the out-of-field dose in active scanning, but the effect decreases with depth. GEANT4 is suitable for simulating the dose deposited outside the primary field. The agreement with measurements is comparable to or better than the agreement reported for other implementations of Monte Carlo models. Depending on the position, the absorbed dose outside the primary field is dominated by contributions from primary protons that may or may not have scattered in the brass collimating devices. This is noteworthy as the quality factor of the low-LET protons is well known and the relative dose risk in this region can thus be assessed accurately.
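The equivalent-dose bookkeeping used above is a weighted sum over dose components; a minimal sketch with the abstract's neutron weighting factor of 10 and illustrative dose components (chosen only to land near the reported 2.2 mSv/Gy entrance value, not taken from the paper) is:

```python
def equivalent_dose_msv_per_gy(neutron_gy, low_let_gy, w_neutron=10.0):
    """Equivalent dose (mSv per prescribed Gy): neutron dose weighted by
    w_R = 10, as in the abstract; low-LET components weighted by 1."""
    return 1000.0 * (w_neutron * neutron_gy + low_let_gy)

# illustrative out-of-field composition per prescribed Gy (hypothetical)
h_entrance = equivalent_dose_msv_per_gy(neutron_gy=2.0e-4, low_let_gy=2.0e-4)
```

The factor-of-10 weighting is why even a small absorbed neutron dose dominates the out-of-field equivalent dose in passive scattering.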
Photodegradation of clothianidin under simulated California rice field conditions.
Mulligan, Rebecca A; Redman, Zachary C; Keener, Megan R; Ball, David B; Tjeerdema, Ronald S
2016-07-01
Photodegradation can be a major route of dissipation for pesticides applied to shallow rice field water, leading to diminished persistence and reducing the risk of offsite transport. The objective of this study was to characterize the aqueous-phase photodegradation of clothianidin under simulated California rice field conditions. Photodegradation of clothianidin was characterized in deionized, Sacramento River, and rice field water samples. Photodegradation in rice field water (mean k = 0.0158 min⁻¹; mean DT50 = 18.0 equivalent days) was significantly slower than in deionized water (k = 0.0167 min⁻¹; DT50 = 14.7 equivalent days) and river water (k = 0.0146 min⁻¹; DT50 = 16.6 equivalent days) samples, following pseudo-first-order kinetics. Quantum yield (ϕc) values demonstrate that approximately 1% and 0.5% of the light energy absorbed results in photochemical transformation in pure and field water, respectively. Concentrations of the photodegradation product thiazolylmethylurea in aqueous photolysis samples were determined using liquid chromatography-tandem mass spectrometry and accounted for ≤17% in deionized water and ≤8% in natural water. Photodegradation rates of clothianidin in flooded rice fields will be controlled by turbidity and light attenuation. Aqueous-phase photodegradation may reduce the risk of offsite transport of clothianidin from flooded rice fields (via drainage) and mitigate exposure to non-target organisms. © 2015 Society of Chemical Industry.
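Under pseudo-first-order kinetics the half-life follows directly from the rate constant as DT50 = ln 2 / k; the sketch below uses the rate constants reported above, while the conversion to "equivalent days" of sunlight depends on the study's assumed daily irradiance and is not reproduced here:

```python
import math

def dt50(k):
    """Half-life of pseudo-first-order decay C(t) = C0*exp(-k*t),
    in the same time units as 1/k (here: minutes of irradiation)."""
    return math.log(2.0) / k

t_half_field = dt50(0.0158)   # rice field water, minutes of irradiation
t_half_di = dt50(0.0167)      # deionized water, minutes of irradiation
```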
Wolfson, Julia A; Graham, Dan J; Bleich, Sara N
2017-01-01
To investigate attention to Nutrition Facts Labels (NFLs) with numeric-only vs both numeric and activity-equivalent calorie information, and attitudes toward activity-equivalent calories. An eye-tracking camera monitored participants' viewing of NFLs for 64 packaged foods with either standard or modified NFLs. Participants self-reported demographic information and diet-related attitudes and behaviors. Participants came to the Behavioral Medicine Lab at Colorado State University in spring 2015. The researchers randomized 234 participants to view NFLs with numeric calorie information only (n = 108) or with both numeric and activity-equivalent calorie information (n = 126). Attention to and attitudes about activity-equivalent calorie information. Differences by experimental condition and weight-loss intention (overall and within experimental condition) were assessed using t tests and Pearson's chi-square tests of independence. Overall, participants viewed numeric calorie information on 20% of NFLs for 249 ms. Participants in the modified-NFL condition viewed activity-equivalent information on 17% of NFLs for 231 ms. Most participants indicated that activity-equivalent calorie information would help them decide whether to eat a food (69%) and that they preferred both numeric and activity-equivalent calorie information on NFLs (70%). Participants used activity-equivalent calorie information on NFLs and found this information helpful for making food decisions. Copyright © 2016 Society for Nutrition Education and Behavior. Published by Elsevier Inc. All rights reserved.
NASA Astrophysics Data System (ADS)
Lancellotti, V.; de Hon, B. P.; Tijhuis, A. G.
2011-08-01
In this paper we present the application of linear embedding via Green's operators (LEGO) to the solution of electromagnetic scattering from clusters of arbitrary (both conducting and penetrable) bodies randomly placed in a homogeneous background medium. In the LEGO method the objects are enclosed within simple-shaped bricks, described in turn via scattering operators of equivalent surface current densities. Such operators need be computed only once for a given frequency, and hence can be re-used to study many distributions comprising the same objects located in different positions. The surface integral equations of LEGO are solved via the Method of Moments combined with the Adaptive Cross Approximation (to save memory) and Arnoldi basis functions (to compress the system). By means of purposefully selected numerical experiments we discuss the time requirements with respect to the geometry of a given distribution. In addition, we derive an approximate relationship between the (near-field) accuracy of the computed solution and the number of Arnoldi basis functions used to obtain it. This result endows LEGO with a handy practical criterion for both estimating the error and keeping it in check.
Nonlocal torque operators in ab initio theory of the Gilbert damping in random ferromagnetic alloys
NASA Astrophysics Data System (ADS)
Turek, I.; Kudrnovský, J.; Drchal, V.
2015-12-01
We present an ab initio theory of the Gilbert damping in substitutionally disordered ferromagnetic alloys. The theory rests on nonlocal torques, introduced here to replace the traditional local torque operators in the well-known torque-correlation formula, which can be formulated within the atomic-sphere approximation. The formalism is sketched in a simple tight-binding model and worked out in detail in the relativistic tight-binding linear muffin-tin orbital method and the coherent potential approximation (CPA). The resulting nonlocal torques are represented by nonrandom, non-site-diagonal, and spin-independent matrices, which simplifies the configuration averaging. The CPA vertex corrections play a crucial role for the internal consistency of the theory and for its exact equivalence to other first-principles approaches based on the random local torques. This equivalence is also illustrated by the calculated Gilbert damping parameters for binary NiFe and FeCo random alloys, for pure iron with a model atomic-level disorder, and for stoichiometric FePt alloys with a varying degree of L10 atomic long-range order.
An equivalent layer magnetization model for the United States derived from MAGSAT data
NASA Technical Reports Server (NTRS)
Mayhew, M. A.; Galliher, S. C. (Principal Investigator)
1982-01-01
Long-wavelength anomalies in the total magnetic field measured by MAGSAT over the United States and adjacent areas are inverted to an equivalent layer crustal magnetization distribution. The model is based on an equal-area dipole grid at the Earth's surface. The model resolution having physical significance is about 220 km for MAGSAT data in the elevation range 300-500 km. The magnetization contours correlate well with large-scale tectonic provinces.
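The equivalent-layer idea is linear: the observed anomaly data equal a geometry matrix times the unknown dipole magnetizations, so the magnetization distribution is recovered by least squares. A toy sketch with a synthetic, randomly generated geometry matrix standing in for the real MAGSAT dipole-field kernel (which is not reproduced here):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: n_dip dipoles on a surface grid, n_obs satellite observations.
n_obs, n_dip = 120, 40
G = rng.normal(size=(n_obs, n_dip))   # stand-in geometry (field-per-unit-magnetization) matrix
m_true = rng.normal(size=n_dip)       # "true" dipole magnetizations
d = G @ m_true                        # noise-free anomaly data

# Least-squares inversion for the equivalent-layer magnetizations
m_est, *_ = np.linalg.lstsq(G, d, rcond=None)
```

With more observations than dipoles and noise-free data, the overdetermined system recovers the magnetizations exactly; real inversions add noise handling and regularization, which sets the ~220 km resolution limit mentioned above.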
On the Helicity of Open Magnetic Fields
NASA Astrophysics Data System (ADS)
Prior, C.; Yeates, A. R.
2014-06-01
We reconsider the topological interpretation of magnetic helicity for magnetic fields in open domains, and relate this to the relative helicity. Specifically, our domains stretch between two parallel planes, and each of these ends may be magnetically open. It is demonstrated that, while the magnetic helicity is gauge-dependent, its value in any gauge may be physically interpreted as the average winding number among all pairs of field lines with respect to some orthonormal frame field. In fact, the choice of gauge is equivalent to the choice of reference field in the relative helicity, meaning that the magnetic helicity is no less physically meaningful. We prove that a particular gauge always measures the winding with respect to a fixed frame, and propose that this is normally the best choice. For periodic fields, this choice is equivalent to measuring relative helicity with respect to a potential reference field. However, for aperiodic fields, we show that the potential field can be twisted. We prove by construction that an untwisted reference field always exists.
On Theoretical Broadband Shock-Associated Noise Near-Field Cross-Spectra
NASA Technical Reports Server (NTRS)
Miller, Steven A. E.
2015-01-01
The cross-spectral acoustic analogy is used to predict auto-spectra and cross-spectra of broadband shock-associated noise in the near field and far field from a range of heated and unheated supersonic off-design jets. A single equivalent source model that contains flow-field statistics of the shock wave shear layer interactions is proposed for the near-field, mid-field, and far-field terms. Flow-field statistics are modeled based upon experimental observation and computational fluid dynamics solutions. An axisymmetric assumption is used to reduce the model to a closed-form equation involving a double summation over the equivalent source at each shock wave shear layer interaction. Predictions are compared with a wide variety of measurements at numerous jet Mach numbers and temperature ratios from multiple facilities. Auto-spectral predictions of broadband shock-associated noise in the near field and far field capture trends observed in measurement and other prediction theories. Predictions of spatial coherence of broadband shock-associated noise accurately capture the peak coherent intensity, frequency, and spectral width.
Ngamukote, Sathaporn; Khannongpho, Teerawat; Siriwatanapaiboon, Marent; Sirikwanpong, Sukrit; Dahlan, Winai; Adisakwattana, Sirichai
2016-12-29
To investigate the effect of Moringa oleifera leaf extract (MOLE) on plasma glucose concentration and antioxidant status in healthy volunteers. A randomized crossover design was used in this study. Healthy volunteers were randomly assigned to receive either 200 mL of warm water (10 cases) or 200 mL of MOLE (500 mg dried extract, 10 cases). Blood samples were drawn at 0, 30, 60, 90, and 120 min for measuring fasting plasma glucose (FPG), ferric reducing ability of plasma (FRAP), Trolox equivalent antioxidant capacity (TEAC) and malondialdehyde (MDA). FPG concentration was not significantly different between warm water and MOLE. The consumption of MOLE acutely improved both FRAP and TEAC, with increases after 30 min of 30 μmol/L FeSO₄ equivalents and 0.18 μmol/L Trolox equivalents, respectively. The change in MDA level from baseline was significantly lowered after the ingestion of MOLE at 30, 60, and 90 min. In addition, FRAP level was negatively correlated with plasma MDA level after an intake of MOLE. MOLE increased plasma antioxidant capacity without hypoglycemia in humans. The consumption of MOLE may reduce the risk factors associated with chronic degenerative diseases.
NASA Astrophysics Data System (ADS)
Fletcher, Stephen; Kirkpatrick, Iain; Dring, Roderick; Puttock, Robert; Thring, Rob; Howroyd, Simon
2017-03-01
Supercapacitors are an emerging technology with applications in pulse power, motive power, and energy storage. However, their carbon electrodes show a variety of non-ideal behaviours that have so far eluded explanation. These include Voltage Decay after charging, Voltage Rebound after discharging, and Dispersed Kinetics at long times. In the present work, we establish that a vertical ladder network of RC components can reproduce all these puzzling phenomena. Both software and hardware realizations of the network are described. In general, porous carbon electrodes contain random distributions of resistance R and capacitance C, with a wider spread of log R values than log C values. To understand what this implies, a simplified model is developed in which log R is treated as a Gaussian random variable while log C is treated as a constant. From this model, a new family of equivalent circuits is developed in which the continuous distribution of log R values is replaced by a discrete set of log R values drawn from a geometric series. We call these Pascal Equivalent Circuits. Their behaviour is shown to resemble closely that of real supercapacitors. The results confirm that distributions of RC time constants dominate the behaviour of real supercapacitors.
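The dispersed kinetics can be illustrated with a deliberately simplified stand-in for such a network: a set of series-RC branches whose resistances are drawn from a geometric series (the discrete log R values described above) while C is held constant, each charged branch relaxing through its own resistance. This is only a sketch of the idea, not the authors' Pascal Equivalent Circuits:

```python
import numpy as np

# Hypothetical component values: geometric series of resistances, constant capacitance.
C = 1.0                          # farads, identical in every branch
R = 10.0 ** np.arange(0, 6)      # 1, 10, ..., 1e5 ohms (discrete log R values)
tau = R * C                      # per-branch time constants span six decades

def discharge_current(t, V0=1.0):
    """Total current at time t when each branch, charged to V0,
    discharges independently through its own resistance."""
    return np.sum((V0 / R) * np.exp(-t / tau))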
Helmholtz and Gibbs ensembles, thermodynamic limit and bistability in polymer lattice models
NASA Astrophysics Data System (ADS)
Giordano, Stefano
2017-12-01
Representing polymers by random walks on a lattice is a fruitful approach largely exploited to study configurational statistics of polymer chains and to develop efficient Monte Carlo algorithms. Nevertheless, the stretching and the folding/unfolding of polymer chains within the Gibbs (isotensional) and the Helmholtz (isometric) ensembles of statistical mechanics have not yet been thoroughly analysed by means of the lattice methodology. This topic, motivated by the recent introduction of several single-molecule force spectroscopy techniques, is investigated in the present paper. In particular, we analyse the force-extension curves under the Gibbs and Helmholtz conditions and we give a proof of the equivalence of the ensembles in the thermodynamic limit for polymers represented by a standard random walk on a lattice. Then, we generalize these concepts for lattice polymers that can undergo conformational transitions or, equivalently, for chains composed of bistable or two-state elements (that can be either folded or unfolded). In this case, the isotensional condition leads to a plateau-like force-extension response, whereas the isometric condition causes a sawtooth-like force-extension curve, as predicted by numerous experiments. The equivalence of the ensembles is finally proved also for lattice polymer systems exhibiting conformational transitions.
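For the simplest case, a one-dimensional walk of N links of length b, the two ensembles can be compared directly: the Gibbs (isotensional) extension is N b tanh(fb/kT), while the Helmholtz (isometric) force follows from the entropy ln Ω(x) of walks with a fixed end-to-end distance x. A small numerical sketch checking that the two agree at large N:

```python
import math

def gibbs_extension(N, f, b=1.0, kT=1.0):
    """Mean extension of a 1D lattice chain at fixed force f (isotensional ensemble)."""
    return N * b * math.tanh(f * b / kT)

def helmholtz_force(N, x, b=1.0, kT=1.0):
    """Entropic force at fixed extension x (isometric ensemble).
    A walk with n forward and N - n backward steps has x = (2n - N) b and
    Omega(n) = C(N, n) configurations; the free energy is F = -kT ln Omega."""
    def log_omega(n):
        return math.lgamma(N + 1) - math.lgamma(n + 1) - math.lgamma(N - n + 1)
    n = round((N + x / b) / 2)
    # central finite difference of F with respect to x (x changes by 2b per unit n)
    return -kT * (log_omega(n + 1) - log_omega(n - 1)) / (4 * b)
```

Evaluating the Helmholtz force at the extension predicted by the Gibbs ensemble returns (for N of a few thousand) the applied force to within a fraction of a percent, a numerical illustration of the thermodynamic-limit equivalence proved in the paper.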
Abou-Taleb, W M; Hassan, M H; El Mallah, E A; Kotb, S M
2018-05-01
Photoneutron production, and the resulting dose equivalent, in the head assembly of the 15 MV Elekta Precise medical linac operating in the Faculty of Medicine at Alexandria University were estimated with the MCNP5 code. Photoneutron spectra were calculated in air and inside a water phantom at different depths as a function of the radiation field size. The maximum neutron fluence is 3.346×10⁻⁹ n/cm² per source electron for a 30×30 cm² field size at 2-4 cm depth in the phantom. The dose equivalent due to fast neutrons increases as the field size increases, reaching a maximum of 0.912 ± 0.05 mSv/Gy at depths between 2 and 4 cm in the water phantom for a 40×40 cm² field size. Photoneutron fluence and dose equivalent are larger at 100 cm from the isocenter than at 35 cm from the treatment room wall. Copyright © 2018 Elsevier Ltd. All rights reserved.
USDA-ARS?s Scientific Manuscript database
Dietary recommendations suggest decreased consumption of SFA to minimize CVD risk; however, not all foods rich in SFA are equivalent. To evaluate the effects of SFA in a dairy food matrix, as Cheddar cheese, v. SFA from a vegan-alternative test meal on postprandial inflammatory markers, a randomized...
A random forest learning assisted "divide and conquer" approach for peptide conformation search.
Chen, Xin; Yang, Bing; Lin, Zijing
2018-06-11
Computational determination of peptide conformations is challenging as it is a problem of finding minima in a high-dimensional space. The "divide and conquer" approach is promising for reliably reducing the search space size. A random forest learning model is proposed here to expand the scope of applicability of the "divide and conquer" approach. A random forest classification algorithm is used to characterize the distributions of the backbone φ-ψ units ("words"). A random forest supervised learning model is developed to analyze the combinations of the φ-ψ units ("grammar"). It is found that amino acid residues may be grouped as equivalent "words", while the φ-ψ combinations in low-energy peptide conformations follow a distinct "grammar". The finding of equivalent words empowers the "divide and conquer" method with the flexibility of fragment substitution. The learnt grammar is used to improve the efficiency of the "divide and conquer" method by removing unfavorable φ-ψ combinations without the need of dedicated human effort. The machine learning assisted search method is illustrated by efficiently searching the conformations of GGG/AAA/GGGG/AAAA/GGGGG through assembling the structures of GFG/GFGG. Moreover, the computational cost of the new method is shown to increase rather slowly with the peptide length.
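The classification step can be pictured with a toy random forest built from scratch: bagged single-split trees, each trained on a bootstrap resample and a randomly chosen feature, voting on synthetic φ-ψ "words" drawn from two backbone-angle regions. The data and thresholds below are invented for illustration and have no relation to the paper's actual training set:

```python
import random
import statistics

random.seed(1)

# Toy "phi-psi units": two classes occupying different backbone-angle regions
# (helical-like vs extended-like). Angles in degrees; data are synthetic.
def sample(cls, n):
    if cls == 0:   # helical-like region
        return [(random.gauss(-60, 15), random.gauss(-45, 15), 0) for _ in range(n)]
    return [(random.gauss(-120, 15), random.gauss(130, 15), 1) for _ in range(n)]

data = sample(0, 200) + sample(1, 200)

def train_stump(points):
    """Depth-1 tree: best threshold split on one randomly chosen feature."""
    feat = random.randrange(2)
    best = None
    for t in range(-180, 181, 10):
        left = [p[2] for p in points if p[feat] <= t]
        right = [p[2] for p in points if p[feat] > t]
        if not left or not right:
            continue
        lmaj, rmaj = round(statistics.mean(left)), round(statistics.mean(right))
        err = sum(l != lmaj for l in left) + sum(r != rmaj for r in right)
        if best is None or err < best[0]:
            best = (err, feat, t, lmaj, rmaj)
    return best[1:]

def train_forest(points, n_trees=25):
    # Bagging: each stump sees a bootstrap resample of the data
    return [train_stump([random.choice(points) for _ in points]) for _ in range(n_trees)]

def predict(forest, phi, psi):
    votes = [(maj_l if (phi, psi)[feat] <= t else maj_r)
             for feat, t, maj_l, maj_r in forest]
    return round(statistics.mean(votes))   # majority vote

forest = train_forest(data)
```

A production version would of course use a library implementation with full-depth trees and many features; the point here is only the bagging-plus-voting structure that underlies the "word"/"grammar" analysis described above.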
Ca II K-line metallicity indicator for field RR Lyrae stars
NASA Astrophysics Data System (ADS)
Clementini, Gisella; Tosi, Monica; Merighi, Roberto
In order to check and, possibly, improve Preston's ΔS calibration scale, CCD spectra have been obtained for 25 field RR Lyrae variables. Eleven of the program stars have values of (Fe/H) derived by Butler and Deming (1979) from the strength of the Fe II lines. For them we find that the equivalent width of the Ca II K line is extremely well correlated with the (Fe/H) values, the best-fit relation being (Fe/H) = 0.43 W(K) - 2.75, where W(K) is the equivalent width of the K line. We conclude that the use of the K-line equivalent width is at present the best method to derive the (Fe/H) abundance of RR Lyrae stars.
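The quoted calibration is a one-line function; a sketch (the abstract does not state the units of W(K), presumably angstroms):

```python
def fe_h_from_k_line(w_k: float) -> float:
    """[Fe/H] abundance from the Ca II K-line equivalent width W(K),
    using the linear fit quoted in the abstract: (Fe/H) = 0.43 W(K) - 2.75."""
    return 0.43 * w_k - 2.75
```

For example, a K-line equivalent width of 2.0 maps to (Fe/H) ≈ -1.89, a metal-poor star.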
Response of six neutron survey meters in mixed fields of fast and thermal neutrons.
Kim, S I; Kim, B H; Chang, I; Lee, J I; Kim, J L; Pradhan, A S
2013-10-01
Calibration neutron fields have been developed at KAERI (Korea Atomic Energy Research Institute) to study the responses of commonly used neutron survey meters in the presence of fast neutrons of energy around 10 MeV. The neutron fields were produced by using neutrons from ²⁴¹Am-Be sources held in a graphite pile and a DT neutron generator. The spectral details and the ambient dose equivalent rates of the calibration fields were established, and the responses of six neutron survey meters were evaluated. Four single-moderator-based survey meters exhibited under-responses ranging from ∼9 to 55%. DINEUTRUN, commonly used in fields around nuclear reactors, exhibited an over-response by a factor of three in the thermal neutron field and an under-response of ∼85% in the mixed fields. REM-500 (tissue-equivalent proportional counter) exhibited a response close to 1.0 in the fast neutron fields and an under-response of ∼50% in the thermal neutron field.
Equivalence Testing as a Tool for Fatigue Risk Management in Aviation.
Wu, Lora J; Gander, Philippa H; van den Berg, Margo; Signal, T Leigh
2018-04-01
Many civilian aviation regulators favor evidence-based strategies that go beyond hours-of-service approaches for managing fatigue risk. Several countries now allow operations to be flown outside of flight and duty hour limitations, provided airlines demonstrate an alternative method of compliance that yields safety levels "at least equivalent to" the prescriptive regulations. Here we discuss equivalence testing in occupational fatigue risk management. We present suggested ratios/margins of practical equivalence when comparing operations inside and outside of prescriptive regulations for two common aviation safety performance indicators: total in-flight sleep duration and psychomotor vigilance task reaction speed. Suggested levels of practical equivalence, based on expertise coupled with evidence from field and laboratory studies, are ≤ 30 min in-flight sleep and ± 15% of reference response speed. Equivalence testing is illustrated in analyses of a within-subjects field study during an out-and-back long-range trip. During both sectors of their trip, 41 pilots were monitored via actigraphy, sleep diary, and top of descent psychomotor vigilance task. Pilots were assigned to take rest breaks in a standard lie-flat bunk on one sector and in a bunk tapered 9 from hip to foot on the other sector. Total in-flight sleep duration (134 ± 53 vs. 135 ± 55 min) and mean reaction speed at top of descent (3.94 ± 0.58 vs. 3.77 ± 0.58) were equivalent after rest in the full vs. tapered bunk. Equivalence testing is a complementary statistical approach to difference testing when comparing levels of fatigue and performance in occupational settings and can be applied in transportation policy decision making. Wu LJ, Gander PH, van den Berg M, Signal TL. Equivalence testing as a tool for fatigue risk management in aviation. Aerosp Med Hum Perform. 2018; 89(4):383-388.
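The confidence-interval form of equivalence testing described here is straightforward to sketch: equivalence is declared when the 90% confidence interval for the mean paired difference falls entirely inside the practical-equivalence margins. A minimal normal-approximation version (the study itself may have used exact TOST t procedures):

```python
import math

def tost_equivalent(diffs, margin, z=1.645):
    """Declare equivalence if the 90% CI for the mean paired difference
    lies entirely within [-margin, +margin] (normal approximation, large n)."""
    n = len(diffs)
    mean = sum(diffs) / n
    sd = math.sqrt(sum((d - mean) ** 2 for d in diffs) / (n - 1))
    se = sd / math.sqrt(n)
    lo, hi = mean - z * se, mean + z * se
    return lo > -margin and hi < margin
```

With the ≤30 min sleep margin quoted above, per-pilot sleep differences clustered near zero would pass, while a systematic shift larger than the margin would fail.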
NASA Astrophysics Data System (ADS)
Liu, Haitao
The objective of the present study is to investigate damage mechanisms and thermal residual stresses of composites, and to establish frameworks to model particle-reinforced metal matrix composites with particle-matrix interfacial debonding, particle cracking or thermal residual stresses. An evolutionary interfacial debonding model is proposed for composites with spheroidal particles. The construction of the equivalent stiffness is based on the fact that when debonding occurs in a certain direction, the load-transfer ability is lost in that direction. By using this equivalent method, the interfacial debonding problem can be converted into a composite problem with perfectly bonded inclusions. Considering that interfacial debonding is a progressive process in which the debonding area increases in proportion to external loading, a progressive interfacial debonding model is proposed. In this model, the relation between external loading and the debonding area is established using a normal-stress-controlled debonding criterion. Furthermore, an equivalent orthotropic stiffness tensor is constructed based on the debonding areas. This model can treat composites with randomly distributed spherical particles. The double-inclusion theory is recalled to model the particle cracking problems. Cracks inside particles are treated as penny-shaped particles with zero stiffness. The disturbed stress field due to the existence of a double-inclusion is expressed explicitly. Finally, a thermal mismatch eigenstrain is introduced to simulate the mismatched expansions of the matrix and the particles due to the difference in their coefficients of thermal expansion. Micromechanical stress and strain fields are calculated for the combination of applied external loads and the prescribed thermal mismatch eigenstrains. For all of the above models, ensemble-volume averaging procedures are employed to derive the effective yield function of the composites. 
Numerical simulations are performed to analyze the effects of various parameters, and good agreement between the model's predictions and experimental results is obtained in several cases. All of the expressions in these frameworks are derived explicitly, and the analytical results are easy to adopt in other related investigations.
Allen, Megan; Leslie, Kate; Hebbard, Geoffrey; Jones, Ian; Mettho, Tejinder; Maruff, Paul
2015-11-01
This study aimed to determine if the incidence of recall was equivalent between light and deep sedation for colonoscopy. Secondary analysis included complications, patient clinical recovery, and post-procedure cognitive impairment. Two hundred patients undergoing elective outpatient colonoscopy were randomized to light (bispectral index [BIS] 70-80) or deep (BIS < 60) sedation with propofol and fentanyl. Recall was assessed by the modified Brice questionnaire, and cognition at baseline and discharge was assessed using a Cogstate test battery. The median (interquartile range [IQR]) BIS values were different in the two groups (69 [65-74] light sedation vs 53 [46-59] deep sedation; P < 0.0001). The incidence of recall was 12% in the light sedation group and 1% in the deep sedation group. The risk difference for recall was 0.11 (90% confidence interval, 0.05 to 0.17) in the intention-to-treat analysis, thus refuting equivalence in recall between light and deep sedation (0.05 significance level; 10% equivalence margin). Overall sedation-related complications were more frequent with deep sedation than with light sedation (66% vs 47%, respectively; P = 0.008). Recovery was more rapid with light sedation than with deep sedation as determined by the mean (SD) time to reach a score of 5 on the Modified Observer's Assessment of Alertness/Sedation Scale [3 (4) min vs 7 (4) min, respectively; P < 0.001] and by the median [IQR] time to readiness for hospital discharge (65 [57-80] min vs 74 [63-86] min, respectively; P = 0.001). The incidence of post-procedural cognitive impairment was similar in those randomized to light (19%) vs deep (16%) sedation (P = 0.554). Light sedation was not equivalent to deep sedation for procedural recall, the spectrum of complications, or recovery times. This study provides evidence to inform discussions with patients about sedation for colonoscopy. 
This trial was registered at the Australian and New Zealand Clinical Trials Registry, number 12611000320954.
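The equivalence logic of the colonoscopy trial can be reproduced from the reported recall rates: compute the risk difference and its 90% confidence interval, then check the interval against the 10% margin. Group sizes are assumed to be 100 per arm for illustration (the abstract gives only the 200-patient total):

```python
import math

def risk_difference_ci(x1, n1, x2, n2, z=1.645):
    """Risk difference p1 - p2 with a Wald 90% CI (normal approximation)."""
    p1, p2 = x1 / n1, x2 / n2
    rd = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return rd, rd - z * se, rd + z * se

# Recall: 12% under light sedation vs 1% under deep sedation (assumed n = 100 per arm)
rd, lo, hi = risk_difference_ci(12, 100, 1, 100)
equivalent = (-0.10 < lo) and (hi < 0.10)   # 10% equivalence margin
```

This reproduces the reported risk difference of 0.11 with a CI close to the quoted (0.05, 0.17); since the interval extends beyond the 10% margin, equivalence is refuted, exactly as the abstract concludes.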
Day, Frank C.; Srinivasan, Malathi; Der-Martirosian, Claudia; Griffin, Erin; Hoffman, Jerome R.; Wilkes, Michael S.
2014-01-01
Purpose Few studies have compared the effect of web-based eLearning versus small-group learning on medical student outcomes. Palliative and end-of-life (PEOL) education is ideal for this comparison, given uneven access to PEOL experts and content nationally. Method In 2010, the authors enrolled all third-year medical students at the University of California, Davis School of Medicine into a quasi-randomized controlled trial of web-based interactive education (eDoctoring) compared to small-group education (Doctoring) on PEOL clinical content over two months. All students participated in three 3-hour PEOL sessions with similar content. Outcomes included a 24-item PEOL-specific self-efficacy scale with three domains (diagnosis/treatment [Cronbach’s alpha = 0.92, CI: 0.91–0.93], communication/prognosis [alpha = 0.95; CI: 0.93–0.96], and social impact/self-care [alpha = 0.91; CI: 0.88–0.92]); eight knowledge items; ten curricular advantage/disadvantages, and curricular satisfaction (both students and faculty). Results Students were randomly assigned to web-based eDoctoring (n = 48) or small-group Doctoring (n = 71) curricula. Self-efficacy and knowledge improved equivalently between groups: e.g., prognosis self-efficacy, 19%; knowledge, 10–42%. Student and faculty ratings of the web-based eDoctoring curriculum and the small group Doctoring curriculum were equivalent for most goals, and overall satisfaction was equivalent for each, with a trend towards decreased eDoctoring student satisfaction. Conclusions Findings showed equivalent gains in self-efficacy and knowledge between students participating in a web-based PEOL curriculum, in comparison to students learning similar content in a small-group format. Web-based curricula can standardize content presentation when local teaching expertise is limited, but may lead to decreased user satisfaction. PMID:25539518
Muhammad, Amber A.; Shafiq, Yasir; Shah, Saima; Kumar, Naresh; Ahmed, Imran; Azam, Iqbal; Pasha, Omrana; Zaidi, Anita K. M.
2017-01-01
Background. Integrated Management of Childhood Illness recommends that young infants with isolated fast breathing be referred to a hospital for antibiotic treatment, which is often impractical in resource-limited settings. Additionally, antibiotics may be unnecessary for physiologic tachypnea in otherwise well newborns. We tested the hypothesis that ambulatory treatment with oral amoxicillin for 7 days was equivalent (similarity margin of 3%) to placebo in young infants with isolated fast breathing in primary care settings where hospital referral is often unfeasible. Methods. This randomized equivalence trial was conducted in 4 primary health centers of Karachi, Pakistan. Infants presenting with isolated fast breathing and oxygen saturation ≥90% were randomly assigned to receive either oral amoxicillin or placebo twice daily for 7 days. Enrolled infants were followed on days 1-8, 11, and 14. The primary outcome was treatment failure by day 8, analyzed per protocol. The trial was stopped by the data safety monitoring board due to a higher treatment failure rate and the occurrence of 2 deaths in the placebo arm in an interim analysis. Results. Four hundred twenty-three infants fulfilled per-protocol criteria in the amoxicillin arm and 426 in the placebo arm. Twelve infants (2.8%) had treatment failure in the amoxicillin arm and 25 (5.9%) in the placebo arm (risk difference, 3.1%; P = .04). Two infants in the placebo arm died, whereas no deaths occurred in the amoxicillin arm. Other adverse outcomes, as well as the proportions of relapse, were evenly distributed across both study arms. Conclusions. This trial failed to show equivalence of placebo to amoxicillin in the management of isolated fast breathing without hypoxemia or other clinical signs of illness in term young infants. Clinical Trials Registration. NCT01533818. PMID:27941119
Briand, Valérie; Bottero, Julie; Noël, Harold; Masse, Virginie; Cordel, Hugues; Guerra, José; Kossou, Hortense; Fayomi, Benjamin; Ayemonna, Paul; Fievet, Nadine; Massougbodji, Achille; Cot, Michel
2009-09-15
In the context of the increasing resistance to sulfadoxine-pyrimethamine (SP), we evaluated the efficacy of mefloquine (MQ) for intermittent preventive treatment during pregnancy (IPTp). A multicenter, open-label equivalence trial was conducted in Benin from July 2005 through April 2008. Women of all gravidities were randomized to receive SP (1500 mg of sulfadoxine and 75 mg of pyrimethamine) or 15 mg/kg MQ in a single intake twice during pregnancy. The primary end point was the proportion of low-birth-weight (LBW) infants (body weight, <2500 g; equivalence margin, 5%). A total of 1601 women were randomized to receive MQ (n = 802) or SP (n = 799). In the modified intention-to-treat analysis, which assessed only live singleton births, 59 (8%) of 735 women who were given MQ and 72 (9.8%) of 730 women who were given SP gave birth to LBW infants (difference in LBW proportions between treatment groups, -1.8%; 95% confidence interval [CI], -4.8% to 1.1%), establishing equivalence between the drugs. The per-protocol analysis showed consistent results. MQ was more efficacious than SP in preventing placental malaria (prevalence, 1.7% vs 4.4% of women; P = .005), clinical malaria (incidence rate, 26 cases/10,000 person-months vs 68 cases/10,000 person-months; P = .007) and maternal anemia at delivery (defined as a hemoglobin level <10 g/dL) (prevalence, 16% vs 20%; marginally significant at P = .09). Adverse events (mainly vomiting, dizziness, tiredness, and nausea) were more commonly associated with the use of MQ (prevalence, 78% vs 32%; P < 10⁻³). One woman in the MQ group had severe neuropsychiatric symptoms. MQ proved to be highly efficacious, both clinically and parasitologically, for use as IPTp. However, its low tolerability might impair its effectiveness and requires further investigation.
Equivalent of a cartilage tissue for simulations of laser-induced temperature fields
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kondyurin, A V; Sviridov, A P
2008-07-31
The thermal and optical properties of polyacrylamide hydrogels and cartilages are studied by the method of IR laser radiometry. The thermal diffusivity, heat capacity, and the effective absorption coefficient at a wavelength of 1.56 μm measured for polyacrylamide gel with 70% water content and the degree of cross-linking 1:9 and for the nasal septum cartilage proved to be close. This allows the use of polyacrylamide hydrogels as equivalents of cartilages in simulations of laser-induced temperature fields.
Jung, Wookyoung; Kang, Joong-Gu; Jeon, Hyeonjin; Shim, Miseon; Sun Kim, Ji; Leem, Hyun-Sung; Lee, Seung-Hwan
2017-08-01
Faces are processed best when they are presented in the left visual field (LVF), a phenomenon known as LVF superiority. Although one eye contributes more when perceiving faces, it is unclear how the dominant eye (DE), the eye we unconsciously use when performing a monocular task, affects face processing. Here, we examined the influence of the DE on the LVF superiority for faces using event-related potentials. Twenty left-eye-dominant (LDE group) and 23 right-eye-dominant (RDE group) participants performed the experiments. Face stimuli were randomly presented in the LVF or right visual field (RVF). The RDE group exhibited significantly larger N170 amplitudes compared with the LDE group. Faces presented in the LVF elicited N170 amplitudes that were significantly more negative in the RDE group than they were in the LDE group, whereas the amplitudes elicited by stimuli presented in the RVF were equivalent between the groups. The LVF superiority was maintained in the RDE group but not in the LDE group. Our results provide the first neural evidence of the DE's effects on the LVF superiority for faces. We propose that the RDE may be more biologically specialized for face processing. © The Author (2017). Published by Oxford University Press.
A HiPIMS plasma source with a magnetic nozzle that accelerates ions: application in a thruster
NASA Astrophysics Data System (ADS)
Bathgate, Stephen N.; Ganesan, Rajesh; Bilek, Marcela M. M.; McKenzie, David R.
2017-01-01
We demonstrate a solid-fuel electrodeless ion thruster that uses a magnetic nozzle to collimate and accelerate copper ions produced by a high power impulse magnetron sputtering (HiPIMS) discharge. The discharge is initiated using argon gas, but in a practical device the consumption of argon could be minimised by exploiting the self-sputtering of copper. The ion fluence produced by the HiPIMS discharge was measured with a retarding field energy analyzer (RFEA) as a function of the magnetic field strength of the nozzle. The ion fraction of the copper was determined from the deposition rate of copper as a function of substrate bias and was found to exceed 87%. The ion fluence and ion energy increased in proportion to the magnetic field of the nozzle, and the energy of the ions was found to follow a Maxwell-Boltzmann distribution with a directed velocity. The effectiveness of the magnetic nozzle in converting the randomized thermal motion of the ions into a jet was demonstrated from the energy distribution of the ions. A maximum ion exhaust velocity of at least 15.1 km/s, equivalent to a specific impulse of 1543 s, was measured, which is comparable to that of existing Hall thrusters and exceeds that of Teflon pulsed plasma thrusters.
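The quoted figures can be cross-checked through the standard relation between effective exhaust velocity and specific impulse, Isp = v / g0. A minimal sketch (the function name is an illustrative choice, not from the paper):

```python
G0 = 9.80665  # standard gravity, m/s^2

def specific_impulse(exhaust_velocity_m_s):
    """Specific impulse (s) implied by an effective exhaust velocity."""
    return exhaust_velocity_m_s / G0

# The reported lower bound of 15.1 km/s corresponds to roughly 1540 s,
# consistent with the quoted specific impulse of 1543 s.
isp = specific_impulse(15.1e3)
```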
Evaluation of the NCPDP Structured and Codified Sig Format for e-prescriptions.
Liu, Hangsheng; Burkhart, Q; Bell, Douglas S
2011-01-01
To evaluate the ability of the structure and code sets specified in the National Council for Prescription Drug Programs Structured and Codified Sig Format to represent ambulatory electronic prescriptions. We parsed the Sig strings from a sample of 20,161 de-identified ambulatory e-prescriptions into variables representing the fields of the Structured and Codified Sig Format. A stratified random sample of these representations was then reviewed by a group of experts. For codified Sig fields, we attempted to map the actual words used by prescribers to the equivalent terms in the designated terminology. The measures were the proportion of prescriptions that the Format could fully represent and the proportion of terms used that could be mapped to the designated terminology. The fields defined in the Format could fully represent 95% of Sigs (95% CI 93% to 97%), but ambiguities were identified, particularly in representing multiple-step instructions. The terms used by prescribers could be codified for only 60% of dose delivery methods, 84% of dose forms, 82% of vehicles, 95% of routes, 70% of sites, 33% of administration timings, and 93% of indications. The findings are based on a retrospective sample of ambulatory prescriptions derived mostly from primary care physicians. The fields defined in the Format could represent most of the patient instructions in a large prescription sample, but prior to its mandatory adoption, further work is needed to ensure that potential ambiguities are addressed and that a complete set of terms is available for the codified fields.
Salminen, Paulina; Helmiö, Mika; Ovaska, Jari; Juuti, Anne; Leivonen, Marja; Peromaa-Haavisto, Pipsa; Hurme, Saija; Soinio, Minna; Nuutila, Pirjo; Victorzon, Mikael
2018-01-16
Laparoscopic sleeve gastrectomy for treatment of morbid obesity has increased substantially despite the lack of long-term results compared with laparoscopic Roux-en-Y gastric bypass. To determine whether laparoscopic sleeve gastrectomy and laparoscopic Roux-en-Y gastric bypass are equivalent for weight loss at 5 years in patients with morbid obesity. The Sleeve vs Bypass (SLEEVEPASS) multicenter, multisurgeon, open-label, randomized clinical equivalence trial was conducted from March 2008 until June 2010 in Finland. The trial enrolled 240 morbidly obese patients aged 18 to 60 years, who were randomly assigned to sleeve gastrectomy or gastric bypass with a 5-year follow-up period (last follow-up, October 14, 2015). Laparoscopic sleeve gastrectomy (n = 121) or laparoscopic Roux-en-Y gastric bypass (n = 119). The primary end point was weight loss evaluated by percentage excess weight loss. Prespecified equivalence margins for the clinical significance of weight loss differences between gastric bypass and sleeve gastrectomy were -9% to +9% excess weight loss. Secondary end points included resolution of comorbidities, improvement of quality of life (QOL), all adverse events (overall morbidity), and mortality. Among 240 patients randomized (mean age, 48 [SD, 9] years; mean baseline body mass index, 45.9 [SD, 6.0]; 69.6% women), 80.4% completed the 5-year follow-up. At baseline, 42.1% had type 2 diabetes, 34.6% dyslipidemia, and 70.8% hypertension. The estimated mean percentage excess weight loss at 5 years was 49% (95% CI, 45%-52%) after sleeve gastrectomy and 57% (95% CI, 53%-61%) after gastric bypass (difference, 8.2 percentage units [95% CI, 3.2%-13.2%], higher in the gastric bypass group) and did not meet criteria for equivalence. Complete or partial remission of type 2 diabetes was seen in 37% (n = 15/41) after sleeve gastrectomy and in 45% (n = 18/40) after gastric bypass (P > .99).
Medication for dyslipidemia was discontinued in 47% (n = 14/30) after sleeve gastrectomy and 60% (n = 24/40) after gastric bypass (P = .15) and for hypertension in 29% (n = 20/68) and 51% (n = 37/73) (P = .02), respectively. There was no statistically significant difference in QOL between groups (P = .85) and no treatment-related mortality. At 5 years the overall morbidity rate was 19% (n = 23) for sleeve gastrectomy and 26% (n = 31) for gastric bypass (P = .19). Among patients with morbid obesity, use of laparoscopic sleeve gastrectomy compared with use of laparoscopic Roux-en-Y gastric bypass did not meet criteria for equivalence in terms of percentage excess weight loss at 5 years. Although gastric bypass compared with sleeve gastrectomy was associated with greater percentage excess weight loss at 5 years, the difference was not statistically significant, based on the prespecified equivalence margins. clinicaltrials.gov Identifier: NCT00793143.
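The trial's equivalence criterion reduces to a containment check: equivalence is concluded only when the entire confidence interval for the between-group difference lies inside the prespecified margin. A minimal sketch (the function name and the second example interval are illustrative):

```python
def within_equivalence_margin(ci_low, ci_high, margin=9.0):
    """Equivalence holds only if the whole CI for the difference
    lies strictly inside (-margin, +margin)."""
    return -margin < ci_low and ci_high < margin

# SLEEVEPASS: the difference in percentage excess weight loss was
# 8.2 points with a 95% CI of 3.2 to 13.2. The upper bound exceeds +9,
# so equivalence cannot be concluded, even though the point estimate
# itself lies inside the margin.
verdict = within_equivalence_margin(3.2, 13.2)  # False
```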
NASA Astrophysics Data System (ADS)
Bakhtiar, Nurizatul Syarfinas Ahmad; Abdullah, Farah Aini; Hasan, Yahya Abu
2017-08-01
In this paper, we consider the effect of a random field on the pulsating and snaking solitons in dissipative systems described by the one-dimensional cubic-quintic complex Ginzburg-Landau equation (cqCGLE). The dynamical behaviour was simulated by adding a random field to the initial pulse. We then solve the equation numerically, fixing the initial amplitude profile for the pulsating and snaking solitons without loss of generality. To create the random field, we choose 0 ≤ ɛ ≤ 1.0. As a result, multiple soliton trains are formed when the random field is applied to a pulse-like initial profile for the parameters of the pulsating and snaking solitons. The results also show the effects of varying the random field on the transient energy peaks in pulsating and snaking solitons.
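The perturbed initial condition described above can be sketched as follows. The sech-shaped pulse profile and the uniform noise model are illustrative assumptions; the abstract only states that a random field with amplitude 0 ≤ ɛ ≤ 1 is added to the initial pulse:

```python
import math
import random

def perturbed_initial_pulse(n=256, length=20.0, eps=0.5, seed=0):
    """Sech-shaped initial pulse on a uniform grid, perturbed by an
    additive random field of amplitude eps (uniform in [-eps, eps])."""
    rng = random.Random(seed)
    xs = [-length / 2 + length * i / (n - 1) for i in range(n)]
    return [1.0 / math.cosh(x) + eps * rng.uniform(-1.0, 1.0) for x in xs]

pulse = perturbed_initial_pulse(eps=0.5)
```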
Bateli, Maria; Ben Rahal, Ghada; Christmann, Marin; Vach, Kirstin; Kohal, Ralf-Joachim
2018-01-01
Objective To test whether or not the modified design of the test implant (intended to increase primary stability) has an equivalent effect on MBL compared to the control. Methods Forty patients were randomly assigned to receive test or control implants to be installed in identically dimensioned bony beds. Implants were radiographically monitored at installation, at prosthetic delivery, and after one year. Treatments were considered equivalent if the 90% confidence interval (CI) for the mean difference (MD) in MBL was between −0.25 and 0.25 mm. Additionally, several soft tissue parameters and patient-reported outcome measures (PROMs) were evaluated. Linear mixed models were fitted for each patient to assess time effects on response variables. Results Thirty-three patients (21 males, 12 females; 58.2 ± 15.2 years old) with 81 implants (47 test, 34 control) were available for analysis after a mean observation period of 13.9 ± 4.5 months (3 dropouts, 3 missed appointments, and 1 missing file). The adjusted MD in MBL after one year was −0.13 mm (90% CI: −0.46 to 0.19; test group: −0.49; control group: −0.36; p = 0.507). Conclusion Both implant systems can be considered successful after one year of observation. Concerning MBL in the presented setup, equivalence of the treatments cannot be concluded. Registration This trial is registered with the German Clinical Trials Register (ID: DRKS00007877). PMID:29610765
Jurčišinová, E; Jurčišin, M
2017-05-01
The influence of the uniaxial small-scale anisotropy on kinematic magnetohydrodynamic turbulence is investigated by using the field theoretic renormalization group technique in the one-loop approximation of perturbation theory. The infrared stable fixed point of the renormalization group equations, which drives the scaling properties of the model in the inertial range, is investigated as a function of the anisotropy parameters, and it is shown that, at least at the one-loop level of approximation, the diffusion processes of the weak passive magnetic field in anisotropically driven kinematic magnetohydrodynamic turbulence are completely equivalent to the corresponding diffusion processes of passively advected scalar fields in anisotropic Navier-Stokes turbulent environments.
How can the neutrino interact with the electromagnetic field?
NASA Astrophysics Data System (ADS)
Novello, M.; Ducap, C. E. L.
2018-01-01
Maxwell electrodynamics in the fixed Minkowski space-time background can be described in an equivalent way in a curved Riemannian geometry that depends on the electromagnetic field and that we call the electromagnetic metric (e-metric for short). After showing such geometric equivalence we investigate the possibility that new processes dependent on the e-metric are allowed. In particular, for very high values of the field, a direct coupling of uncharged particles to the electromagnetic field may appear. Supported by Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq), FAPERJ (Fundação do Amparo à Pesquisa do Rio de Janeiro), FINEP (Financiadora de Estudos e Projetos), and Coordenação do Aperfeiçoamento do Pessoal do Ensino Superior (CAPES).
Frequency of RNA–RNA interaction in a model of the RNA World
STRIGGLES, JOHN C.; MARTIN, MATTHEW B.; SCHMIDT, FRANCIS J.
2006-01-01
The RNA World model for prebiotic evolution posits the selection of catalytic/template RNAs from random populations. The mechanisms by which these random populations could be generated de novo are unclear. Non-enzymatic and RNA-catalyzed nucleic acid polymerizations are poorly processive, which means that the resulting short-chain RNA population could contain only limited diversity. Nonreciprocal recombination of smaller RNAs provides an alternative mechanism for the assembly of larger species with concomitantly greater structural diversity; however, the frequency of any specific recombination event in a random RNA population is limited by the low probability of an encounter between any two given molecules. This low probability could be overcome if the molecules capable of productive recombination were redundant, with many nonhomologous but functionally equivalent RNAs being present in a random population. Here we report fluctuation experiments to estimate the redundancy of the set of RNAs in a population of random sequences that are capable of non-Watson-Crick interaction with another RNA. Parallel SELEX experiments showed that at least one in 10^6 random 20-mers binds to the P5.1 stem–loop of Bacillus subtilis RNase P RNA with affinities equal to that of its naturally occurring partner. This high frequency predicts that a single RNA in an RNA World would encounter multiple interacting RNAs within its lifetime, supporting recombination as a plausible mechanism for prebiotic RNA evolution. The large number of equivalent species implies that the selection of any single interacting species in the RNA World would be a contingent event, i.e., one resulting from historical accident. PMID:16495233
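The encounter argument can be made quantitative with a simple binomial estimate. A minimal sketch, assuming independent trials at the measured frequency of roughly one partner per 10^6 random 20-mers (the pool size below is an illustrative value, not from the paper):

```python
def expected_partners(pop_size, frequency=1e-6):
    """Expected number of functionally equivalent binding partners
    in a random-sequence population (binomial mean)."""
    return pop_size * frequency

def prob_at_least_one(pop_size, frequency=1e-6):
    """Probability that a given RNA finds at least one partner,
    assuming independent trials."""
    return 1.0 - (1.0 - frequency) ** pop_size

# Even a modest pool of 10^7 molecules offers ~10 potential partners,
# and an encounter with at least one is nearly certain.
mean_partners = expected_partners(10**7)
p_encounter = prob_at_least_one(10**7)
```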
Dong, Nianbo; Lipsey, Mark W
2017-01-01
It is unclear whether propensity score analysis (PSA) based on pretest and demographic covariates will meet the ignorability assumption for replicating the results of randomized experiments. This study applies within-study comparisons to assess whether pre-Kindergarten (pre-K) treatment effects on achievement outcomes estimated using PSA based on a pretest and demographic covariates can approximate those found in a randomized experiment. Data: Four studies with samples of pre-K children each provided data on two math achievement outcome measures with baseline pretests and child demographic variables that included race, gender, age, language spoken at home, and mother's highest education. Research design and data analysis: A randomized study of a pre-K math curriculum provided benchmark estimates of effects on achievement measures. Comparison samples from other pre-K studies were then substituted for the original randomized control and the effects were reestimated using PSA. The correspondence was evaluated using multiple criteria. Results: The effect estimates using PSA were in the same direction as the benchmark estimates, had similar but not identical statistical significance, and did not differ from the benchmarks at statistically significant levels. However, the magnitude of the effect sizes differed and displayed both absolute and relative bias larger than required to show statistical equivalence with formal tests, but those results were not definitive because of the limited statistical power. We conclude that treatment effect estimates based on a single pretest and demographic covariates in PSA correspond to those from a randomized experiment only on the most general criteria for equivalence.
NASA Technical Reports Server (NTRS)
Greene, G. C.; Keafer, L. S., Jr.; Marple, C. G.; Foughner, J. T., Jr.
1972-01-01
Results are presented from a wind-tunnel investigation of the flow field around a 0.45-scale model of a Mars lander. The tests were conducted in air at values of Reynolds number equivalent to those anticipated on Mars. The effects of Reynolds number, model orientation with respect to the airstream, and the position of a dish-type antenna on the flow field were determined. An appendix describes the calibration and operational characteristics of hot-film anemometers under simulated Mars surface conditions.
Howell, Rebecca M; Burgett, Eric A; Isaacs, Daniel; Price Hedrick, Samantha G; Reilly, Michael P; Rankine, Leith J; Grantham, Kevin K; Perkins, Stephanie; Klein, Eric E
2016-05-01
To measure, in the setting of typical passively scattered proton craniospinal irradiation (CSI) treatment, the secondary neutron spectra, and use these spectra to calculate dose equivalents for both internal and external neutrons delivered via a Mevion single-room compact proton system. Secondary neutron spectra were measured using extended-range Bonner spheres for whole brain, upper spine, and lower spine proton fields. The detector used can discriminate neutrons over the entire range of the energy spectrum encountered in proton therapy. To separately assess internally and externally generated neutrons, each of the fields was delivered with and without a phantom. Average neutron energy, total neutron fluence, and ambient dose equivalent [H*(10)] were calculated for each spectrum. Neutron dose equivalents as a function of depth were estimated by applying published neutron depth-dose data to in-air H*(10) values. For CSI fields, neutron spectra were similar, with a high-energy direct neutron peak, an evaporation peak, a thermal peak, and an intermediate continuum between the evaporation and thermal peaks. Neutrons in the evaporation peak made the largest contribution to dose equivalent. Internal neutrons had a very low to negligible contribution to dose equivalent compared with external neutrons, largely attributed to the measurement location being far outside the primary proton beam. Average energies ranged from 8.6 to 14.5 MeV, whereas fluences ranged from 6.91 × 10^6 to 1.04 × 10^7 n/cm^2/Gy, and H*(10) ranged from 2.27 to 3.92 mSv/Gy. For CSI treatments delivered with a Mevion single-gantry proton therapy system, we found measured neutron dose was consistent with dose equivalents reported for CSI with other proton beamlines. Copyright © 2016 The Authors. Published by Elsevier Inc. All rights reserved.
New modeling method for the dielectric relaxation of a DRAM cell capacitor
NASA Astrophysics Data System (ADS)
Choi, Sujin; Sun, Wookyung; Shin, Hyungsoon
2018-02-01
This study proposes a new method for automatically synthesizing the equivalent circuit of the dielectric relaxation (DR) characteristic in dynamic random access memory (DRAM) without frequency-dependent capacitance measurements. Charge loss due to DR can be observed as a voltage drop at the storage node, and this phenomenon can be analyzed by an equivalent circuit. The Havriliak-Negami model is used to accurately determine the electrical characteristic parameters of the equivalent circuit. The DRAM sensing operation is performed in HSPICE simulations to verify the new method. The simulation demonstrates that the storage node voltage drop resulting from DR and the reduction in the sensing voltage margin, which has a critical impact on the DRAM read operation, can be accurately estimated using this new method.
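The Havriliak-Negami permittivity has a closed form that is straightforward to evaluate; a minimal sketch (the parameter values below are illustrative, not fitted DRAM values from the paper):

```python
def havriliak_negami(omega, eps_inf, delta_eps, tau, alpha, beta):
    """Complex permittivity of the Havriliak-Negami relaxation model:
    eps(w) = eps_inf + delta_eps / (1 + (1j*w*tau)**alpha)**beta.
    alpha = beta = 1 reduces to simple Debye relaxation."""
    return eps_inf + delta_eps / (1 + (1j * omega * tau) ** alpha) ** beta

# Debye limit evaluated at omega*tau = 1: eps = 3 + 20/(1 + 1j) = 13 - 10j
eps = havriliak_negami(omega=1e4, eps_inf=3.0, delta_eps=20.0,
                       tau=1e-4, alpha=1.0, beta=1.0)
```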
Wang, Deli; Xu, Wei; Zhao, Xiangrong
2016-03-01
This paper deals with the stationary responses of a Rayleigh viscoelastic system with zero-barrier impacts under external random excitation. First, the original stochastic viscoelastic system is converted to an equivalent stochastic system without viscoelastic terms by approximately adding the equivalent stiffness and damping. By means of a non-smooth transformation of the state variables, this system is then replaced by a new system without an impact term. The stationary probability density functions of the system are obtained analytically through the stochastic averaging method. By considering the effects of the biquadratic nonlinear damping coefficient and the noise intensity on the system responses, the effectiveness of the theoretical method is tested by comparing the analytical results with those generated from Monte Carlo simulations. It also deserves attention that some system parameters can induce the occurrence of stochastic P-bifurcation.
From random microstructures to representative volume elements
NASA Astrophysics Data System (ADS)
Zeman, J.; Šejnoha, M.
2007-06-01
A unified treatment of random microstructures proposed in this contribution opens the way to efficient solutions of large-scale real-world problems. The paper introduces the notion of a statistically equivalent periodic unit cell (SEPUC) that replaces, in a computational step, the actual complex geometries on an arbitrary scale. A SEPUC is constructed such that its morphology conforms with images of real microstructures. Here, the two-point probability function and the lineal path function are employed to classify, from the statistical point of view, the geometrical arrangement of various material systems. Examples of statistically equivalent unit cells constructed for a unidirectional fibre tow, a plain weave textile composite, and an irregular-coursed masonry wall are given. A specific result promoting the applicability of the SEPUC as a tool for the derivation of homogenized effective properties that are subsequently used in an independent macroscopic analysis is also presented.
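The two-point probability function S2(r) that underpins the SEPUC construction can be estimated directly from a binary micrograph. A simplified, illustrative Monte-Carlo estimator along one axis (the paper's procedure also uses the lineal path function and full two-dimensional statistics):

```python
import random

def two_point_probability(image, r, trials=20000, seed=1):
    """Estimate S2(r): the probability that two points a horizontal
    distance r apart both fall in the phase of interest (value 1).
    `image` is a 2D list of 0/1 values."""
    rng = random.Random(seed)
    rows, cols = len(image), len(image[0])
    hits = 0
    for _ in range(trials):
        i = rng.randrange(rows)
        j = rng.randrange(cols - r)       # keep j + r inside the image
        hits += image[i][j] and image[i][j + r]
    return hits / trials
```

For a statistically homogeneous medium, S2(0) equals the volume fraction of the phase, and S2(r) decays toward its square as r grows.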
Determination of Dynamic Recrystallization Process by Equivalent Strain
NASA Astrophysics Data System (ADS)
Qin, Xiaomei; Deng, Wei
Based on Tarnovskii's displacement field, an equivalent strain expression was derived, and according to the dynamic recrystallization (DRX) critical strain, the DRX process was determined from the equivalent strain. It was found that the equivalent strain distribution in the deformed specimen is inhomogeneous and increases with increasing true strain. Under a given true strain, the equivalent strains at the center, at the half radius, and on the tangential plane just below the surface of the specimen are higher than the true strain. Thus, micrographs at those positions cannot exactly reflect the true microstructures under that true strain. With increasing strain rate, the onset and finish times of DRX decrease. The frozen microstructures of 20Mn23AlV steel under the experimental conditions validate the feasibility of predicting the DRX process from the equivalent strain.
Bednarz, Bryan; Hancox, Cindy; Xu, X George
2012-01-01
There is growing concern about radiation-induced second cancers associated with radiation treatments. Particular attention has been focused on the risk to patients treated with intensity-modulated radiation therapy (IMRT) due primarily to increased monitor units. To address this concern we have combined a detailed medical linear accelerator model of the Varian Clinac 2100 C with anatomically realistic computational phantoms to calculate organ doses from selected treatment plans. This paper describes the application to calculate organ-averaged equivalent doses using a computational phantom for three different treatments of prostate cancer: a 4-field box treatment, the same box treatment plus a 6-field 3D-CRT boost treatment and a 7-field IMRT treatment. The equivalent doses per MU to those organs that have shown a predilection for second cancers were compared between the different treatment techniques. In addition, the dependence of photon and neutron equivalent doses on gantry angle and energy was investigated. The results indicate that the box treatment plus 6-field boost delivered the highest intermediate- and low-level photon doses per treatment MU to the patient primarily due to the elevated patient scatter contribution as a result of an increase in integral dose delivered by this treatment. In most organs the contribution of neutron dose to the total equivalent dose for the 3D-CRT treatments was less than the contribution of photon dose, except for the lung, esophagus, thyroid and brain. The total equivalent dose per MU to each organ was calculated by summing the photon and neutron dose contributions. For all organs non-adjacent to the primary beam, the equivalent doses per MU from the IMRT treatment were less than the doses from the 3D-CRT treatments. This is due to the increase in the integral dose and the added neutron dose to these organs from the 18 MV treatments. 
However, depending on the application technique and optimization used, the required MU values for IMRT treatments can be two to three times greater than 3D CRT. Therefore, the total equivalent dose in most organs would be higher from the IMRT treatment compared to the box treatment and comparable to the organ doses from the box treatment plus the 6-field boost. This is the first time when organ dose data for an adult male patient of the ICRP reference anatomy have been calculated and documented. The tools presented in this paper can be used to estimate the second cancer risk to patients undergoing radiation treatment. PMID:19671968
Dark matter and the equivalence principle
NASA Technical Reports Server (NTRS)
Frieman, Joshua A.; Gradwohl, Ben-Ami
1993-01-01
A survey is presented of the current understanding of dark matter invoked by astrophysical theory and cosmology. Einstein's equivalence principle asserts that local measurements cannot distinguish a system at rest in a gravitational field from one that is in uniform acceleration in empty space. Recent test-methods for the equivalence principle are presently discussed as bases for testing of dark matter scenarios involving the long-range forces between either baryonic or nonbaryonic dark matter and ordinary matter.
40 CFR 53.58 - Operational field precision and blank test.
Code of Federal Regulations, 2013 CFR
2013-07-01
... PROGRAMS (CONTINUED) AMBIENT AIR MONITORING REFERENCE AND EQUIVALENT METHODS Procedures for Testing Physical (Design) and Performance Characteristics of Reference Methods and Class I and Class II Equivalent... samplers are also subject to a test for possible deposition of particulate matter on inactive filters...
40 CFR 53.58 - Operational field precision and blank test.
Code of Federal Regulations, 2014 CFR
2014-07-01
... PROGRAMS (CONTINUED) AMBIENT AIR MONITORING REFERENCE AND EQUIVALENT METHODS Procedures for Testing Physical (Design) and Performance Characteristics of Reference Methods and Class I and Class II Equivalent... samplers are also subject to a test for possible deposition of particulate matter on inactive filters...
40 CFR 53.58 - Operational field precision and blank test.
Code of Federal Regulations, 2010 CFR
2010-07-01
... PROGRAMS (CONTINUED) AMBIENT AIR MONITORING REFERENCE AND EQUIVALENT METHODS Procedures for Testing Physical (Design) and Performance Characteristics of Reference Methods and Class I and Class II Equivalent... samplers are also subject to a test for possible deposition of particulate matter on inactive filters...
40 CFR 53.58 - Operational field precision and blank test.
Code of Federal Regulations, 2012 CFR
2012-07-01
... PROGRAMS (CONTINUED) AMBIENT AIR MONITORING REFERENCE AND EQUIVALENT METHODS Procedures for Testing Physical (Design) and Performance Characteristics of Reference Methods and Class I and Class II Equivalent... samplers are also subject to a test for possible deposition of particulate matter on inactive filters...
Current Directions in Videoconferencing Tele-Mental Health Research
Richardson, Lisa K.; Frueh, B. Christopher; Grubaugh, Anouk L.; Egede, Leonard; Elhai, Jon D.
2009-01-01
The provision of mental health services via videoconferencing (tele-mental health) has become an increasingly routine component of mental health service delivery throughout the world. Emphasizing the research literature since 2003, we examine: 1) the extent to which the field of tele-mental health has advanced the research agenda previously suggested; and 2) implications for tele-mental health care delivery for special clinical populations. Previous findings have demonstrated that tele-mental health services are satisfactory to patients, improve outcomes, and are probably cost-effective. In the very small number of randomized controlled studies that have been conducted to date, tele-mental health has demonstrated efficacy equivalent to that of face-to-face care in a variety of clinical settings and with specific patient populations. However, methodologically flawed or limited research studies are the norm, and thus the research agenda for tele-mental health has not been fully maximized. Implications for future research and practice are discussed. PMID:20161010
A high speed model-based approach for wavefront sensorless adaptive optics systems
NASA Astrophysics Data System (ADS)
Lianghua, Wen; Yang, Ping; Shuai, Wang; Wenjing, Liu; Shanqiu, Chen; Xu, Bing
2018-02-01
To improve the temporal-frequency properties of wavefront sensorless adaptive optics (AO) systems, a fast general model-based aberration correction algorithm is presented. The approach is based on the approximately linear relation between the mean square of the aberration gradients and the second moment of the far-field intensity distribution. The presented model-based method can effectively correct a modal aberration by applying only one disturbance to the deformable mirror (one correction per disturbance); the correction is reconstructed by singular value decomposition of the correlation matrix of the Zernike functions' gradients. Numerical simulations of AO corrections under various random and dynamic aberrations are implemented. The simulation results indicate that the equivalent control bandwidth is 2-3 times that of the previous method, which achieves one aberration correction only after applying N disturbances to the deformable mirror (one correction per N disturbances).
Filamentary model in resistive switching materials
NASA Astrophysics Data System (ADS)
Jasmin, Alladin C.
2017-12-01
The need for next-generation computing devices is increasing as the demand for efficient data processing grows. The amount of data generated every second also increases, which requires large data storage devices. Oxide-based memory devices are being studied to explore new research frontiers, thanks to modern advances in nanofabrication. Various oxide materials are studied as active layers for non-volatile memory. This technology has potential application in resistive random-access memory (ReRAM) and can be easily integrated in CMOS technologies. The long-term perspective of this research field is to develop devices that mimic how the brain processes information. To realize such applications, a thorough understanding of the charge transport and switching mechanism is important. A new perspective on multistate resistive switching, based on current-induced filament dynamics, is discussed. A simple equivalent circuit of the device gives quantitative information about the nature of the conducting filament at different resistance states.
Exact solution for the time evolution of network rewiring models
NASA Astrophysics Data System (ADS)
Evans, T. S.; Plato, A. D. K.
2007-05-01
We consider the rewiring of a bipartite graph using a mixture of random and preferential attachment. The full mean-field equations for the degree distribution and its generating function are given. The exact solution of these equations for all finite parameter values at any time is found in terms of standard functions. It is demonstrated that these solutions are an excellent fit to numerical simulations of the model. We discuss the relationship between our model and several others in the literature, including examples of urn, backgammon, and balls-in-boxes models, the Watts and Strogatz rewiring problem, and some models of zero range processes. Our model is also equivalent to those used in various applications including cultural transmission, family name and gene frequencies, glasses, and wealth distributions. Finally some Voter models and an example of a minority game also show features described by our model.
NASA Astrophysics Data System (ADS)
Velev, Julian P.; Merodio, Pablo; Pollack, Cesar; Kalitsov, Alan; Chshiev, Mairbek; Kioussis, Nicholas
2017-12-01
Using model calculations, we demonstrate a very high level of control of the spin-transfer torque (STT) by electric field in multiferroic tunnel junctions with composite dielectric/ferroelectric barriers. We find that, for particular device parameters, toggling the polarization direction can switch the voltage-induced part of STT between a finite value and a value close to zero, i.e. quench and release the torque. Additionally, we demonstrate that under certain conditions the zero-voltage STT, i.e. the interlayer exchange coupling, can switch sign with polarization reversal, which is equivalent to reversing the magnetic ground state of the tunnel junction. This bias- and polarization-tunability of the STT could be exploited to engineer novel functionalities such as softening/hardening of the bit or increasing the signal-to-noise ratio in magnetic sensors, which can have important implications for magnetic random access memories or for combined memory and logic devices.
2011-01-01
Background Safety assessment of genetically modified organisms is currently often performed by comparative evaluation. However, natural variation of plant characteristics between commercial varieties is usually not considered explicitly in the statistical computations underlying the assessment. Results Statistical methods are described for the assessment of the difference between a genetically modified (GM) plant variety and a conventional non-GM counterpart, and for the assessment of the equivalence between the GM variety and a group of reference plant varieties which have a history of safe use. It is proposed to present the results of both difference and equivalence testing for all relevant plant characteristics simultaneously in one or a few graphs, as an aid for further interpretation in safety assessment. A procedure is suggested to derive equivalence limits from the observed results for the reference plant varieties using a specific implementation of the linear mixed model. Three different equivalence tests are defined to classify any result in one of four equivalence classes. The performance of the proposed methods is investigated by a simulation study, and the methods are illustrated on compositional data from a field study on maize grain. Conclusions A clear distinction of practical relevance is shown between difference and equivalence testing. The proposed tests are shown to have appropriate performance characteristics by simulation, and the proposed simultaneous graphical representation of results was found to be helpful for the interpretation of results from a practical field trial data set. PMID:21324199
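The difference/equivalence distinction above can be illustrated with a minimal two one-sided tests (TOST) sketch. This is not the paper's linear-mixed-model procedure; the data, equivalence limits, and the large-sample normal approximation below are all illustrative assumptions.

```python
# Hedged TOST sketch for equivalence of a GM variety mean against fixed
# limits. Illustrative only: the paper derives limits from reference
# varieties via a linear mixed model. A normal approximation to the t
# distribution is used (adequate for the sample sizes shown).
import math
import numpy as np

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def tost(x, y, low, high, alpha=0.05):
    """Equivalence if BOTH one-sided tests reject: low < mean(x)-mean(y) < high."""
    nx, ny = len(x), len(y)
    diff = np.mean(x) - np.mean(y)
    # pooled standard error of the mean difference
    sp2 = ((nx - 1) * np.var(x, ddof=1)
           + (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2)
    se = math.sqrt(sp2 * (1.0 / nx + 1.0 / ny))
    p_lower = 1.0 - phi((diff - low) / se)   # H0: diff <= low
    p_upper = phi((diff - high) / se)        # H0: diff >= high
    return diff, max(p_lower, p_upper) < alpha

rng = np.random.default_rng(0)
gm = rng.normal(10.0, 1.0, 200)    # hypothetical GM-variety analyte values
ref = rng.normal(10.05, 1.0, 200)  # hypothetical conventional counterpart
diff, equivalent = tost(gm, ref, low=-0.5, high=0.5)
print(f"mean difference {diff:+.3f}; equivalent within ±0.5: {equivalent}")
```

Note that a non-significant difference test is not evidence of equivalence; TOST makes equivalence the alternative hypothesis, which is the distinction the paper's graphs display.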
Kraeutler, Matthew J; Reynolds, Kirk A; Long, Cyndi; McCarty, Eric C
2015-06-01
The purpose of this study was to compare the effect of compressive cryotherapy (CC) vs. ice on postoperative pain in patients undergoing shoulder arthroscopy for rotator cuff repair or subacromial decompression. A commercial device was used for postoperative CC; a standard ice wrap (IW) was used for postoperative cryotherapy alone. Patients scheduled for rotator cuff repair or subacromial decompression were consented and randomized to use either CC or a standard IW for the first postoperative week. All patients were asked to complete a "diary" each day, which included visual analog scale scores for average daily pain and worst daily pain as well as total pain medication usage. Pain medications were then converted to a morphine equivalent dosage. Forty-six patients completed the study and were available for analysis; 25 had been randomized to CC and 21 to standard IW. No significant differences were found in average pain, worst pain, or morphine equivalent dosage on any day. There does not appear to be a significant benefit to the use of CC over a standard IW in patients undergoing shoulder arthroscopy for rotator cuff repair or subacromial decompression. Further study is needed to determine whether CC devices are a cost-effective option for postoperative pain management in this population. Copyright © 2015 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Elsevier Inc. All rights reserved.
Process for magnetic beneficiating petroleum cracking catalyst
Doctor, R.D.
1993-10-05
A process is described for beneficiating a particulate zeolite petroleum cracking catalyst having metal values in excess of 1000 ppm nickel equivalents. The particulate catalyst is passed through a magnetic field in the range of from about 2 Tesla to about 5 Tesla generated by a superconducting quadrupole open-gradient magnetic system for a time sufficient to effect separation of said catalyst into a plurality of zones having different nickel equivalent concentrations. A first zone has nickel equivalents of about 6,000 ppm and greater, a second zone has nickel equivalents in the range of from about 2000 ppm to about 6000 ppm, and a third zone has nickel equivalents of about 2000 ppm and less. The zones of catalyst are separated and the second zone material is recycled to a fluidized bed of zeolite petroleum cracking catalyst. The low nickel equivalent zone is treated while the high nickel equivalent zone is discarded. 1 figure.
Process for magnetic beneficiating petroleum cracking catalyst
Doctor, Richard D.
1993-01-01
A process for beneficiating a particulate zeolite petroleum cracking catalyst having metal values in excess of 1000 ppm nickel equivalents. The particulate catalyst is passed through a magnetic field in the range of from about 2 Tesla to about 5 Tesla generated by a superconducting quadrupole open-gradient magnetic system for a time sufficient to effect separation of said catalyst into a plurality of zones having different nickel equivalent concentrations. A first zone has nickel equivalents of about 6,000 ppm and greater, a second zone has nickel equivalents in the range of from about 2000 ppm to about 6000 ppm, and a third zone has nickel equivalents of about 2000 ppm and less. The zones of catalyst are separated and the second zone material is recycled to a fluidized bed of zeolite petroleum cracking catalyst. The low nickel equivalent zone is treated while the high nickel equivalent zone is discarded.
ERIC Educational Resources Information Center
Livingston, Samuel A.; Kim, Sooyeon
2010-01-01
A series of resampling studies investigated the accuracy of equating by four different methods in a random groups equating design with samples of 400, 200, 100, and 50 test takers taking each form. Six pairs of forms were constructed. Each pair was constructed by assigning items from an existing test taken by 9,000 or more test takers. The…
Compactly supported linearised observables in single-field inflation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fröb, Markus B.; Higuchi, Atsushi; Hack, Thomas-Paul, E-mail: mbf503@york.ac.uk, E-mail: thomas-paul.hack@itp.uni-leipzig.de, E-mail: atsushi.higuchi@york.ac.uk
We investigate the gauge-invariant observables constructed by smearing the graviton and inflaton fields by compactly supported tensors at linear order in general single-field inflation. These observables correspond to gauge-invariant quantities that can be measured locally. In particular, we show that these observables are equivalent to (smeared) local gauge-invariant observables such as the linearised Weyl tensor, which have better infrared properties than the graviton and inflaton fields. Special cases include the equivalence between the compactly supported gauge-invariant graviton observable and the smeared linearised Weyl tensor in Minkowski and de Sitter spaces. Our results indicate that the infrared divergences in the tensor and scalar perturbations in single-field inflation have the same status as in de Sitter space and are both a gauge artefact, in a certain technical sense, at tree level.
Kouritzin, Michael A; Newton, Fraser; Wu, Biao
2013-04-01
Herein, we propose generating CAPTCHAs through random field simulation and give a novel, effective and efficient algorithm to do so. Indeed, we demonstrate that sufficient information about word tests for easy human recognition is contained in the site marginal probabilities and the site-to-nearby-site covariances and that these quantities can be embedded directly into certain conditional probabilities, designed for effective simulation. The CAPTCHAs are then partial random realizations of the random CAPTCHA word. We start with an initial random field (e.g., randomly scattered letter pieces) and use Gibbs resampling to re-simulate portions of the field repeatedly using these conditional probabilities until the word becomes human-readable. The residual randomness from the initial random field together with the random implementation of the CAPTCHA word provide significant resistance to attack. This results in a CAPTCHA, which is unrecognizable to modern optical character recognition but is recognized about 95% of the time in a human readability study.
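The Gibbs resampling idea can be sketched with a toy Ising-like model: each site's conditional probability combines a target "word mask" marginal with a nearest-neighbour coupling, so repeated resampling pulls the random field toward the mask while leaving residual randomness. This is an illustrative sketch, not the authors' algorithm; the mask, coupling, and fidelity parameters are assumptions.

```python
# Toy sketch of random-field CAPTCHA generation (not the authors' exact
# conditionals): start from a random +/-1 field and Gibbs-resample sites
# using conditionals that embed a target marginal (the word mask) and a
# positive nearest-neighbour coupling.
import numpy as np

rng = np.random.default_rng(1)

def gibbs_step(field, target, coupling=1.5, fidelity=2.0):
    h, w = field.shape
    for i in range(h):
        for j in range(w):
            # sum of the four nearest neighbours inside the grid
            nb = sum(field[a, b]
                     for a, b in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                     if 0 <= a < h and 0 <= b < w)
            # P(s=+1 | neighbours) for an Ising model with external field
            logit = 2.0 * (coupling * nb + fidelity * target[i, j])
            p_plus = 1.0 / (1.0 + np.exp(-logit))
            field[i, j] = 1 if rng.random() < p_plus else -1
    return field

# hypothetical word mask: +1 where "ink" should be, -1 elsewhere
target = -np.ones((8, 8))
target[2:6, 2:6] = 1
field = rng.choice([-1, 1], size=(8, 8)).astype(float)  # scattered start
for _ in range(20):
    gibbs_step(field, target)
match = np.mean(field == target)
print(f"fraction of sites matching the mask: {match:.2f}")
```

Stopping the resampling early, as the paper does once the word is human-readable, is what preserves the residual randomness that defeats optical character recognition.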
2013-01-01
Purpose Nondegradable steel- and titanium-based implants are commonly used in orthopedic surgery. Although they provide maximal stability, they are also associated with interference on imaging modalities, may induce stress shielding, and additional explantation procedures may be necessary. Alternatively, degradable polymer implants are mechanically weaker and induce foreign body reactions. Degradable magnesium-based stents are currently being investigated in clinical trials for use in cardiovascular medicine. The magnesium alloy MgYREZr demonstrates good biocompatibility and osteoconductive properties. The aim of this prospective, randomized, clinical pilot trial was to determine whether magnesium-based MgYREZr screws are equivalent to standard titanium screws for fixation during chevron osteotomy in patients with a mild hallux valgus. Methods Patients (n=26) were randomly assigned to undergo osteosynthesis using either titanium or degradable magnesium-based implants of the same design. The 6-month follow-up period included clinical, laboratory, and radiographic assessments. Results No significant differences were found in terms of the American Orthopaedic Foot and Ankle Society (AOFAS) score for hallux, visual analog scale for pain assessment, or range of motion (ROM) of the first metatarsophalangeal joint (MTPJ). No foreign body reactions, osteolysis, or systemic inflammatory reactions were detected. The groups were not significantly different in terms of radiographic or laboratory results. Conclusion The radiographic and clinical results of this prospective controlled study demonstrate that degradable magnesium-based screws are equivalent to titanium screws for the treatment of mild hallux valgus deformities. PMID:23819489
Equivalent source modeling of the main field using MAGSAT data
NASA Technical Reports Server (NTRS)
1980-01-01
The software was considerably enhanced to accommodate a more comprehensive examination of data available for field modeling using the equivalent-sources method by (1) implementing a dynamic core allocation capability into the software system for the automatic dimensioning of the normal matrix; (2) implementing a time-dependent model for the dipoles; (3) incorporating the capability to input specialized data formats in a fashion similar to models in spherical harmonics; and (4) implementing the optional ability to simultaneously estimate observatory anomaly biases where annual-means data are utilized. The time-dependence capability was demonstrated by estimating a component model of 21 deg resolution using the 14-day MAGSAT data set of Goddard's MGST (12/80). The equivalent-source model reproduced both the constant and the secular variation found in MGST (12/80).
Equivalent theories redefine Hamiltonian observables to exhibit change in general relativity
NASA Astrophysics Data System (ADS)
Pitts, J. Brian
2017-03-01
Change and local spatial variation are missing in canonical General Relativity's observables as usually defined, an aspect of the problem of time. Definitions can be tested using equivalent formulations of a theory, non-gauge and gauge, because they must have equivalent observables and everything is observable in the non-gauge formulation. Taking an observable from the non-gauge formulation and finding the equivalent in the gauge formulation, one requires that the equivalent be an observable, thus constraining definitions. For massive photons, the de Broglie-Proca non-gauge formulation observable $A_\mu$ is equivalent to the Stueckelberg-Utiyama gauge formulation quantity $A_\mu + \partial_\mu \varphi$, which must therefore be an observable. To achieve that result, observables must have 0 Poisson bracket not with each first-class constraint, but with the Rosenfeld-Anderson-Bergmann-Castellani gauge generator $G$, a tuned sum of first-class constraints, in accord with the Pons-Salisbury-Sundermeyer definition of observables. The definition for external gauge symmetries can be tested using massive gravity, where one can install gauge freedom by parametrization with clock fields $X^A$. The non-gauge observable $g_{\mu\nu}$ has the gauge equivalent $X^A{}_{,\mu}\, g^{\mu\nu}\, X^B{}_{,\nu}$. The Poisson bracket of $X^A{}_{,\mu}\, g^{\mu\nu}\, X^B{}_{,\nu}$ with $G$ turns out to be not 0 but a Lie derivative. This non-zero Poisson bracket refines and systematizes Kuchař's proposal to relax the 0 Poisson bracket condition with the Hamiltonian constraint. Thus observables need covariance, not invariance, in relation to external gauge symmetries. The Lagrangian and Hamiltonian for massive gravity are those of General Relativity + $\Lambda$ + 4 scalars, so the same definition of observables applies to General Relativity. Local fields such as $g_{\mu\nu}$ are observables. Thus observables change. Requiring equivalent observables for equivalent theories also recovers Hamiltonian-Lagrangian equivalence.
Sound field reproduction as an equivalent acoustical scattering problem.
Fazi, Filippo Maria; Nelson, Philip A
2013-11-01
Given a continuous distribution of acoustic sources, the determination of the source strength that ensures the synthesis of a desired sound field is shown to be identical to the solution of an equivalent acoustic scattering problem. The paper begins with the presentation of the general theory that underpins sound field reproduction with secondary sources continuously arranged on the boundary of the reproduction region. The process of reproduction by a continuous source distribution is modeled by means of an integral operator (the single layer potential). It is then shown how the solution of the sound reproduction problem corresponds to that of an equivalent scattering problem. Analytical solutions are computed for two specific instances of this problem, involving, respectively, the use of a secondary source distribution in spherical and planar geometries. The results are shown to be the same as those obtained with analyses based on High Order Ambisonics and Wave Field Synthesis, respectively, thus bringing to light a fundamental analogy between these two methods of sound reproduction. Finally, it is shown how the physical optics (Kirchhoff) approximation enables the derivation of a high-frequency simplification for the problem under consideration, this in turn being related to the secondary source selection criterion reported in the literature on Wave Field Synthesis.
A unifying framework for marginalized random intercept models of correlated binary outcomes
Swihart, Bruce J.; Caffo, Brian S.; Crainiceanu, Ciprian M.
2013-01-01
We demonstrate that many current approaches for marginal modeling of correlated binary outcomes produce likelihoods that are equivalent to the copula-based models herein. These general copula models of underlying latent threshold random variables yield likelihood-based models for marginal fixed effects estimation and interpretation in the analysis of correlated binary data with exchangeable correlation structures. Moreover, we propose a nomenclature and set of model relationships that substantially elucidates the complex area of marginalized random intercept models for binary data. A diverse collection of didactic mathematical and numerical examples are given to illustrate concepts. PMID:25342871
The blocked-random effect in pictures and words.
Toglia, M P; Hinman, P J; Dayton, B S; Catalano, J F
1997-06-01
Picture and word recall was examined in conjunction with list organization. 60 subjects studied a list of 30 items, either words or their pictorial equivalents. The 30 words/pictures, members of five conceptual categories, each represented by six exemplars, were presented either blocked by category or in a random order. While pictures were recalled better than words and a standard blocked-random effect was observed, the interaction indicated that the recall advantage of a blocked presentation was restricted to the word lists. A similar pattern emerged for clustering. These findings are discussed in terms of limitations upon the pictorial superiority effect.
Simulation of random road microprofile based on specified correlation function
NASA Astrophysics Data System (ADS)
Rykov, S. P.; Rykova, O. A.; Koval, V. S.; Vlasov, V. G.; Fedotov, K. V.
2018-03-01
The paper develops a numerical simulation method and algorithm for generating a random microprofile of special roads from a specified correlation function, using methods of correlation, spectral and numerical analysis. It proves that the transfer function of the generating filter, given known expressions for the spectral characteristics of the filter input and output, can be calculated using a theorem on nonnegative fractional-rational factorization and an integral transformation. A model of the random function equivalent to the real road-surface microprofile makes it possible to assess springing-system parameters and identify their ranges of variation.
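For the common special case of an exponential correlation function R(τ) = σ²e^(−α|τ|), the generating filter reduces to a first-order recursion, which gives a compact sketch of the simulation idea: white noise passed through the filter acquires the specified correlation. The correlation model and parameters below are illustrative assumptions, not the paper's road data.

```python
# Minimal shaping-filter sketch (assumed correlation R(t) = s2*exp(-a|t|),
# a common road-microprofile model): the generating filter for this
# spectrum is first order, i.e. an AR(1) recursion driven by white noise.
import numpy as np

def simulate_profile(n, dx, alpha, sigma2, rng):
    rho = np.exp(-alpha * dx)           # one-step correlation
    x = np.empty(n)
    x[0] = rng.normal(0.0, np.sqrt(sigma2))
    for k in range(1, n):
        # innovation variance keeps the output variance equal to sigma2
        x[k] = rho * x[k - 1] + rng.normal(0.0, np.sqrt(sigma2 * (1 - rho**2)))
    return x

rng = np.random.default_rng(42)
z = simulate_profile(200_000, dx=0.1, alpha=0.2, sigma2=4.0, rng=rng)
# the sample lag-1 correlation should be close to exp(-alpha*dx)
r1 = np.corrcoef(z[:-1], z[1:])[0, 1]
print(f"lag-1 correlation: {r1:.3f} (target {np.exp(-0.02):.3f})")
```

For rational spectra of higher order, the same spectral-factorization idea yields a higher-order recursion instead of this one-step filter.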
Equivalent equations of motion for gravity and entropy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Czech, Bartlomiej; Lamprou, Lampros; McCandlish, Samuel
We demonstrate an equivalence between the wave equation obeyed by the entanglement entropy of CFT subregions and the linearized bulk Einstein equation in Anti-de Sitter space. In doing so, we make use of the formalism of kinematic space and fields on this space. We show that the gravitational dynamics are equivalent to a gauge invariant wave-equation on kinematic space and that this equation arises in natural correspondence to the conformal Casimir equation in the CFT.
Equivalent equations of motion for gravity and entropy
Czech, Bartlomiej; Lamprou, Lampros; McCandlish, Samuel; ...
2017-02-01
We demonstrate an equivalence between the wave equation obeyed by the entanglement entropy of CFT subregions and the linearized bulk Einstein equation in Anti-de Sitter space. In doing so, we make use of the formalism of kinematic space and fields on this space. We show that the gravitational dynamics are equivalent to a gauge invariant wave-equation on kinematic space and that this equation arises in natural correspondence to the conformal Casimir equation in the CFT.
Near Identifiability of Dynamical Systems
NASA Technical Reports Server (NTRS)
Hadaegh, F. Y.; Bekey, G. A.
1987-01-01
Concepts regarding approximate mathematical models treated rigorously. Paper presents new results in analysis of structural identifiability, equivalence, and near equivalence between mathematical models and physical processes they represent. Helps establish rigorous mathematical basis for concepts related to structural identifiability and equivalence revealing fundamental requirements, tacit assumptions, and sources of error. "Structural identifiability," as used by workers in this field, loosely translates as meaning ability to specify unique mathematical model and set of model parameters that accurately predict behavior of corresponding physical system.
General invertible transformation and physical degrees of freedom
NASA Astrophysics Data System (ADS)
Takahashi, Kazufumi; Motohashi, Hayato; Suyama, Teruaki; Kobayashi, Tsutomu
2017-04-01
An invertible field transformation is such that the old field variables correspond one-to-one to the new variables. As such, one may think that two systems that are related by an invertible transformation are physically equivalent. However, if the transformation depends on field derivatives, the equivalence between the two systems is nontrivial due to the appearance of higher derivative terms in the equations of motion. To address this problem, we prove the following theorem on the relation between an invertible transformation and Euler-Lagrange equations: If the field transformation is invertible, then any solution of the original set of Euler-Lagrange equations is mapped to a solution of the new set of Euler-Lagrange equations, and vice versa. We also present applications of the theorem to scalar-tensor theories.
NASA Astrophysics Data System (ADS)
Gabai, Haniel; Baranes-Zeevi, Maya; Zilberman, Meital; Shaked, Natan T.
2013-04-01
We propose an off-axis interferometric imaging system as a simple and unique modality for continuous, non-contact and non-invasive wide-field imaging and characterization of drug release from polymeric devices used in biomedicine. In contrast to the current gold-standard methods in this field, which are usually based on chromatographic and spectroscopic techniques, our method requires no user intervention during the experiment, and only one test tube is prepared. We experimentally demonstrate imaging and characterization of drug release from a soy-based protein matrix used as a skin equivalent for wound dressings, with controlled release of the anesthetic drug bupivacaine. Our preliminary results demonstrate the high potential of our method as a simple and low-cost modality for wide-field imaging and characterization of drug release from drug delivery devices.
Rollet, S; Autischer, M; Beck, P; Latocha, M
2007-01-01
The response of a tissue equivalent proportional counter (TEPC) in a mixed radiation field with a neutron energy distribution similar to the radiation field at commercial flight altitudes has been studied. The measurements were made at the CERN-EU High-Energy Reference Field (CERF) facility, where a well-characterised radiation field is available for intercomparison. The TEPC instrument used by ARC Seibersdorf Research is filled with pure propane gas at low pressure and can be used to determine the lineal energy distribution of the energy deposition in a mass of gas equivalent to a 2 μm diameter volume of unit-density tissue, of similar size to the nuclei of biological cells. The linearity of the detector response was checked both in terms of dose and dose rate, and dead-time effects were corrected. The influence of the detector's exposure location and orientation in the radiation field on the dose distribution was also studied as a function of the total dose. The microdosimetric distribution of the absorbed dose as a function of lineal energy was obtained and compared with the same distribution simulated with the FLUKA Monte Carlo transport code. The dose equivalent was calculated by folding this distribution with the quality factor as a function of linear energy transfer. The measured and simulated distributions are in good agreement. As a result of this study the detector is well characterised and, thanks also to the numerical simulations, its response is well understood; it is currently being used on board aircraft to evaluate the dose to aircrew caused by cosmic radiation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rosnitskiy, P., E-mail: pavrosni@yandex.ru; Yuldashev, P., E-mail: petr@acs366.phys.msu.ru; Khokhlova, V., E-mail: vera@acs366.phys.msu.ru
2015-10-28
An equivalent source model was proposed as a boundary condition to the nonlinear parabolic Khokhlov-Zabolotskaya (KZ) equation to simulate high intensity focused ultrasound (HIFU) fields generated by medical ultrasound transducers with the shape of a spherical shell. The boundary condition was set in the initial plane; the aperture, the focal distance, and the initial pressure of the source were chosen based on the best match of the axial pressure amplitude and phase distributions in the Rayleigh integral analytic solution for a spherical transducer and the linear parabolic approximation solution for the equivalent source. Analytic expressions for the equivalent source parameters were derived. It was shown that the proposed approach allowed us to transfer the boundary condition from the spherical surface to the plane and to achieve a very good match between the linear field solutions of the parabolic and full diffraction models even for highly focused sources with F-number less than unity. The proposed method can be further used to expand the capabilities of the KZ nonlinear parabolic equation for efficient modeling of HIFU fields generated by strongly focused sources.
NASA Astrophysics Data System (ADS)
Cao, Xiangyu; Le Doussal, Pierre; Rosso, Alberto; Santachiara, Raoul
2018-04-01
We study transitions in log-correlated random energy models (logREMs) that are related to the violation of a Seiberg bound in Liouville field theory (LFT): the binding transition and the termination point transition (a.k.a. pre-freezing). By means of the LFT-logREM mapping, replica symmetry breaking and traveling-wave equation techniques, we unify both transitions in a two-parameter diagram, which describes the free-energy large deviations of logREMs with a deterministic background log potential, or equivalently, the joint moments of the free energy and Gibbs measure in logREMs without background potential. Under the LFT-logREM mapping, the transitions correspond to the competition of discrete and continuous terms in a four-point correlation function. Our results provide a statistical interpretation of a peculiar nonlocality of the operator product expansion in LFT. The results are rederived by a traveling-wave equation calculation, which shows that the features of LFT responsible for the transitions are reproduced in a simple model of diffusion with absorption. We also examine the problem by a replica symmetry breaking analysis, which complements the previous methods and reveals a rich large-deviation structure of the free energy of logREMs with a deterministic background log potential. Many results are verified in the integrable circular logREM by a replica-Coulomb gas integral approach. The related problem of the common length (overlap) distribution is also considered. We provide a traveling-wave equation derivation of the LFT predictions announced in a preceding work.
Jang, J-Y; Chang, Y R; Kim, S-W; Choi, S H; Park, S J; Lee, S E; Lim, C-S; Kang, M J; Lee, H; Heo, J S
2016-05-01
There is no consensus on the best method of preventing postoperative pancreatic fistula (POPF) after pancreaticoduodenectomy (PD). This multicentre, parallel group, randomized equivalence trial investigated the effect of two ways of pancreatic stenting after PD on the rate of POPF. Patients undergoing elective PD or pylorus-preserving PD with duct-to-mucosa pancreaticojejunostomy were enrolled from four tertiary referral hospitals. Randomization was stratified according to surgeon with a 1 : 1 allocation ratio to avoid any related technical factors. The primary endpoint was the clinically relevant POPF rate. Secondary endpoints were nutritional index, remnant pancreatic volume, long-term complications and quality of life 2 years after PD. A total of 328 patients were randomized to the external (164 patients) or internal (164) stent group between August 2010 and January 2014. The rates of clinically relevant POPF were 24·4 per cent in the external and 18·9 per cent in the internal stent group (risk difference 5·5 per cent). As the 90 per cent confidence interval (-2·0 to 13·0 per cent) did not fall within the predefined equivalence limits (-10 to 10 per cent), the clinically relevant POPF rates in the two groups were not equivalent. Similar results were observed for patients with soft pancreatic texture and a high fistula risk score. Other postoperative outcomes were comparable between the two groups. Five stent-related complications occurred in the external stent group. Multivariable analysis revealed that soft pancreatic texture, non-pancreatic disease and high body mass index (23·3 kg/m² or above) predicted clinically relevant POPF. External stenting after PD was associated with a higher rate of clinically relevant POPF than internal stenting. Registration number: NCT01023594 (https://www.clinicaltrials.gov). © 2016 BJS Society Ltd Published by John Wiley & Sons Ltd.
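The equivalence conclusion above follows from checking whether the 90 per cent confidence interval for the risk difference lies entirely within the ±10 per cent limits. A sketch with a Wald normal-approximation interval reproduces the reported numbers; the event counts (40 and 31) are back-calculated from the reported rates with 164 patients per group, and the trial's own interval method may differ.

```python
# Sketch of the trial's equivalence check (Wald normal-approximation CI;
# event counts back-calculated from the reported rates, 164 per group).
import math

n1 = n2 = 164
e1, e2 = 40, 31                  # ~24.4% external, ~18.9% internal POPF
p1, p2 = e1 / n1, e2 / n2
diff = p1 - p2
se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
z90 = 1.645                      # two-sided 90% confidence level
lo, hi = diff - z90 * se, diff + z90 * se
print(f"risk difference {diff:.1%}, 90% CI ({lo:.1%}, {hi:.1%})")
# equivalence declared only if the whole CI lies within (-10%, +10%)
print("equivalent:", -0.10 < lo and hi < 0.10)
```

This reproduces the reported 5.5 per cent difference and (-2.0, 13.0) interval; because the upper limit exceeds 10 per cent, equivalence cannot be declared even though the difference test alone would not be significant.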
2017-05-19
Vijay Singh, Martin Tchernookov, Rebecca Butterfield, Ilya Nemenman, Rongrong Ji. Director Field Model of the Primary Visual Cortex for Contour… [Personnel-support table from the report: Vijay Singh, 0.40 FTE (Physics); Martin Tchernookov, 0.20 FTE.]
78 FR 57470 - Special Conditions: Eclipse, EA500, Certification of Autothrottle Functions
Federal Register 2010, 2011, 2012, 2013, 2014
2013-09-19
... Engine Control System 23-112A-SC for High Intensity Radiated Fields (HIRF) Protection Equivalent Levels... transient. (e) Under rare normal and non-normal conditions, disengagement of any automatic control function... standards that the Administrator considers necessary to establish a level of safety equivalent to that...
Kernel-Correlated Lévy Field Driven Forward Rate and Application to Derivative Pricing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bo Lijun; Wang Yongjin; Yang Xuewei, E-mail: xwyangnk@yahoo.com.cn
2013-08-01
We propose a term structure of forward rates driven by a kernel-correlated Lévy random field under the HJM framework. The kernel-correlated Lévy random field is composed of a kernel-correlated Gaussian random field and a centered Poisson random measure. We give a criterion to preclude arbitrage under the risk-neutral pricing measure. As applications, an interest rate derivative with general payoff functional is priced under this pricing measure.
Kokeny, Paul; Cheng, Yu-Chung N; Xie, He
2018-05-01
Modeling MRI signal behaviors in the presence of discrete magnetic particles is important, as magnetic particles appear in nanoparticle-labeled cells, contrast agents, and other biological forms of iron. Currently, many models that take into account the discrete particle nature of a system have been used to predict magnitude signal decays in the form of R2* or R2' from a single voxel; little work has been done on predicting phase signals. In addition, most calculations of phase signals rely on the assumption that a system containing discrete particles behaves as a continuous medium. In this work, numerical simulations are used to investigate MRI magnitude and phase signals from discrete particles, without diffusion effects. Factors such as particle size, number density, susceptibility, volume fraction, randomness of particle arrangement, and field of view have been considered in the simulations. The results are compared to a ground truth model, theoretical work based on continuous media, or previous literature. Suitable parameters for modeling particles in several voxels, which lead to acceptable magnetic field distributions around particle surfaces and accurate MR signals, are identified. The phase values as a function of echo time from a central voxel filled by particles can be significantly different from those of a continuous cubic medium. However, a completely random distribution of particles can lead to an R2' value which agrees with the prediction from static dephasing theory. A sphere with a radius of at least 4 grid points in the simulations is found to be sufficient to generate MR signals equivalent to those from a larger sphere. Increasing the number of particles at a fixed volume fraction reduces the resulting variance in the phase behavior, converging to almost the same phase value at each echo time for different particle numbers. The variance of phase values is also reduced when the number of particles in a fixed voxel is increased. These results indicate that MRI signals from voxels containing discrete particles, even with a sufficient number of particles per voxel, cannot be properly modeled by a continuous medium with an equivalent susceptibility value in the voxel. Copyright © 2017 Elsevier Inc. All rights reserved.
Is the Non-Dipole Magnetic Field Random?
NASA Technical Reports Server (NTRS)
Walker, Andrew D.; Backus, George E.
1996-01-01
Statistical modelling of the Earth's magnetic field B has a long history. In particular, the spherical harmonic coefficients of scalar fields derived from B can be treated as Gaussian random variables. In this paper, we give examples of highly organized fields whose spherical harmonic coefficients pass tests for independent Gaussian random variables. The fact that coefficients at some depth may be usefully summarized as independent samples from a normal distribution need not imply that there really is some physical, random process at that depth. In fact, the field can be extremely structured and still be regarded for some purposes as random. In this paper, we examined the radial magnetic field B(sub r) produced by the core, but the results apply to any scalar field on the core-mantle boundary (CMB) which determines B outside the CMB.
Gallagher, Anthony G; Seymour, Neal E; Jordan-Black, Julie-Anne; Bunting, Brendan P; McGlade, Kieran; Satava, Richard Martin
2013-06-01
We assessed the effectiveness of transfer of training (ToT) from VR laparoscopic simulation training in 2 studies; in the second study, we also assessed the transfer effectiveness ratio (TER). ToT is a detectable performance improvement between equivalent groups, and TER is the observed percentage performance difference between 2 matched groups carrying out the same task, with 1 group pretrained on VR simulation. Concordance between simulated and in-vivo procedure performance was also assessed. The design was prospective, randomized, and blinded. In Study 1, experienced laparoscopic surgeons (n = 195), and in Study 2, laparoscopic novices (n = 30), were randomized to either train on VR simulation before completing an equivalent real-world task or complete the real-world task only. Experienced laparoscopic surgeons and novices who trained on the simulator performed significantly better than their controls, thus demonstrating ToT. Their performance showed a TER between 7% and 42% from the virtual to the real tasks. Simulation training had its largest impact on procedural error reduction in both studies (32-42%). The correlation observed between the VR and real-world task performance was r > 0.96 (Study 2). VR simulation training offers a powerful and effective platform for training safer skills.
Damuth, John
2007-05-01
Across a wide array of animal species, mean population densities decline with species body mass such that the rate of energy use of local populations is approximately independent of body size. This "energetic equivalence" is particularly evident when ecological population densities are plotted across several or more orders of magnitude in body mass and is supported by a considerable body of evidence. Nevertheless, interpretation of the data has remained controversial, largely because of the difficulty of explaining the origin and maintenance of such a size-abundance relationship in terms of purely ecological processes. Here I describe results of a simulation model suggesting that an extremely simple mechanism operating over evolutionary time can explain the major features of the empirical data. The model specifies only the size scaling of metabolism and a process where randomly chosen species evolve to take resource energy from other species. This process of energy exchange among particular species is distinct from a random walk of species abundances and creates a situation in which species populations using relatively low amounts of energy at any body size have an elevated extinction risk. Selective extinction of such species rapidly drives size-abundance allometry in faunas toward approximate energetic equivalence and maintains it there.
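The size-abundance argument above can be made concrete with a toy check of the scaling algebra (the 3/4-power exponents are those usually quoted in the energetic-equivalence literature; the prefactors here are arbitrary):

```python
import numpy as np

# If per-individual metabolic rate scales as B = b0 * M^(3/4) and population
# density as N = n0 * M^(-3/4), the population energy flux N * B is
# independent of body mass M: it equals b0 * n0 for every species.
masses = np.logspace(-3, 3, 7)            # body masses over six orders of magnitude
metabolic_rate = 2.0 * masses ** 0.75     # B, with arbitrary prefactor b0 = 2
density = 5.0 * masses ** -0.75           # N, with arbitrary prefactor n0 = 5
energy_flux = metabolic_rate * density    # constant b0 * n0 = 10 for all M
print(energy_flux)
```

The exponents cancel exactly, which is the formal content of "energetic equivalence"; the model in the abstract explains why selective extinction drives faunas toward this cancellation.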
Guo, Qingqian; Chen, Ruipeng; Sun, Xiaoquan; Jiang, Min; Sun, Haifeng; Wang, Shun; Ma, Liuzheng; Yang, Yatao; Hu, Jiandong
2018-06-06
Corn stalk lodging is caused by different factors, including severe wind storms, stalk cannibalization, and stalk rots, and it leads to yield loss. Determining how to rapidly evaluate corn lodging resistance will assist scientists in the field of crop breeding to understand the contributing factors in managing the moisture, chemical fertilizer, and weather conditions for corn growing. This study proposes a non-destructive and direction-insensitive method, using a strain sensor and two single-axis angle sensors, to measure corn stalk lodging resistance in the field. An equivalent force whose direction is perpendicular to the stalk is utilized to evaluate the corn lodging properties when a pull force is applied to the corn stalk. A novel measurement device is designed to obtain the equivalent force with a coefficient of variation (CV) of 4.85%. Five corn varieties with two different planting densities were arranged to conduct the experiment using the novel measurement device. The experimental results show that the maximum equivalent force could reach up to 44 N. A strong relationship (R^2 = 0.88) was obtained between the maximum equivalent forces and the stalk lodging rates in the corn field. Moreover, the stalk lodging angles corresponding to the different pull forces over a measurement time of 20 s shift monotonically along the equivalent forces. Thus, the non-destructive and direction-insensitive method is an excellent tool for rapid analysis of stalk lodging resistance in corn, providing critical information on in-situ lodging dynamics.
Association between Refractive Errors and Ocular Biometry in Iranian Adults
Hashemi, Hassan; Khabazkhoob, Mehdi; Emamian, Mohammad Hassan; Shariati, Mohammad; Miraftab, Mohammad; Yekta, Abbasali; Ostadimoghaddam, Hadi; Fotouhi, Akbar
2015-01-01
Purpose: To investigate the association between ocular biometrics such as axial length (AL), anterior chamber depth (ACD), lens thickness (LT), vitreous chamber depth (VCD) and corneal power (CP) with different refractive errors. Methods: In a cross-sectional study on the 40 to 64-year-old population of Shahroud, random cluster sampling was performed. Ocular biometrics were measured using the Allegro Biograph (WaveLight AG, Erlangen, Germany) for all participants. Refractive errors were determined using cycloplegic refraction. Results: In the first model, the strongest correlations were found between spherical equivalent with axial length and corneal power. Spherical equivalent was strongly correlated with axial length in high myopic and high hyperopic cases, and with corneal power in high hyperopic cases; 69.5% of variability in spherical equivalent was attributed to changes in these variables. In the second model, the correlations between vitreous chamber depth and corneal power with spherical equivalent were stronger in myopes than hyperopes, while the correlations between lens thickness and anterior chamber depth with spherical equivalent were stronger in hyperopic cases than myopic ones. In the third model, anterior chamber depth + lens thickness correlated with spherical equivalent only in moderate and severe cases of hyperopia, and this index was not correlated with spherical equivalent in moderate to severe myopia. Conclusion: In individuals aged 40-64 years, corneal power and axial length make the greatest contribution to spherical equivalent in high hyperopia and high myopia. Anterior segment biometric components have a more important role in hyperopia than myopia. PMID:26730304
A formal and data-based comparison of measures of motor-equivalent covariation.
Verrel, Julius
2011-09-15
Different analysis methods have been developed for assessing motor-equivalent organization of movement variability. In the uncontrolled manifold (UCM) method, the structure of variability is analyzed by comparing goal-equivalent and non-goal-equivalent variability components at the level of elemental variables (e.g., joint angles). In contrast, in the covariation by randomization (CR) approach, motor-equivalent organization is assessed by comparing variability at the task level between empirical and decorrelated surrogate data. UCM effects can be due to both covariation among elemental variables and selective channeling of variability to elemental variables with low task sensitivity ("individual variation"), suggesting a link between the UCM and CR method. However, the precise relationship between the notion of covariation in the two approaches has not been analyzed in detail yet. Analysis of empirical and simulated data from a study on manual pointing shows that in general the two approaches are not equivalent, but the respective covariation measures are highly correlated (ρ > 0.7) for two proposed definitions of covariation in the UCM context. For one-dimensional task spaces, a formal comparison is possible and in fact the two notions of covariation are equivalent. In situations in which individual variation does not contribute to UCM effects, for which necessary and sufficient conditions are derived, this entails the equivalence of the UCM and CR analysis. Implications for the interpretation of UCM effects are discussed. Copyright © 2011 Elsevier B.V. All rights reserved.
NASA Astrophysics Data System (ADS)
Vanmarcke, Erik
1983-03-01
Random variation over space and time is one of the few attributes that might safely be predicted as characterizing almost any given complex system. Random fields or "distributed disorder systems" confront astronomers, physicists, geologists, meteorologists, biologists, and other natural scientists. They appear in the artifacts developed by electrical, mechanical, civil, and other engineers. They even underlie the processes of social and economic change. The purpose of this book is to bring together existing and new methodologies of random field theory and indicate how they can be applied to these diverse areas where a "deterministic treatment is inefficient and conventional statistics insufficient." Many new results and methods are included. After outlining the extent and characteristics of the random field approach, the book reviews the classical theory of multidimensional random processes and introduces basic probability concepts and methods in the random field context. It next gives a concise account of the second-order analysis of homogeneous random fields, in both the space-time domain and the wave number-frequency domain. This is followed by a chapter on spectral moments and related measures of disorder and on level excursions and extremes of Gaussian and related random fields. After developing a new framework of analysis based on local averages of one-, two-, and n-dimensional processes, the book concludes with a chapter discussing ramifications in the important areas of estimation, prediction, and control. The mathematical prerequisite has been held to basic college-level calculus.
Surface plasmon enhanced cell microscopy with blocked random spatial activation
NASA Astrophysics Data System (ADS)
Son, Taehwang; Oh, Youngjin; Lee, Wonju; Yang, Heejin; Kim, Donghyun
2016-03-01
We present surface plasmon enhanced fluorescence microscopy with random spatial sampling using a patterned block of silver nanoislands. Rigorous coupled wave analysis was performed to confirm near-field localization on the nanoislands. Random nanoislands were fabricated in silver by thermal annealing. By analyzing the random near-field distribution, the average size of the localized fields was found to be on the order of 135 nm. The randomly localized near-fields were used to spatially sample F-actin of J774 cells (a mouse macrophage cell line). An image deconvolution algorithm based on linear imaging theory was established for stochastic estimation of the fluorescent molecular distribution. The alignment between the near-field distribution and the raw image was performed using the patterned block. The achieved resolution depends on factors including the size of the localized fields and is estimated to be 100-150 nm.
New Quantum Diffusion Monte Carlo Method for strong field time dependent problems
NASA Astrophysics Data System (ADS)
Kalinski, Matt
2017-04-01
We have recently formulated the Quantum Diffusion Monte Carlo (QDMC) method for the solution of the time-dependent Schrödinger equation when it is equivalent to a reaction-diffusion system coupled by highly nonlinear potentials of the type of Shay. Here we formulate a new time-dependent QDMC method, free of those nonlinearities, described by the constant stochastic process of coupled diffusion with transmutation. As before, two kinds of diffusing particles (color walkers) are considered, but they can also transmute one into the other. Each of the species undergoes the hypothetical Einstein random walk progression with transmutation. The progressed particles transmute into particles of the other kind before contributing to or annihilating the other particles' density. This fully emulates the time-dependent Schrödinger equation for any number of quantum particles. The negative sign of the real and imaginary parts of the wave function is handled by "spinor" densities carrying the sign as a degree of freedom. We apply the method to the exact time-dependent observation of our discovered two-electron Langmuir configurations in magnetic and circularly polarized fields.
Tan, Ken; Latty, Tanya; Dong, Shihao; Liu, Xiwen; Wang, Chao; Oldroyd, Benjamin P
2015-11-09
Animals may adjust their behavior according to their perception of risk. Here we show that free-flying honey bee (Apis cerana) foragers mitigate the risk of starvation in the field when foraging on a food source that offers variable rewards by carrying more 'fuel' food on their outward journey. We trained foragers to a feeder located 1.2 km from each of four colonies. On average, foragers carried a 12.7% greater volume of fuel, equivalent to 30.2% more glucose, when foraging on a variable source (a random sequence of 0.5, 1.5 and 2.5 M sucrose solution, average sucrose content 1.5 M) than when foraging on a consistent source (constant 1.5 M sucrose solution). Our findings complement an earlier study that showed that foragers decrease their fuel load as they become more familiar with a foraging place. We suggest that honey bee foragers are risk sensitive, and carry more fuel to minimize the risk of starvation in the field when a foraging trip is perceived as being risky, either because the forager is unfamiliar with the foraging site, or because the forage available at a familiar site offers variable rewards.
Spatial Factors in the Integration of Speed Information
NASA Technical Reports Server (NTRS)
Verghese, P.; Stone, L. S.; Hargens, Alan R. (Technical Monitor)
1995-01-01
We reported that, for a 2IFC task with multiple Gabor patches in each interval, thresholds for speed discrimination decreased with the number of patches, while simply increasing the area of a single patch produced no such effect. This result could be explained by multiple patches reducing spatial uncertainty. However, the fact that thresholds decrease with number even when the patches are in fixed positions argues against this explanation. We therefore performed additional experiments to explore the lack of an area effect. Three observers did a 2IFC speed discrimination task with 6 Gabor patches in each interval, and were asked to pick the interval in which the gratings moved faster. The 50% contrast patches were placed on a circle at 4 deg eccentricity, either equally spaced and maximally separated (hexagonal array), or closely spaced in consecutive positions (string of pearls). For the string-of-pearls condition, the grating phases were either random, or consistent with a full-field grating viewed through multiple Gaussian windows. When grating phases were random, the thresholds for the hexagonal and string-of-pearls layouts were indistinguishable. For the string-of-pearls layout, thresholds in the consistent-phase condition were higher by 15 +/- 6% than in the random-phase condition. (Thresholds increased by 57 +/- 7% in going from 6 patches to a single patch of equivalent area.) For random-phase patches, the lower threshold for 6 patches does not depend on a specific spacing or spatial layout. Multiple, closely spaced, consistent-phase patches that can be interpreted as a single grating result in thresholds closer to those produced by a single patch. Together, our results suggest that object segmentation may play a role in the integration of speed information.
Parravano, Antonio; Noguera, José A.; Hermida, Paula; Tena-Sánchez, Jordi
2015-01-01
Models of social influence have explored the dynamics of social contagion, imitation, and diffusion of different types of traits, opinions, and conducts. However, few behavioral data indicating social influence dynamics have been obtained from direct observation in “natural” social contexts. The present research provides that kind of evidence in the case of the public expression of political preferences in the city of Barcelona, where thousands of citizens supporting the secession of Catalonia from Spain have placed a Catalan flag in their balconies and windows. Here we present two different studies. 1) During July 2013 we registered the number of flags in 26% of the electoral districts in the city of Barcelona. We find that there is a large dispersion in the density of flags in districts with similar density of pro-independence voters. However, by comparing the moving average to the global mean we find that the density of flags tends to be fostered in electoral districts where there is a clear majority of pro-independence vote, while it is inhibited in the opposite cases. We also show that the distribution of flags in the observed districts deviates significantly from that of an equivalent random distribution. 2) During 17 days around Catalonia’s 2013 national holiday we observed the position at balcony resolution of the flags displayed in the facades of a sub-sample of 82 blocks. We compare the ‘clustering index’ of flags on the facades observed each day to thousands of equivalent random distributions. Again we provide evidence that successive hangings of flags are not independent events but that a local influence mechanism is favoring their clustering. We also find that except for the national holiday day the density of flags tends to be fostered in facades located in electoral districts where there is a clear majority of pro-independence vote. PMID:25961562
Formal and physical equivalence in two cases in contemporary quantum physics
NASA Astrophysics Data System (ADS)
Fraser, Doreen
2017-08-01
The application of analytic continuation in quantum field theory (QFT) is juxtaposed to T-duality and mirror symmetry in string theory. Analytic continuation, a mathematical transformation that takes the time variable t to negative imaginary time (-it), was initially used as a mathematical technique for solving perturbative Feynman diagrams, and was subsequently the basis for the Euclidean approaches within mainstream QFT (e.g., Wilsonian renormalization group methods, lattice gauge theories) and the Euclidean field theory program for rigorously constructing non-perturbative models of interacting QFTs. A crucial difference between theories related by duality transformations and those related by analytic continuation is that the former are judged to be physically equivalent while the latter are regarded as physically inequivalent. There are other similarities between the two cases that make comparing and contrasting them a useful exercise for clarifying the type of argument that is needed to support the conclusion that dual theories are physically equivalent. In particular, T-duality and analytic continuation in QFT share the criterion for predictive equivalence that two theories agree on the complete set of expectation values and the mass spectra, and the criterion for formal equivalence that there is a "translation manual" between the physically significant algebras of observables and sets of states in the two theories. The analytic continuation case study illustrates how predictive and formal equivalence are compatible with physical inequivalence, but not in the manner of standard underdetermination cases. Arguments for the physical equivalence of dual theories must cite considerations beyond predictive and formal equivalence. The analytic continuation case study is an instance of the strategy of developing a physical theory by extending the formal or mathematical equivalence with another physical theory as far as possible.
That this strategy has resulted in developments in pure mathematics as well as theoretical physics is another feature that this case study has in common with dualities in string theory.
Wave Propagation inside Random Media
NASA Astrophysics Data System (ADS)
Cheng, Xiaojun
This thesis presents results of studies of wave scattering within and transmission through random and periodic systems. The main focus is on energy profiles inside quasi-1D and 1D random media. The connection between transport and the states of the medium is manifested in the equivalence of the dimensionless conductance, g, and the Thouless number, which is the ratio of the average linewidth and spacing of energy levels. This equivalence, and theories regarding the energy profiles inside random media, are based on the assumption that the local density of states (LDOS) is uniform throughout the sample. We have conducted microwave measurements of the longitudinal energy profiles within disordered samples contained in a copper tube supporting multiple waveguide channels, with an antenna moving along a slit on the tube. These measurements allow us to determine the LDOS at a location, which is the sum of energy from all incoming channels on both sides. For diffusive samples, the LDOS is uniform and the energy profile decays linearly, as expected. However, for localized samples, we find that the LDOS drops sharply towards the middle of the sample and the energy profile does not follow the result of the local diffusion theory, in which the LDOS is assumed to be uniform. We analyzed the field spectra into quasi-normal modes and found that the mode linewidth and the number of modes saturate as the sample length increases. Thus the Thouless number saturates while the dimensionless conductance g continues to fall with increasing length, indicating that the modes are localized near the boundaries. This is in contrast to the general belief that g and the Thouless number follow the same scaling behavior. Previous measurements showed that single parameter scaling (SPS) still holds in the same samples where the LDOS is suppressed (Shi et al., 2014).
We explore the extension of SPS to the interior of the sample by analyzing statistics of the logarithm of the energy density, ln W(x), and found that ⟨ln W(x)⟩ = -x/l, where l is the transport mean free path. The result does not depend on the sample length, which is counterintuitive yet remarkably simple. More surprisingly, the linear fall-off of the energy profile holds for totally disordered random 1D layered samples in simulations, where the LDOS is uniform, as well as for single-mode random waveguide experiments and 1D nearly periodic samples, where the LDOS is suppressed in the middle of the sample. The generalization of the transmission matrix to the interior of quasi-1D random samples, which is defined as the field matrix, and its eigenvalue statistics are also discussed. The maximum energy deposition at a location is not the intensity of the first transmission eigenchannel but the eigenvalue of the first energy density eigenchannel at that cross section, which can be much greater than the average value. The contrast in optimal focusing, which is the ratio of the intensity at the focal point to the background intensity, is determined by the participation number of the energy density eigenvalues, and its inverse gives the variance of the energy density at that cross section in a single configuration. We have also studied topological states in photonic structures. We have demonstrated robust propagation of electromagnetic waves along reconfigurable pathways within a topological photonic metacrystal. Since the wave is confined within the domain wall, which is the boundary between two distinct topological insulating systems, we can freely steer the wave by reconstructing the photonic structure. Other topics, such as speckle pattern evolution and the effects of boundary conditions on the statistics of transmission eigenvalues and energy profiles, are also discussed.
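The linear decay of the average log energy density can be illustrated with a toy multiplicative model (not the thesis' transfer-matrix or waveguide simulations): if ln W accumulates i.i.d. random increments with mean -dx/l per step, then ⟨ln W(x)⟩ falls linearly with slope -1/l. The fluctuation scale here is an arbitrary toy value.

```python
import numpy as np

rng = np.random.default_rng(0)
ell = 10.0                 # transport mean free path (arbitrary units, toy value)
dx = 1.0                   # step size
n_steps, n_samples = 50, 20000

# ln W(x) modeled as a random walk whose increments have mean -dx/ell.
increments = rng.normal(loc=-dx / ell, scale=0.3, size=(n_samples, n_steps))
ln_w = np.cumsum(increments, axis=1)
mean_ln_w = ln_w.mean(axis=0)          # sample average of ln W at each depth

x = dx * np.arange(1, n_steps + 1)
slope = np.polyfit(x, mean_ln_w, 1)[0]
print(slope)  # close to -1/ell = -0.1
```

The average of the log (not the log of the average) is the quantity with this simple linear behavior, which is why the thesis analyzes ⟨ln W(x)⟩.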
Doing better by getting worse: posthypnotic amnesia improves random number generation.
Terhune, Devin Blair; Brugger, Peter
2011-01-01
Although forgetting is often regarded as a deficit that we need to control to optimize cognitive functioning, it can have beneficial effects in a number of contexts. We examined whether disrupting memory for previous numerical responses would attenuate repetition avoidance (the tendency to avoid repeating the same number) during random number generation and thereby improve the randomness of responses. Low suggestible and low dissociative and high dissociative highly suggestible individuals completed a random number generation task in a control condition, following a posthypnotic amnesia suggestion to forget previous numerical responses, and in a second control condition following the cancellation of the suggestion. High dissociative highly suggestible participants displayed a selective increase in repetitions during posthypnotic amnesia, with equivalent repetition frequency to a random system, whereas the other two groups exhibited repetition avoidance across conditions. Our results demonstrate that temporarily disrupting memory for previous numerical responses improves random number generation.
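The repetition baseline used in such studies is easy to state: for uniformly random responses over k alternatives, adjacent repetitions occur with probability 1/k. A minimal sketch of the measure, with hypothetical toy generators rather than the study's scoring procedure:

```python
import random

def repetition_rate(seq):
    """Fraction of adjacent pairs that are immediate repetitions."""
    repeats = sum(1 for a, b in zip(seq, seq[1:]) if a == b)
    return repeats / (len(seq) - 1)

random.seed(1)

# A genuinely random generator over digits 0-9 repeats about 10% of the time.
random_seq = [random.randrange(10) for _ in range(100000)]
rate = repetition_rate(random_seq)
print(rate)  # close to 0.10

# A repetition-avoidant generator (toy model of typical human behaviour)
# never emits the previous digit, so its repetition rate is exactly 0.
avoidant_seq = [0]
for _ in range(999):
    nxt = random.randrange(10)
    while nxt == avoidant_seq[-1]:
        nxt = random.randrange(10)
    avoidant_seq.append(nxt)
print(repetition_rate(avoidant_seq))  # 0.0
```

On this measure, the high dissociative highly suggestible group under posthypnotic amnesia moved from the avoidant regime toward the 1/k baseline of a random system.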
Finite-range Coulomb gas models of banded random matrices and quantum kicked rotors
NASA Astrophysics Data System (ADS)
Pandey, Akhilesh; Kumar, Avanish; Puri, Sanjay
2017-11-01
Dyson demonstrated an equivalence between infinite-range Coulomb gas models and classical random matrix ensembles for the study of eigenvalue statistics. We introduce finite-range Coulomb gas (FRCG) models via a Brownian matrix process, and study them analytically and by Monte Carlo simulations. These models yield new universality classes, and provide a theoretical framework for the study of banded random matrices (BRMs) and quantum kicked rotors (QKRs). We demonstrate that, for a BRM of bandwidth b and a QKR of chaos parameter α, the appropriate FRCG model has the effective range d = b^2/N = α^2/N, for large matrix dimensionality N. As d increases, there is a transition from Poisson to classical random matrix statistics.
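The Poisson-to-random-matrix transition mentioned above is commonly diagnosed with the nearest-neighbour spacing-ratio statistic, whose ensemble averages are ⟨r̃⟩ = 2 ln 2 - 1 ≈ 0.386 for Poisson levels and ≈ 0.531 for the GOE. A sketch of that generic diagnostic (not the FRCG model itself):

```python
import numpy as np

rng = np.random.default_rng(2)

def mean_r_tilde(levels):
    """Mean ratio of consecutive level spacings, r~ = min(s_n, s_n+1) / max(s_n, s_n+1)."""
    s = np.diff(np.sort(levels))
    return np.mean(np.minimum(s[:-1], s[1:]) / np.maximum(s[:-1], s[1:]))

# Poisson (uncorrelated) levels: expected <r~> = 2 ln 2 - 1 ~ 0.386
poisson_levels = rng.uniform(0, 1, size=(200, 500))
r_poisson = np.mean([mean_r_tilde(row) for row in poisson_levels])

# GOE levels from real symmetric random matrices: expected <r~> ~ 0.531
r_goe_samples = []
for _ in range(50):
    a = rng.normal(size=(200, 200))
    h = (a + a.T) / 2.0                     # symmetrize -> GOE-distributed matrix
    ev = np.linalg.eigvalsh(h)
    r_goe_samples.append(mean_r_tilde(ev[50:150]))  # bulk of the spectrum
r_goe = np.mean(r_goe_samples)

print(r_poisson, r_goe)
```

The ratio statistic needs no spectral unfolding, which makes it a convenient probe of crossovers such as the d-dependent transition described in the abstract.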
After Decentralization: Delimitations and Possibilities within New Fields
ERIC Educational Resources Information Center
Wahlstrom, Ninni
2008-01-01
The shift from a centralized to a decentralized school system can be seen as a solution to an uncertain problem. Through analysing the displacements in the concept of equivalence within Sweden's decentralized school system, this study illustrates how the meaning of the concept of equivalence shifts over time, from a more collective target…
The toxic equivalency (TEQ) values of polychlorinated dibenzo-p-dioxins and polychlorinated dibenzofurans (PCDD/Fs) are predicted with a model based on the homologue concentrations measured from a laboratory-scale reactor (124 data points), a package boiler (61 data points), and ...
Psychiatric Resident and Attending Diagnostic and Prescribing Practices
ERIC Educational Resources Information Center
Tripp, Adam C.; Schwartz, Thomas L.
2008-01-01
Objective: This study investigates whether two patient population groups, under resident or attending treatment, are equivalent or different in the distribution of patient characteristics, diagnoses, or pharmacotherapy. Methods: Demographic data, psychiatric diagnoses, and pharmacotherapy data were collected for 100 random patient charts of…
NASA Astrophysics Data System (ADS)
Bi, Chuan-Xing; Hu, Ding-Yu; Zhang, Yong-Bin; Jing, Wen-Qian
2015-06-01
In previous studies, an equivalent source method (ESM)-based technique for recovering the free sound field in a noisy environment has been successfully applied to exterior problems. In order to evaluate its performance when applied to a more general noisy environment, that technique is used to identify active sources inside cavities where the sound field is composed of the field radiated by active sources and that reflected by walls. A patch approach with two semi-closed surfaces covering the target active sources is presented to perform the measurements, and the field that would be radiated by these target active sources into free space is extracted from the mixed field by using the proposed technique, which will be further used as the input of nearfield acoustic holography for source identification. Simulation and experimental results validate the effectiveness of the proposed technique for source identification in cavities, and show the feasibility of performing the measurements with a double layer planar array.
On the effect of acoustic coupling on random and harmonic plate vibrations
NASA Technical Reports Server (NTRS)
Frendi, A.; Robinson, J. H.
1993-01-01
The effect of acoustic coupling on random and harmonic plate vibrations is studied using two numerical models. In the coupled model, the plate response is obtained by integration of the nonlinear plate equation coupled with the nonlinear Euler equations for the surrounding acoustic fluid. In the uncoupled model, the nonlinear plate equation with an equivalent linear viscous damping term is integrated to obtain the response of the plate subject to the same excitation field. For a low-level, narrow-band excitation, the two models predict the same plate response spectra. As the excitation level is increased, the response power spectrum predicted by the uncoupled model becomes broader and more shifted towards the high frequencies than that obtained by the coupled model. In addition, the difference in response between the coupled and uncoupled models at high frequencies becomes larger. When a high intensity harmonic excitation is used, causing a nonlinear plate response, both models predict the same frequency content of the response. However, the level of the harmonics and subharmonics are higher for the uncoupled model. Comparisons to earlier experimental and numerical results show that acoustic coupling has a significant effect on the plate response at high excitation levels. Its absence in previous models may explain the discrepancy between predicted and measured responses.
Driven topological systems in the classical limit
NASA Astrophysics Data System (ADS)
Duncan, Callum W.; Öhberg, Patrik; Valiente, Manuel
2017-03-01
Periodically driven quantum systems can exhibit topologically nontrivial behavior, even when their quasienergy bands have zero Chern numbers. Much work has been conducted on noninteracting quantum-mechanical models where this kind of behavior is present. However, the inclusion of interactions in out-of-equilibrium quantum systems can prove to be quite challenging. On the other hand, the classical counterpart of hard-core interactions can be simulated efficiently via constrained random walks. The noninteracting model, proposed by Rudner et al. [Phys. Rev. X 3, 031005 (2013), 10.1103/PhysRevX.3.031005], has a special point for which the system is equivalent to a classical random walk. We consider the classical counterpart of this model, which is exact at a special point even when hard-core interactions are present, and show how these quantitatively affect the edge currents in a strip geometry. We find that the interacting classical system is well described by a mean-field theory. Using this, we simulate the dynamics of the classical system, which shows that the interactions play the role of Markovian (time-dependent) disorder. By comparing the evolution of classical and quantum edge currents in small lattices, we find regimes where the classical limit considered gives good insight into the quantum problem.
Magnetic stripes and skyrmions with helicity reversals.
Yu, Xiuzhen; Mostovoy, Maxim; Tokunaga, Yusuke; Zhang, Weizhu; Kimoto, Koji; Matsui, Yoshio; Kaneko, Yoshio; Nagaosa, Naoto; Tokura, Yoshinori
2012-06-05
It was recently realized that topological spin textures do not merely have mathematical beauty but can also give rise to unique functionalities of magnetic materials. An example is the skyrmion, a nano-sized bundle of noncoplanar spins that, by virtue of its nontrivial topology, acts as a flux of magnetic field on spin-polarized electrons. Lorentz transmission electron microscopy recently emerged as a powerful tool for direct visualization of skyrmions in noncentrosymmetric helimagnets. Topologically, skyrmions are equivalent to magnetic bubbles (cylindrical domains) in ferromagnetic thin films, which were extensively explored in the 1970s for data storage applications. In this study we use Lorentz microscopy to image magnetic domain patterns in a prototypical magnetic oxide, Sc-doped M-type hexaferrite. Surprisingly, we find that the magnetic bubbles and stripes in the hexaferrite have a much more complex structure than the skyrmions and spirals in helimagnets, which we associate with a new degree of freedom: helicity (or vector spin chirality), describing the direction of spin rotation across the domain walls. We observe numerous random reversals of helicity in the stripe domain state. Random helicity of cylindrical domain walls coexists with the positional order of magnetic bubbles in a triangular lattice. Most unexpectedly, we observe regular helicity reversals inside skyrmions with an unusual multiple-ring structure.
NASA Technical Reports Server (NTRS)
1982-01-01
Experiments in Curie depth estimation from long wavelength magnetic anomalies are summarized. The heart of the work is equivalent-layer-type magnetization models derived by inversion of high-elevation, long wavelength magnetic anomaly data. The methodology is described in detail in the above references. A magnetization distribution in a thin equivalent layer at the Earth's surface having maximum detail while retaining physical significance, and giving rise to a synthetic anomaly field which makes a best fit to the observed field in a least squares sense is discussed. The apparent magnetization contrast in the equivalent layer is approximated using an array of dipoles distributed in equal area at the Earth's surface. The dipoles are pointed in the direction of the main magnetic field, which carries the implicit assumption that crustal magnetization is dominantly induced or viscous. The determination of the closest possible dipole spacing giving a stable inversion to a solution having physical significance is accomplished by plotting the standard deviation of the solution parameters against their spatial separation for a series of solutions.
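The equivalent-layer construction described above reduces to a linear least-squares problem: each dipole contributes a known response kernel, and the moments are chosen so the synthetic anomaly best fits the observations. A minimal one-dimensional sketch (the kernel, geometry, and units here are illustrative assumptions, not the report's actual Green's function):

```python
# Toy equivalent-layer inversion. Dipoles lie on a line at the surface and
# observations are taken at elevation h; each dipole's response is modeled
# with a simplified point-dipole falloff. Moments are recovered by least
# squares via the normal equations.

def kernel(x_obs, x_dip, h):
    return h / ((x_obs - x_dip) ** 2 + h ** 2) ** 1.5

def solve(A, b):
    """Gaussian elimination with partial pivoting (small dense systems)."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

h = 1.0                                        # observation elevation
dip_x = [float(i) for i in range(8)]           # dipole positions
obs_x = [0.5 * i for i in range(20)]           # observation points
true_m = [1.0, 2.0, 0.5, 3.0, 1.5, 0.0, 2.5, 1.0]

# Synthetic anomaly, then normal equations (G^T G) m = G^T d
d = [sum(kernel(xo, xd, h) * m for xd, m in zip(dip_x, true_m)) for xo in obs_x]
G = [[kernel(xo, xd, h) for xd in dip_x] for xo in obs_x]
GtG = [[sum(g[a] * g[b] for g in G) for b in range(8)] for a in range(8)]
Gtd = [sum(g[a] * di for g, di in zip(G, d)) for a in range(8)]
m_est = solve(GtG, Gtd)
```

Shrinking the dipole spacing makes the columns of G increasingly collinear and the solution unstable, which is why the report determines the closest usable spacing by plotting the solution's standard deviation against separation.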
NASA Astrophysics Data System (ADS)
Andresen, Juan Carlos; Katzgraber, Helmut G.; Schechter, Moshe
2017-12-01
Random fields disorder Ising ferromagnets by aligning single spins in the direction of the random field in three space dimensions, or by flipping large ferromagnetic domains in two dimensions and below. While the former requires random fields of typical magnitude similar to the interaction strength, the latter Imry-Ma mechanism only requires infinitesimal random fields. Recently, it has been shown that for dilute anisotropic dipolar systems a third mechanism exists, where the ferromagnetic phase is disordered by finite-size glassy domains at a random field of finite magnitude that is considerably smaller than the typical interaction strength. Using large-scale Monte Carlo simulations and zero-temperature numerical approaches, we show that this mechanism applies to disordered ferromagnets with competing short-range ferromagnetic and antiferromagnetic interactions, suggesting its generality in ferromagnetic systems with competing interactions and an underlying spin-glass phase. A finite-size-scaling analysis of the magnetization distribution suggests that the transition might be first order.
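The competition between exchange and random-field energies can be seen even in a minimal random-field Ising model. The sketch below (a small 2D lattice with a binary ±hr field and single-spin Metropolis updates, not the paper's 3D competing-interaction model) shows the ordered state surviving a weak random field but collapsing once the field strength rivals the exchange:

```python
import math, random

def metropolis_m(L, hr, T, sweeps, seed):
    """Magnetization of a 2D Ising ferromagnet (J = 1) in a quenched
    binary random field of magnitude hr, via single-spin Metropolis."""
    rng = random.Random(seed)
    s = [[1] * L for _ in range(L)]                      # ordered start
    h = [[rng.choice([-hr, hr]) for _ in range(L)] for _ in range(L)]
    for _ in range(sweeps * L * L):
        i, j = rng.randrange(L), rng.randrange(L)
        nb = s[(i+1) % L][j] + s[(i-1) % L][j] + s[i][(j+1) % L] + s[i][(j-1) % L]
        dE = 2 * s[i][j] * (nb + h[i][j])                # cost of flipping spin (i, j)
        if dE <= 0 or rng.random() < math.exp(-dE / T):
            s[i][j] = -s[i][j]
    return abs(sum(map(sum, s))) / (L * L)

m_weak = metropolis_m(L=16, hr=0.5, T=1.0, sweeps=200, seed=1)    # stays ordered
m_strong = metropolis_m(L=16, hr=4.0, T=1.0, sweeps=200, seed=1)  # field-dominated
```

With hr comparable to the full coordination energy, each spin simply follows its local field and the magnetization is destroyed; the paper's point is that competing interactions disorder the ferromagnet at far weaker fields than this single-spin mechanism requires.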
Evaluation of the NCPDP Structured and Codified Sig Format for e-prescriptions
Burkhart, Q; Bell, Douglas S
2011-01-01
Objective To evaluate the ability of the structure and code sets specified in the National Council for Prescription Drug Programs Structured and Codified Sig Format to represent ambulatory electronic prescriptions. Design We parsed the Sig strings from a sample of 20,161 de-identified ambulatory e-prescriptions into variables representing the fields of the Structured and Codified Sig Format. A stratified random sample of these representations was then reviewed by a group of experts. For codified Sig fields, we attempted to map the actual words used by prescribers to the equivalent terms in the designated terminology. Measurements Proportion of prescriptions that the Format could fully represent; proportion of terms used that could be mapped to the designated terminology. Results The fields defined in the Format could fully represent 95% of Sigs (95% CI 93% to 97%), but ambiguities were identified, particularly in representing multiple-step instructions. The terms used by prescribers could be codified for only 60% of dose delivery methods, 84% of dose forms, 82% of vehicles, 95% of routes, 70% of sites, 33% of administration timings, and 93% of indications. Limitations The findings are based on a retrospective sample of ambulatory prescriptions derived mostly from primary care physicians. Conclusion The fields defined in the Format could represent most of the patient instructions in a large prescription sample, but prior to its mandatory adoption, further work is needed to ensure that potential ambiguities are addressed and that a complete set of terms is available for the codified fields. PMID:21613642
Chopped random-basis quantum optimization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Caneva, Tommaso; Calarco, Tommaso; Montangero, Simone
2011-08-15
In this work, we describe in detail the chopped random basis (CRAB) optimal control technique recently introduced to optimize time-dependent density matrix renormalization group simulations [P. Doria, T. Calarco, and S. Montangero, Phys. Rev. Lett. 106, 190501 (2011)]. Here, we study the efficiency of this control technique in optimizing different quantum processes and we show that in the considered cases we obtain results equivalent to those obtained via different optimal control methods while using less resources. We propose the CRAB optimization as a general and versatile optimal control technique.
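The essence of CRAB is to truncate the control pulse to a few randomized basis functions and hand the resulting low-dimensional problem to a derivative-free optimizer. A heavily simplified sketch of that structure (a single qubit with H(t) = u(t) σx, where the pulse area sets the rotation angle, and a naive random search standing in for the Nelder-Mead-type solver used in practice):

```python
import math, random

rng = random.Random(0)
T = 1.0
# Randomized truncated Fourier basis: low harmonics with jittered frequencies
freqs = [2 * math.pi * (k + rng.uniform(-0.5, 0.5)) / T for k in (1, 2, 3)]

def pulse(c, t):
    return sum(ck * math.sin(wk * t) for ck, wk in zip(c, freqs))

def infidelity(c, steps=200):
    # For H = u(t) sigma_x the net rotation angle is just the pulse area,
    # so a perfect 0 -> 1 transfer needs area pi.
    area = sum(pulse(c, (i + 0.5) * T / steps) for i in range(steps)) * T / steps
    return 1.0 - math.sin(area / 2) ** 2

def direct_search(cost, c, step=1.0, iters=1000):
    """Derivative-free search over the few CRAB coefficients."""
    best = cost(c)
    for _ in range(iters):
        trial = [ck + rng.gauss(0, step) for ck in c]
        f = cost(trial)
        if f < best:
            best, c = f, trial
        else:
            step *= 0.995            # shrink the search radius on failure
    return c, best

c_opt, err = direct_search(infidelity, [0.0, 0.0, 0.0])
```

The point of the method is the resource count: only three coefficients are optimized, and each cost evaluation is a single (in general, expensive many-body) time evolution.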
Magnet/Hall-Effect Random-Access Memory
NASA Technical Reports Server (NTRS)
Wu, Jiin-Chuan; Stadler, Henry L.; Katti, Romney R.
1991-01-01
In proposed magnet/Hall-effect random-access memory (MHRAM), bits of data stored magnetically in Permalloy (or equivalent) film memory elements and read out by using Hall-effect sensors to detect magnetization. Value of each bit represented by polarity of magnetization. Retains data for indefinite time or until data rewritten. Speed of Hall-effect sensors in MHRAM results in readout times of about 100 nanoseconds. Other characteristics include high immunity to ionizing radiation and storage densities of order 10^6 bits/cm^2 or more.
De Oliveira, Gildasio S; Duncan, Kenyon; Fitzgerald, Paul; Nader, Antoun; Gould, Robert W; McCarthy, Robert J
2014-02-01
Few multimodal strategies to minimize postoperative pain and improve recovery have been examined in morbidly obese patients undergoing laparoscopic bariatric surgery. The main objective of this study was to evaluate the effect of systemic intraoperative lidocaine on postoperative quality of recovery when compared to saline. The study was a prospective, randomized, double-blinded, placebo-controlled clinical trial. Subjects undergoing laparoscopic bariatric surgery were randomized to receive lidocaine (1.5 mg/kg bolus followed by a 2 mg/kg/h infusion until the end of the surgical procedure) or the same volume of saline. The primary outcome was the Quality of Recovery 40 (QoR-40) questionnaire at 24 h after surgery. Fifty-one subjects were recruited and 50 completed the study. The global QoR-40 scores at 24 h were greater in the lidocaine group, median (IQR) of 165 (151 to 170), compared to the saline group, median (IQR) of 146 (130 to 169), P = 0.01. Total 24 h opioid consumption was lower in the lidocaine group, median (IQR) of 26 (19 to 46) mg IV morphine equivalents, compared to the saline group, median (IQR) of 36 (24 to 65) mg IV morphine equivalents, P = 0.03. Linear regression demonstrated an inverse relationship between the total 24 h opioid consumption (IV morphine equivalents) and 24 h postoperative quality of recovery (P < 0.0001). Systemic lidocaine improves postoperative quality of recovery in patients undergoing laparoscopic bariatric surgery. Patients who received lidocaine had a lower opioid consumption, which translated to a better quality of recovery.
Glickman, Marc; Gheissari, Ali; Money, Samuel; Martin, John; Ballard, Jeffrey L
2002-03-01
An experimental polymeric sealant (CoSeal [Cohesion Technologies, Palo Alto, Calif]) provides equivalent anastomotic sealing to Gelfoam (Upjohn, Kalamazoo, Mich)/thrombin during surgical placement of prosthetic vascular grafts. Randomized controlled trial. Nine university-affiliated medical centers. One hundred forty-eight patients scheduled for implantation of polytetrafluoroethylene grafts, mainly for infrainguinal revascularization procedures or the creation of dialysis access shunts, who were treated randomly with either an experimental intervention (n = 74) or control (n = 74). Following polytetrafluoroethylene graft placement, anastomotic suture hole bleeding was treated intraoperatively in all control subjects with Gelfoam/thrombin. Subjects in the experimental group had the polymeric sealant applied directly to the suture lines without concomitant manual compression. Primary treatment success was defined as the proportion of subjects in each group that achieved complete anastomotic sealing within 10 minutes. The proportion of subjects that achieved immediate sealing and the time required to fully inhibit suture hole bleeding also were compared between treatment groups. Overall 10-minute sealing success was equivalent (86% vs 80%; P = .29) between experimental and control subjects, respectively. However, subjects treated with CoSeal achieved immediate anastomotic sealing at more than twice the rate of subjects treated with Gelfoam/thrombin (47% vs 20%; P < .001). Consequently, the median time needed to inhibit bleeding in control subjects was more than 10 times longer than for experimental subjects (189.0 vs 16.5 seconds; P = .01). Strikingly similar findings for all comparisons were observed separately for subgroups of subjects having infrainguinal bypass grafting and for those undergoing placement of dialysis access shunts.
The experimental sealant offers equivalent anastomotic sealing performance compared with Gelfoam/thrombin, but it provides this desired effect in a significantly more rapid time frame.
A new neural network model for solving random interval linear programming problems.
Arjmandzadeh, Ziba; Safi, Mohammadreza; Nazemi, Alireza
2017-05-01
This paper presents a neural network model for solving random interval linear programming problems. The original problem involving random interval variable coefficients is first transformed into an equivalent convex second-order cone programming problem. A neural network model is then constructed for solving the obtained convex second-order cone problem. Employing a Lyapunov function approach, it is also shown that the proposed neural network model is stable in the sense of Lyapunov and globally convergent to an exact satisfactory solution of the original problem. Several illustrative examples are solved in support of this technique. Copyright © 2017 Elsevier Ltd. All rights reserved.
Random walks exhibiting anomalous diffusion: elephants, urns and the limits of normality
NASA Astrophysics Data System (ADS)
Kearney, Michael J.; Martin, Richard J.
2018-01-01
A random walk model is presented which exhibits a transition from standard to anomalous diffusion as a parameter is varied. The model is a variant on the elephant random walk and differs in respect of the treatment of the initial state, which in the present work consists of a given number N of fixed steps. This also links the elephant random walk to other types of history dependent random walk. As well as being amenable to direct analysis, the model is shown to be asymptotically equivalent to a non-linear urn process. This provides fresh insights into the limiting form of the distribution of the walker’s position at large times. Although the distribution is intrinsically non-Gaussian in the anomalous diffusion regime, it gradually reverts to normal form when N is large under quite general conditions.
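A direct simulation illustrates the transition the paper analyzes. The sketch below uses a standard parameterization of the elephant walk (repeat a uniformly chosen earlier step with probability p, otherwise reverse it) together with the paper's variant of seeding the history with N fixed steps; the variance per step stays of order one in the diffusive regime but grows strongly once p > 3/4:

```python
import random

def elephant_walk(p, N, T, rng):
    """Elephant random walk: with probability p repeat a uniformly chosen
    earlier step, otherwise reverse it. History starts with N fixed +1 steps."""
    steps = [1] * N
    for _ in range(T):
        s = rng.choice(steps)
        steps.append(s if rng.random() < p else -s)
    return sum(steps)

rng = random.Random(42)
T, N, runs = 2000, 5, 300
var_per_step = {}
for p in (0.5, 0.95):
    xs = [elephant_walk(p, N, T, rng) for _ in range(runs)]
    mean = sum(xs) / runs
    var_per_step[p] = sum((x - mean) ** 2 for x in xs) / runs / T
```

For p = 1/2 each new step is an unbiased coin flip, so the variance per step is near 1; for p = 0.95 it is orders of magnitude larger, the signature of anomalous diffusion, and the N fixed initial steps bias the limiting (non-Gaussian) distribution in the way the paper analyzes.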
On Pfaffian Random Point Fields
NASA Astrophysics Data System (ADS)
Kargin, V.
2014-02-01
We study Pfaffian random point fields by using the Moore-Dyson quaternion determinants. First, we give sufficient conditions that ensure that a self-dual quaternion kernel defines a valid random point field, and then we prove a CLT for Pfaffian point fields. The proofs are based on a new quaternion extension of the Cauchy-Binet determinantal identity. In addition, we derive the Fredholm determinantal formulas for the Pfaffian point fields which use the quaternion determinant.
Assessment of applications of transport models on regional scale solute transport
NASA Astrophysics Data System (ADS)
Guo, Z.; Fogg, G. E.; Henri, C.; Pauloo, R.
2017-12-01
Regional scale transport models are needed to support the long-term evaluation of groundwater quality and to develop management strategies aiming to prevent serious groundwater degradation. The purpose of this study is to evaluate the capacity of previously-developed upscaling approaches to accurately describe main solute transport processes, including the capture of late-time tails, under changing boundary conditions. Advective-dispersive contaminant transport in a 3D heterogeneous domain was simulated and used as a reference solution. Equivalent transport under homogeneous flow conditions was then evaluated by applying the Multi-Rate Mass Transfer (MRMT) model. The random walk particle tracking method was used for both heterogeneous and homogeneous-MRMT scenarios under steady state and transient conditions. The results indicate that the MRMT model can capture the tails satisfactorily for a plume transported in an ambient steady-state flow field. However, when boundary conditions change, the mass transfer model calibrated for transport under steady-state conditions cannot accurately reproduce the tailing effect observed for the heterogeneous scenario. The deteriorating impact of transient boundary conditions on the upscaled model is more significant for regions where flow fields are dramatically affected, highlighting the poor applicability of the MRMT approach for complex field settings. Accurately simulating mass in both mobile and immobile zones is critical to represent the transport process under transient flow conditions and will be the future focus of our study.
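The single-rate special case of MRMT is easy to sketch with random-walk particle tracking: mobile particles advect and disperse, while first-order exchange with an immobile zone delays a fraction of them and produces the late-time tailing discussed above. (The rates, velocity, and geometry below are illustrative assumptions, not the study's calibrated values.)

```python
import math, random

def breakthrough(n, T, dt, v, D, k_trap, k_release, seed, x_out=10.0):
    """Arrival times at x_out for particles advecting at velocity v with
    dispersion D, trapped into an immobile zone at rate k_trap and
    released at rate k_release (first-order mobile-immobile exchange)."""
    rng = random.Random(seed)
    arrivals = []
    for _ in range(n):
        x, t, mobile = 0.0, 0.0, True
        while t < T and x < x_out:
            if mobile:
                x += v * dt + rng.gauss(0.0, math.sqrt(2 * D * dt))
                if rng.random() < k_trap * dt:     # transfer to immobile zone
                    mobile = False
            elif rng.random() < k_release * dt:    # release back to mobile zone
                mobile = True
            t += dt
        if x >= x_out:
            arrivals.append(t)
    return arrivals

mrmt = breakthrough(400, T=200.0, dt=0.05, v=1.0, D=0.1,
                    k_trap=0.2, k_release=0.05, seed=7)
adv = breakthrough(400, T=200.0, dt=0.05, v=1.0, D=0.1,
                   k_trap=0.0, k_release=0.05, seed=7)
mean_mrmt = sum(mrmt) / len(mrmt)
mean_adv = sum(adv) / len(adv)
```

Exchange retards the mean arrival by roughly a factor 1 + k_trap/k_release and stretches the breakthrough tail; the study's finding is that a fixed set of such rates, calibrated at steady state, fails once the boundary conditions change.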
Double row equivalent for rotator cuff repair: A biomechanical analysis of a new technique.
Robinson, Sean; Krigbaum, Henry; Kramer, Jon; Purviance, Connor; Parrish, Robin; Donahue, Joseph
2018-06-01
There are numerous configurations of double-row fixation for rotator cuff tears; however, there is no consensus on the best method. In this study, we evaluated three different double-row configurations, including a new method. Our primary question is whether the new anchor and technique compares in biomechanical strength to standard double-row techniques. Eighteen prepared fresh-frozen bovine infraspinatus tendons were randomized to one of three groups: the new double-row equivalent, the Arthrex Speedbridge, and a transosseous equivalent using standard Stabilynx anchors. Biomechanical testing was performed on humeral sawbones, and ultimate load, strain, yield strength, contact area, contact pressure, and survival plots were evaluated. The new double-row equivalent method demonstrated increased survival as well as the highest ultimate strength, at 415 N, compared with the other test groups, along with contact area and pressure equivalent to standard double-row techniques. This new anchor system and technique demonstrated higher survival rates and loads to failure than standard double-row techniques. These data provide a new method of rotator cuff fixation that should be further evaluated in the clinical setting. Basic science biomechanical study.
Absorption and scattering by fractal aggregates and by their equivalent coated spheres
NASA Astrophysics Data System (ADS)
Kandilian, Razmig; Heng, Ri-Liang; Pilon, Laurent
2015-01-01
This paper demonstrates that the absorption and scattering cross-sections and the asymmetry factor of randomly oriented fractal aggregates of spherical monomers can be rapidly estimated as those of coated spheres with equivalent volume and average projected area. This was established for fractal aggregates with fractal dimension ranging from 2.0 to 3.0 and composed of up to 1000 monodisperse or polydisperse monomers with a wide range of size parameter and relative complex index of refraction. This equivalent coated sphere approximation was able to capture the effects of both multiple scattering and shading among constituent monomers on the integral radiation characteristics of the aggregates. It was shown to be superior to the Rayleigh-Debye-Gans approximation and to the equivalent coated sphere approximation proposed by Latimer. However, the scattering matrix element ratios of equivalent coated spheres featured large angular oscillations caused by internal reflection in the coating which were not observed in those of the corresponding fractal aggregates. Finally, the scattering phase function and the scattering matrix elements of aggregates with large monomer size parameter were found to have unique features that could be used in remote sensing applications.
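One way to read the equivalent coated-sphere construction: the outer radius reproduces the aggregate's average projected area, and the shell holds the aggregate's material volume around a core of host medium. The sketch below assumes an illustrative shading factor for the projected area; the paper obtains that quantity from the actual aggregate geometry, so the number here is a placeholder, not the published prescription.

```python
import math

def equivalent_coated_sphere(n_monomers, a, shading=0.7):
    """Outer/inner radii of a coated sphere matching an aggregate of
    n_monomers spheres of radius a. 'shading' is an assumed ratio of the
    aggregate's average projected area to the sum of monomer cross
    sections (it decreases as monomers screen one another)."""
    v_material = n_monomers * (4.0 / 3.0) * math.pi * a ** 3
    area_proj = shading * n_monomers * math.pi * a ** 2
    r_out = math.sqrt(area_proj / math.pi)        # match average projected area
    # Shell between r_in and r_out carries exactly the material volume
    r_in = (r_out ** 3 - 3.0 * v_material / (4.0 * math.pi)) ** (1.0 / 3.0)
    return r_in, r_out

r_in, r_out = equivalent_coated_sphere(100, 25e-9)
```

The resulting coated sphere can then be fed to a standard coated-sphere Mie solver to estimate the aggregate's absorption and scattering cross-sections and asymmetry factor.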
Computer-Based Linguistic Analysis.
ERIC Educational Resources Information Center
Wright, James R.
Noam Chomsky's transformational-generative grammar model may effectively be translated into an equivalent computer model. Phrase-structure rules and transformations are tested as to their validity and ordering by the computer via the process of random lexical substitution. Errors appearing in the grammar are detected and rectified, and formal…
Estimation of hysteretic damping of structures by stochastic subspace identification
NASA Astrophysics Data System (ADS)
Bajrić, Anela; Høgsberg, Jan
2018-05-01
Output-only system identification techniques can estimate modal parameters of structures represented by linear time-invariant systems. However, the extension of the techniques to structures exhibiting non-linear behavior has not received much attention. This paper presents an output-only system identification method suitable for random response of dynamic systems with hysteretic damping. The method applies the concept of Stochastic Subspace Identification (SSI) to estimate the model parameters of a dynamic system with hysteretic damping. The restoring force is represented by the Bouc-Wen model, for which an equivalent linear relaxation model is derived. Hysteretic properties can be encountered in engineering structures exposed to severe cyclic environmental loads, as well as in vibration mitigation devices, such as Magneto-Rheological (MR) dampers. The identification technique incorporates the equivalent linear damper model in the estimation procedure. Synthetic data, representing the random vibrations of systems with hysteresis, validate the estimated system parameters by the presented identification method at low and high levels of excitation amplitude.
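The Bouc-Wen element can be seen dissipating energy in a few lines. Below, a single-degree-of-freedom oscillator with a Bouc-Wen restoring force is integrated under harmonic forcing (the parameter values are illustrative, not those of the paper); the cumulative hysteretic work is positive, which is the property any equivalent linear relaxation model has to reproduce:

```python
import math

# Bouc-Wen parameters (illustrative) and a unit-mass oscillator
A, beta, gamma, n = 1.0, 0.5, 0.5, 1.0
m, c, k, alpha = 1.0, 0.1, 1.0, 0.5
dt, steps = 0.001, 60000

x = v = z = 0.0
dissipated = 0.0
for i in range(steps):
    f_ext = 0.8 * math.sin(1.5 * i * dt)             # harmonic excitation
    force = alpha * k * x + (1 - alpha) * k * z      # elastic + hysteretic parts
    a = (f_ext - c * v - force) / m
    # Bouc-Wen evolution equation for the hysteretic variable z
    dz = (A * v - beta * abs(v) * abs(z) ** (n - 1) * z - gamma * v * abs(z) ** n) * dt
    dissipated += (1 - alpha) * k * z * v * dt       # work done on the hysteretic element
    x += v * dt
    v += a * dt
    z += dz
```

Because z lags x, the (x, z) loop encloses a positive area each cycle; in the paper's setting, an equivalent linear relaxation model is fitted so that SSI can absorb this dissipation into its state-space form.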
The Happiest thought of Einstein's Life
NASA Astrophysics Data System (ADS)
Heller, Michael
Finally, let us have a closer look at the place of the equivalence principle in the logical scheme of Einstein's general relativity theory. First, Einstein knew well, from Minkowski's geometric formulation of his own special relativity, that accelerated motions should be represented as curved lines in a flat space-time. Second, the Galileo principle asserts that all bodies are accelerated in the same way in a given gravitational field, and consequently their motions are represented in the flat space-time by curved lines, all exactly in the same way. Third, since all lines representing free motions are curved exactly in the same way in the flat space-time, one can say that the lines remain straight (as far as possible) but the space-time itself becomes curved. Fourth, and last, since acceleration is (locally) equivalent to a gravitational field (here we have the equivalence principle), one is entitled to assert that it is the gravitational field (and not acceleration) that is represented as the curvature of space-time. This looks almost like an Aristotelian syllogism. However, to put all the pieces of evidence into the logical chain took Einstein a few years of hard thinking. The result has been incorporated into the field equations which quantitatively show how the curvature of space-time and gravity are linked together.
Development of a Random Field Model for Gas Plume Detection in Multiple LWIR Images.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heasler, Patrick G.
This report develops a random field model that describes gas plumes in LWIR remote sensing images. The random field model serves as a prior distribution that can be combined with LWIR data to produce a posterior that determines the probability that a gas plume exists in the scene and also maps the most probable location of any plume. The random field model is intended to work with a single-pixel regression estimator, a regression model that estimates gas concentration on an individual pixel basis.
Tensor Minkowski Functionals for random fields on the sphere
NASA Astrophysics Data System (ADS)
Chingangbam, Pravabati; Yogendran, K. P.; Joby, P. K.; Ganesan, Vidhya; Appleby, Stephen; Park, Changbom
2017-12-01
We generalize the translation invariant tensor-valued Minkowski Functionals which are defined on two-dimensional flat space to the unit sphere. We apply them to level sets of random fields. The contours enclosing boundaries of level sets of random fields give a spatial distribution of random smooth closed curves. We outline a method to compute the tensor-valued Minkowski Functionals numerically for any random field on the sphere. Then we obtain analytic expressions for the ensemble expectation values of the matrix elements for isotropic Gaussian and Rayleigh fields. The results hold on flat as well as any curved space with affine connection. We elucidate the way in which the matrix elements encode information about the Gaussian nature and statistical isotropy (or departure from isotropy) of the field. Finally, we apply the method to maps of the Galactic foreground emissions from the 2015 PLANCK data and demonstrate their high level of statistical anisotropy and departure from Gaussianity.
Current matrix element in HAL QCD's wavefunction-equivalent potential method
NASA Astrophysics Data System (ADS)
Watanabe, Kai; Ishii, Noriyoshi
2018-04-01
We give a formula to calculate a matrix element of a conserved current in the effective quantum mechanics defined by the wavefunction-equivalent potentials proposed by the HAL QCD collaboration. As a first step, a non-relativistic field theory with two-channel coupling is considered as the original theory, with which a wavefunction-equivalent HAL QCD potential is obtained in a closed analytic form. The external field method is used to derive the formula by demanding that the result should agree with the original theory. With this formula, the matrix element is obtained by sandwiching the effective current operator between the left and right eigenfunctions of the effective Hamiltonian associated with the HAL QCD potential. In addition to the naive one-body current, the effective current operator contains an additional two-body term emerging from the degrees of freedom which have been integrated out.
Establishing Substantial Equivalence: Metabolomics
NASA Astrophysics Data System (ADS)
Beale, Michael H.; Ward, Jane L.; Baker, John M.
Modern ‘metabolomic’ methods allow us to compare levels of many structurally diverse compounds in an automated fashion across a large number of samples. This technology is ideally suited to screening of populations of plants, including trials where the aim is the determination of unintended effects introduced by GM. A number of metabolomic methods have been devised for the determination of substantial equivalence. We have developed a methodology, using [1H]-NMR fingerprinting, for metabolomic screening of plants and have applied it to the study of substantial equivalence of field-grown GM wheat. We describe here the principles and detail of that protocol as applied to the analysis of flour generated from field plots of wheat. Particular emphasis is given to the downstream data processing and comparison of spectra by multivariate analysis, from which conclusions regarding metabolome changes due to the GM can be assessed against the background of natural variation due to environment.
RCT: Module 2.06, Air Sampling Program and Methods, Course 8772
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hillmer, Kurt T.
The inhalation of radioactive particles is the largest cause of an internal radiation dose. Airborne radioactivity measurements are necessary to ensure that the control measures are and continue to be effective. Regulations govern the allowable effective dose equivalent to an individual. The effective dose equivalent is determined by combining the external and internal dose equivalent values. Typically, airborne radioactivity levels are maintained well below allowable levels to keep the total effective dose equivalent small. This course will prepare the student with the skills necessary for RCT qualification by passing quizzes, tests, and the RCT Comprehensive Phase 1, Unit 2 Examination (TEST 27566) and will provide in-the-field skills.
A petroleum discovery-rate forecast revisited-The problem of field growth
Drew, L.J.; Schuenemeyer, J.H.
1992-01-01
A forecast of the future rates of discovery of crude oil and natural gas for the 123,027-km² Miocene/Pliocene trend in the Gulf of Mexico was made in 1980. This forecast was evaluated in 1988 by comparing two sets of data: (1) the actual versus the forecasted number of fields discovered, and (2) the actual versus the forecasted volumes of crude oil and natural gas discovered with the drilling of 1,820 wildcat wells along the trend between January 1, 1977, and December 31, 1985. The forecast specified that this level of drilling would result in the discovery of 217 fields containing 1.78 billion barrels of oil equivalent; however, 238 fields containing 3.57 billion barrels of oil equivalent were actually discovered. This underestimation is attributed to biases introduced by field growth and, to a lesser degree, the artificially low, pre-1970's price of natural gas that prevented many smaller gas fields from being brought into production at the time of their discovery; most of these fields contained less than 50 billion cubic feet of producible natural gas. © 1992 Oxford University Press.
Quantum Field Theory in (0 + 1) Dimensions
ERIC Educational Resources Information Center
Boozer, A. D.
2007-01-01
We show that many of the key ideas of quantum field theory can be illustrated simply and straightforwardly by using toy models in (0 + 1) dimensions. Because quantum field theory in (0 + 1) dimensions is equivalent to quantum mechanics, these models allow us to use techniques from quantum mechanics to gain insight into quantum field theory. In…
Perceptions of randomized security schedules.
Scurich, Nicholas; John, Richard S
2014-04-01
Security of infrastructure is a major concern. Traditional security schedules are unable to provide omnipresent coverage; consequently, adversaries can exploit predictable vulnerabilities to their advantage. Randomized security schedules, which randomly deploy security measures, overcome these limitations, but public perceptions of such schedules have not been examined. In this experiment, participants were asked to make a choice between attending a venue that employed a traditional (i.e., search everyone) or a random (i.e., a probability of being searched) security schedule. The absolute probability of detecting contraband was manipulated (i.e., 1/10, 1/4, 1/2) but equivalent between the two schedule types. In general, participants were indifferent to either security schedule, regardless of the probability of detection. The randomized schedule was deemed more convenient, but the traditional schedule was considered fairer and safer. There were no differences between traditional and random schedule in terms of perceived effectiveness or deterrence. Policy implications for the implementation and utilization of randomized schedules are discussed. © 2013 Society for Risk Analysis.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-08-25
... received. Decision: Approved. We know of no instruments of equivalent scientific value to the foreign... received. Decision: Approved. We know of no instruments of equivalent scientific value to the foreign... magnetic fields, which requires a special selection of non-magnetic materials the instrument has to be...
47 CFR 2.1053 - Measurements required: Field strength of spurious radiation.
Code of Federal Regulations, 2012 CFR
2012-10-01
... operation. Curves or equivalent data shall be supplied showing the magnitude of each harmonic and other.... For equipment operating on frequencies below 890 MHz, an open field test is normally required, with... either impractical or impossible to make open field measurements (e.g. a broadcast transmitter installed...
Sun, Rai Ko S.F.
1994-01-01
A device for measuring dose equivalents in neutron radiation fields. The device includes nested symmetrical hemispheres (forming spheres) of different neutron moderating materials that allow the measurement of dose equivalents from 0.025 eV to beyond 1 GeV. The layers of moderating material surround a spherical neutron counter. The neutron counter is connected by an electrical cable to an electrical sensing means which interprets the signal from the neutron counter in the center of the moderating spheres. The spherical shape of the device allows for accurate measurement of dose equivalents regardless of its positioning.
Dynamics of glass-forming liquids. XVIII. Does entropy control structural relaxation times?
NASA Astrophysics Data System (ADS)
Samanta, Subarna; Richert, Ranko
2015-01-01
We study the dielectric dynamics of viscous glycerol in the presence of a large bias field. Apart from dielectric saturation and polarization anisotropy, we observe that the steady state structural relaxation time is longer by 2.7% in the presence of a 225 kV/cm dc-field relative to the linear response counterpart, equivalent to a field induced glass transition (Tg) shift of +84 mK. This result compares favorably with the 3.0% time constant increase predicted on the basis of a recent report [G. P. Johari, J. Chem. Phys. 138, 154503 (2013)], where the field induced reduction of the configurational entropy translates into slower dynamics by virtue of the Adam-Gibbs relation. Other models of field dependent glass transition temperatures are also discussed. Similar to observations related to the electro-optical Kerr effect, the rise time of the field induced effect is much longer than its collapse when the field is removed again. The orientational relaxation time of the plastic crystal cyclo-octanol is more sensitive to a bias field, showing a 13.5% increase at a field of 150 kV/cm, equivalent to an increase of Tg by 0.58 K.
Global mean-field phase diagram of the spin-1 Ising ferromagnet in a random crystal field
NASA Astrophysics Data System (ADS)
Borelli, M. E. S.; Carneiro, C. E. I.
1996-02-01
We study the phase diagram of the mean-field spin-1 Ising ferromagnet in a uniform magnetic field H and a random crystal field Δi, with probability distribution P(Δi) = pδ(Δi − Δ) + (1 − p)δ(Δi). We analyse the effects of randomness on the first-order surfaces of the Δ–T–H phase diagram for different values of the concentration p and show how these surfaces are affected by the dilution of the crystal field.
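The mean-field self-consistency behind such a diagram can be sketched numerically. The toy solver below is an illustrative sketch, not the authors' calculation: the single-site form and the absorption of the coordination number into J are assumptions, and it simply iterates the quenched-averaged magnetization for the two-valued crystal-field distribution P(Δi) given above.

```python
import math

def magnetization(T, J=1.0, H=0.0, delta=1.0, p=0.8, iters=500):
    """Iterate the quenched mean-field self-consistency for the spin-1 model.

    Single-site magnetization for crystal field D (coordination number
    absorbed into J, an assumption of this sketch):
        m(D) = 2*sinh(h/T) / (2*cosh(h/T) + exp(D/T)),  h = J*m + H
    averaged over P(D) = p*delta(D - Delta) + (1 - p)*delta(D).
    """
    m = 0.9  # start from an ordered guess
    for _ in range(iters):
        h = J * m + H

        def site(D):
            return 2 * math.sinh(h / T) / (2 * math.cosh(h / T) + math.exp(D / T))

        m = p * site(delta) + (1 - p) * site(0.0)
    return m
```

For Δ = 0 this reduces to the standard spin-1 mean-field ferromagnet, ordered below T/J = 2/3; raising p or Δ dilutes the ordered phase, which is the qualitative effect the abstract describes.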
226-237 E. Ontario, April 2016, Lindsay Light Radiological Survey
Field gamma measurements did not exceed the field instrument threshold equivalent to the USEPA removal action level and ranged from a minimum of 6,000 cpm to a maximum of approximately 9,000 cpm unshielded.
330-334 E. Ontario, April 2016, Lindsay Light Radiological Survey
Field gamma measurements did not exceed the field instrument threshold equivalent to the USEPA removal action level and ranged from a minimum of 5,700 cpm to a maximum of approximately 13,100 cpm unshielded.
Tavakkolizadeh, Moein; Love‐Jones, Sarah; Patel, Nikunj K.; Gu, Jianwen Wendy; Bains, Amarpreet; Doan, Que; Moffitt, Michael
2017-01-01
Objective The PROCO RCT is a multicenter, double‐blind, crossover, randomized controlled trial (RCT) that investigated the effects of rate on analgesia in kilohertz frequency (1–10 kHz) spinal cord stimulation (SCS). Materials and Methods Patients were implanted with SCS systems and underwent an eight‐week search to identify the best location (“sweet spot”) of stimulation at 10 kHz within the searched region (T8–T11). An electronic diary (e‐diary) prompted patients for pain scores three times per day. Patients who responded to 10 kHz per e‐diary numeric rating scale (ED‐NRS) pain scores proceeded to double‐blind rate randomization. Patients received 1, 4, 7, and 10 kHz SCS at the same sweet spot found for 10 kHz in randomized order (four weeks at each frequency). For each frequency, pulse width and amplitude were titrated to optimize therapy. Results All frequencies provided equivalent pain relief as measured by ED‐NRS (p ≤ 0.002). However, mean charge per second differed across frequencies, with 1 kHz SCS requiring 60–70% less charge than higher frequencies (p ≤ 0.0002). Conclusions The PROCO RCT provides Level I evidence for equivalent pain relief from 1 to 10 kHz with appropriate titration of pulse width and amplitude. 1 kHz required significantly less charge than higher frequencies. PMID:29220121
The determination of the elastodynamic fields of an ellipsoidal inhomogeneity
NASA Technical Reports Server (NTRS)
Fu, L. S.; Mura, T.
1983-01-01
The determination of the elastodynamic fields of an ellipsoidal inhomogeneity is studied in detail via the eigenstrain approach. A complete formulation and a treatment of both types of eigenstrains for equivalence between the inhomogeneity problem and the inclusion problem are given. This approach is shown to be mathematically identical to other approaches such as the direct volume integral formulation. Expanding the eigenstrains and applied strains in the polynomial form in the position vector and satisfying the equivalence conditions at every point, the governing simultaneous algebraic equations for the unknown coefficients in the eigenstrain expansion are derived. The elastodynamic field outside an ellipsoidal inhomogeneity in a linear elastic isotropic medium is given as an example. The angular and frequency dependence of the induced displacement field, as well as the differential and total cross sections are formally given in series expansion form for the case of uniformly distributed eigenstrains.
Polynuclear aromatic hydrocarbon analysis using the synchronous scanning luminoscope
NASA Astrophysics Data System (ADS)
Hyfantis, George J., Jr.; Teglas, Matthew S.; Wilbourn, Robert G.
2001-02-01
The Synchronous Scanning Luminoscope (SSL) is a field-portable, synchronous luminescence spectrofluorometer that was developed for on-site analysis of contaminated soil and ground water. The SSL is capable of quantitative analysis of total polynuclear aromatic hydrocarbons (PAHs) using phosphorescence and fluorescence techniques, with a high correlation to laboratory data as illustrated by this study. The SSL is also capable of generating benzo(a)pyrene equivalency results, based on seven carcinogenic PAHs and Navy risk numbers, again with a high correlation to laboratory data. These techniques allow rapid field assessments of total PAHs and benzo(a)pyrene equivalent concentrations. The Luminoscope is capable of detecting total PAHs to the parts per billion range. This paper describes standard field methods for using the SSL and describes the results of field/laboratory testing of PAHs. SSL results from two different hazardous waste sites are discussed.
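A benzo(a)pyrene equivalency result of the kind mentioned above is a weighted sum of PAH concentrations. The sketch below uses commonly cited USEPA-style toxicity equivalency factors (TEFs) as illustrative defaults; the study's own Navy risk numbers and seven-PAH list are not reproduced here.

```python
def bap_equivalent(concentrations, tefs=None):
    """Benzo(a)pyrene toxic-equivalent concentration: each PAH concentration
    weighted by its toxicity equivalency factor (TEF) and summed. The default
    TEFs are illustrative relative-potency values, not the study's numbers;
    units follow whatever units the input concentrations carry."""
    default = {
        "benzo(a)pyrene": 1.0,
        "dibenz(a,h)anthracene": 1.0,
        "benz(a)anthracene": 0.1,
        "benzo(b)fluoranthene": 0.1,
        "indeno(1,2,3-cd)pyrene": 0.1,
        "benzo(k)fluoranthene": 0.01,
        "chrysene": 0.001,
    }
    tefs = tefs or default
    # unknown analytes contribute zero rather than raising
    return sum(c * tefs.get(name, 0.0) for name, c in concentrations.items())
```

A field instrument reporting total PAHs plus a speciated subset can feed such a sum directly to produce the screening-level equivalent concentration.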
Examining the impact of cell phone conversations on driving using meta-analytic techniques
DOT National Transportation Integrated Search
2006-01-01
Synopsis Younger and older drivers conversing on a hands-free cell phone were found to have slower responses to random braking by the vehicle ahead. Cell phone use slowed the younger drivers' responses to an extent that they were equivalent t...
Profiles in driver distraction : effects of cell phone conversations on younger and older drivers
DOT National Transportation Integrated Search
2004-01-01
Synopsis Younger and older drivers conversing on a hands-free cell phone were found to have slower responses to random braking by the vehicle ahead. Cell phone use slowed the younger drivers' responses to an extent that they were equivalent t...
The ED95 of Nalbuphine in Outpatient-Induced Abortion Compared to Equivalent Sufentanil.
Chen, Limei; Zhou, Yamei; Cai, Yaoyao; Bao, Nana; Xu, Xuzhong; Shi, Beibei
2018-04-07
This prospective study evaluated the 95% effective dose (ED95) of nalbuphine in inhibiting body movement during outpatient-induced abortion and its clinical efficacy versus an equivalent dose of sufentanil. The study was divided into two parts. In the first part, voluntary first-trimester patients who needed induced abortions were recruited to measure the ED95 of nalbuphine in inhibiting body movement during induced abortion using the sequential method (the Dixon up-and-down method). The second part was a double-blind, randomized study. Sixty first-trimester patients were recruited and randomly divided into two groups (n = 30): group N (nalbuphine at the ED95 dose) and group S (sufentanil at an equivalent dose). Propofol was given to both groups as the sedative. The circulation, respiration and body movement of the two groups in surgery were observed. The amount of propofol, the awakening time, the time to leave the hospital and the analgesic effect were recorded. The ED95 of nalbuphine in inhibiting body movement during painless surgical abortion was 0.128 mg/kg (95% confidence interval 0.098-0.483 mg/kg). Both nalbuphine and the equivalent dose of sufentanil provided a good intraoperative and post-operative analgesic effect in outpatient-induced abortion. However, the post-operative morbidity of dizziness for nalbuphine was less than for sufentanil (p < 0.05), and the awakening time and the time to leave the hospital were significantly shorter than those of sufentanil (p < 0.05). Nalbuphine at 0.128 mg/kg, used in outpatient-induced abortion as an intraoperative and post-operative analgesic, showed a better effect compared with sufentanil. © 2018 Nordic Association for the Publication of BCPT (former Nordic Pharmacological Society).
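The up-and-down sequential design referenced above can be illustrated with a toy simulation. Everything in this sketch is hypothetical: the logistic dose-response slope, step size, and mean-dose estimator are assumptions for illustration only, and the classical up-and-down scheme targets the ED50 (biased-coin variants are used to target other quantiles such as the ED95).

```python
import math
import random
import statistics

def up_and_down(true_ed50, start, step, n=40, seed=1):
    """Toy Dixon up-and-down run. Success probability rises with dose via a
    hypothetical logistic curve around true_ed50; the dose steps down after
    a success (movement inhibited) and up after a failure. The mean dose
    over the run is a simple estimator that tracks the target quantile."""
    rng = random.Random(seed)
    dose, doses = start, []
    for _ in range(n):
        doses.append(dose)
        p_success = 1.0 / (1.0 + math.exp(-(dose - true_ed50) / (0.3 * true_ed50)))
        dose = dose - step if rng.random() < p_success else dose + step
    return statistics.mean(doses)
```

Starting above the true value, the simulated dose staircase descends and then oscillates around it, which is the restoring behavior the sequential method relies on.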
Kleinman, L; Leidy, N K; Crawley, J; Bonomi, A; Schoenfeld, P
2001-02-01
Although most health-related quality of life questionnaires are self-administered by means of paper and pencil, new technologies for automated computer administration are becoming more readily available. Novel methods of instrument administration must be assessed for score equivalence in addition to consistency in reliability and validity. The present study compared the psychometric characteristics (score equivalence and structure, internal consistency, and reproducibility reliability and construct validity) of the Quality of Life in Reflux And Dyspepsia (QOLRAD) questionnaire when self-administered by means of paper and pencil versus touch-screen computer. The influence of age, education, and prior experience with computers on score equivalence was also examined. This crossover trial randomized 134 patients with gastroesophageal reflux disease to 1 of 2 groups: paper-and-pencil questionnaire administration followed by computer administration or computer administration followed by use of paper and pencil. To minimize learning effects and respondent fatigue, administrations were scheduled 3 days apart. A random sample of 32 patients participated in a 1-week reproducibility evaluation of the computer-administered QOLRAD. QOLRAD scores were equivalent across the 2 methods of administration regardless of subject age, education, and prior computer use. Internal consistency levels were very high (alpha = 0.93-0.99). Interscale correlations were strong and generally consistent across methods (r = 0.7-0.87). Correlations between the QOLRAD and Short Form 36 (SF-36) were high, with no significant differences by method. Test-retest reliability of the computer-administered QOLRAD was also very high (ICC = 0.93-0.96). Results of the present study suggest that the QOLRAD is reliable and valid when self-administered by means of computer touch-screen or paper and pencil.
Hanes, Vladimir; Chow, Vincent; Zhang, Nan; Markus, Richard
2017-05-01
This study compared the pharmacokinetic (PK) profiles of the proposed biosimilar ABP 980 and trastuzumab in healthy males. In this single-blind study, 157 healthy males were randomized 1:1:1 to a single 6 mg/kg intravenous infusion of ABP 980, FDA-licensed trastuzumab [trastuzumab (US)], or EU-authorized trastuzumab [trastuzumab (EU)]. Primary endpoints were area under the serum concentration-time curve from time 0 to infinity (AUCinf) and maximum observed serum concentration (Cmax). To establish equivalence, the geometric mean ratio (GMR) and 90% confidence interval (CI) for Cmax and AUCinf had to be within the equivalence criteria of 0.80-1.25. The GMRs and 90% CIs for Cmax and AUCinf, respectively, were: 1.04 (0.99-1.08) and 1.06 (1.00-1.12) for ABP 980 versus trastuzumab (US); 0.99 (0.95-1.03) and 1.00 (0.95-1.06) for ABP 980 versus trastuzumab (EU); and 0.96 (0.92-1.00) and 0.95 (0.90-1.01) for trastuzumab (US) versus trastuzumab (EU). All comparisons were within the equivalence criteria of 0.80-1.25. Treatment-emergent adverse events (TEAEs) were reported in 84.0%, 75.0%, and 78.2% of subjects in the ABP 980, trastuzumab (US), and trastuzumab (EU) groups, respectively. There were no deaths or TEAEs leading to study discontinuation, and no binding or neutralizing anti-drug antibodies were detected. This study demonstrated the PK similarity of ABP 980 to both trastuzumab (US) and trastuzumab (EU), and of trastuzumab (US) to trastuzumab (EU). No differences in safety and tolerability between treatments were noted; no subject tested positive for binding antibodies.
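The 0.80-1.25 equivalence check described above can be sketched as follows. This is a simplified parallel-group version computed on the log scale, with a normal critical value standing in for the exact t quantile; it is not the trial's actual analysis model, and any sample values are invented.

```python
import math
import statistics

def gmr_equivalence(pk_test, pk_ref, bounds=(0.80, 1.25)):
    """Geometric mean ratio with an approximate 90% CI from log-scale data,
    then a bioequivalence check against the 0.80-1.25 bounds. Uses a normal
    quantile (1.645) instead of the exact t value -- an approximation."""
    la = [math.log(x) for x in pk_test]
    lb = [math.log(x) for x in pk_ref]
    diff = statistics.mean(la) - statistics.mean(lb)
    se = math.sqrt(statistics.variance(la) / len(la)
                   + statistics.variance(lb) / len(lb))
    z = 1.645  # two-sided 90% normal quantile
    lo, hi = math.exp(diff - z * se), math.exp(diff + z * se)
    gmr = math.exp(diff)
    return gmr, (lo, hi), (bounds[0] <= lo and hi <= bounds[1])
```

Equivalence is declared only when the entire CI, not just the point estimate, sits inside the bounds, which is why narrow CIs (large, low-variance samples) matter in such trials.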
Vial, Philip; Gustafsson, Helen; Oliver, Lyn; Baldock, Clive; Greer, Peter B
2009-12-07
The routine use of electronic portal imaging devices (EPIDs) as dosimeters for radiotherapy quality assurance is complicated by the non-water equivalence of the EPID's dose response. A commercial EPID modified to a direct-detection configuration was previously demonstrated to provide water-equivalent dose response with d(max) solid water build-up and 10 cm solid water backscatter. Clinical implementation of the direct EPID (dEPID) requires a design that maintains the water-equivalent dose response, can be incorporated onto existing EPID support arms and maintains sufficient image quality for clinical imaging. This study investigated the dEPID dose response with different configurations of build-up and backscatter using varying thickness of solid water and copper. Field size output factors and beam profiles measured with the dEPID were compared with ionization chamber measurements of dose in water for both 6 MV and 18 MV. The dEPID configured with d(max) solid water build-up and no backscatter (except for the support arm) was within 1.5% of dose in water data for both energies. The dEPID was maintained in this configuration for clinical dosimetry and image quality studies. Close agreement between the dEPID and treatment planning system was obtained for an IMRT field with 98.4% of pixels within the field meeting a gamma criterion of 3% and 3 mm. The reduced sensitivity of the dEPID resulted in a poorer image quality based on quantitative (contrast-to-noise ratio) and qualitative (anthropomorphic phantom) studies. However, clinically useful images were obtained with the dEPID using typical treatment field doses. The dEPID is a water-equivalent dosimeter that can be implemented with minimal modifications to the standard commercial EPID design. The proposed dEPID design greatly simplifies the verification of IMRT dose delivery.
Zavgorodni, S
2004-12-07
Inter-fraction dose fluctuations, which appear as a result of setup errors, organ motion and treatment machine output variations, may influence the radiobiological effect of the treatment even when the total delivered physical dose remains constant. The effect of these inter-fraction dose fluctuations on the biological effective dose (BED) has been investigated. Analytical expressions for the BED accounting for the dose fluctuations have been derived. The concept of biological effective constant dose (BECD) has been introduced. The equivalent constant dose (ECD), representing the constant physical dose that provides the same cell survival fraction as the fluctuating dose, has also been introduced. The dose fluctuations with Gaussian as well as exponential probability density functions were investigated. The values of BECD and ECD calculated analytically were compared with those derived from Monte Carlo modelling. The agreement between Monte Carlo modelled and analytical values was excellent (within 1%) for a range of dose standard deviations (0-100% of the dose) and the number of fractions (2 to 37) used in the comparison. The ECDs have also been calculated for conventional radiotherapy fields. The analytical expression for the BECD shows that BECD increases linearly with the variance of the dose. The effect is relatively small, and in the flat regions of the field it results in less than 1% increase of ECD. In the penumbra region of the 6 MV single radiotherapy beam the ECD exceeded the physical dose by up to 35%, when the standard deviation of combined patient setup/organ motion uncertainty was 5 mm. Equivalently, the ECD field was approximately 2 mm wider than the physical dose field. The difference between ECD and the physical dose is greater for normal tissues than for tumours.
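The ECD concept can be illustrated with a short Monte Carlo sketch under the linear-quadratic model. The α and β values are placeholders, and clipping negative Gaussian draws at zero is an assumption of this sketch, not of the paper.

```python
import math
import random

def ecd_per_fraction(mean_d, sigma, n, alpha=0.3, beta=0.03, seed=7):
    """Monte Carlo sketch of the equivalent constant dose (ECD) under the
    linear-quadratic model. Draw Gaussian per-fraction doses (clipped at
    zero), accumulate the effect E = sum(alpha*d + beta*d**2), then solve
    n*(alpha*c + beta*c**2) = E for the constant per-fraction dose c."""
    rng = random.Random(seed)
    effect = 0.0
    for _ in range(n):
        d = max(0.0, rng.gauss(mean_d, sigma))
        effect += alpha * d + beta * d * d
    mean_effect = effect / n
    # positive root of beta*c**2 + alpha*c - mean_effect = 0
    return (-alpha + math.sqrt(alpha * alpha + 4.0 * beta * mean_effect)) / (2.0 * beta)
```

Because E[d²] = d̄² + σ², fluctuations inflate the quadratic term, so the ECD exceeds the mean physical dose by an amount that grows with the dose variance, consistent with the paper's finding that BECD increases linearly with the variance.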
Connectivity ranking of heterogeneous random conductivity models
NASA Astrophysics Data System (ADS)
Rizzo, C. B.; de Barros, F.
2017-12-01
To overcome the challenges associated with hydrogeological data scarcity, the hydraulic conductivity (K) field is often represented by a spatial random process. The state of the art provides several methods to generate 2D or 3D random K-fields, such as the classic multi-Gaussian fields, non-Gaussian fields, training-image-based fields and object-based fields. We provide a systematic comparison of these models based on their connectivity. We use the minimum hydraulic resistance as a connectivity measure, which has been found to be strongly correlated with early arrival times of dissolved contaminants. A computationally efficient graph-based algorithm is employed, allowing a stochastic treatment of the minimum hydraulic resistance through a Monte-Carlo approach and therefore enabling the computation of its uncertainty. The results show the impact of geostatistical parameters on the connectivity for each group of random fields, making it possible to rank the fields according to their minimum hydraulic resistance.
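A graph-based minimum-resistance computation in the spirit described above can be sketched with Dijkstra's algorithm on a 4-connected grid, taking each cell's resistance as 1/K. The boundary conditions (whole left edge as source, whole right edge as target) and the resistance discretization are illustrative assumptions, not the authors' exact formulation.

```python
import heapq

def min_hydraulic_resistance(K):
    """Least cumulative resistance (sum of 1/K over visited cells) over all
    4-connected paths from the left edge to the right edge of a 2D K-field.
    Dijkstra's algorithm with every left-boundary cell as a source."""
    rows, cols = len(K), len(K[0])
    dist = [[float("inf")] * cols for _ in K]
    pq = []
    for r in range(rows):
        dist[r][0] = 1.0 / K[r][0]
        heapq.heappush(pq, (dist[r][0], r, 0))
    while pq:
        d, r, c = heapq.heappop(pq)
        if d > dist[r][c]:
            continue  # stale queue entry
        if c == cols - 1:
            return d  # first right-edge cell settled carries the minimum
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + 1.0 / K[nr][nc]
                if nd < dist[nr][nc]:
                    dist[nr][nc] = nd
                    heapq.heappush(pq, (nd, nr, nc))
    return float("inf")
```

Running this over an ensemble of generated K-fields yields a distribution of minimum resistances, which is the kind of Monte-Carlo statistic the abstract uses for ranking.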
Osipitan, Omobolanle Adewale; Dille, Johanna Anita
2017-01-01
A fast-spreading weed, kochia (Kochia scoparia), has developed resistance to the widely used herbicide glyphosate. Understanding the relationship between the occurrence of glyphosate resistance caused by multiple EPSPS gene copies and kochia fitness may suggest a more effective way of controlling kochia. A study was conducted to assess the fitness cost of glyphosate resistance compared to susceptibility in kochia populations at different life history stages, that is, rate of seed germination, increase in plant height, days to flowering, biomass accumulation at maturity, and fecundity. Six kochia populations from Scott, Finney, Thomas, Phillips, Wallace, and Wichita counties in western Kansas were characterized for resistance to the field-use rate of glyphosate and with an in vivo shikimate accumulation assay. Seed germination was determined in growth chambers at three constant temperatures (5, 10, and 15 C), while vegetative growth and fecundity responses were evaluated in a field study using a target-neighborhood competition design in 2014 and 2015. One target plant from each of the six kochia populations was surrounded by neighboring kochia densities equivalent to 10 (low), 35 (moderate), or 70 (high) kochia plants m−2. In 2015, neighboring corn densities equivalent to 10 and 35 plants m−2 were also evaluated. Treatments were arranged in a randomized complete block design with at least 7 replications. Three kochia populations were classified as glyphosate-resistant (GR) [Scott (SC-R), Finney (FN-R), and Thomas (TH-R)] and three populations were classified as glyphosate-susceptible (GS) [Phillips (PH-S), Wallace (WA-S) and Wichita (WI-S)]. Of the life history stages measured, fitness differences between the GR and GS kochia populations were consistently found in their germination characteristics. The GR kochia showed reduced seed longevity, slower germination rate, and less total germination than the GS kochia.
In the field, increases in plant height, biomass accumulation, and fecundity were not clearly different between GR and GS kochia populations (irrespective of neighbor density). Hence, weed management plans should integrate practices that take advantage of the relatively poor germination characteristics of GR kochia. This study suggests that evaluating plant fitness at different life history stages can increase the potential of detecting fitness costs. PMID:28713397
NASA Technical Reports Server (NTRS)
Schmidt, R. F.
1971-01-01
Some results obtained with a digital computer program written at Goddard Space Flight Center to obtain electromagnetic fields scattered by perfectly reflecting surfaces are presented. For purposes of illustration a paraboloidal reflector was illuminated at radio frequencies in the simulation for both receiving and transmitting modes of operation. Fields were computed in the Fresnel and Fraunhofer regions. A dual-reflector system (Cassegrain) was also simulated for the transmitting case, and fields were computed in the Fraunhofer region. Appended results include derivations which show that the vector Kirchhoff-Kottler formulation has an equivalent form requiring only incident magnetic fields as a driving function. Satisfaction of the radiation conditions at infinity by the equivalent form is demonstrated by a conversion from Cartesian to spherical vector operators. A subsequent development presents the formulation by which Fresnel or Fraunhofer patterns are obtainable for dual-reflector systems. A discussion of the time-average Poynting vector is also appended.
Karunaratne, Nicholas
2013-12-01
To compare the accuracy of the Pentacam Holladay equivalent keratometry readings with the IOL Master 500 keratometry in calculating intraocular lens power. Non-randomized, prospective clinical study conducted in private practice. Forty-five consecutive normal patients undergoing cataract surgery had Pentacam equivalent keratometry readings at the 2-, 3- and 4.5-mm corneal zones and IOL Master keratometry measurements prior to cataract surgery. For each Pentacam equivalent keratometry reading zone and IOL Master measurement, the difference between the observed and expected refractive error was calculated using the Holladay 2 and Sanders, Retzlaff and Kraff theoretic (SRKT) formulas. Outcome measures were the mean keratometric value and mean absolute refractive error. There was a statistically significant difference between the mean keratometric values of the IOL Master and the Pentacam equivalent keratometry reading 2-, 3- and 4.5-mm measurements (P < 0.0001, analysis of variance). There was no statistically significant difference between the mean absolute refraction error for the IOL Master and equivalent keratometry readings at the 2 mm, 3 mm and 4.5 mm zones for either the Holladay 2 formula (P = 0.14) or SRKT formula (P = 0.47). The lowest mean absolute refraction error for the Holladay 2 equivalent keratometry reading was at the 4.5 mm zone (mean 0.25 D ± 0.17 D). The lowest mean absolute refraction error for the SRKT equivalent keratometry reading was at the 4.5 mm zone (mean 0.25 D ± 0.19 D). Comparing the absolute refraction error of the IOL Master and Pentacam equivalent keratometry reading, best agreement was with Holladay 2 and the equivalent keratometry reading at 4.5 mm, with a mean difference of 0.02 D and 95% limits of agreement of -0.35 and 0.39 D. The IOL Master keratometry and Pentacam equivalent keratometry reading were not equivalent when used only for corneal power measurements.
However, the keratometry measurements of the IOL Master and Pentacam equivalent keratometry reading 4.5 mm may be similarly effective when used in intraocular lens power calculation formulas, following constant optimization. © 2013 Royal Australian and New Zealand College of Ophthalmologists.
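The agreement statistics reported above (mean difference with 95% limits of agreement of mean ± 1.96 SD) follow the standard Bland-Altman recipe, which can be sketched as follows; any sample values are invented, not the study's measurements.

```python
import statistics

def bland_altman(a, b):
    """Mean difference (bias) and 95% limits of agreement, bias +/- 1.96*SD
    of the paired differences, for two measurement methods a and b."""
    diffs = [x - y for x, y in zip(a, b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```

A small bias with narrow limits, as in the 4.5 mm comparison above, indicates the two instruments can be used interchangeably for that purpose once lens constants are optimized.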
Bullinger, Monika; Quitmann, Julia; Silva, Neuza; Rohenkohl, Anja; Chaplin, John E; DeBusk, Kendra; Mimoun, Emmanuelle; Feigerlova, Eva; Herdman, Michael; Sanz, Dolores; Wollmann, Hartmut; Pleil, Andreas; Power, Michael
2014-01-01
Testing cross-cultural equivalence of patient-reported outcomes requires sufficiently large samples per country, which is difficult to achieve in rare endocrine paediatric conditions. We describe a novel approach to cross-cultural testing of the Quality of Life in Short Stature Youth (QoLISSY) questionnaire in five countries by sequentially taking one country out (TOCO) from the total sample and iteratively comparing the resulting psychometric performance. Development of the QoLISSY proceeded from focus group discussions through pilot testing to field testing in 268 short-statured patients and their parents. To explore cross-cultural equivalence, the iterative TOCO technique was used to examine and compare the validity, reliability, and convergence of patient and parent responses on the QoLISSY in the field test dataset, and to predict QoLISSY scores from clinical, socio-demographic and psychosocial variables. Validity and reliability indicators were satisfactory for each sample after iteratively omitting one country. Comparisons with the total sample revealed cross-cultural equivalence in internal consistency and construct validity for patients and parents, high inter-rater agreement and a substantial proportion of QoLISSY variance explained by predictors. The TOCO technique is a powerful method to overcome problems of country-specific testing of patient-reported outcome instruments. It provides empirical support for QoLISSY's cross-cultural equivalence and is recommended for future research.
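The iterative take-one-country-out idea can be sketched for a single psychometric index. The data layout (country mapped to per-item score lists) and the choice of Cronbach's alpha as the compared statistic are illustrative assumptions; the study compares a broader set of validity and reliability indicators.

```python
import statistics

def cronbach_alpha(items):
    """Cronbach's alpha for a list of per-item score lists
    (same respondents, in the same order, in every item list)."""
    k = len(items)
    totals = [sum(vals) for vals in zip(*items)]
    item_var = sum(statistics.variance(it) for it in items)
    return k / (k - 1) * (1 - item_var / statistics.variance(totals))

def take_one_country_out(data):
    """TOCO sketch: data maps country -> list of per-item score lists.
    For each country, recompute alpha on the pooled sample with that
    country omitted; stable alphas across omissions are consistent
    with cross-cultural equivalence."""
    out = {}
    for left_out in data:
        pooled = None
        for country, items in data.items():
            if country == left_out:
                continue
            if pooled is None:
                pooled = [list(it) for it in items]
            else:
                for acc, it in zip(pooled, items):
                    acc.extend(it)
        out[left_out] = cronbach_alpha(pooled)
    return out
```

If omitting any single country barely moves the statistic, no country is driving the pooled result, which is the intuition behind the TOCO comparison.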
Spherical-earth Gravity and Magnetic Anomaly Modeling by Gauss-legendre Quadrature Integration
NASA Technical Reports Server (NTRS)
Vonfrese, R. R. B.; Hinze, W. J.; Braile, L. W.; Luca, A. J. (Principal Investigator)
1981-01-01
The anomalous potential of gravity and magnetic fields and their spatial derivatives on a spherical Earth for an arbitrary body represented by an equivalent point source distribution of gravity poles or magnetic dipoles were calculated. The distribution of equivalent point sources was determined directly from the coordinate limits of the source volume. Variable integration limits for an arbitrarily shaped body are derived from interpolation of points which approximate the body's surface envelope. The versatility of the method is enhanced by the ability to treat physical property variations within the source volume and to consider variable magnetic fields over the source and observation surface. A number of examples verify and illustrate the capabilities of the technique, including preliminary modeling of potential field signatures for Mississippi embayment crustal structure at satellite elevations.
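Gauss-Legendre quadrature of a potential-field integral can be sketched on a much simpler analogue than the spherical-Earth formulation above: the vertical attraction of a thin rod, integrated with a fixed 5-point rule. The rod model, unit G·λ, and the low-order rule are illustrative assumptions.

```python
import math

# 5-point Gauss-Legendre nodes and weights on [-1, 1]
GL5 = [(-0.9061798459386640, 0.2369268850561891),
       (-0.5384693101056831, 0.4786286704993665),
       ( 0.0,                0.5688888888888889),
       ( 0.5384693101056831, 0.4786286704993665),
       ( 0.9061798459386640, 0.2369268850561891)]

def gauss_legendre(f, a, b):
    """Fixed 5-point Gauss-Legendre quadrature of f on [a, b]."""
    mid, half = 0.5 * (a + b), 0.5 * (b - a)
    return half * sum(w * f(mid + half * x) for x, w in GL5)

def rod_gravity(a, b, z0, G_lambda=1.0):
    """Vertical attraction at (0, z0) of a thin rod on the x-axis from a to b:
    g = G*lambda * integral of z0 / (x**2 + z0**2)**1.5 dx -- a toy stand-in
    for the equivalent point-source integrations described above."""
    return gauss_legendre(lambda x: G_lambda * z0 / (x * x + z0 * z0) ** 1.5, a, b)
```

For the interval [-1, 1] with z0 = 1 the analytic answer is sqrt(2), and the 5-point rule already lands within about a part in a thousand; in practice the order and subdivision are chosen to match the distance between source and observation point.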
NASA Astrophysics Data System (ADS)
Zhang, C.; Feng, T.; Raabe, N.; Rottke, H.
2018-02-01
Strong-field ionization (SFI) of the homonuclear noble gas dimer Xe2 is investigated and compared with SFI of the Xe atom and of the ArXe heteronuclear dimer by using ultrashort Ti:sapphire laser pulses and photoelectron momentum spectroscopy. The large separation of the two nuclei of the dimer allows the study of two-equivalent-center interference effects on the photoelectron momentum distribution. Comparing the experimental results with a new model calculation, which is based on the strong-field approximation, actually reveals the influence of interference. Moreover, the comparison indicates that the presence of closely spaced gerade and ungerade electronic state pairs of the Xe2+ ion at the Xe2 ionization threshold, which are strongly dipole coupled, affects the photoelectron momentum distribution.
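The two-equivalent-center interference itself has a simple textbook form for the momentum component along the molecular axis. The sketch below encodes only that factor, in atomic units and with an arbitrary normalization; it is not the paper's full strong-field model.

```python
import math

def two_center_factor(p, R, gerade=True):
    """Interference factor for photoemission from two equivalent centers
    separated by R, for momentum component p along the molecular axis
    (atomic units). Gerade combination: |exp(ipR/2) + exp(-ipR/2)|**2 / 2
    = 2*cos(pR/2)**2; ungerade: 2*sin(pR/2)**2. Normalization arbitrary."""
    phase = 0.5 * p * R
    return 2.0 * math.cos(phase) ** 2 if gerade else 2.0 * math.sin(phase) ** 2
```

The gerade and ungerade factors are complementary (they sum to a constant), so when closely spaced, strongly coupled g/u state pairs both contribute, the fringe contrast in the momentum distribution can be reduced.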
A study of surface dosimetry for breast cancer radiotherapy treatments using Gafchromic EBT2 film
Hill, Robin F.; Whitaker, May; Kim, Jung‐Ha; Kuncic, Zdenka
2012-01-01
The present study quantified surface doses on several rectangular phantom setups and on curved surface phantoms for a 6 MV photon field using the Attix parallel‐plate chamber and Gafchromic EBT2 film. For the rectangular phantom setups, the surface doses on a homogenous water equivalent phantom and a water equivalent phantom with 60 mm thick lung equivalent material were measured. The measurement on the homogenous phantom setup showed consistency in surface and near‐surface doses between an open field and enhanced dynamic wedge (EDW) fields, whereas physical wedged fields showed small differences. Surface dose measurements made using the EBT2 film showed good agreement with results of the Attix chamber and results obtained in previous studies which used other dosimeters within the measurement uncertainty of 3.3%. The surface dose measurements on the phantom setup with lung equivalent material showed a small increase without bolus and up to 6.9% increase with bolus simulating the increase of chest wall thickness. Surface doses on the cylindrical CT phantom and customized Perspex chest phantom were measured using the EBT2 film with and without bolus. The results indicate the important role of the presence of bolus if the clinical target volume (CTV) is quite close to the surface. Measurements on the cylindrical phantom suggest that surface doses at the oblique positions of 60° and 90° are mainly caused by the lateral scatter from the material inside the phantom. In the case of a single tangential irradiation onto Perspex chest phantom, the distribution of the surface dose with and without bolus materials showed opposing inclination patterns, whereas the dose distribution for two opposed tangential fields gave symmetric dose distribution. This study also demonstrates the suitability of Gafchromic EBT2 film for surface dose measurements in megavoltage photon beams. PACS number: 87.53.Bn PMID:22584169
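The gamma criterion quoted above (3%, 3 mm) combines a dose tolerance with a distance-to-agreement. A minimal 1D version can be sketched as follows; global normalization to the reference maximum and the brute-force search over all measured points are simplifying assumptions of this sketch.

```python
import math

def gamma_index_1d(ref, meas, dx, dose_tol=0.03, dist_tol=3.0):
    """1D gamma analysis sketch: for each reference point, the minimum over
    measured points of sqrt((dose diff/dose_tol)**2 + (distance/dist_tol)**2);
    a point passes when gamma <= 1. Dose difference is normalized to the
    reference maximum (global normalization); dx and dist_tol are in mm."""
    dmax = max(ref)
    gammas = []
    for i, dref in enumerate(ref):
        best = float("inf")
        for j, dmeas in enumerate(meas):
            dd = (dmeas - dref) / (dose_tol * dmax)
            dist = (j - i) * dx / dist_tol
            best = min(best, math.sqrt(dd * dd + dist * dist))
        gammas.append(best)
    pass_rate = sum(g <= 1.0 for g in gammas) / len(gammas)
    return gammas, pass_rate
```

A pass rate such as the 98.4% reported above means nearly every pixel found a measured dose within 3% or a matching dose within 3 mm, or some weighted combination of the two.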
NASA Astrophysics Data System (ADS)
Wang, Yao; Yang, Zailin; Zhang, Jianwei; Yang, Yong
2017-10-01
Based on the governing equations and the equivalent models, we propose equivalent transformation relationships between a plane wave in a one-dimensional medium and a spherical wave in globular geometry with radially inhomogeneous properties. These equivalent relationships help us obtain analytical solutions of elastodynamic problems in an inhomogeneous medium. The physical essence of the presented equivalent transformations is the equivalent relationship between the geometry and the material properties. It indicates that the spherical wave problem in globular geometry can be transformed into the plane wave problem in a bar with variable property fields, and that its inverse transformation is valid as well. Four different examples of wave motion problems in inhomogeneous media are solved based on the presented equivalent relationships. We obtain two basic analytical solution forms in Examples I and II, investigate the reflection behavior of an inhomogeneous half-space in Example III, and exhibit a special inhomogeneity in Example IV, which can keep the traveling spherical wave at constant amplitude. This study implies that our idea makes solving the associated problems easier.
Field-antifield and BFV formalisms for quadratic systems with open gauge algebras
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nirov, K.S.; Razumov, A.V.
1992-09-20
In this paper the Lagrangian field-antifield (BV) and Hamiltonian (BFV) BRST formalisms for the general quadratic systems with open gauge algebra are considered. The equivalence between the Lagrangian and Hamiltonian formalisms is proven.
NASA Astrophysics Data System (ADS)
Shore, R. M.; Freeman, M. P.; Gjerloev, J. W.
2018-01-01
We apply the method of data-interpolating empirical orthogonal functions (EOFs) to ground-based magnetic vector data from the SuperMAG archive to produce a series of month length reanalyses of the surface external and induced magnetic field (SEIMF) in 110,000 km2 equal-area bins over the entire northern polar region at 5 min cadence over solar cycle 23, from 1997.0 to 2009.0. Each EOF reanalysis also decomposes the measured SEIMF variation into a hierarchy of spatiotemporal patterns which are ordered by their contribution to the monthly magnetic field variance. We find that the leading EOF patterns can each be (subjectively) interpreted as well-known SEIMF systems or their equivalent current systems. The relationship of the equivalent currents to the true current flow is not investigated. We track the leading SEIMF or equivalent current systems of similar type by intermonthly spatial correlation and apply graph theory to (objectively) group their appearance and relative importance throughout a solar cycle, revealing seasonal and solar cycle variation. In this way, we identify the spatiotemporal patterns that maximally contribute to SEIMF variability over a solar cycle. We propose this combination of EOF and graph theory as a powerful method for objectively defining and investigating the structure and variability of the SEIMF or their equivalent ionospheric currents for use in both geomagnetism and space weather applications. It is demonstrated here on solar cycle 23 but is extendable to any epoch with sufficient data coverage.
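The core EOF decomposition can be sketched without specialized libraries: the leading spatial pattern of a time-by-space data matrix is the dominant eigenvector of its covariance, obtainable by power iteration. This is a minimal stand-in for the SuperMAG reanalysis pipeline, not the data-interpolating variant the authors use.

```python
import math
import random

def leading_eof(X, iters=200, seed=0):
    """Leading EOF (first principal spatial pattern) of a time x space data
    matrix X via power iteration on the covariance C = A^T A, where A is X
    with the time mean removed from each spatial column."""
    nt, ns = len(X), len(X[0])
    means = [sum(row[j] for row in X) / nt for j in range(ns)]
    A = [[row[j] - means[j] for j in range(ns)] for row in X]
    rng = random.Random(seed)
    v = [rng.random() for _ in range(ns)]
    for _ in range(iters):
        # one covariance multiply, A^T (A v), without forming C explicitly
        Av = [sum(a[j] * v[j] for j in range(ns)) for a in A]
        w = [sum(A[t][j] * Av[t] for t in range(nt)) for j in range(ns)]
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    return v
```

Deflating (subtracting the leading mode's projection) and repeating yields the hierarchy of variance-ordered patterns that the abstract interprets as SEIMF or equivalent current systems.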
Semantic relatedness for evaluation of course equivalencies
NASA Astrophysics Data System (ADS)
Yang, Beibei
Semantic relatedness, or its inverse, semantic distance, measures the degree of closeness between two pieces of text determined by their meaning. Related work typically measures semantics based on a sparse knowledge base such as WordNet or Cyc that requires intensive manual efforts to build and maintain. Other work is based on a corpus such as the Brown corpus, or more recently, Wikipedia. This dissertation proposes two approaches to applying semantic relatedness to the problem of suggesting transfer course equivalencies. Two course descriptions are given as input to feed the proposed algorithms, which output a value that can be used to help determine if the courses are equivalent. The first proposed approach uses traditional knowledge sources such as WordNet and corpora for courses from multiple fields of study. The second approach uses Wikipedia, the openly-editable encyclopedia, and it focuses on courses from a technical field such as Computer Science. This work shows that it is promising to adapt semantic relatedness to the education field for matching equivalencies between transfer courses. A semantic relatedness measure using traditional knowledge sources such as WordNet performs relatively well on non-technical courses. However, due to the "knowledge acquisition bottleneck," such a resource is not ideal for technical courses, which use an extensive and growing set of technical terms. To address the problem, this work proposes a Wikipedia-based approach which is later shown to be more correlated to human judgment compared to previous work.
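A corpus-free baseline for the relatedness measures discussed above is cosine similarity between word-count vectors of the two course descriptions. The sketch below is a hypothetical simplification, not the dissertation's WordNet- or Wikipedia-based algorithms, and the course descriptions are invented:

```python
import math
from collections import Counter

def cosine_relatedness(text_a: str, text_b: str) -> float:
    """Toy relatedness score in [0, 1]: cosine similarity of word-count
    vectors. A stand-in for the knowledge-based measures in the abstract."""
    a, b = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

course1 = "introduction to data structures and algorithms in java"
course2 = "data structures algorithms and program design in java"
course3 = "nineteenth century european romantic poetry"
print(cosine_relatedness(course1, course2), cosine_relatedness(course1, course3))
```

Bag-of-words overlap ranks the two technical courses as far closer than the unrelated pair, but it has no notion of synonymy — which is exactly the gap the WordNet- and Wikipedia-based measures are meant to fill.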
DOE Office of Scientific and Technical Information (OSTI.GOV)
Novaes, Marcel
2015-06-15
We consider the statistics of time delay in a chaotic cavity having M open channels, in the absence of time-reversal invariance. In the random matrix theory approach, we compute the average value of polynomial functions of the time delay matrix Q = −iħS†dS/dE, where S is the scattering matrix. Our results do not assume M to be large. In a companion paper, we develop a semiclassical approximation to S-matrix correlation functions, from which the statistics of Q can also be derived. Together, these papers contribute to establishing the conjectured equivalence between the random matrix and the semiclassical approaches.
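The time delay matrix Q = −iħS†dS/dE can be evaluated numerically from a finite-difference energy derivative of the scattering matrix. A toy sketch with an invented diagonal S(E) (real chaotic cavities have full unitary S-matrices) and ħ set to 1:

```python
import numpy as np

hbar = 1.0                            # natural units, an assumption
phases = np.array([0.5, 1.5, 3.0])    # invented per-channel delay slopes

def S(E):
    # Toy diagonal unitary scattering matrix S_kk(E) = exp(i * phases_k * E).
    return np.diag(np.exp(1j * phases * E))

E0, dE = 2.0, 1e-6
dS_dE = (S(E0 + dE) - S(E0 - dE)) / (2 * dE)   # central difference in energy
Q = -1j * hbar * S(E0).conj().T @ dS_dE        # Wigner-Smith time delay matrix

delays = np.real(np.diag(Q))   # here Q is diagonal with entries hbar * phases
print(delays)
```

For this toy S, the proper delay times are just ħ times the phase slopes; for a full unitary S(E) the same construction yields a Hermitian Q whose eigenvalues are the proper delay times whose statistics the paper computes.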
Nonlinear random response prediction using MSC/NASTRAN
NASA Technical Reports Server (NTRS)
Robinson, J. H.; Chiang, C. K.; Rizzi, S. A.
1993-01-01
An equivalent linearization technique was incorporated into MSC/NASTRAN to predict the nonlinear random response of structures by means of Direct Matrix Abstract Programming (DMAP) modifications and inclusion of the nonlinear differential stiffness module inside the iteration loop. An iterative process was used to determine the rms displacements. Numerical results obtained for validation on simple plates and beams are in good agreement with existing solutions in both the linear and linearized regions. The versatility of the implementation will enable the analyst to determine the nonlinear random responses for complex structures under combined loads. The thermo-acoustic response of a hexagonal thermal protection system panel is used to highlight some of the features of the program.
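The iterative equivalent-linearization idea can be illustrated on a single-degree-of-freedom Duffing oscillator under white noise, where the cubic stiffness is replaced by an equivalent linear one computed from the current mean-square response. All parameter values below are invented, and this is a sketch of the fixed-point iteration only, not the MSC/NASTRAN DMAP implementation:

```python
import math

# Oscillator: x'' + 2*zeta*wn*x' + wn^2*(x + eps*x^3) = w(t), with w(t)
# white noise of two-sided spectral density S0. Equivalent linearization
# replaces the cubic term by an effective stiffness wn^2*(1 + 3*eps*sigma2),
# and the linear SDOF result sigma2 = pi*S0 / (2*zeta*weq^3) is iterated.
wn, zeta, eps, S0 = 10.0, 0.05, 0.5, 1.0

sigma2 = math.pi * S0 / (2 * zeta * wn**3)   # purely linear first guess
for _ in range(100):
    weq = wn * math.sqrt(1 + 3 * eps * sigma2)      # equivalent frequency
    sigma2_new = math.pi * S0 / (2 * zeta * weq**3) # updated mean square
    if abs(sigma2_new - sigma2) < 1e-12:
        break
    sigma2 = sigma2_new

rms = math.sqrt(sigma2)
print(rms)   # linearized rms, below the purely linear estimate
```

The hardening spring stiffens the equivalent system, so the converged rms lies below the linear prediction — the same qualitative behavior as the "linearized region" mentioned in the abstract.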
Probabilistic analysis of structures involving random stress-strain behavior
NASA Technical Reports Server (NTRS)
Millwater, H. R.; Thacker, B. H.; Harren, S. V.
1991-01-01
The present methodology for analysis of structures with random stress-strain behavior characterizes the uniaxial stress-strain curve in terms of (1) elastic modulus, (2) engineering stress at initial yield, (3) initial plastic-hardening slope, (4) engineering stress at point of ultimate load, and (5) engineering strain at point of ultimate load. The methodology is incorporated into the Numerical Evaluation of Stochastic Structures Under Stress code for probabilistic structural analysis. The illustrative problem of a thick cylinder under internal pressure, where both the internal pressure and the stress-strain curve are random, is addressed by means of the code. The response value is the cumulative distribution function of the equivalent plastic strain at the inner radius.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chung, Moses; Gilson, Erik P.; Davidson, Ronald C.
2009-04-10
A random noise-induced beam degradation that can affect intense beam transport over long propagation distances has been experimentally studied by making use of the transverse beam dynamics equivalence between an alternating-gradient (AG) focusing system and a linear Paul trap system. For the present studies, machine imperfections in the quadrupole focusing lattice are considered, which are emulated by adding small random noise on the voltage waveform of the quadrupole electrodes in the Paul trap. It is observed that externally driven noise continuously produces a nonthermal tail of trapped ions, and increases the transverse emittance almost linearly with the duration of the noise.
Pázmándi, Tamás; Deme, Sándor; Láng, Edit
2006-01-01
One of the many risks of long-duration space flights is the excessive exposure to cosmic radiation, which has great importance particularly during solar flares and higher sun activity. Monitoring of the cosmic radiation on board space vehicles is carried out on the basis of wide international co-operation. Since space radiation consists mainly of charged heavy particles (protons, alpha and heavier particles), the equivalent dose differs significantly from the absorbed dose. A radiation weighting factor (w(R)) is used to convert absorbed dose (Gy) to equivalent dose (Sv). w(R) is a function of the linear energy transfer of the radiation. Recently used equipment is suitable for measuring certain radiation field parameters changing in space and over time, so a combination of different measurements and calculations is required to characterise the radiation field in terms of dose equivalent. The objectives of this project are to develop and manufacture a three-axis silicon detector telescope, called Tritel, and to develop software for data evaluation of the measured energy deposition spectra. The device will be able to determine absorbed dose and dose equivalent of the space radiation.
Unreliable Retrial Queues in a Random Environment
2007-09-01
equivalent to the stochasticity of the matrix Ĝ. It is generally known from Perron–Frobenius theory that a given square matrix M is stochastic if and only if its maximum positive eigenvalue (i.e., its Perron eigenvalue) sp(M) is equal to unity. A simple analytical condition that guarantees the...
40 CFR 86.1823-08 - Durability demonstration procedures for exhaust emissions.
Code of Federal Regulations, 2012 CFR
2012-07-01
... delivers the appropriate exhaust flow, exhaust constituents, and exhaust temperature to the face of the... vehicles. (2) This data set must consist of randomly procured vehicles from actual customer use. The... equivalency factor. (C) The manufacturer must submit an analysis which evaluates whether the durability...
ERIC Educational Resources Information Center
Zelman, Diane C.; And Others
1992-01-01
Randomly assigned smokers (n=126) to six-session smoking cessation treatments consisting of skills training or support counseling strategies and nicotine gum or rapid smoking nicotine exposure strategies. Counseling and nicotine strategies were completely crossed; all four combinations resulted in equivalent one-year abstinence rates. Treatments…
ERIC Educational Resources Information Center
Shelton, John L.; Madrazo-Peterson, Rita
1978-01-01
Anxious students were randomly assigned to a wait-list control group; to three groups aided by experienced behavior therapists; or to three groups led by paraprofessionals. Results show paraprofessionals can achieve outcome and maintenance effects equivalent to more rigorously trained professionals. Paraprofessionals can conduct desensitization in…
Electrochemical Positioning of Ordered Nanostructures
2016-04-26
Equivalent circuit model of Ge/Si separate absorption charge multiplication avalanche photodiode
NASA Astrophysics Data System (ADS)
Wang, Wei; Chen, Ting; Yan, Linshu; Bao, Xiaoyuan; Xu, Yuanyuan; Wang, Guang; Wang, Guanyu; Yuan, Jun; Li, Junfeng
2018-03-01
An equivalent circuit model of the Ge/Si Separate Absorption Charge Multiplication Avalanche Photodiode (SACM-APD) is proposed. Starting from the carrier rate equations in the different regions of the device, and considering the influences of the non-uniform electric field, noise, parasitic effects and other factors, an equivalent circuit model of the SACM-APD device is established in which the steady-state and transient current-voltage characteristics can be described exactly. In addition, the proposed Ge/Si SACM-APD equivalent circuit model is embedded in the PSpice simulator. Important characteristics of the Ge/Si SACM-APD, such as dark current, frequency response and shot noise, are simulated; the results show that simulations with the proposed model are in good agreement with the experimental results.
Wood, Carly; Angus, Caroline; Pretty, Jules; Sandercock, Gavin; Barton, Jo
2013-01-01
This study assessed whether exercising whilst viewing natural or built scenes affected self-esteem (SE) and mood in adolescents. Twenty-five adolescents participated in three exercise tests on consecutive days. A graded exercise test established the work rate equivalent to 50% heart rate reserve for use in subsequent constant load tests (CLTs). Participants undertook two 15-min CLTs in random order viewing scenes of either natural or built environments. Participants completed Rosenberg's SE scale and the adolescent profile of mood states questionnaire pre- and post-exercise. There was a significant main effect for SE (F(1) = 6.10; P < 0.05) and mood (F(6) = 5.29; P < 0.001) due to exercise, but no effect of viewing different environmental scenes (P > 0.05). Short bouts of moderate physical activity can have a positive impact on SE and mood in adolescents. Future research should incorporate field studies to examine the psychological effects of contact with real environments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Warren, Laura E.G.; Punglia, Rinaa S.; Wong, Julia S.
2014-11-15
Radiation therapy to the breast following breast conservation surgery has been the standard of care since randomized trials demonstrated equivalent survival compared to mastectomy and improved local control and survival compared to breast conservation surgery alone. Recent controversies regarding adjuvant radiation therapy have included the potential role of additional radiation to the regional lymph nodes. This review summarizes the evolution of regional nodal management, focusing on 2 topics: first, the changing paradigm with regard to surgical evaluation of the axilla; second, the role for regional lymph node irradiation and optimal design of treatment fields. Contemporary data reaffirm prior studies showing that complete axillary dissection may not provide additional benefit relative to sentinel lymph node biopsy in select patient populations. Preliminary data also suggest that directed nodal radiation therapy to the supraclavicular and internal mammary lymph nodes may prove beneficial; publication of several studies is awaited to confirm these results and to help define subgroups with the greatest likelihood of benefit.
A linear programming approach to max-sum problem: a review.
Werner, Tomás
2007-07-01
The max-sum labeling problem, defined as maximizing a sum of binary (i.e., pairwise) functions of discrete variables, is a general NP-hard optimization problem with many applications, such as computing the MAP configuration of a Markov random field. We review a not widely known approach to the problem, developed by Ukrainian researchers Schlesinger et al. in 1976, and show how it contributes to recent results, most importantly, those on the convex combination of trees and tree-reweighted max-product. In particular, we review Schlesinger et al.'s upper bound on the max-sum criterion, its minimization by equivalent transformations, its relation to the constraint satisfaction problem, the fact that this minimization is dual to a linear programming relaxation of the original problem, and the three kinds of consistency necessary for optimality of the upper bound. We revisit problems with Boolean variables and supermodular problems. We describe two algorithms for decreasing the upper bound. We present an example application for structural image analysis.
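On trees the LP relaxation reviewed above is tight, and max-sum is solved exactly by dynamic programming; the convex-combination-of-trees results build on this fact. A minimal sketch on a chain (a tree) with invented random score tables:

```python
import numpy as np

# Exact max-sum labeling on a chain of N variables with K labels each,
# by a forward DP pass and backtracking. q are unary scores, g pairwise.
K, N = 3, 4
rng = np.random.default_rng(1)
q = rng.integers(0, 5, size=(N, K)).astype(float)         # unary scores
g = rng.integers(0, 5, size=(N - 1, K, K)).astype(float)  # pairwise scores

# Forward pass: m[i, k] = best score of the prefix 0..i ending in label k.
m = q.copy()
back = np.zeros((N, K), dtype=int)
for i in range(1, N):
    cand = m[i - 1][:, None] + g[i - 1]   # (prev label, current label)
    back[i] = np.argmax(cand, axis=0)
    m[i] += np.max(cand, axis=0)

best = float(np.max(m[-1]))
labels = [int(np.argmax(m[-1]))]
for i in range(N - 1, 0, -1):             # backtrack the stored argmaxes
    labels.append(int(back[i][labels[-1]]))
labels.reverse()
print(best, labels)
```

On loopy graphs the same criterion is NP-hard, which is where the equivalent transformations and the LP upper bound of Schlesinger et al. come in.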
Relating the ac complex resistivity of the pinned vortex lattice to its shear modulus
NASA Astrophysics Data System (ADS)
Ong, N. P.; Wu, Hui
1997-07-01
We propose a way to determine the shear rigidity of the pinned vortex lattice in high-purity crystals from the dependence of its complex resistivity ρ̂ on frequency (ω). The lattice is modeled as an elastic medium pinned by a sparse, random distribution of defects. We relate ρ̂ to the velocity of the small subset of pinned vortices via the lattice propagator G(R,ω). Measuring ρ̂ versus ω is equivalent to determining G(R,ω) versus R. The range of G(R,ω) depends sensitively on the shear and tilt moduli. We describe the evaluation of G(R,ω) in two-dimensional (2D) and 3D lattices. The 2D analysis provides a close fit to the frequency dependence of Re ρ̂ measured in an untwinned crystal of YBa2Cu3O7 at 89 K in a field of 0.5 and 1.0 T. We compare our results with earlier models.
Campbell, Rebecca; Pierce, Steven J; Sharma, Dhruv B; Shaw, Jessica; Feeney, Hannah; Nye, Jeffrey; Schelling, Kristin; Fehler-Cabral, Giannina
2017-01-01
A growing number of U.S. cities have large numbers of untested sexual assault kits (SAKs) in police property facilities. Testing older kits and maintaining current case work will be challenging for forensic laboratories, creating a need for more efficient testing methods. We evaluated selective degradation methods for DNA extraction using actual case work from a sample of previously unsubmitted SAKs in Detroit, Michigan. We randomly assigned 350 kits to either standard or selective degradation testing methods and then compared DNA testing rates and CODIS entry rates between the two groups. Continuation-ratio modeling showed no significant differences, indicating that the selective degradation method had no decrement in performance relative to customary methods. Follow-up equivalence tests indicated that CODIS entry rates for the two methods did not differ by more than ±5%. Selective degradation methods required less personnel time for testing and scientific review than standard testing. © 2016 American Academy of Forensic Sciences.
Origin of negative resistivity slope in U-based ferromagnets
NASA Astrophysics Data System (ADS)
Havela, L.; Paukov, M.; Buturlim, V.; Tkach, I.; Mašková, S.; Dopita, M.
2018-05-01
Ultra-nanocrystalline UH3-based ferromagnets with TC ≈ 200 K exhibit a flat temperature dependence of electrical resistivity with a negative slope in both the ferromagnetic and paramagnetic range. The ordered state with randomness on the atomic scale, equivalent to a non-collinear ferromagnetism, can be affected by magnetic field, which suppresses the static magnetic disorder, reduces the resistivity and removes the negative slope. It is deduced that the dynamic magnetic disorder in the paramagnetic state can be conceived as a continuation of the static disorder in the ordered state. The experiments, performed for (UH3)0.78Mo0.12Ti0.10, demonstrate that the negative resistivity slope observed for numerous U-based intermetallics in the paramagnetic state can be due to the strong disorder effect on resistivity. The resulting weak localization, a quantum interference effect which increases resistivity, is gradually suppressed by enhanced temperature through electron-phonon scattering, which is inelastic in nature and removes the quantum coherence.
NASA Astrophysics Data System (ADS)
Rejiba, F.; Sagnard, F.; Schamper, C.
2011-07-01
Time domain reflectometry (TDR) is a proven, nondestructive method for the measurement of the permittivity and electrical conductivity of soils, using electromagnetic (EM) waves. Standard interpretation of TDR data leads to the estimation of the soil's equivalent electromagnetic properties since the wavelengths associated with the source signal are considerably greater than the microstructure of the soil. The aforementioned approximation tends to hide an important issue: the influence of the microstructure and phase configuration in the generation of a polarized electric field, which is complicated because of the presence of numerous length scales. In this paper, the influence of the microstructural distribution of each phase on the TDR signal has been studied. We propose a two-step EM modeling technique at the microscale: first, we define an equivalent grain including a thin shell of free water, and second, we solve Maxwell's equations over the discretized, statistically distributed triphasic porous medium. Modeling of the TDR probe with the soil sample was performed using a three-dimensional finite difference time domain scheme. The effectiveness of this hybrid homogenization approach is tested on unsaturated Nemours sand with narrow granulometric fractions. The comparisons made between numerical and experimental results are promising, despite significant assumptions concerning (1) the TDR probe head and the coaxial cable and (2) the assumed effective medium theory homogenization associated with the electromagnetic processes arising locally between the liquid and solid phases at the grain scale.
Likos, Christos N; Mladek, Bianca M; Gottwald, Dieter; Kahl, Gerhard
2007-06-14
We demonstrate the accuracy of the hypernetted chain closure and of the mean-field approximation for the calculation of the fluid-state properties of systems interacting by means of bounded and positive pair potentials with oscillating Fourier transforms. Subsequently, we prove the validity of a bilinear, random-phase density functional for arbitrary inhomogeneous phases of the same systems. On the basis of this functional, we calculate analytically the freezing parameters of the latter. We demonstrate explicitly that the stable crystals feature a lattice constant that is independent of density and whose value is dictated by the position of the negative minimum of the Fourier transform of the pair potential. This property is equivalent to the existence of clusters, whose population scales proportionally to the density. We establish that regardless of the form of the interaction potential and of the location on the freezing line, all cluster crystals have a universal Lindemann ratio Lf=0.189 at freezing. We further make an explicit link between the aforementioned density functional and the harmonic theory of crystals. This allows us to establish an equivalence between the emergence of clusters and the existence of negative Fourier components of the interaction potential. Finally, we make a connection between the class of models at hand and the system of infinite-dimensional hard spheres, when the limits of interaction steepness and space dimension are both taken to infinity in a particularly described fashion.
Granger Causality and Transfer Entropy Are Equivalent for Gaussian Variables
NASA Astrophysics Data System (ADS)
Barnett, Lionel; Barrett, Adam B.; Seth, Anil K.
2009-12-01
Granger causality is a statistical notion of causal influence based on prediction via vector autoregression. Developed originally in the field of econometrics, it has since found application in a broader arena, particularly in neuroscience. More recently transfer entropy, an information-theoretic measure of time-directed information transfer between jointly dependent processes, has gained traction in a similarly wide field. While it has been recognized that the two concepts must be related, the exact relationship has until now not been formally described. Here we show that for Gaussian variables, Granger causality and transfer entropy are entirely equivalent, thus bridging autoregressive and information-theoretic approaches to data-driven causal inference.
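The Gaussian equivalence can be checked numerically: Granger causality F = ln(σ²_restricted/σ²_full) and transfer entropy computed from Gaussian conditional variances satisfy F = 2T. A sketch with an invented bivariate VAR(1) in which Y drives X:

```python
import numpy as np

# Simulate a toy VAR(1): Y is autonomous AR(1), X depends on its own past
# and on Y's past, so there is genuine Y -> X influence to detect.
rng = np.random.default_rng(2)
n = 50000
X = np.zeros(n); Y = np.zeros(n)
for t in range(1, n):
    Y[t] = 0.7 * Y[t - 1] + rng.standard_normal()
    X[t] = 0.4 * X[t - 1] + 0.5 * Y[t - 1] + rng.standard_normal()

xt, x1, y1 = X[1:], X[:-1], Y[:-1]

def resid_var(*cols):
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, xt, rcond=None)
    r = xt - A @ coef
    return float(np.mean(r * r))

# Granger causality Y -> X from restricted vs full regression residuals.
F = np.log(resid_var(x1) / resid_var(x1, y1))

def cond_var(target, *conds):
    # Gaussian conditional variance via the Schur complement of the
    # joint covariance matrix.
    C = np.cov(np.column_stack([target, *conds]).T)
    return float(C[0, 0] - C[0, 1:] @ np.linalg.solve(C[1:, 1:], C[1:, 0]))

# Gaussian transfer entropy Y -> X from conditional entropies.
T = 0.5 * np.log(cond_var(xt, x1) / cond_var(xt, x1, y1))
print(F, 2 * T)   # numerically equal for this Gaussian system
```

F is estimated by least-squares regression, T independently from covariance algebra; their agreement on the same sample is the finite-data face of the paper's exact identity F = 2T.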
NASA Technical Reports Server (NTRS)
Ruder, M. E.; Alexander, S. S.
1985-01-01
The MAGSAT equivalent-source anomaly field evaluated at 325 km altitude depicts a prominent anomaly centered over southeast Georgia, which is adjacent to the high-amplitude positive Kentucky anomaly. To overcome the satellite resolution constraint in studying this anomaly, conventional geophysical data were included in the analysis: Bouguer gravity, seismic reflection and refraction, aeromagnetic, and in-situ stress-strain measurements. This integrated geophysical approach infers more specifically the nature and extent of the crustal and/or lithospheric source of the Georgia MAGSAT anomaly. Physical properties and tectonic evolution of the area are all important in the interpretation.
Balancing anisotropic curvature with gauge fields in a class of shear-free cosmological models
NASA Astrophysics Data System (ADS)
Thorsrud, Mikjel
2018-05-01
We present a complete list of general relativistic shear-free solutions in a class of anisotropic, spatially homogeneous and orthogonal cosmological models containing a collection of n independent p-form gauge fields, where p ∈ {0, 1, 2, 3}, in addition to standard ΛCDM matter fields modelled as perfect fluids. Here a (collection of) gauge field(s) balances anisotropic spatial curvature on the right-hand side of the shear propagation equation. The result is a class of solutions dynamically equivalent to standard FLRW cosmologies, with an effective curvature constant Keff that depends both on spatial curvature and the energy density of the gauge field(s). In the case of a single gauge field (n = 1) we show that the only spacetimes that admit such solutions are the LRS Bianchi type III, Bianchi type VI0 and Kantowski–Sachs metrics, which are dynamically equivalent to open (Keff < 0), flat (Keff = 0) and closed (Keff > 0) FLRW models, respectively. With a collection of gauge fields (n > 1), Bianchi type II also admits a shear-free solution (Keff > 0). We identify the LRS Bianchi type III solution as the unique shear-free solution with a gauge field Hamiltonian bounded from below in the entire class of models.
Constitutive Modeling of Nanotube/Polymer Composites with Various Nanotube Orientations
NASA Technical Reports Server (NTRS)
Odegard, Gregory M.; Gates, Thomas S.
2002-01-01
In this study, a technique has been proposed for developing constitutive models for polymer composite systems reinforced with single-walled carbon nanotubes (SWNT) with various orientations with respect to the bulk material coordinates. A nanotube, the local polymer adjacent to the nanotube, and the nanotube/polymer interface have been modeled as an equivalent-continuum fiber by using an equivalent-continuum modeling method. The equivalent-continuum fiber accounts for the local molecular structure and bonding information and serves as a means for incorporating micromechanical analyses for the prediction of bulk mechanical properties of SWNT/polymer composite. As an example, the proposed approach is used for the constitutive modeling of a SWNT/LaRC-SI (with a PmPV interface) composite system, with aligned nanotubes, three-dimensionally randomly oriented nanotubes, and nanotubes oriented with varying degrees of axisymmetry. It is shown that the Young's modulus is highly dependent on the SWNT orientation distribution.
Inflation with a graceful exit in a random landscape
NASA Astrophysics Data System (ADS)
Pedro, F. G.; Westphal, A.
2017-03-01
We develop a stochastic description of small-field inflationary histories with a graceful exit in a random potential whose Hessian is a Gaussian random matrix as a model of the unstructured part of the string landscape. The dynamical evolution in such a random potential from a small-field inflation region towards a viable late-time de Sitter (dS) minimum maps to the dynamics of Dyson Brownian motion describing the relaxation of non-equilibrium eigenvalue spectra in random matrix theory. We analytically compute the relaxation probability in a saddle point approximation of the partition function of the eigenvalue distribution of the Wigner ensemble describing the mass matrices of the critical points. When applied to small-field inflation in the landscape, this leads to an exponentially strong bias against small-field ranges and an upper bound N ≪ 10 on the number of light fields N participating during inflation from the non-observation of negative spatial curvature.
SU-F-T-408: On the Determination of Equivalent Squares for Rectangular Small MV Photon Fields
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sauer, OA; Wegener, S; Exner, F
Purpose: It is common practice to tabulate dosimetric data like output factors, scatter factors and detector signal correction factors for a set of square fields. In order to get the data for an arbitrary field, it is mapped to an equivalent square having the same scatter as the field of interest. For rectangular fields, both tabulated data and empiric formulas exist. We tested the applicability of such rules for very small fields. Methods: Using the Monte-Carlo method (EGSnrc-doseRZ), the dose to a point in 10 cm depth in water was calculated for cylindrical impinging fluence distributions. Radii were from 0.5 mm to 11.5 mm with 1 mm thickness of the rings. Different photon energies were investigated. With these data a matrix was constructed assigning the amount of dose to the field center to each matrix element. By summing up the elements belonging to a certain field, the dose for an arbitrary point in 10 cm depth could be determined. This was done for rectangles up to 21 mm side length. Comparing the dose to square-field results, equivalent squares could be assigned. The results were compared to using the geometrical mean and the 4×area/perimeter rule. Results: For side-length differences less than 2 mm, the difference between all methods was in general less than 0.2 mm. For more elongated fields, relevant differences of more than 1 mm, and up to 3 mm for the fields investigated, occurred. The mean square side length calculated from both empiric formulas fitted much better, deviating hardly more than 1 mm and only for the very elongated fields. Conclusion: For small rectangular photon fields deviating only moderately from square, both investigated empiric methods are sufficiently accurate. As the deviations often differ in sign, using their mean improves the accuracy and the usable elongation range. For ratios larger than 2, Monte-Carlo generated data are recommended. SW is funded by Deutsche Forschungsgemeinschaft (SA481/10-1).
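The two empiric rules compared in the abstract are easy to state explicitly for a rectangle of sides a and b: the geometric mean √(ab) and the standard equivalent-square side 4·area/perimeter = 2ab/(a+b). A sketch (field sizes invented, in mm):

```python
import math

def eq_square_geometric(a: float, b: float) -> float:
    """Equivalent square side as the geometric mean of the rectangle sides."""
    return math.sqrt(a * b)

def eq_square_area_perimeter(a: float, b: float) -> float:
    """Equivalent square side from the 4*area/perimeter rule: 2ab/(a+b)."""
    return 4 * (a * b) / (2 * (a + b))

for a, b in [(10, 10), (8, 10), (4, 16)]:
    print(a, b, eq_square_geometric(a, b), eq_square_area_perimeter(a, b))
```

For nearly square fields the two rules almost coincide, but for an elongated 4 mm × 16 mm field they diverge by well over 1 mm — mirroring the abstract's finding that elongated small fields need Monte-Carlo data rather than either rule alone.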
Snow parameters from Nimbus-6 electrically scanned microwave radiometer. [(ESMR-6)
NASA Technical Reports Server (NTRS)
Abrams, G.; Edgerton, A. T.
1977-01-01
Two sites in Canada were selected for detailed analysis of the ESMR-6/snow relationships. Data were analyzed for February 1976 for site 1 and January, February and March 1976 for site 2. Snowpack water equivalents were less than 4.5 inches for site 1 and, depending on the month, were between 2.9 and 14.5 inches for site 2. A statistically significant relationship was found between ESMR-6 measurements and snowpack water equivalents for the site 2 February and March data. Associated analysis findings presented are the effects of random measurement errors, snow site physiography, and weather conditions on the ESMR-6/snow relationship.
Mechanical equivalent of quantum heat engines.
Arnaud, Jacques; Chusseau, Laurent; Philippe, Fabrice
2008-06-01
Quantum heat engines employ as working agents multilevel systems instead of classical gases. We show that under some conditions quantum heat engines are equivalent to a series of reservoirs at different altitudes containing balls of various weights. A cycle consists of picking up at random a ball from one reservoir and carrying it to the next, thereby performing or absorbing some work. In particular, quantum heat engines, employing two-level atoms as working agents, are modeled by reservoirs containing balls of weight 0 or 1. The mechanical model helps us prove that the maximum efficiency of quantum heat engines is the Carnot efficiency. Heat pumps and negative temperatures are considered.
NASA Astrophysics Data System (ADS)
Graham, Wendy D.; Tankersley, Claude D.
1994-05-01
Stochastic methods are used to analyze two-dimensional steady groundwater flow subject to spatially variable recharge and transmissivity. Approximate partial differential equations are developed for the covariances and cross-covariances between the random head, transmissivity and recharge fields. Closed-form solutions of these equations are obtained using Fourier transform techniques. The resulting covariances and cross-covariances can be incorporated into a Bayesian conditioning procedure which provides optimal estimates of the recharge, transmissivity and head fields given available measurements of any or all of these random fields. Results show that head measurements contain valuable information for estimating the random recharge field. However, when recharge is treated as a spatially variable random field, the value of head measurements for estimating the transmissivity field can be reduced considerably. In a companion paper, the method is applied to a case study of the Upper Floridan Aquifer in NE Florida.
The random field Blume-Capel model revisited
NASA Astrophysics Data System (ADS)
Santos, P. V.; da Costa, F. A.; de Araújo, J. M.
2018-04-01
We have revisited the mean-field treatment of the Blume-Capel model in the presence of a discrete random magnetic field as introduced by Kaufman and Kanner (1990). The magnetic field (H) versus temperature (T) phase diagrams for given values of the crystal field D were recovered in accordance with Kaufman and Kanner's original work. However, our main goal in the present work was to investigate the distinct structures of the crystal field versus temperature phase diagrams as the random magnetic field is varied, because similar models have presented reentrant phenomena due to randomness. Following previous works we have classified the distinct phase diagrams according to five different topologies. The topological structure of the phase diagrams is maintained for both the H-T and D-T cases. Although the phase diagrams exhibit a richness of multicritical phenomena, we did not find any reentrant effect such as has been seen in similar models.
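The mean-field self-consistency behind such phase diagrams can be sketched for a spin-1 model in a bimodal random field ±H averaged with equal weights. The coordination number, couplings, and simple fixed-point iteration below are illustrative assumptions, not Kaufman and Kanner's exact treatment:

```python
import math

# Spin-1 Blume-Capel single site: states s in {-1, 0, +1} with weights
# exp(-beta*(D*s^2 - h*s)), effective field h = z*J*m + eps*H, and the
# magnetization m averaged over the two random-field values eps = +/-1.
def magnetization(T, D, H, J=1.0, z=4, tol=1e-12):
    beta = 1.0 / T
    m = 0.5                                  # ordered starting guess
    for _ in range(10000):
        m_new = 0.0
        for eps in (+1, -1):
            h = z * J * m + eps * H
            w = math.exp(-beta * D)
            m_new += 0.5 * (2 * w * math.sinh(beta * h)
                            / (1 + 2 * w * math.cosh(beta * h)))
        if abs(m_new - m) < tol:
            break
        m = m_new
    return m

print(magnetization(T=1.0, D=0.0, H=0.0))   # ferromagnetic phase
print(magnetization(T=6.0, D=0.0, H=0.0))   # paramagnetic phase
```

Scanning such fixed points over (T, D, H) is how the mean-field phase boundaries, and hence the topologies classified in the paper, are traced out; a strong random field suppresses the ordered solution just as raising T does.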
The variability of atmospheric equivalent temperature for radar altimeter range correction
NASA Technical Reports Server (NTRS)
Liu, W. Timothy; Mock, Donald
1990-01-01
Two sets of data were used to test the validity of the presently used approximation for radar altimeter range correction due to atmospheric water vapor. The approximation includes an assumption of constant atmospheric equivalent temperature. The first data set includes monthly, three-dimensional, gridded temperature and humidity fields over global oceans for a 10-year period, and the second comprises daily or semidaily rawinsonde data at 17 island stations for a 7-year period. It is found that the standard method underestimates the variability of the equivalent temperature, and the approximation could introduce errors of 2 cm for monthly means. The equivalent temperature is found to have a strong meridional gradient, and the highest temporal variabilities are found over western boundary currents. The study affirms that the atmospheric water vapor is a good predictor for both the equivalent temperature and the range correction. A relation is proposed to reduce the error.
Bretschneider, Wiebke; Elger, Bernice Simone
2014-09-01
Health care in prison, and particularly the health care of older prisoners, is an increasingly important topic due to the growth of the ageing prisoner population. The aim of this paper is to gain insight into the approaches used in the provision of equivalent health care to ageing prisoners and to confront the intuitive definition of equivalent care with the practical and ethical challenges experienced by individuals working in this field. Forty interviews were conducted with experts working in the prison setting in three Western European countries to discover their views on prison health care. Experts indicated that the provision of equivalent care in prison is difficult mostly due to four factors: variability of care in different prisons, gatekeeper systems, lack of personnel, and delays in providing access. This lack of equivalence could be addressed by allocating adequate budgets and developing standards for health care in prison.
Equivalent radiation source of 3D package for electromagnetic characteristics analysis
NASA Astrophysics Data System (ADS)
Li, Jun; Wei, Xingchang; Shu, Yufei
2017-10-01
An equivalent radiation source method is proposed in this paper to characterize the electromagnetic emission and interference of complex three-dimensional integrated circuits (ICs). The method uses amplitude-only near-field scanning data to reconstruct an equivalent magnetic dipole array, and a differential evolution optimization algorithm is employed to extract the locations, orientations and moments of those dipoles. By importing the equivalent dipole model into a 3D full-wave simulator together with the victim circuit model, electromagnetic interference issues in mixed RF/digital systems can be well predicted. A commercial IC is used to validate the accuracy and efficiency of the proposed method. The coupled power at the victim antenna port calculated from the equivalent radiation source is compared with measured data. Good consistency is obtained, which confirms the validity and efficiency of the method. Project supported by the National Natural Science Foundation of China (No. 61274110).
Deng, Ruixiang; Li, Meiling; Muneer, Badar; Zhu, Qi; Shi, Zaiying; Song, Lixin; Zhang, Tao
2018-01-01
Optically Transparent Microwave Metamaterial Absorber (OTMMA) is of significant use in both civil and military field. In this paper, equivalent circuit model is adopted as springboard to navigate the design of OTMMA. The physical model and absorption mechanisms of ideal lightweight ultrathin OTMMA are comprehensively researched. Both the theoretical value of equivalent resistance and the quantitative relation between the equivalent inductance and equivalent capacitance are derived for design. Frequency-dependent characteristics of theoretical equivalent resistance are also investigated. Based on these theoretical works, an effective and controllable design approach is proposed. To validate the approach, a wideband OTMMA is designed, fabricated, analyzed and tested. The results reveal that high absorption more than 90% can be achieved in the whole 6~18 GHz band. The fabricated OTMMA also has an optical transparency up to 78% at 600 nm and is much thinner and lighter than its counterparts. PMID:29324686
Calibration of a mosfet detection system for 6-MV in vivo dosimetry.
Scalchi, P; Francescon, P
1998-03-01
Metal oxide semiconductor field-effect transistor (MOSFET) detectors were calibrated to perform in vivo dosimetry during 6-MV treatments, both in normal setup and total body irradiation (TBI) conditions. MOSFET water-equivalent depth, dependence of the calibration factors (CFs) on field size, MOSFET orientation, bias supply, accumulated dose, incidence angle, temperature, and spoiler-skin distance in TBI setup were investigated. MOSFET reproducibility was verified. The correlation between the water-equivalent midplane depth and the ratio of the exit MOSFET readout to the entrance MOSFET readout was studied. MOSFET midplane dosimetry in TBI setup was compared with thermoluminescent dosimetry in an anthropomorphic phantom. By using ionization chamber measurements, the TBI midplane dosimetry was also verified in the presence of cork as a lung substitute. The water-equivalent depth of the MOSFET is about 0.8 mm or 1.8 mm, depending on which sensor side faces the beam. The field size also affects this quantity; Monte Carlo simulations attribute this behavior to changes in the mean energy of the contaminating electrons. The CFs vary linearly with the side of the square field, for fields ranging from 5 x 5 to 30 x 30 cm2. In TBI setup, varying the spoiler-skin distance between 5 mm and 10 cm affects the CFs within 5%. The MOSFET reproducibility is about 3% (2 SD) for the doses normally delivered to patients. The effect of accumulated dose on the sensor response is negligible. For beam incidence ranging from 0 degrees to 90 degrees, the MOSFET response varies within 7%. No monotonic correlation between the sensor response and the temperature is apparent. Good correlation between the water-equivalent midplane depth and the ratio of the exit MOSFET readout to the entrance MOSFET readout was found (the correlation coefficient is about 1).
The MOSFET midplane dosimetry relevant to the anthropomorphic phantom irradiation agrees with TLD dosimetry within 5%. Ionization chamber and MOSFET midplane dosimetry in inhomogeneous phantoms agree within 2%. MOSFET characteristics are suitable for in vivo dosimetry relevant to 6-MV treatments, both in normal and TBI setup. TBI midplane dosimetry using MOSFETs is valid also in the presence of the lung, which is the most critical organ, and shows that calculating the lung attenuator thicknesses from density alone is not correct. Our MOSFET dosimetry system can also be used to determine the surface dose by using the water-equivalent depth and extrapolation methods. This procedure depends on the field size used.
A New Algorithm with Plane Waves and Wavelets for Random Velocity Fields with Many Spatial Scales
NASA Astrophysics Data System (ADS)
Elliott, Frank W.; Majda, Andrew J.
1995-03-01
A new Monte Carlo algorithm for constructing and sampling stationary isotropic Gaussian random fields with power-law energy spectrum, infrared divergence, and fractal self-similar scaling is developed here. The theoretical basis for this algorithm is the fact that such a random field is well approximated by a superposition of random one-dimensional plane waves involving a fixed finite number of directions. In general each one-dimensional plane wave is the sum of a random shear layer and a random acoustical wave. These one-dimensional random plane waves are then simulated by a wavelet Monte Carlo method for a single space variable developed recently by the authors. The computational results reported in this paper demonstrate remarkably low variance and economical representation of such Gaussian random fields through this new algorithm. In particular, the velocity structure function for an incompressible isotropic Gaussian random field in two space dimensions with the Kolmogoroff spectrum can be simulated accurately over 12 decades with only 100 realizations of the algorithm, with the scaling exponent accurate to 1.1% and the constant prefactor accurate to 6%; in fact, the exponent of the velocity structure function can be computed over 12 decades to within 3.3% with only 10 realizations. Furthermore, only 46,592 active computational elements are utilized in each realization to achieve these results for 12 decades of scaling behavior.
SMERFS: Stochastic Markov Evaluation of Random Fields on the Sphere
NASA Astrophysics Data System (ADS)
Creasey, Peter; Lang, Annika
2018-04-01
SMERFS (Stochastic Markov Evaluation of Random Fields on the Sphere) creates large realizations of random fields on the sphere. It uses a fast algorithm, based on Markov properties and one-dimensional fast Fourier transforms, that generates samples on an n × n grid in O(n^2 log n) operations and efficiently derives the necessary conditional covariance matrices.
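The FFT ingredient of such samplers is easy to illustrate in one dimension: coloring complex white noise by the square root of a target power spectrum and inverse-transforming yields a stationary periodic Gaussian field. The sketch below is a generic one-dimensional illustration of this spectral-sampling idea, not the SMERFS spherical algorithm itself.

```python
import numpy as np

def sample_periodic_field(spectrum, rng):
    """Draw a stationary periodic Gaussian field with a prescribed power spectrum.

    spectrum: nonnegative power S_k for each of the n Fourier modes.
    With this normalization the pointwise variance of the returned field
    equals mean(spectrum).
    """
    n = len(spectrum)
    # complex white noise, one independent sample per Fourier mode
    noise = rng.normal(size=n) + 1j * rng.normal(size=n)
    # color the noise by sqrt(S_k) and transform back to real space
    field = np.fft.ifft(np.sqrt(spectrum) * noise) * np.sqrt(n)
    return field.real
```

For a flat spectrum this reduces to white noise with unit variance; a power-law spectrum in place of `np.ones(n)` would give the long-range-correlated fields discussed in the abstracts above.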
Rai, K.S.F.
1994-01-11
A device for measuring dose equivalents in neutron radiation fields is described. The device includes nested symmetrical hemispheres (forming spheres) of different neutron-moderating materials that allow the measurement of dose equivalents from 0.025 eV to beyond 1 GeV. The layers of moderating material surround a spherical neutron counter. The neutron counter is connected by an electrical cable to an electrical sensing means which interprets the signal from the neutron counter at the center of the moderating spheres. The spherical shape of the device allows for accurate measurement of dose equivalents regardless of its positioning. 2 figures.
Computational micromechanics of woven composites
NASA Technical Reports Server (NTRS)
Hopkins, Dale A.; Saigal, Sunil; Zeng, Xiaogang
1991-01-01
The bounds on the equivalent elastic material properties of a composite are addressed here by a unified energy approach which is valid for unidirectional as well as 2D and 3D woven composites. The unit cell considered is assumed to consist, first, of the actual composite arrangement of the fibers and matrix material, and then, of an equivalent pseudohomogeneous material. Equating the strain energies of the two arrangements yields an estimate of the upper bound for the equivalent material properties; successive increases in the order of the displacement field assumed in the composite arrangement produce successively improved upper-bound estimates.
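The uniform-strain energy argument is the same one that yields the classical Voigt upper bound (and, dually, the Reuss lower bound) for a two-phase composite. A sketch under that simplified two-phase assumption follows; the fiber/matrix moduli used in the usage note are illustrative values, not data from this paper.

```python
def voigt_reuss_bounds(E_fiber, E_matrix, v_fiber):
    """Voigt (uniform-strain, upper) and Reuss (uniform-stress, lower)
    bounds on the equivalent Young's modulus of a two-phase composite.

    E_fiber, E_matrix: phase moduli; v_fiber: fiber volume fraction in [0, 1].
    """
    v_matrix = 1.0 - v_fiber
    e_voigt = v_fiber * E_fiber + v_matrix * E_matrix        # rule of mixtures
    e_reuss = 1.0 / (v_fiber / E_fiber + v_matrix / E_matrix)  # inverse mixtures
    return e_voigt, e_reuss
```

For instance, carbon-fiber-like moduli E_fiber = 230 GPa, E_matrix = 3.5 GPa at 60% fiber fraction give a Voigt bound of 139.4 GPa against a much lower Reuss bound near 8.6 GPa, showing how wide these elementary bounds can be and why the refined displacement-field estimates described above are useful.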
NASA Astrophysics Data System (ADS)
Agosteo, S.; Bedogni, R.; Caresana, M.; Charitonidis, N.; Chiti, M.; Esposito, A.; Ferrarini, M.; Severino, C.; Silari, M.
2012-12-01
The accurate determination of the ambient dose equivalent in the mixed neutron-photon fields encountered around high-energy particle accelerators still represents a challenging task. The main complexity arises from the extreme variability of the neutron energy, which spans over 10 orders of magnitude or more. Operational survey instruments, whose response function attempts to mimic the fluence-to-ambient dose equivalent conversion coefficient up to GeV neutrons, are available on the market, but their response is not fully reliable over the entire energy range. Extended-range rem counters (ERRCs) do not require the exact knowledge of the energy distribution of the neutron field, and the calibration can be done with a source spectrum. If the actual neutron field has an energy distribution different from the calibration spectrum, the measurement is affected by an added uncertainty related to the partial overlap of the fluence-to-ambient dose equivalent conversion curve and the response function. For this reason their operational use should always be preceded by an "in-field" calibration, i.e. a calibration made against a reference instrument exposed in the same field where the survey-meter will be employed. In practice the extended-range Bonner Sphere Spectrometer (ERBSS) is the only device which can serve as reference instrument in these fields, because of its wide energy range and the possibility to assess the neutron fluence and the ambient dose equivalent (H*(10)) values with the appropriate accuracy.
Nevertheless, the experience gained by a number of experimental groups suggests that mandatory conditions for obtaining accurate results in workplaces are: (1) the use of a well-established response matrix, thus implying validation campaigns in reference monochromatic neutron fields, (2) the expert and critical use of suitable unfolding codes, and (3) the performance test of the whole system (experimental set-up, elaboration and unfolding procedures) in a well-controlled workplace field. The CERF (CERN-EU high-energy reference field) facility is a unique example of such a field, where a number of experimental campaigns and Monte Carlo simulations have been performed over the past years. With the aim of performing this kind of workplace performance test, four different ERBSS with different degrees of validation, operated by three groups (CERN, INFN-LNF and Politecnico of Milano), were exposed in two fixed positions at CERF. Using different unfolding codes (MAXED, GRAVEL, FRUIT and FRUIT SGM), the experimental data were analyzed to provide the neutron spectra and the related dosimetric quantities. The results allow assessing the overall performance of each ERBSS and of the unfolding codes, as well as comparing the performance of three ERRCs when used in a neutron field with an energy distribution different from the calibration spectrum.
Pitch chroma discrimination, generalization, and transfer tests of octave equivalence in humans.
Hoeschele, Marisa; Weisman, Ronald G; Sturdy, Christopher B
2012-11-01
Octave equivalence occurs when notes separated by an octave (a doubling in frequency) are judged as being perceptually similar. Considerable evidence points to the importance of the octave in music and speech. Yet, experimental demonstration of octave equivalence has been problematic. Using go/no-go operant discrimination and generalization, we studied octave equivalence in humans. In Experiment 1, we found that a procedure that failed to show octave equivalence in European starlings also failed in humans. In Experiment 2, we modified the procedure to control for the effects of pitch height perception by training participants in Octave 4 and testing in Octave 5. We found that the pattern of responding developed by discrimination training in Octave 4 generalized to Octave 5. We replicated and extended our findings in Experiment 3 by adding a transfer phase: Participants were trained with either the same or a reversed pattern of rewards in Octave 5. Participants transferred easily to the same pattern of reward in Octave 5 but struggled to learn the reversed pattern. We provided minimal instruction, presented no ordered sequences of notes, and used only sine-wave tones, but participants nonetheless constructed pitch chroma information from randomly ordered sequences of notes. Training in music weakly hindered octave generalization but moderately facilitated both positive and negative transfer.
Using the Coronal Evolution to Successfully Forward Model CMEs' In Situ Magnetic Profiles
NASA Astrophysics Data System (ADS)
Kay, C.; Gopalswamy, N.
2017-12-01
Predicting the effects of a coronal mass ejection (CME) impact requires knowing if impact will occur, which part of the CME impacts, and its magnetic properties. We explore the relation between CME deflections and rotations, which change the position and orientation of a CME, and the resulting magnetic profiles at 1 AU. For 45 STEREO-era, Earth-impacting CMEs, we determine the solar source of each CME, reconstruct its coronal position and orientation, and perform a ForeCAT (Forecasting a CME's Altered Trajectory) simulation of the coronal deflection and rotation. From the reconstructed and modeled CME deflections and rotations, we determine the solar cycle variation and correlations with CME properties. We assume no evolution between the outer corona and 1 AU and use the ForeCAT results to drive the ForeCAT In situ Data Observer (FIDO) in situ magnetic field model, allowing for comparisons with ACE and Wind observations. We do not attempt to reproduce the arrival time. On average FIDO reproduces the in situ magnetic field for each vector component with an error equivalent to 35% of the average total magnetic field strength when the total modeled magnetic field is scaled to match the average observed value. Random walk best fits distinguish between ForeCAT's ability to determine FIDO's input parameters and the limitations of the simple flux rope model. These best fits reduce the average error to 30%. The FIDO results are sensitive to changes of order a degree in the CME latitude, longitude, and tilt, suggesting that accurate space weather predictions require accurate measurements of a CME's position and orientation.
2011-01-01
Background We present the design, methods and population characteristics of a large community trial that assessed the efficacy of a weekly supplement containing vitamin A or beta-carotene, at recommended dietary levels, in reducing maternal mortality from early gestation through 12 weeks postpartum. We identify challenges faced and report solutions in implementing an intervention trial under low-resource, rural conditions, including the importance of population choice in promoting generalizability, maintaining rigorous data quality control to reduce inter- and intra-worker variation, and optimizing efficiencies in information and resources flow from and to the field. Methods This trial was a double-masked, cluster-randomized, dual intervention, placebo-controlled trial in a contiguous rural area of ~435 sq km with a population of ~650,000 in Gaibandha and Rangpur Districts of Northwestern Bangladesh. Approximately 120,000 married women of reproductive age underwent 5-weekly home surveillance, of whom ~60,000 were detected as pregnant, enrolled into the trial and gave birth to ~44,000 live-born infants. Upon enrollment, at ~9 weeks' gestation, pregnant women received a weekly oral supplement containing vitamin A (7000 μg retinol equivalents (RE)), beta-carotene (42 mg, or ~7000 μg RE) or a placebo through 12 weeks postpartum, according to prior randomized allocation of their cluster of residence. Systems described include enlistment and 5-weekly home surveillance for pregnancy based on menstrual history and urine testing, weekly supervised supplementation, periodic risk factor interviews, maternal and infant vital outcome monitoring, birth defect surveillance and clinical/biochemical substudies. Results The primary outcome was pregnancy-related mortality assessed for 3 months following parturition.
Secondary outcomes included fetal loss due to miscarriage or stillbirth, infant mortality under three months of age, maternal obstetric and infectious morbidity, infant infectious morbidity, maternal and infant micronutrient status, fetal and infant growth and prematurity, external birth defects and postnatal infant growth to 3 months of age. Conclusion Aspects of study site selection and its "resonance" with national and rural qualities of Bangladesh, the trial's design, methods and allocation group comparability achieved by randomization, field procedures and innovative approaches to solving challenges in trial conduct are described and discussed. This trial is registered with http://Clinicaltrials.gov as protocol NCT00198822. PMID:21510905
Labrique, Alain B; Christian, Parul; Klemm, Rolf D W; Rashid, Mahbubur; Shamim, Abu Ahmed; Massie, Allan; Schulze, Kerry; Hackman, Andre; West, Keith P
2011-04-21
We present the design, methods and population characteristics of a large community trial that assessed the efficacy of a weekly supplement containing vitamin A or beta-carotene, at recommended dietary levels, in reducing maternal mortality from early gestation through 12 weeks postpartum. We identify challenges faced and report solutions in implementing an intervention trial under low-resource, rural conditions, including the importance of population choice in promoting generalizability, maintaining rigorous data quality control to reduce inter- and intra- worker variation, and optimizing efficiencies in information and resources flow from and to the field. This trial was a double-masked, cluster-randomized, dual intervention, placebo-controlled trial in a contiguous rural area of ~435 sq km with a population of ~650,000 in Gaibandha and Rangpur Districts of Northwestern Bangladesh. Approximately 120,000 married women of reproductive age underwent 5-weekly home surveillance, of whom ~60,000 were detected as pregnant, enrolled into the trial and gave birth to ~44,000 live-born infants. Upon enrollment, at ~ 9 weeks' gestation, pregnant women received a weekly oral supplement containing vitamin A (7000 ug retinol equivalents (RE)), beta-carotene (42 mg, or ~7000 ug RE) or a placebo through 12 weeks postpartum, according to prior randomized allocation of their cluster of residence. Systems described include enlistment and 5-weekly home surveillance for pregnancy based on menstrual history and urine testing, weekly supervised supplementation, periodic risk factor interviews, maternal and infant vital outcome monitoring, birth defect surveillance and clinical/biochemical substudies. The primary outcome was pregnancy-related mortality assessed for 3 months following parturition. 
Secondary outcomes included fetal loss due to miscarriage or stillbirth, infant mortality under three months of age, maternal obstetric and infectious morbidity, infant infectious morbidity, maternal and infant micronutrient status, fetal and infant growth and prematurity, external birth defects and postnatal infant growth to 3 months of age. Aspects of study site selection and its "resonance" with national and rural qualities of Bangladesh, the trial's design, methods and allocation group comparability achieved by randomization, field procedures and innovative approaches to solving challenges in trial conduct are described and discussed. This trial is registered with http://Clinicaltrials.gov as protocol NCT00198822.
Restoration of dimensional reduction in the random-field Ising model at five dimensions
NASA Astrophysics Data System (ADS)
Fytas, Nikolaos G.; Martín-Mayor, Víctor; Picco, Marco; Sourlas, Nicolas
2017-04-01
The random-field Ising model is one of the few disordered systems where the perturbative renormalization group can be carried out to all orders of perturbation theory. This analysis predicts dimensional reduction, i.e., that the critical properties of the random-field Ising model in D dimensions are identical to those of the pure Ising ferromagnet in D-2 dimensions. It is well known that dimensional reduction is not true in three dimensions, thus invalidating the perturbative renormalization group prediction. Here, we report high-precision numerical simulations of the 5D random-field Ising model at zero temperature. We illustrate universality by comparing different probability distributions for the random fields. We compute all the relevant critical exponents (including the critical slowing down exponent for the ground-state finding algorithm), as well as several other renormalization-group invariants. The estimated values of the critical exponents of the 5D random-field Ising model are statistically compatible with those of the pure 3D Ising ferromagnet. These results support the restoration of dimensional reduction at D = 5. We thus conclude that the failure of the perturbative renormalization group is a low-dimensional phenomenon. We close our contribution by comparing universal quantities for the random-field problem at dimensions 3 ≤ D < 6 to their values in the pure Ising model at D-2 dimensions, and we provide a clear verification of the Rushbrooke equality at all studied dimensions.
Gao, Yue-Ming; Wu, Zhu-Mei; Pun, Sio-Hang; Mak, Peng-Un; Vai, Mang-I; Du, Min
2016-04-02
Existing research on human channel modeling for galvanic coupling intra-body communication (IBC) is primarily focused on the human body itself. Although galvanic coupling IBC is less disturbed by external influences during signal transmission, in real measurement scenarios there are inevitable factors such as the parasitic impedance of the electrodes and the impedance matching of the transceiver which might lead to deviations between the human model and in vivo measurements. This paper proposes a field-circuit finite element method (FEM) model of galvanic coupling IBC in a real measurement environment to estimate the human channel gain. First, an anisotropic concentric cylinder model of electric-field intra-body communication for human limbs was developed based on the galvanic method. Then the electric field model was combined with several impedance elements, representing the parasitic impedance of the electrodes and the input and output impedance of the transceiver, to establish a field-circuit FEM model. The results indicated that a circuit module equivalent to the external factors can be added to the field-circuit model, which makes the model more complete, and the estimates based on the proposed field-circuit model are in better agreement with the corresponding measurement results.
Mechanics of sucking: comparison between bottle feeding and breastfeeding
2010-01-01
Background There is very little evidence on the similarity of the mechanics of breastfeeding and bottle feeding. We assessed the mechanics of sucking in exclusive breastfeeding, exclusive bottle feeding, and mixed feeding. The hypothesis was that the physiological pattern of suckling movements differs depending on the type of feeding: breastfed babies have suckling movements at the breast that differ from the teat-suckling movements of bottle-fed babies, while children with mixed feeding mix both types of suckling movements. Methods Cross-sectional study of infants aged 21-28 days with only maternal feeding or bottle feeding (234 mother-infant pairs), and a randomized open cross-over field trial in newborns aged 21-28 days and babies aged 3-5 months with mixed feeding (125 mother-infant pairs). Primary outcome measures were sucks and pauses. Results Infants aged 21-28 days who were exclusively bottle-fed showed fewer sucks and the same number of pauses, but of longer duration, compared to breastfeeding. In mixed feeding, bottle feeding compared to breastfeeding showed the same number of sucks but fewer and shorter pauses, both at 21-28 days and at 3-5 months. The mean number of breastfeedings in a day (in the mixed-feed group) was 5.83 ± 1.93 at 21-28 days and 4.42 ± 1.67 at 3-5 months. In the equivalence analysis of the mixed-feed group, the 95% confidence interval for the bottle feeding/breastfeeding ratio lay outside the range of equivalence, indicating 5.9-8.7% fewer suction movements, and fewer and shorter pauses, in bottle feeding compared with breastfeeding. Conclusions The mechanics of sucking in mixed feeding lay outside the range of equivalence when comparing bottle feeding with breastfeeding, although differences were small. Children with mixed feeding would mix both types of sucking movements (breastfeeding and bottle feeding) during the learning stage and adopt their own pattern. PMID:20149217
NASA Astrophysics Data System (ADS)
Shankar Kumar, Ravi; Goswami, A.
2015-06-01
The article scrutinises the learning effect of the unit production time on the optimal lot size for an uncertain and imprecise imperfect production process, wherein shortages are permissible and partially backlogged. Contextually, we contemplate the fuzzy chance of the production process shifting from an 'in-control' state to an 'out-of-control' state, and the rework of imperfect-quality produced items. The elapsed time until the process shifts is considered as a fuzzy random variable, and consequently, the fuzzy random total cost per unit time is derived. Fuzzy expectation and the signed distance method are used to transform the fuzzy random cost function into an equivalent crisp function. The results are illustrated with the help of a numerical example. Finally, sensitivity analysis of the optimal solution with respect to major parameters is carried out.
Random distributed feedback fiber laser at 2.1 μm.
Jin, Xiaoxi; Lou, Zhaokai; Zhang, Hanwei; Xu, Jiangming; Zhou, Pu; Liu, Zejin
2016-11-01
We demonstrate a random distributed feedback fiber laser at 2.1 μm. A high-power pulsed Tm-doped fiber laser operating at 1.94 μm with a temporal duty ratio of 30% was employed as the pump laser to increase the equivalent incident pump power. A piece of 150 m highly GeO2-doped silica fiber that provides strong Raman gain and random distributed feedback was used as the gain medium. The maximum output power reached 0.5 W with an optical efficiency of 9%, which could be further improved by more pump power and an optimized fiber length. To the best of our knowledge, this is the first demonstration of a random distributed feedback fiber laser in the 2 μm band based on Raman gain.
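The duty-cycle trick is simple arithmetic: pulsing the pump at the same average power raises the equivalent (peak) pump power by the inverse of the duty ratio. A sketch with illustrative numbers (the 3 W average power below is an assumed example, not a figure from the paper):

```python
def equivalent_peak_power(average_power_w, duty_ratio):
    """Peak pump power of a pulsed source with the given temporal duty ratio.

    average_power_w: time-averaged power in watts; duty_ratio in (0, 1].
    """
    if not 0.0 < duty_ratio <= 1.0:
        raise ValueError("duty ratio must be in (0, 1]")
    return average_power_w / duty_ratio
```

At the paper's 30% duty ratio, a hypothetical 3 W average pump corresponds to a 10 W equivalent incident pump power during the pulse, which is why pulsed pumping helps reach the Raman threshold.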
Therapeutic equivalence of budesonide/formoterol delivered via breath-actuated inhaler vs pMDI.
Murphy, Kevin R; Dhand, Rajiv; Trudo, Frank; Uryniak, Tom; Aggarwal, Ajay; Eckerwall, Göran
2015-02-01
To assess equivalence of twice daily (bid) budesonide/formoterol (BUD/FM) 160/4.5 μg via breath-actuated metered-dose inhaler (BAI) versus pressurized metered-dose inhaler (pMDI). This 12-week, double-blind, multicenter, parallel-group study randomized adolescents and adults (aged ≥12 years) with asthma (and ≥3 months daily use of inhaled corticosteroids) to BUD/FM BAI 2 × 160/4.5 μg bid, BUD/FM pMDI 2 × 160/4.5 μg bid, or BUD pMDI 2 × 160 μg bid. Inclusion required prebronchodilator forced expiratory volume in one second (FEV1) ≥45 to ≤85% predicted, and reversibility of ≥12% in FEV1 (ages 12 to <18 years) or ≥12% and 200 mL (ages ≥18 years). Confirmation that 60-min postdose FEV1 response to BUD/FM pMDI was superior to BUD pMDI was required before equivalence testing. Therapeutic equivalence was shown by a treatment effect ratio of BUD/FM BAI vs BUD/FM pMDI on 60-min postdose FEV1 and predose FEV1 within confidence intervals (CIs) of 80-125%. Mean age of the 214 randomized patients was 42.7 years. BUD/FM pMDI was superior to BUD pMDI (60-min postdose FEV1 treatment effect ratio, 1.10; 95% CI, 1.06-1.14; p < 0.001). Treatment effect ratios for BUD/FM BAI versus pMDI for 60-min postdose FEV1 (1.01; 95% CI, 0.97-1.05) and predose FEV1 (1.03; 95% CI, 0.99-1.08) were within the predetermined CIs for therapeutic equivalence. Adverse event profiles, tolerability, and patient-reported ease of use were similar. BUD/FM 2 × 160/4.5 μg bid BAI is therapeutically equivalent to BUD/FM conventional pMDI. The introduction of BUD/FM BAI would expand options for delivering inhaled corticosteroid/long-acting β2-agonist combination therapy to patients with moderate-to-severe asthma. ClinicalTrials.gov NCT01360021. Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.
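The equivalence criterion used in such trials (the whole CI for the treatment-effect ratio must fall within the 80-125% margins) can be checked mechanically. The helper below is an illustrative sketch, not the trial's statistical code; the CIs in the test come from the abstract.

```python
def therapeutic_equivalence(ci_low, ci_high, margin_low=0.80, margin_high=1.25):
    """Declare equivalence only if the entire confidence interval for the
    treatment-effect ratio lies inside the predefined equivalence margins."""
    return margin_low <= ci_low and ci_high <= margin_high
```

For the reported 60-min postdose FEV1 ratio of 1.01 (95% CI 0.97-1.05), the whole interval sits inside 0.80-1.25, so equivalence is declared; an interval poking below 0.80 or above 1.25 would fail the test.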
Bayesian approach to non-Gaussian field statistics for diffusive broadband terahertz pulses.
Pearce, Jeremy; Jian, Zhongping; Mittleman, Daniel M
2005-11-01
We develop a closed-form expression for the probability distribution function for the field components of a diffusive broadband wave propagating through a random medium. We consider each spectral component to provide an individual observation of a random variable, the configurationally averaged spectral intensity. Since the intensity determines the variance of the field distribution at each frequency, this random variable serves as the Bayesian prior that determines the form of the non-Gaussian field statistics. This model agrees well with experimental results.
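The compounding mechanism described above, a Gaussian field whose variance is itself a random variable, can be illustrated numerically. The exponential prior on the intensity below is an illustrative assumption, not the distribution derived in the paper; it serves only to show that marginalising over a random variance produces heavy-tailed, non-Gaussian field statistics.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Each "spectral component" observes a random intensity I, which sets the
# variance of the field at that frequency.  The exponential prior here is
# an illustrative assumption.
intensity = rng.exponential(scale=1.0, size=n)

# Conditional on I the field component is Gaussian with variance I;
# marginalising over I yields non-Gaussian (heavy-tailed) statistics.
field = rng.normal(0.0, np.sqrt(intensity))

def excess_kurtosis(x):
    """Excess kurtosis: 0 for a Gaussian, > 0 for heavier tails."""
    x = x - x.mean()
    return (x**4).mean() / (x**2).mean() ** 2 - 3.0

k_compound = excess_kurtosis(field)                 # clearly positive
k_gauss = excess_kurtosis(rng.normal(0.0, 1.0, n))  # close to zero
```

For this particular prior the marginal is Laplace-like (excess kurtosis near 3), which is why the compound sample is visibly non-Gaussian while the reference Gaussian sample is not.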
Metric Properties of Relativistic Rotating Frames with Axial Symmetry
NASA Astrophysics Data System (ADS)
Torres, S. A.; Arenas, J. R.
2017-07-01
This abstract summarizes our poster contribution to the conference. We study the properties of an axially symmetric stationary gravitational field by considering the spacetime properties of a uniformly rotating frame and Einstein's Equivalence Principle (EEP). To this end, the weak-field and slow-rotation limit of the Kerr metric is determined by making a first-order perturbation to the metric of a rotating frame. We also show a local connection between the effects of the centrifugal and Coriolis forces and the effects of an axially symmetric stationary weak gravitational field by calculating the geodesic equations of a free particle. It is observed that these geodesics, applying the EEP, are locally equivalent to the geodesic equations of a free particle in a rotating frame. Furthermore, some additional properties, such as the Lense-Thirring effect and the Sagnac effect, among others, are studied.
40 CFR 180.470 - Acetochlor; tolerances for residues.
Code of Federal Regulations, 2011 CFR
2011-07-01
... stoichiometric equivalents of acetochlor, in or on the following commodities: Commodity Parts per million Corn, field, forage 4.5 Corn, field, grain 0.05 Corn, field, stover 2.5 Corn, pop, grain 0.05 Corn, pop, stover 2.5 Corn, sweet, forage 1.5 Corn, sweet, kernels plus cob with husks removed 0.05 Corn, sweet...
40 CFR 180.470 - Acetochlor; tolerances for residues.
Code of Federal Regulations, 2013 CFR
2013-07-01
... stoichiometric equivalents of acetochlor, in or on the following commodities: Commodity Parts per million Corn, field, forage 4.5 Corn, field, grain 0.05 Corn, field, stover 2.5 Corn, pop, grain 0.05 Corn, pop, stover 2.5 Corn, sweet, forage 1.5 Corn, sweet, kernels plus cob with husks removed 0.05 Corn, sweet...
40 CFR 180.470 - Acetochlor; tolerances for residues.
Code of Federal Regulations, 2012 CFR
2012-07-01
... stoichiometric equivalents of acetochlor, in or on the following commodities: Commodity Parts per million Corn, field, forage 4.5 Corn, field, grain 0.05 Corn, field, stover 2.5 Corn, pop, grain 0.05 Corn, pop, stover 2.5 Corn, sweet, forage 1.5 Corn, sweet, kernels plus cob with husks removed 0.05 Corn, sweet...
40 CFR 180.470 - Acetochlor; tolerances for residues.
Code of Federal Regulations, 2010 CFR
2010-07-01
... stoichiometric equivalents of acetochlor, in or on the following commodities: Commodity Parts per million Corn, field, forage 4.5 Corn, field, grain 0.05 Corn, field, stover 2.5 Corn, pop, grain 0.05 Corn, pop, stover 2.5 Corn, sweet, forage 1.5 Corn, sweet, kernels plus cob with husks removed 0.05 Corn, sweet...
Perturbative Yang-Mills theory without Faddeev-Popov ghost fields
NASA Astrophysics Data System (ADS)
Huffel, Helmuth; Markovic, Danijel
2018-05-01
A modified Faddeev-Popov path integral density for the quantization of Yang-Mills theory in the Feynman gauge is discussed, where contributions of the Faddeev-Popov ghost fields are replaced by multi-point gauge field interactions. An explicit calculation to O(g²) shows the equivalence of the usual Faddeev-Popov scheme and its modified version.
NASA Astrophysics Data System (ADS)
Moyer, Steve; Uhl, Elizabeth R.
2015-05-01
For more than 50 years, the U.S. Army RDECOM CERDEC Night Vision and Electronic Sensors Directorate (NVESD) has been studying and modeling the human visual discrimination process as it pertains to military imaging systems. In order to develop sensor performance models, human observers are trained to expert levels in the identification of military vehicles. From 1998 until 2006, the experimental stimuli were block randomized, meaning that stimuli with similar difficulty levels (for example, in terms of distance from target, blur, noise, etc.) were presented together in blocks of approximately 24 images, but the order of images within the block was random. Starting in 2006, complete randomization came into vogue, meaning that difficulty could change image to image. It was thought that this would provide a more statistically robust result. In this study we investigated the impact of the two types of randomization on performance in two groups of observers matched for skill to create equivalent groups. It is hypothesized that Soldiers in the Complete Randomization condition will have to shift their decision criterion more frequently than Soldiers in the Block Randomization condition; this shifting is expected to impede performance, so that Soldiers in the Block Randomization condition perform better.
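The two ordering schemes can be sketched as follows. The difficulty labels and the block size of 24 are taken from the description above; everything else (three difficulty levels, the specific labels) is an illustrative assumption.

```python
import random

# Toy sketch of the two stimulus-ordering schemes described above.
stimuli = [(difficulty, i) for difficulty in ("near", "mid", "far")
           for i in range(24)]          # 3 difficulty levels x 24 images

rng = random.Random(42)

def block_randomize(items):
    """Shuffle images only within same-difficulty blocks of 24."""
    blocks = {}
    for item in items:
        blocks.setdefault(item[0], []).append(item)
    order = []
    for level in blocks:                # one block per difficulty level
        blk = blocks[level][:]
        rng.shuffle(blk)
        order.extend(blk)
    return order

def complete_randomize(items):
    """Shuffle everything: difficulty may change image to image."""
    order = items[:]
    rng.shuffle(order)
    return order

block_order = block_randomize(stimuli)
complete_order = complete_randomize(stimuli)
```

In the block-randomized order each contiguous block of 24 images shares one difficulty level, so the observer's decision criterion can stay put within a block; in the completely randomized order it cannot.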
Free fall and the equivalence principle revisited
NASA Astrophysics Data System (ADS)
Pendrill, Ann-Marie
2017-11-01
Free fall is commonly discussed as an example of the equivalence principle, in the context of a homogeneous gravitational field, which is a reasonable approximation for small test masses falling moderate distances. Newton's law of gravity provides a generalisation to larger distances, and also brings in an inhomogeneity in the gravitational field. In addition, Newton's third law of action and reaction causes the Earth to accelerate towards the falling object, bringing in a mass dependence in the time required for an object to reach the ground, in spite of the equivalence between inertial and gravitational mass. These aspects are rarely discussed in textbooks when the motion of everyday objects is discussed. Although these effects are extremely small, it may still be important for teachers to make assumptions and approximations explicit, to be aware of small corrections, and also to be prepared to estimate their size. Even if the corrections are not part of regular teaching, some students may reflect on them, and their questions deserve to be taken seriously.
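The mass-dependent fall time mentioned above is easy to estimate numerically: by Newton's third law the Earth accelerates toward the object, so the relative acceleration is G(M + m)/R², and to first order the fall time is shortened by the fraction m/(2M). A sketch, with a deliberately absurd object mass so the effect is visible in double precision:

```python
import math

G = 6.674e-11        # m^3 kg^-1 s^-2
M = 5.972e24         # kg, Earth
R = 6.371e6          # m, Earth radius (field taken as uniform over h)
h = 10.0             # m, drop height

def fall_time(m):
    """Time for object and ground to meet, including Earth's recoil."""
    a_rel = G * (M + m) / R**2      # relative acceleration
    return math.sqrt(2 * h / a_rel)

t_small = fall_time(0.0)            # test-mass limit
m_big = 1e20                        # absurdly large mass, purely to make
t_big = fall_time(m_big)            # the effect numerically visible

# First-order prediction: fractional shortening of the fall time is m/(2M)
frac = (t_small - t_big) / t_small
```

For everyday masses the correction is of order 10⁻²⁴ and utterly unmeasurable, which is exactly the point the abstract makes about estimating the size of neglected terms.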
Billington, D. Rex; Hsu, Patricia Hsien-Chuan; Feng, Xuan Joanna; Medvedev, Oleg N.; Kersten, Paula; Landon, Jason; Siegert, Richard J.
2016-01-01
The World Health Organisation Quality of Life (WHOQOL) questionnaires are widely used around the world and can claim strong cross-cultural validity due to their development in collaboration with international field centres. To enhance conceptual equivalence of quality of life across cultures, optional national items are often developed for use alongside the core instrument. The present study outlines the development of national items for the New Zealand WHOQOL-BREF. Focus groups with members of the community, as well as health experts, discussed what in their view constitutes quality of life. Based on extracted themes covering aspects not contained in the existing WHOQOL instrument, 46 candidate items were generated and subsequently rated for their importance by a random sample of 585 individuals from the general population. Applying importance criteria reduced these items to 24, which were then sent to another large random sample (n = 808) to be rated alongside the existing WHOQOL-BREF. A final set of five items met the criteria for national items. Confirmatory factor analysis identified four national items as belonging to the psychological domain of quality of life, and one item to the social domain. Rasch analysis validated these results and generated ordinal-to-interval conversion algorithms to allow use of parametric statistics for domain scores with and without national items. PMID:27812203
Pettigrew, Jonathan; Miller-Day, Michelle; Krieger, Janice L.; Zhou, Jiangxiu; Hecht, Michael L.
2014-01-01
Random assignment to groups is the foundation for scientifically rigorous clinical trials. But assignment is challenging in group-randomized trials when only a few units (schools) are assigned to each condition. In the DRSR project, we assigned 39 rural Pennsylvania and Ohio schools to three conditions (rural, classic, control). But even with 13 schools per condition, achieving pretest equivalence on important variables is not guaranteed. We collected data on six important school-level variables: rurality, number of grades in the school, enrollment per grade, percent white, percent receiving free/assisted lunch, and test scores. Key to our procedure was the inclusion of school-level drug use data, available for a subset of the schools, as a seventh variable. Also key was that we handled the partial data with modern missing data techniques. We chose to create one composite stratifying variable based on the seven school-level variables available. Principal components analysis with the seven variables yielded two factors, which were averaged to form the composite inflate-suppress (CIS) score, which was the basis of stratification. The CIS score was broken into three strata within each state; schools were assigned at random to the three program conditions from within each stratum, within each state. Results showed that program group membership was unrelated to the CIS score, the two factors making up the CIS score, and the seven items making up the factors. Program group membership was not significantly related to pretest measures of drug use (alcohol, cigarettes, marijuana, chewing tobacco; smallest p > .15), thus verifying that pretest equivalence was achieved. PMID:23722619
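The stratified-assignment pipeline above (standardize, extract two components, average into a composite, tertile strata per state, randomize within strata) can be sketched with synthetic data. The random data, the SVD-based component extraction, and the round-robin dealing within strata are illustrative assumptions, not the DRSR project's exact implementation.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic stand-ins for the 39 schools and 7 school-level variables.
n_schools, n_vars = 39, 7
X = rng.normal(size=(n_schools, n_vars))
state = np.repeat([0, 1], [20, 19])          # two states (illustrative split)

# Standardize, take the first two principal components, and average them
# into one composite inflate-suppress (CIS) stratifying score.
Z = (X - X.mean(0)) / X.std(0)
_, _, Vt = np.linalg.svd(Z, full_matrices=False)
cis = (Z @ Vt[:2].T).mean(axis=1)

# Three strata per state by CIS tertiles; random assignment to the three
# conditions (0=rural, 1=classic, 2=control) within each stratum.
assignment = np.empty(n_schools, dtype=int)
for s in (0, 1):
    idx = np.where(state == s)[0]
    tertile = np.searchsorted(np.quantile(cis[idx], [1 / 3, 2 / 3]),
                              cis[idx], side="right")
    for t in (0, 1, 2):
        members = rng.permutation(idx[tertile == t])
        for rank, school in enumerate(members):
            # deal round-robin, rotating the starting condition per stratum
            # so leftover schools don't all land in one condition
            assignment[school] = (rank + t + s) % 3
```

Dealing round-robin within each stratum keeps the conditions balanced on the CIS score by construction, which is what the pretest-equivalence checks in the abstract then verify empirically.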
DOE Office of Scientific and Technical Information (OSTI.GOV)
Syme, Alasdair
2016-08-15
Purpose: To use Monte Carlo simulations to optimize the design of an organic field effect transistor (OFET) to maximize water-equivalence across the diagnostic and therapeutic photon energy ranges. Methods: DOSXYZnrc was used to simulate transport of mono-energetic photon beams through OFETs. Dose was scored in the dielectric region of devices and used for evaluating the response of the device relative to water. Two designs were considered: 1. a bottom-gate device on a substrate of polyethylene terephthalate (PET) with an aluminum gate, a dielectric layer of either PMMA or CYTOP (a fluorocarbon) and an organic semiconductor (pentacene); 2. a symmetric bilayer design in which two polymer layers (PET and CYTOP) were deposited both below the gate and above the semiconductor to improve water-equivalence and reduce directional dependence. The relative thickness of the layers was optimized to maximize water-equivalence. Results: Without the bilayer, water-equivalence was diminished relative to OFETs with the symmetric bilayer at low photon energies (below 80 keV). The bilayer's composition was designed to have one layer with an effective atomic number larger than that of water and the other with an effective atomic number lower than that of water. For the particular materials used in this study, a PET layer 0.1 mm thick coupled with a CYTOP layer of 900 nm provided a device with a water-equivalence within 3% between 20 keV and 5 MeV. Conclusions: Organic electronic devices hold tremendous potential as water-equivalent dosimeters that could be used in a wide range of applications without recalibration.
Other Questions with Respect to the Weak Equivalence Principle
NASA Astrophysics Data System (ADS)
Smarandache, Florentin
2017-01-01
A disc rotating at high speed will exert out-of-plane forces resembling an accelerating field. Is the principle of equivalence also applicable to this process? Will someone inside an elevator that is in free fall and rotating around its vertical centre feel a gravitational force? Or will he feel a gravitational force larger than what the equivalence principle requires? Does the equivalence principle remain applicable here? An airplane flies at an altitude of 1 km. The co-pilot drops an elevator room without a passenger inside it. After one second has elapsed, the co-pilot drops four grenades in the direction of the freely falling elevator's path. The question: Will the grenades reach the elevator before it reaches the ground? If not, why? If yes, which grenade? How will air resistance influence the outcome?
Induction of Micronuclei in Human Fibroblasts from the Los Alamos High Energy Neutron Beam
NASA Technical Reports Server (NTRS)
Cox, Bradley
2009-01-01
The space radiation field includes a broad spectrum of high energy neutrons. Interactions between these neutrons and a spacecraft, or other material, significantly contribute to the dose equivalent for astronauts. The 15 degree beam line of the Weapons Neutron Research facility at the Los Alamos Neutron Science Center generates a neutron spectrum relatively similar to that seen in space. Human foreskin fibroblast (AG1522) samples were irradiated behind 0 to 20 cm of water equivalent shielding. The cells were exposed to either a 0.05 or 0.2 Gy entrance dose. Following irradiation, micronuclei were counted to see how the water shield affects the beam and its damage to cell nuclei. Micronuclei induction was then compared with dose equivalent data provided from a tissue equivalent proportional counter.
Random walk study of electron motion in helium in crossed electromagnetic fields
NASA Technical Reports Server (NTRS)
Englert, G. W.
1972-01-01
Random walk theory, previously adapted to electron motion in the presence of an electric field, is extended to include a transverse magnetic field. In principle, the random walk approach avoids mathematical complexity and concomitant simplifying assumptions and permits determination of energy distributions and transport coefficients within the accuracy of available collisional cross section data. Application is made to a weakly ionized helium gas. Time of relaxation of electron energy distribution, determined by the random walk, is described by simple expressions based on energy exchange between the electron and an effective electric field. The restrictive effect of the magnetic field on electron motion, which increases the required number of collisions per walk to reach a terminal steady state condition, as well as the effect of the magnetic field on electron transport coefficients and mean energy can be quite adequately described by expressions involving only the Hall parameter.
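A stripped-down numerical illustration of the effect described above, the magnetic field restricting electron transport, can be built from a Monte Carlo of free acceleration interrupted by collisions. The "cold gas" collision model (velocity reset to zero), the 2D geometry, and units with q/m = 1 are simplifying assumptions, not the paper's helium model; in this Drude-like limit the drift along E is reduced by roughly 1/(1 + (ωτ)²), where ωτ is the Hall parameter.

```python
import numpy as np

def drift_velocity(E=1.0, B=0.0, tau=1.0, n=4000, steps=4000, dt=0.02,
                   seed=1):
    """Ensemble-mean drift velocity along E for electrons in crossed
    E (x) and B (z) fields, with Poisson-distributed stopping collisions."""
    rng = np.random.default_rng(seed)
    vx = np.zeros(n)
    vy = np.zeros(n)
    acc = 0.0
    for step in range(steps):
        # Lorentz force with q/m = 1:  ax = E + vy*B,  ay = -vx*B
        vx += (E + vy * B) * dt
        vy += -vx * B * dt
        hit = rng.random(n) < dt / tau      # collision this step?
        vx[hit] = 0.0                       # "cold gas": electron stops
        vy[hit] = 0.0
        if step >= steps // 2:              # average after burn-in
            acc += vx.mean()
    return acc / (steps - steps // 2)

v0 = drift_velocity(B=0.0)   # Drude result: ~ E*tau
vB = drift_velocity(B=2.0)   # Hall parameter omega*tau = 2 -> ~1/5 of v0
```

The run with B = 0 recovers the collision-limited drift E·τ; turning on B with ωτ = 2 suppresses it substantially, mirroring the paper's observation that the magnetic field restricts electron motion and lowers the transport coefficients.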
Developing Business Writing Skills and Reducing Writing Anxiety of EFL Learners through Wikis
ERIC Educational Resources Information Center
Kassem, Mohamed Ali Mohamed
2017-01-01
The present study aimed at investigating the effect of using wikis on developing business writing skills and reducing writing anxiety of Business Administration students at Prince Sattam bin Abdul Aziz University, KSA. Sixty students, who were randomly chosen and divided into two equivalent groups (control and experimental), participated in the…
Impact of Thematic Approach on Communication Skill in Preschool
ERIC Educational Resources Information Center
Ashokan, Varun; Venugopal, Kalpana
2016-01-01
The study investigated the effects of a thematic approach on communication skills for preschool children. The study was a quasi-experimental, non-equivalent pretest-posttest control-group design whereby 5-6 year old preschool children (n = 49) were randomly assigned to an experimental and a control group. The experimental group students were exposed…
Anxiety and Self-Concept Among American and Chinese College Students
ERIC Educational Resources Information Center
Paschal, Billy J.; You-Yuh, Kuo
1973-01-01
In this study, 60 pairs of Ss were randomly selected and individually matched on age, sex, grade equivalence, and birth order. The seven null hypotheses dealt with culture, sex, birth order, and their interactions. The main self-rating scales employed were the IPAT Anxiety Scale and the Tennessee Self Concept Scale. (Author/EK)
Regression Discontinuity Design in Gifted and Talented Education Research
ERIC Educational Resources Information Center
Matthews, Michael S.; Peters, Scott J.; Housand, Angela M.
2012-01-01
This Methodological Brief introduces the reader to the regression discontinuity design (RDD), which is a method that when used correctly can yield estimates of research treatment effects that are equivalent to those obtained through randomized control trials and can therefore be used to infer causality. However, RDD does not require the random…
Gender-Based Differential Item Performance in Mathematics Achievement Items.
ERIC Educational Resources Information Center
Doolittle, Allen E.; Cleary, T. Anne
1987-01-01
Eight randomly equivalent samples of high school seniors were each given a unique form of the ACT Assessment Mathematics Usage Test (ACTM). Signed measures of differential item performance (DIP) were obtained for each item in the eight ACTM forms. DIP estimates were analyzed and a significant item category effect was found. (Author/LMO)
ERIC Educational Resources Information Center
Ruble, Lisa; McGrew, John H.; Toland, Michael D.
2012-01-01
Goal attainment scaling (GAS) holds promise as an idiographic approach for measuring outcomes of psychosocial interventions in community settings. GAS has been criticized for untested assumptions of scaling level (i.e., interval or ordinal), inter-individual equivalence and comparability, and reliability of coding across different behavioral…
Chemical mixtures in the environment are often the result of a dynamic process. When dose-response data are available on random samples throughout the process, equivalence testing can be used to determine whether the mixtures are sufficiently similar based on a pre-specified biol...
Using Kernel Equating to Assess Item Order Effects on Test Scores
ERIC Educational Resources Information Center
Moses, Tim; Yang, Wen-Ling; Wilson, Christine
2007-01-01
This study explored the use of kernel equating for integrating and extending two procedures proposed for assessing item order effects in test forms that have been administered to randomly equivalent groups. When these procedures are used together, they can provide complementary information about the extent to which item order effects impact test…
49 CFR 219.4 - Recognition of a foreign railroad's workplace testing program.
Code of Federal Regulations, 2013 CFR
2013-10-01
... (Continued) FEDERAL RAILROAD ADMINISTRATION, DEPARTMENT OF TRANSPORTATION CONTROL OF ALCOHOL AND DRUG USE... contains equivalents to subparts B, E, F, and G of this part: (i) Pre-employment drug testing; (ii) A policy dealing with co-worker and self-reporting of alcohol and drug abuse problems; (iii) Random drug...
49 CFR 219.4 - Recognition of a foreign railroad's workplace testing program.
Code of Federal Regulations, 2014 CFR
2014-10-01
... (Continued) FEDERAL RAILROAD ADMINISTRATION, DEPARTMENT OF TRANSPORTATION CONTROL OF ALCOHOL AND DRUG USE... contains equivalents to subparts B, E, F, and G of this part: (i) Pre-employment drug testing; (ii) A policy dealing with co-worker and self-reporting of alcohol and drug abuse problems; (iii) Random drug...
49 CFR 219.4 - Recognition of a foreign railroad's workplace testing program.
Code of Federal Regulations, 2011 CFR
2011-10-01
... (Continued) FEDERAL RAILROAD ADMINISTRATION, DEPARTMENT OF TRANSPORTATION CONTROL OF ALCOHOL AND DRUG USE... contains equivalents to subparts B, E, F, and G of this part: (i) Pre-employment drug testing; (ii) A policy dealing with co-worker and self-reporting of alcohol and drug abuse problems; (iii) Random drug...
49 CFR 219.4 - Recognition of a foreign railroad's workplace testing program.
Code of Federal Regulations, 2012 CFR
2012-10-01
... (Continued) FEDERAL RAILROAD ADMINISTRATION, DEPARTMENT OF TRANSPORTATION CONTROL OF ALCOHOL AND DRUG USE... contains equivalents to subparts B, E, F, and G of this part: (i) Pre-employment drug testing; (ii) A policy dealing with co-worker and self-reporting of alcohol and drug abuse problems; (iii) Random drug...
ERIC Educational Resources Information Center
Wang, Shudong; Wang, Ning; Hoadley, David
2007-01-01
This study used confirmatory factor analysis (CFA) to examine the comparability of the National Nurse Aide Assessment Program (NNAAP[TM]) test scores across language and administration condition groups for calibration and validation samples that were randomly drawn from the same population. Fit statistics supported both the calibration and…
An equivalent body surface charge model representing three-dimensional bioelectrical activity
NASA Technical Reports Server (NTRS)
He, B.; Chernyak, Y. B.; Cohen, R. J.
1995-01-01
A new surface-source model has been developed to account for the bioelectrical potential on the body surface. A single-layer surface-charge model on the body surface has been developed to equivalently represent bioelectrical sources inside the body. The boundary conditions on the body surface are discussed in relation to the surface-charge in a half-space conductive medium. The equivalent body surface-charge is shown to be proportional to the normal component of the electric field on the body surface just outside the body. The spatial resolution of the equivalent surface-charge distribution appears intermediate between those of the body surface potential distribution and the body surface Laplacian distribution. An analytic relationship between the equivalent surface-charge and the surface Laplacian of the potential was found for a half-space conductive medium. The effects of finite spatial sampling and noise on the reconstruction of the equivalent surface-charge were evaluated by computer simulations. It was found through computer simulations that the reconstruction of the equivalent body surface-charge from the body surface Laplacian distribution is very stable against noise and finite spatial sampling. The present results suggest that the equivalent body surface-charge model may provide an additional insight to our understanding of bioelectric phenomena.
Persistence and Lifelong Fidelity of Phase Singularities in Optical Random Waves.
De Angelis, L; Alpeggiani, F; Di Falco, A; Kuipers, L
2017-11-17
Phase singularities are locations where light is twisted like a corkscrew, with positive or negative topological charge depending on the twisting direction. Among the multitude of singularities arising in random wave fields, some can be found at the same location, but only when they exhibit opposite topological charge, which results in their mutual annihilation. New pairs can be created as well. With near-field experiments supported by theory and numerical simulations, we study the persistence and pairing statistics of phase singularities in random optical fields as a function of the excitation wavelength. We demonstrate how such entities can encrypt fundamental properties of the random fields in which they arise.
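The charge bookkeeping described above (singularities of opposite charge annihilating in pairs, so the net charge is conserved) can be demonstrated on a synthetic random wave field: locate singularities by computing the winding of the phase around each grid plaquette. The wave count, grid size, and periodic construction below are arbitrary toy choices, not the paper's near-field data.

```python
import numpy as np

rng = np.random.default_rng(3)
M = 64
x = np.arange(M) / M
X, Y = np.meshgrid(x, x, indexing="ij")

# Periodic superposition of plane waves with random directions and phases.
psi = np.zeros((M, M), dtype=complex)
for _ in range(30):
    nx, ny = rng.integers(-4, 5, size=2)
    psi += np.exp(1j * (2 * np.pi * (nx * X + ny * Y)
                        + rng.uniform(0, 2 * np.pi)))

theta = np.angle(psi)

def wrap(d):
    """Wrap a phase difference into (-pi, pi]."""
    return (d + np.pi) % (2 * np.pi) - np.pi

# Corners of every plaquette (periodic boundaries via np.roll).
tA = theta
tB = np.roll(theta, -1, axis=0)
tC = np.roll(tB, -1, axis=1)
tD = np.roll(theta, -1, axis=1)

# Accumulated phase around the loop A->B->C->D->A, in units of 2*pi:
# +1 or -1 marks a positively or negatively charged singularity.
charge = np.rint((wrap(tB - tA) + wrap(tC - tB)
                  + wrap(tD - tC) + wrap(tA - tD)) / (2 * np.pi))

n_singularities = int(np.count_nonzero(charge))
total_charge = int(charge.sum())      # exactly 0 on a periodic domain
```

Because every edge is shared by two plaquettes with opposite orientation, the total charge over the periodic domain vanishes identically, which is the conservation law behind the pairwise creation and annihilation events the paper tracks.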
NASA Astrophysics Data System (ADS)
Cao, Zhong; Miller, L. F.; Buckner, M.
In order to accurately determine dose equivalent in radiation fields that include both neutrons and photons, it is necessary to measure the relative number of neutrons to photons and to characterize the energy dependence of the neutrons. The relationship between dose and dose equivalent begins to increase rapidly at about 100 keV; thus, it is necessary to separate neutrons from photons for neutron energies as low as about 100 keV in order to measure dose equivalent in a mixed radiation field that includes both neutrons and photons. Perceptron and back-propagation neural networks that use pulse amplitude and pulse rise time information obtain separation of neutrons and photons with about 5% error for neutrons with energies as low as 100 keV, and this is accomplished for neutrons with energies that range from 100 keV to several MeV. If the ratio of neutrons to photons is changed by a factor of 10, the classification error increases to about 15% for the neural networks tested. A technique that uses the output from the perceptron as a prior for a Bayesian classifier is more robust to changes in the relative number of neutrons to photons, and it obtains a 5% classification error when this ratio is changed by a factor of ten. Results from this research demonstrate that it is feasible to use commercially available instrumentation in combination with artificial intelligence techniques to develop a practical detector that will accurately measure dose equivalent in mixed neutron-photon radiation fields.
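A toy version of the perceptron discrimination step can be sketched with the two features the abstract names, pulse amplitude and rise time. The synthetic feature distributions below are illustrative assumptions, not detector data, and the classic perceptron learning rule stands in for whatever training procedure the authors used.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic pulses: photons have fast rise times, neutrons slower ones
# (illustrative numbers, not measured detector values).
n = 500
photons = np.column_stack([rng.normal(1.0, 0.2, n),     # amplitude (a.u.)
                           rng.normal(10.0, 2.0, n)])   # rise time (ns)
neutrons = np.column_stack([rng.normal(1.0, 0.2, n),
                            rng.normal(25.0, 3.0, n)])
X = np.vstack([photons, neutrons])
y = np.array([-1] * n + [1] * n)        # -1 = photon, +1 = neutron

# Standardize features, then run the classic perceptron learning rule.
X = (X - X.mean(0)) / X.std(0)
w = np.zeros(2)
b = 0.0
for _ in range(20):                     # epochs over the training set
    for xi, yi in zip(X, y):
        if yi * (xi @ w + b) <= 0:      # misclassified -> update weights
            w += yi * xi
            b += yi

pred = np.sign(X @ w + b)
accuracy = (pred == y).mean()
```

With nearly separable classes the perceptron converges quickly; the hard cases in the real detector are the low-energy neutrons near 100 keV, whose rise-time distributions overlap the photon population.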
Inverse Design of Low-Boom Supersonic Concepts Using Reversed Equivalent-Area Targets
NASA Technical Reports Server (NTRS)
Li, Wu; Rallabhandi, Sriram
2011-01-01
A promising path for developing a low-boom configuration is a multifidelity approach that (1) starts from a low-fidelity low-boom design, (2) refines the low-fidelity design with computational fluid dynamics (CFD) equivalent-area (Ae) analysis, and (3) improves the design with sonic-boom analysis by using CFD off-body pressure distributions. The focus of this paper is on the third step of this approach, in which the design is improved with sonic-boom analysis through the use of CFD calculations. A new inverse design process for off-body pressure tailoring is formulated and demonstrated with a low-boom supersonic configuration that was developed by using the mixed-fidelity design method with CFD Ae analysis. The new inverse design process uses the reverse propagation of the pressure distribution (dp/p) from a mid-field location to a near-field location, converts the near-field dp/p into an equivalent-area distribution, generates a low-boom target for the reversed equivalent area (Ae,r) of the configuration, and modifies the configuration to minimize the differences between the configuration's Ae,r and the low-boom target. The new inverse design process is used to modify a supersonic demonstrator concept for a cruise Mach number of 1.6 and a cruise weight of 30,000 lb. The modified configuration has a fully shaped ground signature with a perceived loudness (PLdB) value of 78.5, while the original configuration has a partially shaped aft signature with a PLdB of 82.3.
Galaxy–galaxy lensing estimators and their covariance properties
Singh, Sukhdeep; Mandelbaum, Rachel; Seljak, Uros; ...
2017-07-21
Here, we study the covariance properties of real space correlation function estimators – primarily galaxy–shear correlations, or galaxy–galaxy lensing – using SDSS data for both shear catalogues and lenses (specifically the BOSS LOWZ sample). Using mock catalogues of lenses and sources, we disentangle the various contributions to the covariance matrix and compare them with a simple analytical model. We show that not subtracting the lensing measurement around random points from the measurement around the lens sample is equivalent to performing the measurement using the lens density field instead of the lens overdensity field. While the measurement using the lens density field is unbiased (in the absence of systematics), its error is significantly larger due to an additional term in the covariance. Therefore, this subtraction should be performed regardless of its beneficial effects on systematics. Comparing the error estimates from data and mocks for estimators that involve the overdensity, we find that the errors are dominated by the shape noise and lens clustering, that empirically estimated covariances (jackknife and standard deviation across mocks) are consistent with theoretical estimates, and that both the connected parts of the four-point function and the supersample covariance can be neglected for the current levels of noise. While the trade-off between different terms in the covariance depends on the survey configuration (area, source number density), the diagnostics that we use in this work should be useful for future works to test their empirically determined covariances.
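The central point, that skipping the random-point subtraction leaves the estimator unbiased but inflates its variance through an extra covariance term, can be shown in a toy Monte Carlo. The setup below (a single shared large-scale mode plus independent measurement noise) is a deliberate caricature of a survey, with illustrative numbers throughout.

```python
import numpy as np

rng = np.random.default_rng(5)
signal = 0.3                     # true lensing amplitude
n_real = 20_000                  # independent survey realizations

# In each realization, a random large-scale mode contaminates both the
# measurement around lenses and the measurement around random points.
mode = rng.normal(0.0, 1.0, n_real)
around_lenses = signal + mode + rng.normal(0.0, 0.2, n_real)
around_randoms = mode + rng.normal(0.0, 0.2, n_real)

# Density-field estimator: no subtraction of the random-point measurement.
est_density = around_lenses
# Overdensity-field estimator: subtract the measurement around randoms,
# cancelling the shared mode at the cost of a little extra noise.
est_overdensity = around_lenses - around_randoms
```

Both estimators recover the true signal on average, but the unsubtracted one carries the full variance of the shared mode, which is exactly why the abstract recommends performing the subtraction regardless of systematics.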
Coagulation kinetics beyond mean field theory using an optimised Poisson representation.
Burnett, James; Ford, Ian J
2015-05-21
Binary particle coagulation can be modelled as the repeated random process of the combination of two particles to form a third. The kinetics may be represented by population rate equations based on a mean field assumption, according to which the rate of aggregation is taken to be proportional to the product of the mean populations of the two participants, but this can be a poor approximation when the mean populations are small. However, using the Poisson representation, it is possible to derive a set of rate equations that go beyond mean field theory, describing pseudo-populations that are continuous, noisy, and complex, but where averaging over the noise and initial conditions gives the mean of the physical population. Such an approach is explored for the simple case of a size-independent rate of coagulation between particles. Analytical results are compared with numerical computations and with results derived by other means. In the numerical work, we encounter instabilities that can be eliminated using a suitable "gauge" transformation of the problem [P. D. Drummond, Eur. Phys. J. B 38, 617 (2004)] which we show to be equivalent to the application of the Cameron-Martin-Girsanov formula describing a shift in a probability measure. The cost of such a procedure is to introduce additional statistical noise into the numerical results, but we identify an optimised gauge transformation where this difficulty is minimal for the main properties of interest. For more complicated systems, such an approach is likely to be computationally cheaper than Monte Carlo simulation.
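The small-population failure of the mean-field assumption is easy to exhibit directly with a Gillespie simulation of the simplest size-independent coagulation process, A + A → A. This sketch is not the Poisson-representation method of the paper; it only illustrates the discrepancy that motivates going beyond mean field: the true pairwise rate k·N(N−1)/2 is smaller than the mean-field k·N²/2, so the mean-field equation overestimates the coagulation rate when N is small.

```python
import random

def gillespie_population(n0, k, t_end, rng):
    """One exact stochastic realization of A + A -> A up to time t_end."""
    t, N = 0.0, n0
    while N > 1:
        rate = k * N * (N - 1) / 2.0       # total pairwise coagulation rate
        dt = rng.expovariate(rate)         # time to the next event
        if t + dt > t_end:
            break
        t += dt
        N -= 1
    return N

rng = random.Random(11)
n0, k, t_end, runs = 10, 1.0, 1.0, 3000
mean_stochastic = sum(gillespie_population(n0, k, t_end, rng)
                      for _ in range(runs)) / runs

# Mean-field rate equation dn/dt = -k n^2 / 2 has the closed-form solution
# n(t) = n0 / (1 + k n0 t / 2).
mean_field = n0 / (1 + k * n0 * t_end / 2)
```

For these parameters the mean-field prediction dips toward zero while the stochastic mean stays above it (the population can never fall below one particle), which is the regime where the pseudo-population rate equations of the Poisson representation earn their keep.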
Galaxy-galaxy lensing estimators and their covariance properties
NASA Astrophysics Data System (ADS)
Singh, Sukhdeep; Mandelbaum, Rachel; Seljak, Uroš; Slosar, Anže; Vazquez Gonzalez, Jose
2017-11-01
We study the covariance properties of real space correlation function estimators - primarily galaxy-shear correlations, or galaxy-galaxy lensing - using SDSS data for both shear catalogues and lenses (specifically the BOSS LOWZ sample). Using mock catalogues of lenses and sources, we disentangle the various contributions to the covariance matrix and compare them with a simple analytical model. We show that not subtracting the lensing measurement around random points from the measurement around the lens sample is equivalent to performing the measurement using the lens density field instead of the lens overdensity field. While the measurement using the lens density field is unbiased (in the absence of systematics), its error is significantly larger due to an additional term in the covariance. Therefore, this subtraction should be performed regardless of its beneficial effects on systematics. Comparing the error estimates from data and mocks for estimators that involve the overdensity, we find that the errors are dominated by the shape noise and lens clustering, with empirically estimated covariances (jackknife and standard deviation across mocks) that are consistent with theoretical estimates, and that both the connected parts of the four-point function and the supersample covariance can be neglected for the current levels of noise. While the trade-off between different terms in the covariance depends on the survey configuration (area, source number density), the diagnostics that we use in this work should be useful for future works to test their empirically determined covariances.
Turner, James D; Henshaw, Daryl S; Weller, Robert S; Jaffe, J Douglas; Edwards, Christopher J; Reynolds, J Wells; Russell, Gregory B; Dobson, Sean W
2018-05-08
To determine whether perineural dexamethasone prolongs peripheral nerve blockade (PNB) when measured objectively, and to determine whether 1 mg and 4 mg doses provide equivalent PNB prolongation compared to PNB without dexamethasone. Multiple studies have reported that perineural dexamethasone added to local anesthetics (LA) can prolong PNB. However, these studies have relied on subjective end-points to quantify PNB duration. The optimal dose remains unknown. We hypothesized that 1 mg of perineural dexamethasone would be equivalent in prolonging an adductor canal block (ACB) when compared to 4 mg of dexamethasone, and that both doses would be superior to an ACB performed without dexamethasone. This was a prospective, randomized, double-blind, placebo-controlled equivalency trial involving 85 patients undergoing a unicompartmental knee arthroplasty. All patients received an ACB with 20 ml of 0.25% bupivacaine with 1:400,000 epinephrine. Twelve patients had 0 mg of dexamethasone (placebo) added to the LA mixture; 36 patients had 1 mg of dexamethasone in the LA; and 37 patients had 4 mg of dexamethasone in the LA. The primary outcome was block duration determined by serial neurologic pinprick examinations. Secondary outcomes included time to first analgesic, serial pain scores, and cumulative opioid consumption. The 1 mg (31.8 ± 10.5 h) and 4 mg (37.9 ± 10 h) groups were not equivalent, TOST [mean difference (95% CI): -6.1 (-10.5, -2.3)]. Also, the 4 mg group was superior to the 1 mg group (p-value = 0.035), and to the placebo group (29.7 ± 6.8 h, p-value = 0.011). There were no differences in opioid consumption or time to analgesic request; however, some pain scores were significantly lower in the dexamethasone groups when compared to placebo. Dexamethasone 4 mg, but not 1 mg, prolonged the duration of an ACB when measured by serial neurologic pinprick exams. NCT02462148. Copyright © 2018 Elsevier Inc. All rights reserved.
47 CFR 80.217 - Suppression of interference aboard ships.
Code of Federal Regulations, 2011 CFR
2011-10-01
... to any receiver required by statute or treaty. (b) The electromagnetic field from receivers required... mile from the receiver: Frequency of interfering emissions Field intensity in microvolts per meter... following amounts of power, to an artificial antenna having electrical characteristics equivalent to those...
Magnetic effect in the test of the weak equivalence principle using a rotating torsion pendulum
NASA Astrophysics Data System (ADS)
Zhu, Lin; Liu, Qi; Zhao, Hui-Hui; Yang, Shan-Qing; Luo, Pengshun; Shao, Cheng-Gang; Luo, Jun
2018-04-01
The high precision test of the weak equivalence principle (WEP) using a rotating torsion pendulum requires thorough analysis of systematic effects. Here we investigate one of the main systematic effects, the coupling of the ambient magnetic field to the pendulum. It is shown that the dominant term, the interaction between the average magnetic field and the magnetic dipole of the pendulum, is decreased by a factor of 1.1 × 10^4 with multi-layer magnetic shield shells. The shield shells reduce the magnetic field to 1.9 × 10^-9 T in the transverse direction so that the dipole-interaction limited WEP test is expected at η ≲ 10^-14 for a pendulum dipole less than 10^-9 A m^2. The high-order effect, the coupling of the magnetic field gradient to the magnetic quadrupole of the pendulum, would also contribute to the systematic errors for a test precision down to η ~ 10^-14.
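The quoted residual field is consistent with dividing a typical ambient field by the stated shielding factor. A hypothetical arithmetic check (the ambient transverse field value below is an assumption of ours, roughly the geomagnetic magnitude, and is not stated in the abstract):

```python
# Hypothetical consistency check: assume an ambient transverse field of
# ~2.1e-5 T (typical geomagnetic magnitude; not stated in the abstract).
ambient_T = 2.1e-5          # assumed ambient transverse field (T)
shield_factor = 1.1e4       # quoted attenuation of the multi-layer shield
residual_T = ambient_T / shield_factor   # compare with the quoted 1.9e-9 T
```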
Comparison of Mixing Calculations for Reacting and Non-Reacting Flows in a Cylindrical Duct
NASA Technical Reports Server (NTRS)
Oechsle, V. L.; Mongia, H. C.; Holdeman, J. D.
1994-01-01
A production 3-D elliptic flow code has been used to calculate non-reacting and reacting flow fields in an experimental mixing section relevant to a rich burn/quick mix/lean burn (RQL) combustion system. A number of test cases have been run to assess the effects of the variation in the number of orifices, mass flow ratio, and rich-zone equivalence ratio on the flow field and mixing rates. The calculated normalized temperature profiles for the non-reacting flow field agree qualitatively well with the normalized conserved variable isopleths for the reacting flow field, indicating that non-reacting mixing experiments are appropriate for screening and ranking potential rapid mixing concepts. For a given set of jet momentum-flux ratio, mass flow ratio, and density ratio (J, MR, and DR), the reacting flow calculations show a reduced level of mixing compared to the non-reacting cases. In addition, the rich-zone equivalence ratio has a noticeable effect on the mixing flow characteristics for reacting flows.
Peters, John C; Beck, Jimikaye; Cardel, Michelle; Wyatt, Holly R; Foster, Gary D; Pan, Zhaoxing; Wojtanowski, Alexis C; Vander Veur, Stephanie S; Herring, Sharon J; Brill, Carrie; Hill, James O
2016-02-01
To evaluate the effects of water versus beverages sweetened with non-nutritive sweeteners (NNS) on body weight in subjects enrolled in a year-long behavioral weight loss treatment program. The study used a randomized equivalence design with NNS or water beverages as the main factor in a trial among 303 weight-stable people with overweight and obesity. All participants took part in a weight loss program and were assigned to consume 24 ounces (710 ml) of water or NNS beverages daily for 1 year. NNS and water treatments were non-equivalent, with the NNS treatment showing greater weight loss at the end of 1 year. At 1 year, subjects receiving water had maintained a 2.45 ± 5.59 kg weight loss while those receiving NNS beverages maintained a loss of 6.21 ± 7.65 kg (P < 0.001 for the difference). Water and NNS beverages were not equivalent for weight loss and maintenance during a 1-year behavioral treatment program. NNS beverages were superior for weight loss and weight maintenance in a population consisting of regular users of NNS beverages who either maintained or discontinued consumption of these beverages and consumed water during a structured weight loss program. These results suggest that NNS beverages can be an effective tool for weight loss and maintenance within the context of a weight management program. © 2015 The Authors, Obesity published by Wiley Periodicals, Inc. on behalf of The Obesity Society (TOS).
Random scalar fields and hyperuniformity
NASA Astrophysics Data System (ADS)
Ma, Zheng; Torquato, Salvatore
2017-06-01
Disordered many-particle hyperuniform systems are exotic amorphous states of matter that lie between crystals and liquids. Hyperuniform systems have attracted recent attention because they are endowed with novel transport and optical properties. Recently, the hyperuniformity concept has been generalized to characterize two-phase media, scalar fields, and random vector fields. In this paper, we devise methods to explicitly construct hyperuniform scalar fields. Specifically, we analyze spatial patterns generated from Gaussian random fields, which have been used to model the microwave background radiation and heterogeneous materials, the Cahn-Hilliard equation for spinodal decomposition, and Swift-Hohenberg equations that have been used to model emergent pattern formation, including Rayleigh-Bénard convection. We show that the Gaussian random scalar fields can be constructed to be hyperuniform. We also numerically study the time evolution of spinodal decomposition patterns and demonstrate that they are hyperuniform in the scaling regime. Moreover, we find that labyrinth-like patterns generated by the Swift-Hohenberg equation are effectively hyperuniform. We show that thresholding (level-cutting) a hyperuniform Gaussian random field to produce a two-phase random medium tends to destroy the hyperuniformity of the progenitor scalar field. We then propose guidelines to achieve effectively hyperuniform two-phase media derived from thresholded non-Gaussian fields. Our investigation paves the way for new research directions to characterize the large-structure spatial patterns that arise in physics, chemistry, biology, and ecology. Moreover, our theoretical results are expected to guide experimentalists to synthesize new classes of hyperuniform materials with novel physical properties via coarsening processes and using state-of-the-art techniques, such as stereolithography and 3D printing.
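A minimal sketch of one such explicit construction, with an illustrative spectral density of our own choosing (not one of the paper's specific models): filter white noise with the square root of a spectral density S(k) that vanishes at k = 0, which is the defining property of a hyperuniform scalar field.

```python
import numpy as np

# Sketch: spectrally synthesize a hyperuniform Gaussian random scalar field.
# S(k) ~ k^2 exp(-k^2) is an illustrative choice; S(0) = 0 enforces
# hyperuniformity (suppressed long-wavelength density fluctuations).
rng = np.random.default_rng(0)
n = 128
k = 2 * np.pi * np.fft.fftfreq(n)
kx, ky = np.meshgrid(k, k, indexing="ij")
k2 = kx**2 + ky**2
S = k2 * np.exp(-k2)                     # target spectral density, S(0) = 0

noise = rng.standard_normal((n, n))      # white noise realization
field = np.real(np.fft.ifft2(np.sqrt(S) * np.fft.fft2(noise)))

# empirical spectral density of the realization: suppressed at the origin
S_hat = np.abs(np.fft.fft2(field))**2 / n**2
```

Thresholding such a field at a level cut then yields a two-phase medium; as the abstract notes, that step generically destroys the hyperuniformity of the Gaussian progenitor.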
Feldon, Steven E
2004-01-01
ABSTRACT Purpose To validate a computerized expert system evaluating visual fields in a prospective clinical trial, the Ischemic Optic Neuropathy Decompression Trial (IONDT). To identify the pattern and within-pattern severity of field defects for study eyes at baseline and 6-month follow-up. Design Humphrey visual field (HVF) change was used as the outcome measure for a prospective, randomized, multi-center trial to test the null hypothesis that optic nerve sheath decompression was ineffective in treating nonarteritic anterior ischemic optic neuropathy and to ascertain the natural history of the disease. Methods An expert panel established criteria for the type and severity of visual field defects. Using these criteria, a rule-based computerized expert system interpreted HVF from baseline and 6-month visits for patients randomized to surgery or careful follow-up and for patients who were not randomized. Results A computerized expert system was devised and validated. The system was then used to analyze HVFs. The pattern of defects found at baseline for patients randomized to surgery did not differ from that of patients randomized to careful follow-up. The most common pattern of defect was a superior and inferior arcuate with central scotoma for randomized eyes (19.2%) and a superior and inferior arcuate for nonrandomized eyes (30.6%). Field patterns at 6 months and baseline were not different. For randomized study eyes, the superior altitudinal defects improved (P = .03), as did the inferior altitudinal defects (P = .01). For nonrandomized study eyes, only the inferior altitudinal defects improved (P = .02). No treatment effect was noted. Conclusions A novel rule-based expert system successfully interpreted visual field defects at baseline of eyes enrolled in the IONDT. PMID:15747764
Equivalent magnetic vector potential model for low-frequency magnetic exposure assessment
NASA Astrophysics Data System (ADS)
Diao, Y. L.; Sun, W. N.; He, Y. Q.; Leung, S. W.; Siu, Y. M.
2017-10-01
In this paper, a novel source model based on a magnetic vector potential for the assessment of induced electric field strength in a human body exposed to the low-frequency (LF) magnetic field of an electrical appliance is presented. The construction of the vector potential model requires only a single-component magnetic field to be measured close to the appliance under test, hence relieving considerable practical measurement effort—the radial basis functions (RBFs) are adopted for the interpolation of discrete measurements; the magnetic vector potential model can then be directly constructed by summing a set of simple algebraic functions of RBF parameters. The vector potentials are then incorporated into numerical calculations as the equivalent source for evaluations of the induced electric field in the human body model. The accuracy and effectiveness of the proposed model are demonstrated by comparing the induced electric field in a human model to that of the full-wave simulation. This study presents a simple and effective approach for modelling the LF magnetic source. The result of this study could simplify the compliance test procedure for assessing an electrical appliance regarding LF magnetic exposure.
Spatial sound field synthesis and upmixing based on the equivalent source method.
Bai, Mingsian R; Hsu, Hoshen; Wen, Jheng-Ciang
2014-01-01
Given scarce number of recorded signals, spatial sound field synthesis with an extended sweet spot is a challenging problem in acoustic array signal processing. To address the problem, a synthesis and upmixing approach inspired by the equivalent source method (ESM) is proposed. The synthesis procedure is based on the pressure signals recorded by a microphone array and requires no source model. The array geometry can also be arbitrary. Four upmixing strategies are adopted to enhance the resolution of the reproduced sound field when there are more channels of loudspeakers than the microphones. Multi-channel inverse filtering with regularization is exploited to deal with the ill-posedness in the reconstruction process. The distance between the microphone and loudspeaker arrays is optimized to achieve the best synthesis quality. To validate the proposed system, numerical simulations and subjective listening experiments are performed. The results demonstrated that all upmixing methods improved the quality of reproduced target sound field over the original reproduction. In particular, the underdetermined ESM interpolation method yielded the best spatial sound field synthesis in terms of the reproduction error, timbral quality, and spatial quality.
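The regularized multi-channel inverse-filtering step can be sketched, per frequency bin, as a Tikhonov-regularized minimum-norm solve; the plant matrix below is random stand-in data, not a measured microphone-loudspeaker transfer matrix.

```python
import numpy as np

# Sketch of regularized multi-channel inverse filtering for one frequency bin.
# G is a stand-in random plant matrix; in the ESM approach it would be built
# from the equivalent sources. Regularization tames the ill-posedness.
rng = np.random.default_rng(2)
m, l = 6, 12                                   # microphones < loudspeakers
G = rng.standard_normal((m, l)) + 1j * rng.standard_normal((m, l))
p = rng.standard_normal(m) + 1j * rng.standard_normal(m)   # target pressures

beta = 1e-3                                    # Tikhonov regularization weight
# minimum-norm regularized solution of the underdetermined system G q = p
q = G.conj().T @ np.linalg.solve(G @ G.conj().T + beta * np.eye(m), p)

rel_err = np.linalg.norm(G @ q - p) / np.linalg.norm(p)
```

With more loudspeakers than microphones the system is underdetermined, which is exactly the situation the paper's upmixing strategies address.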
Equivalent Hamiltonian for the Lee model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jones, H. F.
2008-03-15
Using the techniques of quasi-Hermitian quantum mechanics and quantum field theory we use a similarity transformation to construct an equivalent Hermitian Hamiltonian for the Lee model. In the field theory confined to the V/Nθ sector it effectively decouples V, replacing the three-point interaction of the original Lee model by an additional mass term for the V particle and a four-point interaction between N and θ. While the construction is originally motivated by the regime where the bare coupling becomes imaginary, leading to a ghost, it applies equally to the standard Hermitian regime where the bare coupling is real. In that case the similarity transformation becomes a unitary transformation.
Persinger, Michael A; Dotta, Blake T; Karbowski, Lukasz M; Murugan, Nirosha J
2015-01-01
The quantitative relationship between local changes in magnetic fields and photon emissions within ∼2 mm of aggregates of 10^5-10^6 cells was explored experimentally. The vertical component of the earth's magnetic field as measured by different magnetometers was ∼15 nT higher when plates of cells removed from incubation were measured compared to plates containing only medium. Additional experiments indicated an inverse relationship over the first ∼45 min between changes in photon counts (∼10^-12 W·m^-2) following removal from incubation and similar changes in magnetic field intensity. Calculations indicated that the energy within the aqueous volume containing the cells was equivalent for that associated with the flux densities of the magnetic fields and with the photon emissions. For every approximately 1 nT increase in magnetic field intensity there was a decrease of ∼2 photons (equivalent to 10^-18 J). These results complement correlation studies and suggest there may be a conservation of energy between expression as magnetic fields that are subtracted from or added to the adjacent geomagnetic field and reciprocal changes in photon emissions when aggregates of cells within a specific volume of medium (water) adapt to new environments.
Rossi, G P; Seccia, T M; Miotto, D; Zucchetta, P; Cecchin, D; Calò, L; Puato, M; Motta, R; Caielli, P; Vincenzi, M; Ramondo, G; Taddei, S; Ferri, C; Letizia, C; Borghi, C; Morganti, A; Pessina, A C
2012-08-01
It is unclear whether revascularization of renal artery stenosis (RAS) by means of percutaneous renal angioplasty and stenting (PTRAS) is advantageous over optimal medical therapy. Hence, we designed a randomized clinical trial based on an optimized patient selection strategy and hard experimental endpoints. The primary objective of this study is to determine whether PTRAS is superior or equivalent to optimal medical treatment for preserving glomerular filtration rate (GFR) in the ischemic kidney as assessed by 99mTc-DTPA sequential renal scintiscan. Secondary objectives of this study are to establish whether the two treatments are equivalent in lowering blood pressure, preserving overall renal function and regressing target organ damage, preventing cardiovascular events and improving quality of life. The study is designed as a prospective, multicentre, randomized, unblinded two-arm study. Eligible patients will have clinical and angio-CT evidence of RAS. The inclusion criterion is RAS affecting the main renal artery or its major branches, either >70% or, if <70%, with post-stenotic dilatation. Renal function will be assessed with 99mTc-DTPA renal scintigraphy. Patients will be randomized to either arm considering both the resistance index value in the ischemic kidney and the presence of unilateral/bilateral stenosis. The primary experimental endpoint will be the GFR of the ischemic kidney, assessed as a quantitative variable by 99mTc-DTPA, and the loss of the ischemic kidney, defined as a categorical variable.
Nakajima, Takuya; Roggia, Murilo F; Noda, Yasuo; Ueta, Takashi
2015-09-01
To evaluate the effect of internal limiting membrane (ILM) peeling during vitrectomy for diabetic macular edema. MEDLINE, EMBASE, and CENTRAL were systematically reviewed. Eligible studies included randomized or nonrandomized studies that compared surgical outcomes of vitrectomy with or without ILM peeling for diabetic macular edema. The primary and secondary outcome measures were postoperative best-corrected visual acuity and central macular thickness. Meta-analysis of mean differences between vitrectomy with and without ILM peeling was performed using the inverse-variance method in a random-effects model. Five studies (7 articles) with 741 patients were eligible for analysis. Superiority (95% confidence interval) in postoperative best-corrected visual acuity in the ILM peeling group compared with the nonpeeling group was 0.04 (-0.05 to 0.13) logMAR (equivalent to 2.0 ETDRS letters, P = 0.37), and superiority in best-corrected visual acuity change in the ILM peeling group was 0.04 (-0.02 to 0.09) logMAR (equivalent to 2.0 ETDRS letters, P = 0.16). There was no significant difference in postoperative central macular thickness or central macular thickness reduction between the two groups. The visual acuity outcomes using pars plana vitrectomy with ILM peeling versus no ILM peeling were not significantly different. A larger randomized prospective study would be necessary to adequately address the effectiveness of ILM peeling on visual acuity outcomes.
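The inverse-variance random-effects pooling used here can be sketched with the DerSimonian-Laird estimator (one common choice for the between-study variance); the per-study effect sizes and standard errors below are hypothetical illustrations, not the data of the five included studies.

```python
import numpy as np

# Sketch of inverse-variance random-effects pooling (DerSimonian-Laird).
# The study-level mean differences (logMAR) and standard errors below are
# hypothetical, for illustration only.
yi = np.array([0.05, -0.10, 0.15, 0.01, 0.06])   # study effect sizes
se = np.array([0.04, 0.05, 0.06, 0.03, 0.05])    # study standard errors

wi = 1.0 / se**2                                  # fixed-effect weights
ybar = np.sum(wi * yi) / np.sum(wi)
Q = np.sum(wi * (yi - ybar)**2)                   # Cochran's Q heterogeneity
c = np.sum(wi) - np.sum(wi**2) / np.sum(wi)
tau2 = max(0.0, (Q - (len(yi) - 1)) / c)          # between-study variance

w_star = 1.0 / (se**2 + tau2)                     # random-effects weights
theta = np.sum(w_star * yi) / np.sum(w_star)      # pooled mean difference
se_theta = np.sqrt(1.0 / np.sum(w_star))
ci = (theta - 1.96 * se_theta, theta + 1.96 * se_theta)
```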
Random Assignment: Practical Considerations from Field Experiments.
ERIC Educational Resources Information Center
Dunford, Franklyn W.
1990-01-01
Seven qualitative issues associated with randomization that have the potential to weaken or destroy otherwise sound experimental designs are reviewed and illustrated via actual field experiments. Issue areas include ethics and legality, liability risks, manipulation of randomized outcomes, hidden bias, design intrusiveness, case flow, and…
A network approach to the geometric structure of shallow cloud fields
NASA Astrophysics Data System (ADS)
Glassmeier, F.; Feingold, G.
2017-12-01
The representation of shallow clouds and their radiative impact is one of the largest challenges for global climate models. While the bulk properties of cloud fields, including effects of organization, are a very active area of research, the potential of the geometric arrangement of cloud fields for the development of new parameterizations has hardly been explored. Self-organized patterns are particularly evident in the cellular structure of Stratocumulus (Sc) clouds so readily visible in satellite imagery. Inspired by similar patterns in biology and physics, we approach pattern formation in Sc fields from the perspective of natural cellular networks. Our network analysis is based on large-eddy simulations of open- and closed-cell Sc cases. We find the network structure to be neither random nor characteristic of natural convection. It is independent of macroscopic cloud field properties such as the Sc regime (open vs closed) and its typical length scale (boundary layer height). The latter is a consequence of entropy maximization (Lewis's Law with parameter 0.16). The cellular pattern is on average hexagonal, where non-6-sided cells occur according to a neighbor-number distribution variance of about 2. Reflecting the continuously renewing dynamics of Sc fields, large (many-sided) cells tend to neighbor small (few-sided) cells (Aboav-Weaire Law with parameter 0.9). These macroscopic network properties emerge independent of the Sc regime because the different processes governing the evolution of closed as compared to open cells correspond to topologically equivalent network dynamics. By developing a heuristic model, we show that open- and closed-cell dynamics can both be mimicked by versions of cell division and cell disappearance and are biased towards the expansion of smaller cells. This model offers for the first time a fundamental and universal explanation for the geometric pattern of Sc clouds. It may contribute to the development of advanced Sc parameterizations.
As an outlook, we discuss how a similar network approach can be applied to describe and quantify the geometric structure of shallow cumulus cloud fields.
Peter F. Ffolliott; Gerald J. Gottfried
2010-01-01
Field measurements and computer-based predictions suggest that the magnitudes of seasonal peak snowpack water equivalents are becoming smaller and that the timing of these peaks is occurring earlier in the snowmelt-runoff season of the western United States. These changes in peak snowpack conditions have often been attributed to a warming of the regional climate. To determine...
Radar Cross Section Prediction for Coated Perfect Conductors with Arbitrary Geometries.
1986-01-01
equivalent electric and magnetic surface currents as the desired unknowns. Triangular patch modelling is applied to the boundary surfaces. The method of...matrix inversion for the unknown surface current coefficients. Huygens' principle is again applied to calculate the scattered electric field produced...
New constraints on modelling the random magnetic field of the MW
NASA Astrophysics Data System (ADS)
Beck, Marcus C.; Beck, Alexander M.; Beck, Rainer; Dolag, Klaus; Strong, Andrew W.; Nielaba, Peter
2016-05-01
We extend the description of the isotropic and anisotropic random components of the small-scale magnetic field within the existing magnetic field model of the Milky Way from Jansson & Farrar, by including random realizations of the small-scale component. Using a magnetic-field power spectrum with Gaussian random fields, the NE2001 model for the thermal electrons and the Galactic cosmic-ray electron distribution from the current GALPROP model, we derive full-sky maps for the total and polarized synchrotron intensity as well as the Faraday rotation-measure distribution. While previous work assumed that small-scale fluctuations average out along the line of sight, or computed only ensemble averages of random fields, we show that these fluctuations need to be carefully taken into account. Comparing with observational data we obtain not only good agreement with 408 MHz total and WMAP7 22 GHz polarized intensity emission maps, but also an improved agreement with Galactic foreground rotation-measure maps and power spectra, whose amplitude and shape strongly depend on the parameters of the random field. We demonstrate that a correlation length of ≈22 pc (5 pc being a 5σ lower limit) is needed to match the slope of the observed power spectrum of Galactic foreground rotation-measure maps. Using multiple realizations also allows us to infer errors on individual observables. We find that previously used amplitudes for the random and anisotropic random magnetic field components need to be rescaled by factors of ≈0.3 and 0.6 to account for the new small-scale contributions. Our model predicts a rotation measure of -2.8±7.1 rad/m^2 and 4.4±11. rad/m^2 for the north and south Galactic poles respectively, in good agreement with observations. Applying our model to deflections of ultra-high-energy cosmic rays we infer a mean deflection of ≈3.5±1.1 degrees for 60 EeV protons arriving from Cen A.
Noise exposure estimates of urban MP3 player users.
Levey, Sandra; Levey, Tania; Fligor, Brian J
2011-02-01
To examine the sound level and duration of use of personal listening devices (PLDs) by 189 college students, ages 18-53 years, as they entered a New York City college campus, to determine whether noise exposure from PLDs was in excess of recommended exposure limits and what factors might influence exposure. Free-field equivalent sound levels from PLD headphones were measured on a mannequin with a calibrated sound level meter. Participants reported demographic information, whether they had just come off the subway, the type of PLD and earphones used, and the duration per day and days per week they used their PLDs. Based on measured free-field equivalent sound levels from PLD headphones and the reported PLD use, per day 58.2% of participants exceeded 85 dB A-weighted 8-hr equivalent sound levels (L_Aeq), and per week 51.9% exceeded 85 dB A-weighted 40-hr equivalent continuous sound levels (L_Awkn). The majority of PLD users exceeded recommended sound exposure limits, suggesting that they were at increased risk for noise-induced hearing loss. Analyses of the demographics of these participants and mode of transportation to campus failed to indicate any particular gender differences in PLD use or in mode of transportation influencing sound exposure.
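The energy-equivalent normalization behind the 8-hr L_Aeq and 40-hr L_Awkn criteria can be sketched as follows (the example levels and durations are illustrative):

```python
import math

# Energy-equivalent continuous level: the same A-weighted energy dose spread
# over a reference duration (8 h for daily L_Aeq, 40 h for weekly L_Awkn).
def leq(level_dba, exposure_hours, reference_hours):
    return level_dba + 10.0 * math.log10(exposure_hours / reference_hours)

# e.g. 2 h/day at 94 dBA carries the same daily dose as 8 h at about 88 dBA
daily = leq(94.0, 2.0, 8.0)
```

Halving the exposure time lowers the equivalent level by about 3 dB, which is why short listening sessions at high levels can still exceed the 85 dB criteria.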
The hopf algebra of vector fields on complex quantum groups
NASA Astrophysics Data System (ADS)
Drabant, Bernhard; Jurčo, Branislav; Schlieker, Michael; Weich, Wolfgang; Zumino, Bruno
1992-10-01
We derive the equivalence of the complex quantum enveloping algebra and the algebra of complex quantum vector fields for the Lie algebra types A_n, B_n, C_n, and D_n by factorizing the vector fields uniquely into a triangular and a unitary part and identifying them with the corresponding elements of the algebra of regular functionals.
Johansen, Mette Yun; MacDonald, Christopher Scott; Hansen, Katrine Bagge; Karstoft, Kristian; Christensen, Robin; Pedersen, Maria; Hansen, Louise Seier; Zacho, Morten; Wedell-Neergaard, Anne-Sophie; Nielsen, Signe Tellerup; Iepsen, Ulrik Wining; Langberg, Henning; Vaag, Allan Arthur; Pedersen, Bente Klarlund; Ried-Larsen, Mathias
2017-08-15
It is unclear whether a lifestyle intervention can maintain glycemic control in patients with type 2 diabetes. To test whether an intensive lifestyle intervention results in equivalent glycemic control compared with standard care and, secondarily, leads to a reduction in glucose-lowering medication in participants with type 2 diabetes. Randomized, assessor-blinded, single-center study within Region Zealand and the Capital Region of Denmark (April 2015-August 2016). Ninety-eight adult participants with non-insulin-dependent type 2 diabetes who were diagnosed for less than 10 years were included. Participants were randomly assigned (2:1; stratified by sex) to the lifestyle group (n = 64) or the standard care group (n = 34). All participants received standard care with individual counseling and standardized, blinded, target-driven medical therapy. Additionally, the lifestyle intervention included 5 to 6 weekly aerobic training sessions (duration 30-60 minutes), of which 2 to 3 sessions were combined with resistance training. The lifestyle participants received dietary plans aiming for a body mass index of 25 or less. Participants were followed up for 12 months. The primary outcome was change in hemoglobin A1c (HbA1c) from baseline to 12-month follow-up, and equivalence was prespecified by a CI margin of ±0.4% based on the intention-to-treat population. Superiority analysis was performed on the secondary outcome, reduction in glucose-lowering medication. Among 98 randomized participants (mean age, 54.6 years [SD, 8.9]; women, 47 [48%]; mean baseline HbA1c, 6.7%), 93 participants completed the trial. From baseline to 12-month follow-up, the mean HbA1c level changed from 6.65% to 6.34% in the lifestyle group and from 6.74% to 6.66% in the standard care group (mean between-group difference in change of -0.26% [95% CI, -0.52% to -0.01%]), not meeting the criteria for equivalence (P = .15).
Reduction in glucose-lowering medications occurred in 47 participants (73.5%) in the lifestyle group and 9 participants (26.4%) in the standard care group (difference, 47.1 percentage points [95% CI, 28.6-65.3]). There were 32 adverse events (most commonly musculoskeletal pain or discomfort and mild hypoglycemia) in the lifestyle group and 5 in the standard care group. Among adults with type 2 diabetes diagnosed for less than 10 years, a lifestyle intervention compared with standard care resulted in a change in glycemic control that did not reach the criterion for equivalence, but was in a direction consistent with benefit. Further research is needed to assess superiority, as well as generalizability and durability of findings. clinicaltrials.gov Identifier: NCT02417012.
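The equivalence criterion in the trial above is an interval-inclusion rule: equivalence is declared only if the entire 95% CI for the between-group difference lies within the prespecified ±0.4% margin. A minimal sketch of that decision rule (the function name is a convenience; the numbers are the figures quoted in the abstract, not a reanalysis of the trial data):

```python
def meets_equivalence(ci_low, ci_high, margin=0.4):
    """Equivalence holds only if the whole confidence interval for
    the between-group difference lies inside [-margin, +margin]."""
    return -margin <= ci_low and ci_high <= margin

# Figures from the abstract: difference -0.26% (95% CI, -0.52% to -0.01%).
# The lower CI bound falls outside -0.4%, so equivalence is not met.
print(meets_equivalence(-0.52, -0.01))  # → False
```

Note that a difference in a direction consistent with benefit (as here) can still fail an equivalence test, because the test asks whether the CI is fully contained in the margin, not whether the point estimate is favorable.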
Code of Federal Regulations, 2011 CFR
2011-07-01
... who are well adapted to intraocular lens implant or contact lens correction, visual field examinations.... For aphakic individuals not well adapted to contact lens correction or pseudophakic individuals not... meridians 22 1/2 degrees apart for each eye and indicate the Goldmann equivalent used. See Table III for the...
Code of Federal Regulations, 2010 CFR
2010-07-01
... who are well adapted to intraocular lens implant or contact lens correction, visual field examinations.... For aphakic individuals not well adapted to contact lens correction or pseudophakic individuals not... meridians 22 1/2 degrees apart for each eye and indicate the Goldmann equivalent used. See Table III for the...
NASA Technical Reports Server (NTRS)
Krisher, Timothy P.
1996-01-01
We consider the gravitational redshift effect measured by an observer in a local freely falling frame (LFFF) in the gravitational field of a massive body. For purely metric theories of gravity, the metric in a LFFF is expected to differ from that of flat spacetime by only "tidal" terms of order (GM/c^2 R)(r'/R)^2, where R is the distance of the observer from the massive body, and r' is the coordinate separation relative to the origin of the LFFF. A simple derivation shows that a violation of the equivalence principle for certain types of "clocks" could lead to a larger apparent redshift effect of order (1 - alpha)(GM/c^2 R)(r'/R), where alpha parametrizes the violation (alpha = 1 for purely metric theories, such as general relativity). Therefore, redshift experiments in a LFFF with separated clocks can provide a new null test of the equivalence principle. With presently available technology, it is possible to reach an accuracy of 0.01% in the gravitational field of the Sun using an atomic clock orbiting the Earth. A 1% test in the gravitational field of the galaxy would be possible if an atomic frequency standard were flown on a space mission to the outer solar system.
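The relative size of the two redshift terms above can be checked with a back-of-the-envelope calculation. The sketch below uses standard values for the solar gravitational parameter and an illustrative clock separation (geosynchronous scale); the separation and variable names are assumptions for illustration, not figures from the paper.

```python
# Order-of-magnitude sketch of the two redshift scalings:
# metric "tidal" term ~ (GM/c^2 R)(r'/R)^2 versus an EP-violating
# term ~ (1 - alpha)(GM/c^2 R)(r'/R), here with (1 - alpha) ~ 1.
GM_sun = 1.327e20    # m^3 s^-2, heliocentric gravitational parameter
c = 2.998e8          # m/s, speed of light
R = 1.496e11         # m, ~1 AU distance from the Sun
r_prime = 4.2e7      # m, illustrative clock separation

base = GM_sun / (c**2 * R)         # dimensionless potential, ~1e-8
tidal = base * (r_prime / R)**2    # purely metric "tidal" term
violation = base * (r_prime / R)   # EP-violating term, first order in r'/R

# The linear term dominates the quadratic one by a factor R/r'.
print(f"{base:.2e} {tidal:.2e} {violation:.2e}")
```

Because the EP-violating term is first order in r'/R while the metric term is second order, even a small violation produces a parametrically larger apparent redshift, which is what makes the null test sensitive.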
Reproduction of exact solutions of Lipkin model by nonlinear higher random-phase approximation
NASA Astrophysics Data System (ADS)
Terasaki, J.; Smetana, A.; Šimkovic, F.; Krivoruchenko, M. I.
2017-10-01
It is shown that the random-phase approximation (RPA) method with its nonlinear higher generalization, previously considered an approximation except in a very limited case, reproduces the exact solutions of the Lipkin model. The nonlinear higher RPA is based on an equation that is nonlinear in the eigenvectors and includes many-particle-many-hole components in the creation operator of the excited states. We demonstrate the exact character of the solutions analytically for particle number N = 2 and numerically for N = 8. This finding indicates that the nonlinear higher RPA is equivalent to the exact Schrödinger equation.
NASA Technical Reports Server (NTRS)
Argentiero, P.; Lowrey, B.
1977-01-01
The least squares collocation algorithm for estimating gravity anomalies from geodetic data is shown to be an application of the well known regression equations which provide the mean and covariance of a random vector (gravity anomalies) given a realization of a correlated random vector (geodetic data). It is also shown that the collocation solution for gravity anomalies is equivalent to the conventional least-squares-Stokes' function solution when the conventional solution utilizes properly weighted zero a priori estimates. The mathematical and physical assumptions underlying the least squares collocation estimator are described.
Spatial Distribution of Phase Singularities in Optical Random Vector Waves.
De Angelis, L; Alpeggiani, F; Di Falco, A; Kuipers, L
2016-08-26
Phase singularities are dislocations widely studied in optical fields as well as in other areas of physics. With experiment and theory we show that the vectorial nature of light affects the spatial distribution of phase singularities in random light fields. While in scalar random waves phase singularities exhibit spatial distributions reminiscent of particles in isotropic liquids, in vector fields their distribution for the different vector components becomes anisotropic due to the direct relation between propagation and field direction. By incorporating this relation in the theory for scalar fields by Berry and Dennis [Proc. R. Soc. A 456, 2059 (2000)], we quantitatively describe our experiments.
Spherical-earth gravity and magnetic anomaly modeling by Gauss-Legendre quadrature integration
NASA Technical Reports Server (NTRS)
Von Frese, R. R. B.; Hinze, W. J.; Braile, L. W.; Luca, A. J.
1981-01-01
Gauss-Legendre quadrature integration is used to calculate the anomalous potential of gravity and magnetic fields and their spatial derivatives on a spherical earth. The procedure involves representation of the anomalous source as a distribution of equivalent point gravity poles or point magnetic dipoles. The distribution of equivalent point sources is determined directly from the volume limits of the anomalous body. The variable limits of integration for an arbitrarily shaped body are obtained from interpolations performed on a set of body points which approximate the body's surface envelope. The versatility of the method is shown by its ability to treat physical property variations within the source volume as well as variable magnetic fields over the source and observation surface. Examples are provided which illustrate the capabilities of the technique, including a preliminary modeling of potential field signatures for the Mississippi embayment crustal structure at 450 km.
NASA Astrophysics Data System (ADS)
Yüksel, Yusuf
2018-05-01
We propose an atomistic model and present Monte Carlo simulation results on the influence of the FM/AF interface structure on the hysteresis mechanism and exchange bias behavior of a spin-valve-type FM/FM/AF magnetic junction. We simulate both perfectly flat and roughened interface structures with uncompensated interfacial AF moments. In order to simulate the rough-interface effect, we introduce the concept of a random exchange anisotropy field induced at the interface and acting on the interfacial AF spins. Our results show that different types of random-field distribution for this anisotropy field may lead to different exchange bias behavior.
Pathwise upper semi-continuity of random pullback attractors along the time axis
NASA Astrophysics Data System (ADS)
Cui, Hongyong; Kloeden, Peter E.; Wu, Fuke
2018-07-01
The pullback attractor of a non-autonomous random dynamical system is a time-indexed family of random sets, typically of the form {A_t(·)}_{t ∈ R} with each A_t(·) a random set. This paper is concerned with the nature of this time-dependence. It is shown that the upper semi-continuity of the mapping t ↦ A_t(ω) for each fixed ω is equivalent to the uniform compactness of the local union ∪_{s ∈ I} A_s(ω), where I ⊂ R is compact. Applied to a semi-linear degenerate parabolic equation with additive noise and a wave equation with multiplicative noise, we show that no additional conditions are required to prove the above locally uniform compactness and upper semi-continuity; in this sense, the two properties appear to be general properties satisfied by a large number of real models.
Storch, Eric A; Lewin, Adam B; Collier, Amanda B; Arnold, Elysse; De Nadai, Alessandro S; Dane, Brittney F; Nadeau, Joshua M; Mutch, P Jane; Murphy, Tanya K
2015-03-01
Examine the efficacy of a personalized, modular cognitive-behavioral therapy (CBT) protocol among early adolescents with high-functioning autism spectrum disorders (ASDs) and co-occurring anxiety relative to treatment as usual (TAU). Thirty-one children (11-16 years) with ASD and clinically significant anxiety were randomly assigned to receive 16 weekly CBT sessions or an equivalent duration of TAU. Participants were assessed by blinded raters at screening, posttreatment, and 1-month follow-up. Youth randomized to CBT demonstrated superior improvement across primary outcomes relative to those receiving TAU. Eleven of 16 adolescents randomized to CBT were treatment responders, versus 4 of 15 in the TAU condition. Gains were maintained at 1-month follow-up for CBT responders. These data extend findings of the promising effects of CBT in anxious youth with ASD to early adolescents. © 2014 Wiley Periodicals, Inc.
Random vibration analysis of space flight hardware using NASTRAN
NASA Technical Reports Server (NTRS)
Thampi, S. K.; Vidyasagar, S. N.
1990-01-01
During liftoff and ascent flight phases, the Space Transportation System (STS) and payloads are exposed to the random acoustic environment produced by engine exhaust plumes and aerodynamic disturbances. The analysis of payloads for randomly fluctuating loads is usually carried out using Miles' relationship. This approximation technique computes an equivalent load factor as a function of the natural frequency of the structure, the power spectral density of the excitation, and the magnification factor at resonance. Due to the assumptions inherent in Miles' equation, random load factors are often overestimated by this approach. In such cases, the estimates can be refined using alternate techniques such as time domain simulations or frequency domain spectral analysis. Described here is the use of NASTRAN to compute more realistic random load factors through spectral analysis. The procedure is illustrated using Spacelab Life Sciences (SLS-1) payloads, and certain unique features of this problem are described. The solutions are compared with Miles' results in order to establish trends of over- or under-prediction.
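Miles' relationship referenced above is the standard single-degree-of-freedom approximation for broadband random excitation: the RMS response follows from the natural frequency, the amplification factor at resonance, and the input PSD at that frequency. A minimal sketch (the parameter values are illustrative, not from the SLS-1 analysis):

```python
import math

def miles_grms(f_n, q, psd):
    """Miles' approximation for the RMS acceleration response of a
    single-degree-of-freedom system to broadband random excitation.
    f_n : natural frequency (Hz)
    q   : magnification (amplification) factor at resonance
    psd : input acceleration PSD at f_n (g^2/Hz), assumed flat nearby
    """
    return math.sqrt((math.pi / 2.0) * f_n * q * psd)

# Illustrative numbers: a 100 Hz mode, Q = 10, 0.04 g^2/Hz input.
grms = miles_grms(f_n=100.0, q=10.0, psd=0.04)
print(round(grms, 2))  # → 7.93
```

Design load factors are commonly taken as a multiple (e.g., 3-sigma) of this RMS value, which is one reason the approach tends toward conservatism compared with a full spectral analysis.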
A high speed implementation of the random decrement algorithm
NASA Technical Reports Server (NTRS)
Kiraly, L. J.
1982-01-01
The algorithm is useful for measuring net system damping levels in stochastic processes and for the development of equivalent linearized system response models. The algorithm works by summing together all subrecords which occur after a predefined threshold level is crossed. The random decrement signature is normally developed by scanning stored data and adding subrecords together. The high speed implementation of the random decrement algorithm exploits the digital character of sampled data and uses fixed record lengths of 2^n samples to greatly speed up the process. The contribution of each data point to the random decrement signature is calculated only once, and in the same sequence as the data were taken. A hardware implementation of the algorithm using random logic is diagrammed, and the process is shown to be limited only by the record size and the threshold crossing frequency of the sampled data. With a hardware cycle time of 200 ns and a 1024-point signature, a threshold crossing frequency of 5000 Hertz can be processed and a stably averaged signature presented in real time.
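The summing-and-averaging procedure described above can be sketched in software. The trigger condition (upward level crossing), record length, and test signal below are illustrative assumptions, not details of the hardware implementation in the report:

```python
import numpy as np

def random_decrement(x, threshold, length):
    """Average of all fixed-length subrecords that start where the
    signal crosses `threshold` upward (level-crossing trigger).
    Returns None if no complete subrecord is found."""
    starts = np.flatnonzero((x[:-1] < threshold) & (x[1:] >= threshold)) + 1
    segs = [x[s:s + length] for s in starts if s + length <= len(x)]
    if not segs:
        return None
    return np.mean(segs, axis=0)

# Illustrative test signal: a lightly damped sinusoid buried in noise.
# The averaged signature approximates the free-decay response, from
# which an equivalent damping level can be read off.
rng = np.random.default_rng(0)
t = np.arange(5000) * 1e-3
x = np.exp(-0.2 * t) * np.sin(2 * np.pi * 5 * t) + 0.3 * rng.standard_normal(t.size)
sig = random_decrement(x, threshold=0.5, length=256)
print(sig.shape)
```

Averaging aligns the deterministic decay common to every triggered subrecord while the zero-mean random content cancels, which is why the signature converges to the free-decay response as more crossings are accumulated.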
NASA Astrophysics Data System (ADS)
Kapotis, Efstratios; Kalkanis, George
2016-10-01
According to the principle of equivalence, it is impossible to distinguish between gravity and the inertial forces that a noninertial observer experiences in his own frame of reference. For example, consider an elevator in space that is being accelerated in one direction. An observer inside it would feel as if there were a gravitational force pulling him in the opposite direction. The same holds for a person in a stationary elevator located in Earth's gravitational field. No experiment enables us to distinguish between the accelerating elevator in space and the motionless elevator near Earth's surface. Strictly speaking, when the gravitational field is non-uniform (like Earth's), the equivalence principle holds only for experiments in elevators that are small enough and that take place over a short enough period of time (Fig. 1). However, performing an experiment in an elevator in space is impractical. On the other hand, it is easy to combine both forces on the same observer, i.e., gravity and a fictitious inertial force due to acceleration. Imagine an observer in an elevator that falls freely within Earth's gravitational field. The observer experiences gravity pulling him down, while it might be said that the inertial force due to the gravitational acceleration g pulls him up. Gravity and the inertial force cancel each other, (mis)leading the observer to believe there is no gravitational field. This study outlines our implementation of a self-construction idea that we have found useful in teaching introductory physics students (undergraduate, non-majors).
Monitoring the eye lens: which dose quantity is adequate?
NASA Astrophysics Data System (ADS)
Behrens, R.; Dietze, G.
2010-07-01
Recent epidemiological studies suggest a rather low dose threshold (below 0.5 Gy) for the induction of a cataract of the eye lens. Some other studies even assume that there is no threshold at all. Therefore, protection measures have to be optimized and current dose limits for the eye lens may be reduced in the future. The question of which personal dose equivalent quantity is appropriate for monitoring the dose to the eye lens arises from this situation. While in many countries dosemeters calibrated in terms of the dose equivalent quantity Hp(0.07) have been seen as being adequate for monitoring the dose to the eye lens, this might be questionable in the case of reduced dose limits and, thus, it may become necessary to use the dose equivalent quantity Hp(3) for this purpose. To discuss this question, the dose conversion coefficients for the equivalent dose of the eye lens (in the following eye lens dose) were determined for realistic photon and beta radiation fields and compared with the values of the corresponding conversion coefficients for the different operational quantities. The values obtained lead to the following conclusions: in radiation fields where most of the dose comes from photons, especially x-rays, it is appropriate to use dosemeters calibrated in terms of Hp(0.07) on a slab phantom, while in other radiation fields (dominated by beta radiation or unknown contributions of photon and beta radiation) dosemeters calibrated in terms of Hp(3) on a slab phantom should be used. As an alternative, dosemeters calibrated in terms of Hp(0.07) on a slab phantom could also be used; however, in radiation fields containing beta radiation with the end point energy near 1 MeV, an overestimation of the eye lens dose by up to a factor of 550 is possible.
Summer School Effects in a Randomized Field Trial
ERIC Educational Resources Information Center
Zvoch, Keith; Stevens, Joseph J.
2013-01-01
This field-based randomized trial examined the effect of assignment to and participation in summer school for two moderately at-risk samples of struggling readers. Application of multiple regression models to difference scores capturing the change in summer reading fluency revealed that kindergarten students randomly assigned to summer school…
Spin dynamics of random Ising chain in coexisting transverse and longitudinal magnetic fields
NASA Astrophysics Data System (ADS)
Liu, Zhong-Qiang; Jiang, Su-Rong; Kong, Xiang-Mu; Xu, Yu-Liang
2017-05-01
The dynamics of the random Ising spin chain in coexisting transverse and longitudinal magnetic fields is studied by the recursion method. Both the spin autocorrelation function and its spectral density are investigated by numerical calculations. It is found that the system's dynamical behavior depends on the deviation σ_J of the random exchange coupling between nearest-neighbor spins and the ratio r_lt of the longitudinal to the transverse field: (i) For r_lt = 0, the system undergoes two crossovers, from N independent spins precessing about the transverse magnetic field to a collective-mode behavior, and then to a central-peak behavior, as σ_J increases. (ii) For r_lt ≠ 0, the system may exhibit coexistence of a collective-mode behavior and a central-peak behavior. When σ_J is small (or large enough), the system undergoes a crossover from a coexistence behavior (or a disordered behavior) to a central-peak behavior as r_lt increases. (iii) Increasing σ_J depresses the effects of both the transverse and the longitudinal magnetic fields. (iv) The quantum random Ising chain in coexisting magnetic fields may exhibit under-damping and critical-damping characteristics simultaneously. These results indicate that changing the external magnetic fields may control and manipulate the dynamics of the random Ising chain.
Investigation of Workplace-like Calibration Fields via a Deuterium-Tritium (D-T) Neutron Generator.
Mozhayev, Andrey V; Piper, Roman K; Rathbone, Bruce A; McDonald, Joseph C
2017-04-01
Radiation survey meters and personal dosimeters are typically calibrated in reference neutron fields based on conventional radionuclide sources, such as americium-beryllium (Am-Be) or californium-252 (Cf), either unmodified or heavy-water moderated. However, these calibration neutron fields differ significantly from the workplace fields in which most of these survey meters and dosimeters are being used. Although some detectors are designed to yield an approximately dose-equivalent response over a particular neutron energy range, the response of other detectors is highly dependent upon neutron energy. This, in turn, can result in significant over- or underestimation of the intensity of neutron radiation and/or personal dose equivalent determined in the work environment. The use of simulated workplace neutron calibration fields that more closely match those present at the workplace could improve the accuracy of worker, and workplace, neutron dose assessment. This work provides an overview of the neutron fields found around nuclear power reactors and interim spent fuel storage installations based on available data. The feasibility of producing workplace-like calibration fields in an existing calibration facility has been investigated via Monte Carlo simulations. Several moderating assembly configurations, paired with a neutron generator using the deuterium-tritium (D-T) fusion reaction, were explored.
The Hard but Necessary Task of Gathering Order-One Effect Size Indices in Meta-Analysis
ERIC Educational Resources Information Center
Ortego, Carmen; Botella, Juan
2010-01-01
Meta-analysis of studies with two groups and two measurement occasions must employ order-one effect size indices to represent study outcomes. Especially with non-random assignment, non-equivalent control group designs, a statistical analysis restricted to post-treatment scores can lead to severely biased conclusions. The 109 primary studies…
ERIC Educational Resources Information Center
Changeiywo, Johnson M.; Wambugu, P. W.; Wachanga, S. W.
2011-01-01
Teaching method is a major factor that affects students' motivation to learn physics. This study investigated the effects of using mastery learning approach (MLA) on secondary school students' motivation to learn physics. Solomon four non-equivalent control group design under the quasi-experimental research method was used in which a random sample…
ERIC Educational Resources Information Center
Olaniyan, Ademola Olatide; Omosewo, Esther O.; Nwankwo, Levi I.
2015-01-01
This study was designed to investigate the Effect of Polya Problem-Solving Model on Senior School Students' Performance in Current Electricity. It was a quasi-experimental study with a non-randomized, non-equivalent pre-test post-test control group design. Three research questions were answered and three corresponding research hypotheses were tested…
ERIC Educational Resources Information Center
Owusu, K. A.; Monney, K. A.; Appiah, J. Y.; Wilmot, E. M.
2010-01-01
This study investigated the comparative efficiency of computer-assisted instruction (CAI) and the conventional teaching method in biology among senior high school students. A science class was selected in each of two randomly selected schools. The pretest-posttest non-equivalent quasi-experimental design was used. The students in the experimental group…
ERIC Educational Resources Information Center
Glazerman, Steven; Protik, Ali; Teh, Bing-ru; Bruch, Julie; Seftor, Neil
2012-01-01
This report describes the implementation and intermediate impacts of an intervention designed to provide incentives for a school district's highest-performing teachers to work in its lowest-achieving schools. The report is part of a larger study in which random assignment was used to form two equivalent groups of classrooms organized into teacher…
Ethnocentrism and Cultural Relativism in Children's Thinking About Foreign Values and Attitudes.
ERIC Educational Resources Information Center
McKenzie, Gary R.
In the study reported here, 133 subjects (Ss) were selected randomly from one elementary school. Ss were showed photographs of Bushmen performing daily activities and asked to predict whether a Bushman would prefer specific indigenous customs or their American equivalents, and then to justify the prediction. Scores for three types of predictions…
Effect of Feedback and Training on Utility Usage among Adolescent Delinquents.
ERIC Educational Resources Information Center
Sexton, Richard E.; And Others
The usefulness of providing specific information and a progress/feedback mechanism to control utility usage in community-based, halfway houses for dependent-neglected and for delinquent adolescents was explored. The investigation was carried out in a random sample of 12 houses of an Arizona facility, divided into equivalent groups of three houses.…
ERIC Educational Resources Information Center
Fournier, Jay C.; DeRubeis, Robert J.; Shelton, Richard C.; Hollon, Steven D.; Amsterdam, Jay D.; Gallop, Robert
2009-01-01
A recent randomized controlled trial found nearly equivalent response rates for antidepressant medications and cognitive therapy in a sample of moderate to severely depressed outpatients. In this article, the authors seek to identify the variables that were associated with response across both treatments as well as variables that predicted…
ERIC Educational Resources Information Center
Wang, Shudong; Wang, Ning; Hoadley, David
This study examined the comparability of scores on the National Nurses Aides Assessment Program (NNAAP) test across language and administration condition groups for calibration and validation samples that were randomly drawn from the same population. A sample of 20,568 candidate responses to 1 test form was used. This examination is given in…
Paper or Plastic? Data Equivalence in Paper and Electronic Diaries
ERIC Educational Resources Information Center
Green, Amie S.; Rafaeli, Eshkol; Bolger, Niall; Shrout, Patrick E.; Reis, Harry T.
2006-01-01
Concern has been raised about the lack of participant compliance in diary studies that use paper-and-pencil as opposed to electronic formats. Three studies explored the magnitude of compliance problems and their effects on data quality. Study 1 used random signals to elicit diary reports and found close matches to self-reported completion times,…
7 CFR 987.45 - Withholding restricted dates.
Code of Federal Regulations, 2010 CFR
2010-01-01
... purchase on the open market a volume of dates equivalent to the deferred obligation. Such bonding rate..., with the approval of the Secretary, minimum standards for inspection of field-run dates and appropriate..., satisfy all or any part of his obligation to withhold restricted dates by setting aside field-run dates or...
49 CFR Appendix E to Part 40 - SAP Equivalency Requirements for Certification Organizations
Code of Federal Regulations, 2011 CFR
2011-10-01
... formal education, in-service training, and professional development courses. Part of any professional counselor's development is participation in formal and non-formal education opportunities within the field... is important if the individual is to be considered a professional in the field of alcohol and drug...
Semenov, Alexander V; Elsas, Jan Dirk; Glandorf, Debora C M; Schilthuizen, Menno; Boer, Willem F
2013-01-01
To fulfill existing guidelines, applicants that aim to place their genetically modified (GM) insect-resistant crop plants on the market are required to provide data from field experiments that address the potential impacts of the GM plants on nontarget organisms (NTOs). Such data may be based on varied experimental designs. The recent EFSA guidance document for environmental risk assessment (2010) does not provide clear and structured suggestions that address the statistics of field trials on effects on NTOs. This review examines existing practices in GM plant field testing such as the way of randomization, replication, and pseudoreplication. Emphasis is placed on the importance of design features used for the field trials in which effects on NTOs are assessed. The importance of statistical power and the positive and negative aspects of various statistical models are discussed. Equivalence and difference testing are compared, and the importance of checking the distribution of experimental data is stressed to decide on the selection of the proper statistical model. While for continuous data (e.g., pH and temperature) classical statistical approaches – for example, analysis of variance (ANOVA) – are appropriate, for discontinuous data (counts) only generalized linear models (GLM) are shown to be efficient. There is no golden rule as to which statistical test is the most appropriate for any experimental situation. In particular, in experiments in which block designs are used and covariates play a role, GLMs should be used. Generic advice is offered that will help in both the setting up of field testing and the interpretation and data analysis of the data obtained in this testing. The combination of decision trees and a checklist for field trials, which are provided, will help in the interpretation of the statistical analyses of field trials and to assess whether such analyses were correctly applied.
We offer generic advice to risk assessors and applicants that will help in both the setting up of field testing and the interpretation and data analysis of the data obtained in field testing. PMID:24567836
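One common form of the equivalence testing contrasted with difference testing in the review above is the two one-sided tests (TOST) procedure. The sketch below is a minimal normal-approximation version, offered as an illustration rather than the review's prescribed method; the numbers are made up:

```python
from statistics import NormalDist

def tost_normal(diff, se, margin, alpha=0.05):
    """Two one-sided tests (TOST), normal approximation: equivalence
    is concluded only if both one-sided nulls (diff <= -margin and
    diff >= +margin) are rejected at level alpha."""
    z_crit = NormalDist().inv_cdf(1 - alpha)
    z_lower = (diff + margin) / se   # tests H0: diff <= -margin
    z_upper = (margin - diff) / se   # tests H0: diff >= +margin
    return z_lower > z_crit and z_upper > z_crit

# Illustrative: a small observed difference with a tight standard
# error within a margin of 0.5 passes the equivalence test.
print(tost_normal(diff=0.05, se=0.1, margin=0.5))  # → True
```

The asymmetry with difference testing is worth noting: failing to reject a null of no difference is not evidence of equivalence, whereas TOST makes equivalence itself the alternative hypothesis.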
Cosmic Rays in Intermittent Magnetic Fields
DOE Office of Scientific and Technical Information (OSTI.GOV)
Shukurov, Anvar; Seta, Amit; Bushby, Paul J.
The propagation of cosmic rays in turbulent magnetic fields is a diffusive process driven by the scattering of the charged particles by random magnetic fluctuations. Such fields are usually highly intermittent, consisting of intense magnetic filaments and ribbons surrounded by weaker, unstructured fluctuations. Studies of cosmic-ray propagation have largely overlooked intermittency, instead adopting Gaussian random magnetic fields. Using test particle simulations, we calculate cosmic-ray diffusivity in intermittent, dynamo-generated magnetic fields. The results are compared with those obtained from non-intermittent magnetic fields having identical power spectra. The presence of magnetic intermittency significantly enhances cosmic-ray diffusion over a wide range of particle energies. We demonstrate that the results can be interpreted in terms of a correlated random walk.
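The correlated-random-walk interpretation invoked above can be illustrated with a toy 1D persistent walk, in which each step repeats the previous direction with some probability: step-to-step correlation enhances the estimated diffusivity relative to a memoryless walk. This is a sketch under those simplifying assumptions, not the paper's test-particle simulation:

```python
import numpy as np

def diffusivity_1d(persist, n_walkers=2000, n_steps=2000, seed=1):
    """Estimate the diffusivity D from the mean squared displacement
    of a 1D correlated random walk in which each unit step repeats
    the previous direction with probability `persist`."""
    rng = np.random.default_rng(seed)
    step = np.where(rng.random(n_walkers) < 0.5, 1, -1)
    pos = np.zeros(n_walkers)
    for _ in range(n_steps):
        flip = rng.random(n_walkers) >= persist
        step = np.where(flip, -step, step)
        pos += step
    return np.mean(pos**2) / (2 * n_steps)   # D ~ MSD / (2 t)

d_corr = diffusivity_1d(0.9)   # persistent ("correlated") walk
d_iso = diffusivity_1d(0.5)    # memoryless walk, D -> 1/2 in step units
print(d_corr > d_iso)  # → True
```

For step correlation a = 2p - 1 the long-time diffusivity scales as (1 + a)/(1 - a) times the memoryless value, so persistence of 0.9 enhances D by roughly an order of magnitude in this toy model.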
Introduction to the Special Issue.
ERIC Educational Resources Information Center
Petrosino, Anthony
2003-01-01
Introduces the articles of this special issue focusing on randomized field trials in criminology. In spite of the overall lack of randomized field trials in criminology, some agencies and individuals are able to mount an impressive number of field trials, and these articles focus on their experiences. (SLD)
Codes over an infinite family of rings: Equivalence and invariant ring
NASA Astrophysics Data System (ADS)
Irwansyah, Muchtadi-Alamsyah, Intan; Muchlis, Ahmad; Barra, Aleams; Suprijanto, Djoko
2016-02-01
In this paper, we study codes over the ring B_k = 𝔽_{p^r}[v_1, …, v_k]/(v_i^2 = v_i, ∀ i = 1, …, k). In particular, we focus on two topics: a characterization of the condition for two codes over B_k to be equivalent, using a Gray map into codes over the finite field 𝔽_{p^r}, and finding generators for the invariant ring of the Hamming weight enumerator of Euclidean self-dual codes over B_k.
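For readers unfamiliar with such Gray/CRT-style maps, a minimal sketch of the smallest case k = 1, p = 2, r = 1 may help (an illustration, not the paper's construction). Elements of B_1 = 𝔽_2[v]/(v^2 = v) are written a + bv, and evaluation at v = 0 and v = 1 gives a ring isomorphism onto 𝔽_2 × 𝔽_2, which is why codes over B_1 correspond to codes over the finite field.

```python
# Elements of B1 = F2[v]/(v^2 = v) represented as pairs (a, b), meaning a + b*v.
def mul(x, y):
    a, b = x
    c, d = y
    # (a + b*v)(c + d*v) = ac + (ad + bc + bd)*v, using v^2 = v
    return ((a * c) % 2, (a * d + b * c + b * d) % 2)

def add(x, y):
    return ((x[0] + y[0]) % 2, (x[1] + y[1]) % 2)

def phi(x):
    """CRT-style map B1 -> F2 x F2: a + b*v |-> (value at v=0, value at v=1)."""
    a, b = x
    return (a % 2, (a + b) % 2)

elements = [(a, b) for a in (0, 1) for b in (0, 1)]
ring_hom = all(
    phi(mul(x, y)) == tuple((p * q) % 2 for p, q in zip(phi(x), phi(y)))
    and phi(add(x, y)) == tuple((p + q) % 2 for p, q in zip(phi(x), phi(y)))
    for x in elements for y in elements
)
# ring_hom is True: phi respects both + and *, so structure over B1
# transfers faithfully to pairs of coordinates over F2.
```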
Bistability in dual-frequency nematic liquid crystals
NASA Astrophysics Data System (ADS)
Palto, S. P.; Barnik, M. I.
2007-03-01
Different modes of bistable switching in liquid crystals with frequency inversion of the dielectric anisotropy sign are discussed. The study is performed both by numerical simulation and experimentally. It is shown that dual-frequency driving can be used effectively to control switching between topologically equivalent and non-equivalent director field distributions. Experimental results on the temperature performance of dual-frequency switching are presented, together with possible driving methods for reducing energy consumption and expanding the operating temperature range.
Aspects of AdS/CFT: Conformal Deformations and the Goldstone Equivalence Theorem
NASA Astrophysics Data System (ADS)
Cantrell, Sean Andrew
The AdS/CFT correspondence provides a map from the states of theories situated in AdS_{d+1} to those in dual conformal theories in a d-dimensional space. The correspondence can be used to establish certain universal properties of theories in one space by examining the behavior of general objects in the other. In this thesis, we develop various formal aspects of AdS/CFT. Conformal deformations manifest in the AdS/CFT correspondence as boundary conditions on the AdS field. Heretofore, double-trace deformations have been the primary focus in this context. To better understand multitrace deformations, we revisit the relationship between the generating AdS partition function for a free bulk theory and the boundary CFT partition function subject to arbitrary conformal deformations. The procedure leads us to a formalism that constructs bulk fields from boundary operators. We independently replicate the holographic RG flow narrative and go on to interpret the brane used to regulate the AdS theory as a renormalization scale. The scale dependence of the dilatation spectrum of a boundary theory in the presence of general deformations can thus be understood on the AdS side using this formalism. The Goldstone equivalence theorem allows one to relate scattering amplitudes of massive gauge fields to those of scalar fields in the limit of large scattering energies. We generalize this theorem within the framework of the AdS/CFT correspondence. First, we obtain an expression of the equivalence theorem in terms of correlation functions of creation and annihilation operators by using an AdS wave-function approach to the AdS/CFT dictionary. It is shown that the divergence of the non-conserved conformal current dual to the bulk gauge field is approximately primary when computing correlators for theories in which the masses of all the exchanged particles are sufficiently large. The results are then generalized to higher-spin fields.
We then go on to generalize the theorem using conformal blocks in two and four-dimensional CFTs. We show that when the scaling dimensions of the exchanged operators are large compared to both their spins and the dimension of the current, the conformal blocks satisfy an equivalence theorem.
Altenburg, Wytske A; ten Hacken, Nick H T; Bossenbroek, Linda; Kerstjens, Huib A M; de Greef, Mathieu H G; Wempe, Johan B
2015-01-01
We were interested in the effects of a physical activity (PA) counselling programme in three groups of COPD patients from general practice (primary care), outpatient clinic (secondary care) and pulmonary rehabilitation (PR). In this randomized controlled trial, 155 COPD patients (102 males; median (IQR) age 62 (54-69) y; FEV1 predicted 60 (40-75)%) were assigned to a 12-week physical activity counselling programme or usual care. Physical activity (pedometer (Yamax SW200) and metabolic equivalents), exercise capacity (6-min walking distance) and quality of life (Chronic Respiratory Questionnaire and Clinical COPD Questionnaire) were assessed at baseline and after 3 and 15 months. A significant difference between the counselling and usual care groups in daily steps (803 steps, p = 0.001) and daily physical activity (2214 steps + equivalents, p = 0.001) from 0 to 3 months was found in the total group, as well as in the outpatient (1816 steps, 2616 steps + equivalents, both p = 0.007) and PR (758 steps, 2151 steps + equivalents, both p = 0.03) subgroups. From 0 to 15 months, no differences in physical activity were found. However, when patients with baseline physical activity > 10,000 steps per day (n = 8), who were already sufficiently active, were excluded, a significant long-term effect of the counselling programme on daily physical activity existed in the total group (p = 0.02). Differences in exercise capacity and quality of life were found only from 0 to 3 months, in the outpatient subgroup. Our PA counselling programme effectively enhances PA level in COPD patients after three months. Patients who were sedentary at baseline still benefit after 15 months. ClinicalTrials.gov registration number: NCT00614796. Copyright © 2014. Published by Elsevier Ltd.
Redfern, Julie; Adedoyin, Rufus Adesoji; Ofori, Sandra; Anchala, Raghupathy; Ajay, Vamadevan S; De Andrade, Luciano; Zelaya, Jose; Kaur, Harparkash; Balabanova, Dina; Sani, Mahmoud U
2016-01-01
Background Prevention and optimal management of hypertension in the general population is paramount to the achievement of the World Heart Federation (WHF) goal of reducing premature cardiovascular disease (CVD) mortality by 25% by the year 2025 and widespread access to good quality antihypertensive medicines is a critical component for achieving the goal. Despite research and evidence relating to other medicines such as antimalarials and antibiotics, there is very little known about the quality of generic antihypertensive medicines in low-income and middle-income countries. The aim of this study was to determine the physicochemical equivalence (percentage of active pharmaceutical ingredient, API) of generic antihypertensive medicines available in the retail market of a developing country. Methods An observational design will be adopted, which includes literature search, landscape assessment, collection and analysis of medicine samples. To determine physicochemical equivalence, a multistage sampling process will be used, including (1) identification of the 2 most commonly prescribed classes of antihypertensive medicines prescribed in Nigeria; (2) identification of a random sample of 10 generics from within each of the 2 most commonly prescribed classes; (3) a geographical representative sampling process to identify a random sample of 24 retail outlets in Nigeria; (4) representative sample purchasing, processing to assess the quality of medicines, storage and transport; and (5) assessment of the physical and chemical equivalence of the collected samples compared to the API in the relevant class. In total, 20 samples from each of 24 pharmacies will be tested (total of 480 samples). Discussion Availability of and access to quality antihypertensive medicines globally is therefore a vital strategy needed to achieve the WHF 25×25 targets. However, there is currently a scarcity of knowledge about the quality of antihypertensive medicines available in developing countries. 
Such information is important both for enforcement and for ensuring the quality of antihypertensive medicines. PMID:28588941
Shin, Donghoon; Kim, Youngdoe; Kang, Jungwon; Gauliard, Anke; Fuhr, Rainard
2016-01-01
Aims SB4 has been developed as a biosimilar of etanercept. The primary objective of the present study was to demonstrate the pharmacokinetic (PK) equivalence between SB4 and European Union‐sourced etanercept (EU‐ETN), SB4 and United States‐sourced etanercept (US‐ETN), and EU‐ETN and US‐ETN. The safety and immunogenicity were also compared between the treatments. Methods This was a single‐blind, three‐part, crossover study in 138 healthy male subjects. In each part, 46 subjects were randomized at a 1:1 ratio to receive a single 50 mg subcutaneous dose of the treatments (part A: SB4 or EU‐ETN; part B: SB4 or US‐ETN; and part C: EU‐ETN or US‐ETN) in period 1, followed by the crossover treatment in period 2 according to their assigned sequences. PK equivalence between the treatments was determined using the standard equivalence margin of 80–125%. Results The geometric least squares means ratios of AUCinf, AUClast and Cmax were 99.04%, 98.62% and 103.71% (part A: SB4 vs. EU‐ETN); 101.09%, 100.96% and 104.36% (part B: SB4 vs. US‐ETN); and 100.51%, 101.27% and 103.29% (part C: EU‐ETN vs. US‐ETN), respectively, and the corresponding 90% confidence intervals were completely contained within the limits of 80–125%. The incidence of treatment‐emergent adverse events was comparable, and the incidence of antidrug antibodies was lower with SB4 than with the reference products. Conclusions The present study demonstrated PK equivalence between SB4 and EU‐ETN, SB4 and US‐ETN, and EU‐ETN and US‐ETN in healthy male subjects. SB4 was well tolerated, with a lower immunogenicity profile and a similar safety profile compared with those of the reference products. PMID:26972584
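The 80–125% PK equivalence criterion used in the study can be sketched numerically. This stdlib-only Python fragment computes a geometric mean ratio and its 90% confidence interval from hypothetical paired AUC values, using a large-sample normal quantile where a real bioequivalence analysis would use a t quantile from the crossover ANOVA.

```python
import math
from statistics import NormalDist, mean, stdev

def gmr_90ci(test_auc, ref_auc):
    """Geometric mean ratio and 90% CI from paired AUCs, computed on the
    log scale with a large-sample normal approximation (illustrative only)."""
    d = [math.log(t) - math.log(r) for t, r in zip(test_auc, ref_auc)]
    m, se = mean(d), stdev(d) / math.sqrt(len(d))
    z = NormalDist().inv_cdf(0.95)            # two-sided 90% interval
    return math.exp(m), math.exp(m - z * se), math.exp(m + z * se)

# Hypothetical AUC values (same units) for test and reference products
test = [101.0, 98.5, 103.2, 99.8, 100.5, 102.1, 97.9, 100.9]
ref  = [100.0, 99.0, 102.0, 100.5, 101.0, 101.5, 98.5, 100.0]
gmr, lo, hi = gmr_90ci(test, ref)
# Equivalence is declared only if the whole 90% CI sits inside [0.80, 1.25]
bioequivalent = 0.80 <= lo and hi <= 1.25
```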
Generalized master equation via aging continuous-time random walks.
Allegrini, Paolo; Aquino, Gerardo; Grigolini, Paolo; Palatella, Luigi; Rosa, Angelo
2003-11-01
We discuss the problem of the equivalence between continuous-time random walk (CTRW) and generalized master equation (GME). The walker, making instantaneous jumps from one site of the lattice to another, resides in each site for extended times. The sojourn times have a distribution density psi(t) that is assumed to be an inverse power law with power index mu. We assume that the Onsager principle is fulfilled, and we use this assumption to establish a complete equivalence between the GME and the Montroll-Weiss CTRW. We prove that this equivalence is confined to the case where psi(t) is an exponential. We argue that this is so because the Montroll-Weiss CTRW, as recently proved by Barkai [E. Barkai, Phys. Rev. Lett. 90, 104101 (2003)], is nonstationary, thereby implying aging, while the Onsager principle is valid only in the case of fully aged systems. The case of a Poisson distribution of sojourn times is the only one with no aging associated with it, and consequently with no need to establish special initial conditions to fulfill the Onsager principle. We consider the case of a dichotomous fluctuation, and we prove that the Onsager principle is fulfilled for any form of regression to equilibrium provided that the stationary condition holds true. We set the stationary condition on both the CTRW and the GME, thereby creating a condition of total equivalence, regardless of the nature of the waiting-time distribution. As a consequence of this procedure we create a GME that is a bona fide master equation, in spite of being non-Markov. We note that the memory kernel of the GME affords information on the interaction between the system of interest and its bath. The Poisson case yields a bath with infinitely fast fluctuations. We argue that departing from the Poisson form has the effect of creating a condition of infinite memory and that these results might be useful in shedding light on the problem of how to unravel non-Markov quantum master equations.
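The qualitative difference between exponential (Poisson) and inverse-power-law sojourn times is easy to see numerically. The sketch below (illustrative, not from the paper) draws sojourn times from psi(t) = (mu - 1) t^(-mu), t >= 1, by inverse-transform sampling: for mu > 2 the sample mean converges to (mu - 1)/(mu - 2), while for mu < 2 the mean diverges and samples are dominated by rare, extremely long waits, the heavy-tailed regime in which aging effects arise.

```python
import random

def sample_sojourn(mu, n, seed=1):
    """Draw n sojourn times from psi(t) = (mu - 1) * t**(-mu), t >= 1,
    via inverse-transform sampling: t = u**(-1/(mu - 1)) for u ~ U(0, 1)."""
    rng = random.Random(seed)
    return [rng.random() ** (-1.0 / (mu - 1.0)) for _ in range(n)]

n = 100_000
mean_thin  = sum(sample_sojourn(3.5, n)) / n  # finite mean: (mu-1)/(mu-2) = 5/3
mean_heavy = sum(sample_sojourn(1.5, n)) / n  # infinite mean: a few huge waits dominate
```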
Dark adaptation of toad rod photoreceptors following small bleaches.
Leibrock, C S; Reuter, T; Lamb, T D
1994-11-01
The recovery of toad rod photoreceptors, following exposure to intense lights that bleached 0.02-3% of the rhodopsin, has been investigated using the suction pipette technique. The post-bleach period was accompanied by reduced flash sensitivity, accelerated kinetics, and spontaneous fluctuations (noise). The power spectrum of the fluctuations had substantially the form expected for the random occurrence of single-photon events, and the noise could therefore be expressed as a "photon-noise equivalent intensity". From the level of desensitization at any time, the after-effect of the bleach could also be expressed in terms of a "desensitization-equivalent intensity", and this was found to be at least a factor of 20 times higher than the noise-equivalent intensity at the corresponding time. Our results indicate that a bleach induces two closely-related phenomena: (a) a process indistinguishable from the effect of real light, and (b) another process which desensitizes and accelerates the response in the same way that light does, but without causing photon-like noise. We propose a mechanism underlying these processes.
Gavin, Timothy P; Van Meter, Jessica B; Brophy, Patricia M; Dubis, Gabriel S; Potts, Katlin N; Hickner, Robert C
2012-02-01
It has been proposed that field-based tests (FT) used to estimate functional threshold power (FTP) result in a power output (PO) equivalent to the PO at lactate threshold (LT). However, anecdotal evidence from regional cycling teams tested for LT in our laboratory suggested that PO at LT underestimated FTP. It was hypothesized that estimated FTP is not equivalent to PO at LT. The LT and estimated FTP were measured in 7 trained male competitive cyclists (VO2max = 65.3 ± 1.6 ml O2·kg(-1)·min(-1)). The FTP was estimated from an 8-minute FT and compared with PO at LT using 2 methods: LT(Δ1), a rise in blood lactate of 1 mmol·L(-1) or greater in response to an increase in workload, and LT(4.0), a blood lactate of 4.0 mmol·L(-1). The estimated FTP was equivalent to PO at LT(4.0) and greater than PO at LT(Δ1). VO2max explained 93% of the variance in individual PO during the 8-minute FT. When the 8-minute FT PO was expressed relative to maximal PO from the VO2max test (individual exercise performance), VO2max explained 64% of the variance in individual exercise performance. The PO at LT was not related to 8-minute FT PO. In conclusion, FTP estimated from an 8-minute FT is equivalent to PO at LT if LT(4.0) is used but is not equivalent for all methods of LT determination, including LT(Δ1).
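The two LT definitions compared in the study can be made concrete with a short sketch. The incremental-test numbers below are hypothetical; the code finds LT(Δ1) as the first stage with a rise of at least 1 mmol/L over the previous stage, and LT(4.0) by linear interpolation to 4.0 mmol/L. On such data LT(4.0) falls at a higher power than LT(Δ1), consistent with the abstract's finding.

```python
def lt_delta1(power, lactate):
    """Power at the first stage whose lactate rises >= 1.0 mmol/L
    above the previous stage (the LT(delta-1) criterion)."""
    for i in range(1, len(lactate)):
        if lactate[i] - lactate[i - 1] >= 1.0:
            return power[i]
    return None

def lt_4mmol(power, lactate):
    """Power at 4.0 mmol/L, linearly interpolated between bracketing stages."""
    for i in range(1, len(lactate)):
        if lactate[i - 1] < 4.0 <= lactate[i]:
            frac = (4.0 - lactate[i - 1]) / (lactate[i] - lactate[i - 1])
            return power[i - 1] + frac * (power[i] - power[i - 1])
    return None

# Hypothetical incremental test: stage power (W) and blood lactate (mmol/L)
power   = [100, 150, 200, 250, 300, 350]
lactate = [1.0, 1.2, 1.5, 2.2, 3.6, 6.0]
p_d1 = lt_delta1(power, lactate)  # first >= 1.0 mmol/L jump
p_40 = lt_4mmol(power, lactate)   # interpolated 4.0 mmol/L power, higher than p_d1
```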
Scott, Andrew; Kotecha, Aachal; Bunce, Catey; Balidis, Miltos; Garway-Heath, David F; Miller, Michael H; Wormald, Richard
2011-03-01
To test the hypothesis that neodymium:yttrium-aluminum-garnet (Nd:YAG) laser peripheral iridotomy (LPI) significantly reduces the incidence of conversion from pigment dispersion syndrome (PDS) with ocular hypertension (OHT) to pigmentary glaucoma (PG). Prospective, randomized, controlled 3-year trial. One hundred sixteen eyes of 116 patients with PDS and OHT. Patients were assigned randomly either to Nd:YAG LPI or to a control group (no laser). The primary outcome measure was conversion to PG within 3 years, based on full-threshold visual field (VF) analysis using the Ocular Hypertension Treatment Study criteria. Secondary outcome measures were whether eyes required topical antiglaucoma medications during the study period and the time to conversion or medication. Fifty-seven patients were randomized to undergo laser treatment and 59 were randomized to no laser (controls). Age, gender, spherical equivalent refraction, and intraocular pressure at baseline were similar between groups. Outcome data were available for 105 (90%) of recruited subjects, 52 in the laser treatment group and 53 in the no laser treatment group. Patients were followed up for a median of 35.9 months (range, 10-36 months) in the laser arm and 35.9 months (range, 1-36 months) in the control arm. Eight eyes (15%) in the laser group and 3 eyes (6%) in the control group converted to glaucoma in the study period. The proportion of eyes started on medical treatment was similar in the 2 groups: 8 eyes (15%) in the laser group and 9 eyes (17%) in the control group. Survival analyses showed no evidence of any difference in time to VF progression or commencement of topical therapy between the 2 groups. Cataract extraction was performed on 1 patient in the laser group and in 1 patient in the control group during the study period (laser eye at 18 months; control eye at 34 months). 
This study suggests that there was no benefit of Nd:YAG LPI in preventing progression from PDS with OHT to PG within 3 years of follow-up. Copyright © 2011 American Academy of Ophthalmology. Published by Elsevier Inc. All rights reserved.
Development of a Body Shield for Small Animal PET System to Reduce Random and Scatter Coincidences
NASA Astrophysics Data System (ADS)
Wada, Yasuhiro; Yamamoto, Seiichi; Watanabe, Yasuyoshi
2015-02-01
For small animal positron emission tomography (PET) research using high radioactivity, such as dynamic studies, the resulting high random coincidence rate of the system degrades image quality. The random coincidence rate is increased not only by the gamma photons from inside the axial field of view (axial FOV) of the PET system but also by those from outside the axial FOV. For brain imaging in small animal studies, significant interference is observed from gamma photons emitted from the body. Single gamma photons from the body enter the axial FOV and increase the random and scatter coincidences. Shielding against the gamma photons from outside the axial FOV would improve the image quality. For this purpose, we developed a body shield for a small animal PET system, the microPET Primate 4-ring system, and evaluated its performance. The body shield is made of 9-mm-thick lead and surrounds most of a rat's body. We evaluated the effectiveness of the body shield using a head phantom and a body phantom with a radioactivity concentration ratio of 1:2 and a maximum total activity of approximately 250 MBq. The random coincidence rate was dramatically decreased, to 1/10, and the noise equivalent count rate (NECR) was increased 6 times with an activity of 7 MBq in the head phantom. The true count rate was increased by 35% owing to the decrease in system dead time. The average scatter fraction was decreased to 1/2.5 with the body shield. Count rate measurements of rats were also conducted with an injection activity of approximately 25 MBq of [C-11]N,N-dimethyl-2-(2-amino-4-cyanophenylthio)benzylamine ([C-11]DASB) and approximately 70 and 310 MBq of 2-deoxy-2-(F-18)fluoro-D-glucose ([F-18]FDG). Using the body shield, [F-18]FDG images of rats were improved by increasing the amount of radioactivity injected. The body shield designed for small animal PET systems is a promising tool for improving image quality and quantitation accuracy in small animal molecular imaging research.
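The noise equivalent count rate quoted above is commonly computed as NECR = T^2 / (T + S + R), where T, S and R are the true, scatter and random coincidence rates. The snippet below applies this standard formula to hypothetical count rates (not the paper's measured values) to show how suppressing randoms and scatter boosts NECR.

```python
def necr(trues, scatters, randoms):
    """Noise equivalent count rate, NECR = T^2 / (T + S + R).
    (A common variant divides by T + S + 2R for delayed-window subtraction.)"""
    return trues ** 2 / (trues + scatters + randoms)

# Hypothetical count rates (kcps) without and with a body shield:
# shield cuts randoms to 1/10 and scatter to 1/2.5, and trues rise 35%
base     = necr(trues=100.0, scatters=60.0, randoms=400.0)
shielded = necr(trues=135.0, scatters=24.0, randoms=40.0)
gain = shielded / base  # NECR improves several-fold in this toy example
```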
Mechanistic equivalent circuit modelling of a commercial polymer electrolyte membrane fuel cell
NASA Astrophysics Data System (ADS)
Giner-Sanz, J. J.; Ortega, E. M.; Pérez-Herranz, V.
2018-03-01
Electrochemical impedance spectroscopy (EIS) has been widely used in the fuel cell field because it allows the different physicochemical processes that affect fuel cell performance to be deconvolved. Typically, EIS spectra are modelled using electric equivalent circuits. In this work, EIS spectra of an individual cell of a commercial PEM fuel cell stack were obtained experimentally. The goal was to obtain a mechanistic electric equivalent circuit in order to model the experimental EIS spectra. A mechanistic electric equivalent circuit is a semiempirical modelling technique based on obtaining an equivalent circuit that not only correctly fits the experimental spectra, but whose elements have a mechanistic physical meaning. To obtain such a circuit, 12 different models with defined physical meanings were proposed. These equivalent circuits were fitted to the obtained EIS spectra. A two-step selection process was performed. In the first step, a group of 4 circuits was preselected out of the initial list of 12, based on general fitting indicators such as the determination coefficient and the fitted-parameter uncertainty. In the second step, one of the 4 preselected circuits was selected on account of the consistency of the fitted parameter values with the physical meaning of each parameter.
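As a minimal illustration of the kind of equivalent circuit fitted to EIS spectra (a generic Randles-type example, not one of the paper's 12 candidate models), the snippet below evaluates the complex impedance of a series resistance R_s with a parallel R_ct–C_dl branch and checks the two real-axis intercepts of its Nyquist arc. All component values are illustrative.

```python
def randles_z(omega, r_s, r_ct, c_dl):
    """Impedance of a minimal Randles-type circuit: R_s in series with
    the parallel combination of charge-transfer resistance R_ct and
    double-layer capacitance C_dl."""
    z_c = 1.0 / (1j * omega * c_dl)      # capacitor impedance
    z_par = (r_ct * z_c) / (r_ct + z_c)  # parallel branch R_ct || C_dl
    return r_s + z_par

r_s, r_ct, c_dl = 0.01, 0.1, 0.5         # ohm, ohm, farad (illustrative)
z_hi = randles_z(1e6, r_s, r_ct, c_dl)   # high frequency: capacitor shorts R_ct
z_lo = randles_z(1e-4, r_s, r_ct, c_dl)  # low frequency: capacitor blocks
# |z_hi| -> R_s and |z_lo| -> R_s + R_ct: the two Nyquist intercepts that
# a mechanistic fit maps onto ohmic and charge-transfer resistances.
```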
Experimental investigation on ignition schemes of partially covered cavities in a supersonic flow
NASA Astrophysics Data System (ADS)
Cai, Zun; Sun, Mingbo; Wang, Hongbo; Wang, Zhenguo
2016-04-01
In this study, ignition schemes of a partially covered cavity in a scramjet combustor were investigated under inflow conditions of Ma = 2.1 with stagnation pressure P0 = 0.7 MPa and stagnation temperature T0 = 947 K. The results reveal that the ignition scheme of the partially covered cavity has a great impact on the ignition and flame stabilization process. There always exists an optimized global equivalence ratio for a fixed ignition scheme, and the optimized global equivalence ratio for ignition in the partially covered cavity is lower than that of the uncovered cavity. For tandem dual cavities, ignition in the partially covered cavity could be enhanced by optimizing the global equivalence ratio. However, ignition in the partially covered cavity would be exacerbated by further increasing the global equivalence ratio. The global equivalence ratio and the jet penetration height are strongly coupled with the combustion flow field. For multi-cavities, fuel injected on the opposite side could hardly be ignited after ignition in the partially covered cavity, even with the optimized global equivalence ratio. It is possible to realize ignition enhancement in the partially covered cavity by optimizing the global equivalence ratio, but this is not beneficial for thrust increment during the steady combustion process.
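The global equivalence ratio that organizes these results is the actual fuel-to-air ratio normalized by its stoichiometric value: phi = (F/A) / (F/A)_stoich, with phi < 1 lean and phi > 1 rich. A one-line sketch with illustrative numbers (ethylene in air, stoichiometric F/A of roughly 0.068 by mass; these are not the paper's operating points):

```python
def equivalence_ratio(fuel_mass_flow, air_mass_flow, f_a_stoich):
    """Global equivalence ratio: phi = (F/A)_actual / (F/A)_stoichiometric."""
    return (fuel_mass_flow / air_mass_flow) / f_a_stoich

# Illustrative: 0.5 units of ethylene per 14.7 units of air by mass,
# with a stoichiometric fuel/air mass ratio of about 0.068 for C2H4 in air
phi_lean = equivalence_ratio(0.5, 14.7, 0.068)  # phi ~ 0.5: a lean condition
```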
Bourke, Levi; Blaikie, Richard J
2017-12-01
Dielectric waveguide resonant underlayers are employed in ultra-high NA interference photolithography to effectively double the depth of field. Generally a single high refractive index waveguiding layer is employed. Here multilayer Herpin effective medium methods are explored to develop equivalent multilayer waveguiding layers. Herpin equivalent resonant underlayers are shown to be suitable replacements provided at least one layer within the Herpin trilayer supports propagating fields. In addition, a method of increasing the intensity incident upon the photoresist using resonant overlayers is also developed. This method is shown to greatly enhance the intensity within the photoresist making the use of thicker, safer, non-absorbing, low refractive index matching liquids potentially suitable for large-scale applications.
Energy Flux Positivity and Unitarity in Conformal Field Theories
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kulaxizi, Manuela; Parnachev, Andrei
2011-01-07
We show that in most conformal field theories the condition of energy flux positivity, proposed by Hofman and Maldacena, is equivalent to the absence of ghosts. At finite temperature and large energy and momenta, the two-point functions of the stress energy tensor develop lightlike poles. The residues of the poles can be computed, as long as the only spin-two conserved current, which appears in the stress energy tensor operator-product expansion and acquires a nonvanishing expectation value at finite temperature, is the stress energy tensor. The condition for the residues to stay positive and the theory to remain ghost-free is equivalent to the condition of positivity of energy flux.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Link, M.H.; Hall, B.R.
1989-03-01
Thirty-five turbidite sandstone bodies from the Moco T and Webster reservoir zones were delineated for enhanced oil recovery projects in Mobil's MOCO FEE property, south Midway-Sunset field. The recognition of these sand bodies is based on mappable geometries determined from wireline log correlations, log character, core facies, reservoir characteristics, and comparison to nearby age-equivalent outcrops. These turbidite sands are composed of unconsolidated arkosic late Miocene sandstones (Stevens equivalent, Monterey Formation). They were deposited normal to paleoslope and trend southwest-northeast in an intraslope basin. Reservoir quality in the sandstone is very good, with average porosities of 33% and permeabilities of 1 darcy.