NASA Astrophysics Data System (ADS)
Cardona, Javier Fernando; García Bonilla, Alba Carolina; Tomás García, Rogelio
2017-11-01
This article shows that the effect of all quadrupole errors present in an interaction region (IR) with low β* can be modeled by an equivalent magnetic kick, which can be estimated from action and phase jumps found in beam position data. This equivalent kick is used to find the strengths that certain normal and skew quadrupoles located in the IR must have to make an effective correction in that region. Additionally, averaging techniques to reduce noise in beam position data, which allow precise estimates of equivalent kicks, are presented and mathematically justified. The complete procedure is tested with simulated data obtained from MAD-X and with 2015 LHC experimental data. The analyses performed on the experimental data indicate that the strengths of the IR skew quadrupole correctors and normal quadrupole correctors can be estimated within a 10% uncertainty. Finally, the effect of IR corrections on β* is studied, and a correction scheme that returns this parameter to its design value is proposed.
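As a rough illustration of the equivalent-kick idea in this abstract, the sketch below shows how an integrated gradient error acting as a thin-lens kick (Δx' = -k1L·x) produces a jump in the Courant-Snyder action, which is the observable the paper exploits. All numbers (Twiss parameters, coordinates, error strength) are hypothetical.

```python
# Hypothetical Twiss parameters at the location of the quadrupole error.
beta, alpha = 30.0, -1.5
gamma = (1 + alpha**2) / beta

def action(x, xp):
    # Courant-Snyder invariant: 2J = gamma*x^2 + 2*alpha*x*xp + beta*xp^2
    return 0.5 * (gamma * x**2 + 2 * alpha * x * xp + beta * xp**2)

x, xp = 2e-3, 1e-5   # particle coordinates at the error (m, rad), assumed
k1L = 1e-3           # integrated gradient error (1/m), assumed

J_before = action(x, xp)
J_after = action(x, xp - k1L * x)  # thin-lens quadrupole kick: dxp = -k1L*x
print(J_before, J_after)           # the action jump locates/sizes the error
```

In the paper's method this action jump, together with the corresponding phase jump, is inverted to estimate the kick and hence the corrector strengths.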
NASA Astrophysics Data System (ADS)
Dowell, David H.; Zhou, Feng; Schmerge, John
2018-01-01
Weak, rotated magnetic and radio frequency quadrupole fields in electron guns and injectors can couple the beam's horizontal and vertical motion, introduce correlations between otherwise orthogonal transverse momenta, and reduce the beam brightness. This paper discusses two important sources of coupled transverse dynamics common to most electron injectors. The first is quadrupole focusing followed by beam rotation in a solenoid, and the second coupling comes from a skewed high-power rf coupler or cavity port which has a rotated rf quadrupole field. It is shown that a dc quadrupole field can correct for both types of couplings and exactly cancel their emittance growths. The degree of cancellation of the rf skew quadrupole emittance is limited by the electron bunch length. Analytic expressions are derived and compared with emittance simulations and measurements.
Coupling Correction and Beam Dynamics at Ultralow Vertical Emittance in the ALS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steier, Christoph; Robin, D.; Wolski, A.
2008-03-17
For synchrotron light sources and for damping rings of linear colliders, it is important to be able to minimize the vertical emittance and to correct the spurious vertical dispersion. This allows one to maximize the brightness and/or the luminosity. A commonly used tool to measure the skew error distribution is the analysis of orbit response matrices using codes like LOCO. Using the new Matlab version of LOCO and 18 newly installed power supplies for individual skew quadrupoles at the ALS, the emittance ratio could be reduced below 0.1% at 1.9 GeV, yielding a vertical emittance of about 5 pm. At those very low emittances, additional effects like intrabeam scattering become more important, potentially limiting the minimum emittance for machines like the damping rings of linear colliders.
ONLINE MINIMIZATION OF VERTICAL BEAM SIZES AT APS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sun, Yipeng
In this paper, online minimization of vertical beam sizes along the APS (Advanced Photon Source) storage ring is presented. A genetic algorithm (GA) was developed and employed for online optimization in the APS storage ring. A total of 59 families of skew quadrupole magnets were employed as knobs to adjust the coupling and the vertical dispersion in the APS storage ring. Starting from initially zero-current skew quadrupoles, small vertical beam sizes along the APS storage ring were achieved in a short optimization time of one hour. The optimization results from this method are briefly compared with the one from LOCO (Linear Optics from Closed Orbits) response matrix correction.
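To make the optimization loop concrete, here is a minimal GA sketch in the spirit of the abstract: 59 "skew quadrupole" knobs are evolved with elitism and Gaussian mutation against a surrogate objective standing in for the measured vertical beam size. The objective, population size, and mutation scale are all invented for illustration; the real optimizer evaluates beam-size monitors on the machine.

```python
import numpy as np

rng = np.random.default_rng(0)

# Surrogate objective standing in for a measured vertical beam size:
# a quadratic bowl over 59 hypothetical skew-quadrupole currents.
target = rng.normal(0, 1, 59)
def beam_size(k):
    return 1.0 + np.sum((k - target) ** 2)

pop = rng.normal(0, 1, (40, 59))           # initial knob settings
for gen in range(200):
    fitness = np.array([beam_size(k) for k in pop])
    elite = pop[np.argsort(fitness)[:10]]   # keep the 10 best settings
    children = elite[rng.integers(0, 10, 30)] + rng.normal(0, 0.1, (30, 59))
    pop = np.vstack([elite, children])      # elitism + Gaussian mutation

best = min(beam_size(k) for k in pop)
print(best)
```

Elitism guarantees the best setting is never lost between generations, which matters when each "evaluation" is a live machine measurement.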
Decoupling correction system in RHIC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Trbojevic, D.; Tepikian, S.; Peggs, S.
A global linear decoupling in the Relativistic Heavy Ion Collider (RHIC) is going to be performed with three families of skew quadrupoles. The operating horizontal and vertical betatron tunes in RHIC will be separated by one unit: ν_x = 28.19 and ν_y = 29.18. The linear coupling is corrected by minimizing the tune splitting Δν, i.e., the off-diagonal matrix m (defined by Edwards and Teng). The skew quadrupole correction system is located close to each of the six interaction regions. A detailed study of the system is presented using the TEAPOT accelerator physics code. © 1994 American Institute of Physics
Coupling control and optimization at the Canadian Light Source
NASA Astrophysics Data System (ADS)
Wurtz, W. A.
2018-06-01
We present a detailed study using the skew quadrupoles in the Canadian Light Source storage ring lattice to control the parameters of a coupled lattice. We calculate the six-dimensional beam envelope matrix and use it to produce a variety of objective functions for optimization using the Multi-Objective Particle Swarm Optimization (MOPSO) algorithm. MOPSO produces a number of skew quadrupole configurations that we apply to the storage ring. We use the X-ray synchrotron radiation diagnostic beamline to image the beam and we make measurements of the vertical dispersion and beam lifetime. We observe satisfactory agreement between the measurements and simulations. These methods can be used to adjust phase space coupling in a rational way and have applications to fine-tuning the vertical emittance and Touschek lifetime and measuring the gas scattering lifetime.
Single-pass beam measurements for the verification of the LHC magnetic model
DOE Office of Scientific and Technical Information (OSTI.GOV)
Calaga, R.; Giovannozzi, M.; Redaelli, S.
2010-05-23
During the 2009 LHC injection tests, the polarities and effects of specific quadrupole and higher-order magnetic circuits were investigated. A set of magnet circuits had been selected for detailed investigation based on a number of criteria. On- or off-momentum difference trajectories launched via appropriate orbit correctors for varying strength settings of the magnet circuits under study - e.g. main, trim and skew quadrupoles; sextupole families and spool piece correctors; skew sextupoles, octupoles - were compared with predictions from various optics models. These comparisons made it possible to confirm or update the relative polarity conventions used in the optics model and the accelerator control system, as well as to verify the correct powering and assignment of magnet families. Results from measurements in several LHC sectors are presented.
Study on compensation algorithm of head skew in hard disk drives
NASA Astrophysics Data System (ADS)
Xiao, Yong; Ge, Xiaoyu; Sun, Jingna; Wang, Xiaoyan
2011-10-01
In hard disk drives (HDDs), head skew among multiple heads is pre-calibrated during the manufacturing process. In real applications with high storage capacity, the head stack may become tilted due to environmental changes, resulting in additional head skew errors from the outer diameter (OD) to the inner diameter (ID). If these errors are below the preset threshold for power-on recalibration, the current strategy may not detect them, and drive performance under severe environments will be degraded. In this paper, in-the-field compensation of small DC head skew variation across the stroke is proposed, based on a zone table. Test results demonstrate its effectiveness in reducing observer error and enhancing drive performance via accurate prediction of DC head skew.
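A zone table of this kind can be realized as a small per-zone calibration table with interpolation between zone boundaries. The sketch below is a toy version of that idea; the zone boundaries, skew values, and units are all hypothetical, not taken from the paper.

```python
# Minimal sketch of a per-zone DC head-skew table with piecewise-linear
# interpolation between calibration zones (all numbers hypothetical).
zone_tracks = [0, 25000, 50000, 75000, 100000]  # zone boundaries, OD -> ID
zone_skew = [0.0, 1.2, 2.1, 2.7, 3.0]           # calibrated skew offsets

def skew_at(track):
    # Interpolate the calibrated DC skew offset at an arbitrary track.
    for i in range(len(zone_tracks) - 1):
        lo, hi = zone_tracks[i], zone_tracks[i + 1]
        if lo <= track <= hi:
            frac = (track - lo) / (hi - lo)
            return zone_skew[i] + frac * (zone_skew[i + 1] - zone_skew[i])
    raise ValueError("track outside calibrated stroke")

print(skew_at(37500))  # halfway between zones 1 and 2 -> 1.65
```

In-the-field compensation then amounts to updating the table entries when a drift in DC skew is observed, rather than waiting for a full power-on recalibration.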
Dynamic Modeling from Flight Data with Unknown Time Skews
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.
2016-01-01
A method for estimating dynamic model parameters from flight data with unknown time skews is described and demonstrated. The method combines data reconstruction, nonlinear optimization, and equation-error parameter estimation in the frequency domain to accurately estimate both dynamic model parameters and the relative time skews in the data. Data from a nonlinear F-16 aircraft simulation with realistic noise, instrumentation errors, and arbitrary time skews were used to demonstrate the approach. The approach was further evaluated using flight data from a subscale jet transport aircraft, where the measured data were known to have relative time skews. Comparison of modeling results obtained from time-skewed and time-synchronized data showed that the method accurately estimates both dynamic model parameters and relative time skew parameters from flight data with unknown time skews.
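A simple stand-in for the relative time-skew estimation described above is to locate the peak of the cross-correlation between two measurement channels. This is not the paper's frequency-domain method, just a hedged illustration of recovering an unknown skew from data; the signals and skew value are synthetic.

```python
import numpy as np

# Estimate the relative time skew between two sampled signals from the
# peak of their cross-correlation (synthetic data, assumed parameters).
dt = 0.01                        # sample interval (s)
t = np.arange(0, 10, dt)
true_skew = 0.13                 # seconds; channel b lags channel a
a = np.sin(2 * np.pi * 0.7 * t)
b = np.sin(2 * np.pi * 0.7 * (t - true_skew))

corr = np.correlate(a - a.mean(), b - b.mean(), mode="full")
lag = np.argmax(corr) - (len(t) - 1)  # lag in samples (negative: b lags)
est_skew = -lag * dt
print(est_skew)
```

With real flight data the skew is estimated jointly with the model parameters, since the channels are not simple shifted copies of each other.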
A Superstrong Adjustable Permanent Magnet for the Final Focus Quadrupole in a Linear Collider
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mihara, T.
A superstrong permanent magnet quadrupole (PMQ) was fabricated and tested. It has an integrated strength of 28.5 T with an overall length of 10 cm and a 7 mm bore radius. The final focus quadrupole of a linear collider needs a variable focal length. This can be obtained by slicing the magnet into pieces along the beamline direction and rotating these slices. But this technique may lead to movement of the magnetic center and introduction of a skew quadrupole component when the strength is varied. A "double ring structure" can ease these effects. A second prototype PMQ, containing thermal compensation materials and with a double ring structure, has been fabricated. A worm gear is selected as the mechanical rotation scheme because the double ring structure needs a large torque to rotate the magnets. The structure of the second prototype PMQ is shown.
Nuclear Quadrupole Moments and Nuclear Shell Structure
DOE R&D Accomplishments Database
Townes, C. H.; Foley, H. M.; Low, W.
1950-06-23
Describes a simple model, based on nuclear shell considerations, which leads to the proper behavior of known nuclear quadrupole moments, although predictions of the magnitudes of some quadrupole moments are seriously in error.
Dissociating error-based and reinforcement-based loss functions during sensorimotor learning
McGregor, Heather R.; Mohatarem, Ayman
2017-01-01
It has been proposed that the sensorimotor system uses a loss (cost) function to evaluate potential movements in the presence of random noise. Here we test this idea in the context of both error-based and reinforcement-based learning. In a reaching task, we laterally shifted a cursor relative to true hand position using a skewed probability distribution. This skewed probability distribution had its mean and mode separated, allowing us to dissociate the optimal predictions of an error-based loss function (corresponding to the mean of the lateral shifts) and a reinforcement-based loss function (corresponding to the mode). We then examined how the sensorimotor system uses error feedback and reinforcement feedback, in isolation and combination, when deciding where to aim the hand during a reach. We found that participants compensated differently to the same skewed lateral shift distribution depending on the form of feedback they received. When provided with error feedback, participants compensated based on the mean of the skewed noise. When provided with reinforcement feedback, participants compensated based on the mode. Participants receiving both error and reinforcement feedback continued to compensate based on the mean while repeatedly missing the target, despite receiving auditory, visual and monetary reinforcement feedback that rewarded hitting the target. Our work shows that reinforcement-based and error-based learning are separable and can occur independently. Further, when error and reinforcement feedback are in conflict, the sensorimotor system heavily weights error feedback over reinforcement feedback. PMID:28753634
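The dissociation in this study hinges on using a shift distribution whose mean and mode differ. The sketch below builds such a distribution and extracts both statistics; the gamma distribution and its parameters are my own illustrative choice, not the distribution used in the experiment.

```python
import numpy as np

rng = np.random.default_rng(1)

# A skewed lateral-shift distribution whose mean and mode are separated,
# mirroring the dissociation used in the task (parameters hypothetical).
shifts = rng.gamma(shape=2.0, scale=1.0, size=200_000)  # mode = 1, mean = 2

mean_shift = shifts.mean()
hist, edges = np.histogram(shifts, bins=200)
mode_shift = 0.5 * (edges[np.argmax(hist)] + edges[np.argmax(hist) + 1])

# An error-based learner should compensate for the mean; a
# reinforcement-based learner should compensate for the mode.
print(mean_shift, mode_shift)
```

Because the two optima predict different aim points, the observed compensation reveals which loss function the sensorimotor system is using.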
The formulation and estimation of a spatial skew-normal generalized ordered-response model.
DOT National Transportation Integrated Search
2016-06-01
This paper proposes a new spatial generalized ordered response model with skew-normal kernel error terms and an associated estimation method. It contributes to the spatial analysis field by allowing a flexible and parametric skew-normal distribut...
Evaluation and Compensation of Detector Solenoid Effects in the JLEIC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wei, Guohui; Morozov, Vasiliy; Zhang, Yuhong
2016-05-01
The JLEIC detector solenoid has a strong 3 T field in the IR area, and its tails extend over a range of several meters. One of the main effects of the solenoid field is coupling of the horizontal and vertical betatron motions, which must be corrected in order to preserve the dynamical stability and the beam spot size match at the IP. Additional effects include influence on the orbit and dispersion caused by the angle between the solenoid axis and the beam orbit. The field also affects ion polarization by breaking the figure-8 spin symmetry, and crab dynamics further complicates the picture. All of these effects have to be compensated or accounted for. The proposed correction system is equivalent to the rotating frame method; however, it does not involve physical rotation of elements. It provides local compensation of the solenoid effects independently for each side of the IR, and it includes skew quadrupoles, dipole correctors and anti-solenoids to cancel perturbations to the orbit and linear optics. The skew quadrupoles and final focusing quadrupoles together generate an effect equivalent to an adjustable rotation angle to accomplish the decoupling task. Details of all of the correction systems are presented.
NASA Astrophysics Data System (ADS)
Reis, D. S.; Stedinger, J. R.; Martins, E. S.
2005-10-01
This paper develops a Bayesian approach to analysis of a generalized least squares (GLS) regression model for regional analyses of hydrologic data. The new approach allows computation of the posterior distributions of the parameters and the model error variance using a quasi-analytic approach. Two regional skew estimation studies illustrate the value of the Bayesian GLS approach for regional statistical analysis of a shape parameter and demonstrate that regional skew models can be relatively precise with effective record lengths in excess of 60 years. With Bayesian GLS the marginal posterior distribution of the model error variance and the corresponding mean and variance of the parameters can be computed directly, thereby providing a simple but important extension of the regional GLS regression procedures popularized by Tasker and Stedinger (1989), which is sensitive to the likely values of the model error variance when it is small relative to the sampling error in the at-site estimator.
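The core of the GLS regression described above is weighting each site by the inverse of its total variance (model error plus at-site sampling error). The sketch below shows that estimator with the model error variance treated as known; in the Bayesian approach it is instead integrated over its posterior. All data and parameter values here are synthetic.

```python
import numpy as np

rng = np.random.default_rng(2)

# GLS regional regression sketch: site skew estimates y carry known
# sampling variances V plus a common model-error variance s2_model.
n = 50
X = np.column_stack([np.ones(n), rng.uniform(0, 1, n)])  # intercept + trait
beta_true = np.array([0.3, -0.5])                        # assumed values
V = rng.uniform(0.05, 0.2, n)      # at-site sampling variances
s2_model = 0.02                    # model error variance (assumed known here)
y = X @ beta_true + rng.normal(0, np.sqrt(s2_model + V))

W = np.diag(1.0 / (s2_model + V))  # GLS weights: inverse total variance
beta_hat = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
cov_beta = np.linalg.inv(X.T @ W @ X)  # parameter covariance under the model
print(beta_hat)
```

The paper's point is that when s2_model is small relative to the sampling variances V, the fit is sensitive to its value, which motivates averaging over its posterior rather than plugging in a point estimate.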
Location tests for biomarker studies: a comparison using simulations for the two-sample case.
Scheinhardt, M O; Ziegler, A
2013-01-01
Gene, protein, or metabolite expression levels are often non-normally distributed, heavy tailed and contain outliers. Standard statistical approaches may fail as location tests in this situation. In three Monte-Carlo simulation studies, we aimed at comparing the type I error levels and empirical power of standard location tests and three adaptive tests [O'Gorman, Can J Stat 1997; 25: 269-279; Keselman et al., Brit J Math Stat Psychol 2007; 60: 267-293; Szymczak et al., Stat Med 2013; 32: 524-537] for a wide range of distributions. We simulated two-sample scenarios using the g-and-k-distribution family to systematically vary tail length and skewness with identical and varying variability between groups. All tests kept the type I error level when groups did not vary in their variability. The standard non-parametric U-test performed well in all simulated scenarios. It was outperformed by the two non-parametric adaptive methods in case of heavy tails or large skewness. Most tests did not keep the type I error level for skewed data in the case of heterogeneous variances. The standard U-test was a powerful and robust location test for most of the simulated scenarios except for very heavy tailed or heavily skewed data, and it is thus to be recommended except for these cases. The non-parametric adaptive tests were powerful for both normal and non-normal distributions under sample variance homogeneity. But when sample variances differed, they did not keep the type I error level. The parametric adaptive test lacks power for skewed and heavy tailed distributions.
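The type I error simulations in this study can be illustrated with a minimal Monte-Carlo check of a location test under a skewed null. Here I use a hand-rolled Welch statistic with a normal critical value on exponential data, which is a simplification of the study's design (it used the g-and-k family and several tests); all settings are my own.

```python
import numpy as np

rng = np.random.default_rng(3)

# Monte-Carlo type I error of a two-sample Welch-type test under a
# heavily skewed null (both groups exponential with the same location).
def welch_t(x, y):
    se = np.sqrt(x.var(ddof=1) / len(x) + y.var(ddof=1) / len(y))
    return (x.mean() - y.mean()) / se

n_sim, n = 5000, 50
rejections = 0
for _ in range(n_sim):
    x = rng.exponential(1.0, n)
    y = rng.exponential(1.0, n)
    if abs(welch_t(x, y)) > 1.96:  # nominal 5% two-sided test
        rejections += 1

rate = rejections / n_sim
print(rate)  # empirical type I error rate
```

With equal variability between groups the empirical level stays near the nominal 5%, consistent with the study's finding that most tests hold their level under variance homogeneity.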
NASA Astrophysics Data System (ADS)
Husain, Riyasat; Ghodke, A. D.
2017-08-01
Estimation and correction of the optics errors in an operational storage ring is always vital to achieve the design performance. To achieve this task, the most suitable and widely used technique, called linear optics from closed orbit (LOCO) is used in almost all storage ring based synchrotron radiation sources. In this technique, based on the response matrix fit, errors in the quadrupole strengths, beam position monitor (BPM) gains, orbit corrector calibration factors etc. can be obtained. For correction of the optics, suitable changes in the quadrupole strengths can be applied through the driving currents of the quadrupole power supplies to achieve the desired optics. The LOCO code has been used at the Indus-2 storage ring for the first time. The estimation of linear beam optics errors and their correction to minimize the distortion of linear beam dynamical parameters by using the installed number of quadrupole power supplies is discussed. After the optics correction, the performance of the storage ring is improved in terms of better beam injection/accumulation, reduced beam loss during energy ramping, and improvement in beam lifetime. It is also useful in controlling the leakage in the orbit bump required for machine studies or for commissioning of new beamlines.
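At its core, a LOCO-style fit is a linear least-squares problem: the measured change in the orbit response matrix is modeled as a Jacobian times the unknown quadrupole strength errors (plus BPM gains and corrector calibrations, omitted here). The sketch below uses a random synthetic Jacobian purely for illustration; a real fit computes it from the lattice model.

```python
import numpy as np

rng = np.random.default_rng(4)

# LOCO-style linear fit sketch: recover quadrupole strength errors from
# the change they produce in the response matrix, via least squares.
n_obs, n_quads = 400, 12
J = rng.normal(0, 1, (n_obs, n_quads))         # hypothetical sensitivities
dk_true = rng.normal(0, 1e-3, n_quads)         # true gradient errors
dR = J @ dk_true + rng.normal(0, 1e-4, n_obs)  # measured change + noise

dk_fit = np.linalg.lstsq(J, dR, rcond=None)[0]
err = np.max(np.abs(dk_fit - dk_true))
print(err)
```

The fitted strength changes are then applied (with opposite sign) through the quadrupole power supplies, which is the correction step described in the abstract.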
The statistical properties and possible causes of polar motion prediction errors
NASA Astrophysics Data System (ADS)
Kosek, Wieslaw; Kalarus, Maciej; Wnek, Agnieszka; Zbylut-Gorska, Maria
2015-08-01
The pole coordinate data predictions from different contributors to the Earth Orientation Parameters Combination of Prediction Pilot Project (EOPCPPP) were studied to determine the statistical properties of polar motion forecasts by looking at the time series of differences between them and the future IERS pole coordinate data. The mean absolute errors, standard deviations, skewness and kurtosis of these differences were computed together with their error bars as a function of prediction length. The ensemble predictions show slightly smaller mean absolute errors and standard deviations; however, their skewness and kurtosis values are similar to those for the predictions from the individual contributors. The skewness and kurtosis make it possible to check whether these prediction differences satisfy a normal distribution. The kurtosis values diminish with prediction length, which means that the probability distribution of these prediction differences becomes more platykurtic than leptokurtic. Non-zero skewness values result from the oscillating character of these differences for particular prediction lengths, which can be due to the irregular change of the annual oscillation phase in the joint fluid (atmospheric + ocean + land hydrology) excitation functions. The variations of the annual oscillation phase, computed by the combination of the Fourier transform band-pass filter and the Hilbert transform from pole coordinate data as well as from pole coordinate model data obtained from fluid excitations, are in good agreement.
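The normality diagnostics used above reduce to sample skewness and excess kurtosis of the prediction-difference series. A minimal implementation, applied to synthetic stand-ins for such series (one normal, one skewed), looks like this:

```python
import numpy as np

rng = np.random.default_rng(5)

# Sample skewness and excess kurtosis of prediction-difference series;
# values near zero for both indicate an approximately normal distribution.
def skewness(d):
    z = (d - d.mean()) / d.std()
    return np.mean(z ** 3)

def excess_kurtosis(d):
    z = (d - d.mean()) / d.std()
    return np.mean(z ** 4) - 3.0  # platykurtic < 0 < leptokurtic

normal_diffs = rng.normal(0, 1, 100_000)       # synthetic "well-behaved" errors
skewed_diffs = rng.exponential(1, 100_000)     # synthetic skewed errors
print(skewness(normal_diffs), excess_kurtosis(normal_diffs))
print(skewness(skewed_diffs), excess_kurtosis(skewed_diffs))
```

Falling excess kurtosis with prediction length, as reported in the abstract, means the difference distribution drifts from leptokurtic toward platykurtic.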
Social contact patterns can buffer costs of forgetting in the evolution of cooperation.
Stevens, Jeffrey R; Woike, Jan K; Schooler, Lael J; Lindner, Stefan; Pachur, Thorsten
2018-06-13
Analyses of the evolution of cooperation often rely on two simplifying assumptions: (i) individuals interact equally frequently with all social network members and (ii) they accurately remember each partner's past cooperation or defection. Here, we examine how more realistic, skewed patterns of contact-in which individuals interact primarily with only a subset of their network's members-influence cooperation. In addition, we test whether skewed contact patterns can counteract the decrease in cooperation caused by memory errors (i.e. forgetting). Finally, we compare two types of memory error that vary in whether forgotten interactions are replaced with random actions or with actions from previous encounters. We use evolutionary simulations of repeated prisoner's dilemma games that vary agents' contact patterns, forgetting rates and types of memory error. We find that highly skewed contact patterns foster cooperation and also buffer the detrimental effects of forgetting. The type of memory error used also influences cooperation rates. Our findings reveal previously neglected but important roles of contact pattern, type of memory error and the interaction of contact pattern and memory on cooperation. Although cognitive limitations may constrain the evolution of cooperation, social contact patterns can counteract some of these constraints. © 2018 The Author(s).
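One intuition behind the buffering result is that skewed contact patterns keep memories of frequent partners fresh. The toy model below (my own construction, much simpler than the paper's evolutionary simulations) measures how often an agent re-meets a partner before the memory of that partner would have decayed, under uniform versus Zipf-like contact weights.

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy model: a memory of a partner survives only if the partner is
# re-met within `memory_span` interactions. Skewed contact weights
# concentrate meetings on a few partners, keeping those memories fresh.
def recall_rate(weights, n_steps=50_000, memory_span=20):
    weights = weights / weights.sum()
    last_seen = {}
    recalled = total = 0
    for step in range(n_steps):
        partner = rng.choice(len(weights), p=weights)
        if partner in last_seen:
            total += 1
            if step - last_seen[partner] <= memory_span:
                recalled += 1
        last_seen[partner] = step
    return recalled / total

uniform = np.ones(50)                    # equal contact with all 50 partners
skewed = 1.0 / np.arange(1, 51) ** 2     # Zipf-like contact pattern
r_uniform, r_skewed = recall_rate(uniform), recall_rate(skewed)
print(r_uniform, r_skewed)
```

Most interactions under the skewed pattern involve recently seen partners, so far fewer of them rely on stale (forgettable) memories, which is the buffering mechanism the abstract describes.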
DOE Office of Scientific and Technical Information (OSTI.GOV)
Summers, D. J.; Hart, T. L.; Acosta, J. G.
We propose a novel scheme for final muon ionization cooling with quadrupole doublets followed by emittance exchange in vacuum to achieve the small beam sizes needed by a muon collider. A flat muon beam with a series of quadrupole doublet half cells appears to provide the strong focusing required for final cooling. Each quadrupole doublet has a low beta region occupied by a dense, low Z absorber. After final cooling, normalized transverse, longitudinal, and angular momentum emittances of 0.100, 2.5, and 0.200 mm-rad are exchanged into 0.025, 70, and 0.0 mm-rad. A skew quadrupole triplet transforms a round muon bunch with modest angular momentum into a flat bunch with no angular momentum. Thin electrostatic septa efficiently slice the flat bunch into 17 parts. The 17 bunches are interleaved into a 3.7 meter long train with RF deflector cavities. Snap bunch coalescence combines the muon bunch train longitudinally in a 21 GeV ring in 55 µs, one quarter of a synchrotron oscillation period. A linear long wavelength RF bucket gives each bunch a different energy, causing the bunches to drift in the ring until they merge into one bunch and can be captured in a short wavelength RF bucket with a 13% muon decay loss and a packing fraction as high as 87%.
Peeling Away Timing Error in NetFlow Data
NASA Astrophysics Data System (ADS)
Trammell, Brian; Tellenbach, Bernhard; Schatzmann, Dominik; Burkhart, Martin
In this paper, we characterize, quantify, and correct timing errors introduced into network flow data by collection and export via Cisco NetFlow version 9. We find that while some of these sources of error (clock skew, export delay) are generally implementation-dependent and known in the literature, there is an additional cyclic error of up to one second that is inherent to the design of the export protocol. We present a method for correcting this cyclic error in the presence of clock skew and export delay. In an evaluation using traffic with known timing collected from a national-scale network, we show that this method can successfully correct the cyclic error. However, there can also be other implementation-specific errors for which insufficient information remains for correction. On the routers we have deployed in our network, this limits the accuracy to about 70 ms, reinforcing the point that implementation matters when conducting research on network measurement data.
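The decomposition in this paper (linear clock skew plus a cyclic sub-second error) can be illustrated with a toy reconstruction: fit and remove the linear part of the timestamp offsets against a reference, then inspect what remains. The signal model, skew rate, and cyclic error shape below are all invented; the paper's actual correction works without ground-truth timestamps.

```python
import numpy as np

rng = np.random.default_rng(7)

# Synthetic timestamps: reference times plus offset, linear clock skew,
# and a cyclic error with a 1 s period (all parameters hypothetical).
t_true = np.sort(rng.uniform(0, 3600, 5000))   # reference event times (s)
skew_rate, offset = 5e-5, 0.4
cyclic = 0.3 * ((t_true % 1.0) < 0.5)          # square-wave cyclic error
t_obs = t_true + offset + skew_rate * t_true + cyclic

# Step 1: remove the linear clock-skew component by least squares.
A = np.column_stack([t_true, np.ones_like(t_true)])
slope, intercept = np.linalg.lstsq(A, t_obs - t_true, rcond=None)[0]
residual = t_obs - t_true - (slope * t_true + intercept)
print(slope, np.std(residual))   # residual spread is the cyclic error
```

The fitted slope recovers the clock skew, and the residual isolates the cyclic component, which can then be modeled modulo its 1 s period.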
Bunch Compression of Flat Beams
DOE Office of Scientific and Technical Information (OSTI.GOV)
Halavanau, A.; Piot, P.; Edstrom Jr., D.
Flat beams can be produced via a linear manipulation of canonical-angular-momentum (CAM) dominated beams using a set of skew-quadrupole magnets. Recently, such beams were produced at the Fermilab Accelerator Science and Technology (FAST) facility [1]. In this paper we report the results of a flat-beam compression study in a magnetic chicane at an energy E ≈ 32 MeV. Additionally, we investigate the effect of energy chirp on the round-to-flat beam transform. The experimental results are compared with numerical simulations.
Spin Transparent Siberian Snake And Spin Rotator With Solenoids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Koop, I. A.; Otboyev, A. V.; Shatunov, P. Yu.
2007-06-13
For intermediate energies of electrons and protons, it is often more convenient to construct Siberian snakes and spin rotators using solenoidal fields. The strong coupling caused by the solenoids is suppressed by a number of skew and normal quadrupole magnets. The more complicated problem of the spin transparency of such devices can also be solved. This paper gives two examples: a spin rotator for the electron ring in the eRHIC project and a Siberian snake for the proton (antiproton) storage ring HESR, which cover the whole working energy range of these machines.
NASA Astrophysics Data System (ADS)
Atta, Abdu; Yahaya, Sharipah; Zain, Zakiyah; Ahmed, Zalikha
2017-11-01
Control charts are established as one of the most powerful tools in Statistical Process Control (SPC) and are widely used in industry. Conventional control charts rely on the normality assumption, which is not always the case for industrial data. This paper proposes a new S control chart for monitoring process dispersion using the skewness correction method for skewed distributions, named the SC-S control chart. Its performance in terms of false alarm rate is compared with various existing control charts for monitoring process dispersion, such as the scaled weighted variance S chart (SWV-S), skewness correction R chart (SC-R), weighted variance R chart (WV-R), weighted variance S chart (WV-S), and standard S chart (STD-S). Comparison with the exact S control chart with regard to the probability of out-of-control detection is also accomplished. The Weibull and gamma distributions adopted in this study are assessed along with the normal distribution. The simulation study shows that the proposed SC-S control chart provides good in-control probabilities (Type I error) at almost all skewness levels and sample sizes n. In terms of the probability of detecting a shift, the proposed SC-S chart is closer to the exact S control chart than the existing charts for skewed distributions, except for the SC-R control chart. In general, the performance of the proposed SC-S control chart is better than all the existing control charts for monitoring process dispersion in terms of both Type I error and probability of detecting a shift.
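The motivation for skewness-corrected limits can be demonstrated by simulation: a standard 3-sigma S chart, designed under normality, false-alarms far more often when the process is skewed. The sketch below shows that baseline effect (it does not implement the paper's SC-S limits); subgroup size, shape parameter, and the c4 constant for n = 5 are standard, the rest is my own setup.

```python
import numpy as np

rng = np.random.default_rng(8)

# Empirical false-alarm (Type I error) rate of a standard 3-sigma S chart
# under normal vs. skewed (Weibull shape 1, i.e. exponential) data.
def alarm_rate(samples):
    S = samples.std(axis=1, ddof=1)
    c4 = 0.9400                            # E[S]/sigma for subgroup size n=5
    ucl = c4 + 3 * np.sqrt(1 - c4**2)      # sigma = 1 for both processes
    lcl = max(0.0, c4 - 3 * np.sqrt(1 - c4**2))
    return float(np.mean((S > ucl) | (S < lcl)))

normal = rng.normal(0, 1, (200_000, 5))
weibull = rng.weibull(1.0, (200_000, 5))   # shape 1: exponential, sigma = 1
rate_normal, rate_weibull = alarm_rate(normal), alarm_rate(weibull)
print(rate_normal, rate_weibull)
```

A skewness-corrected chart such as SC-S moves the limits asymmetrically to pull the skewed-data false-alarm rate back toward the nominal level.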
New Developments on the PSR Instability
NASA Astrophysics Data System (ADS)
Macek, Robert
2000-04-01
A strong, fast, transverse instability has long been observed at the Los Alamos Proton Storage Ring (PSR), where it is a limiting factor on peak intensity. Most of the characteristics and experimental data are consistent with a two-stream instability (e-p) arising from coupled oscillations of the proton beam and an electron cloud. In past operations, where the average intensity was limited by beam losses, the instability was controlled by sufficient rf voltage in the ring. The need for higher beam intensity has motivated new work to better understand and control the instability. Results will be presented from studies of the production and characteristics of the electron cloud at various locations in the ring for both stable and unstable beams and suppression of electron cloud generation by TiN coatings. Studies of additional or alternate controls include application of dual harmonic rf, damping of the instability by higher order multipoles, damping by X,Y coupling from skew quadrupoles and the use of inductive inserts to compensate longitudinal space charge forces. Use of a skew quadrupole, heated inductive inserts and higher rf voltage from a refurbished rf buncher has enabled the PSR to accumulate stable beam intensity up to 9.7 micro-Coulombs (6 E13 protons) per macropulse, a significant increase (60%) over the previous maximum of 6 micro-Coulombs (3.7 E13 protons). However, slow losses were rather high and must be reduced for routine operation at repetition rates of 20 Hz or higher.
A Vibrating Wire System For Quadrupole Fiducialization
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wolf, Zachary
2010-12-13
A vibrating wire system is being developed to fiducialize the quadrupoles between undulator segments in the LCLS. This note provides a detailed analysis of the system. The LCLS will have quadrupoles between the undulator segments to keep the electron beam focused. If the quadrupoles are not centered on the beam axis, the beam will receive transverse kicks, causing it to deviate from the undulator axis. Beam-based alignment will be used to move the quadrupoles onto a straight line, but an initial, conventional alignment must place the quadrupole centers on a straight line to 100 µm. In the fiducialization step of the initial alignment, the position of the center of the quadrupole is measured relative to tooling balls on the outside of the quadrupole. The alignment crews then use the tooling balls to place the magnet in the tunnel. The required error on the location of the quadrupole center relative to the tooling balls must be less than 25 µm. In this note, we analyze a system under construction for the quadrupole fiducialization. The system uses the vibrating wire technique to position a wire onto the quadrupole magnetic axis. The wire position is then related to tooling balls using wire position detectors. The tooling balls on the wire position detectors are finally related to tooling balls on the quadrupole to perform the fiducialization. The total 25 µm fiducialization error must be divided between these three steps. The wire must be positioned onto the quadrupole magnetic axis to within 10 µm, the wire position must be measured relative to tooling balls on the wire position detectors to within 15 µm, and tooling balls on the wire position detectors must be related to tooling balls on the quadrupole to within 10 µm. The techniques used in these three steps will be discussed.
The note begins by discussing various quadrupole fiducialization techniques used in the past and explains why the vibrating wire technique is our method of choice. We then give an overview of the measurement system, showing how the vibrating wire is positioned onto the quadrupole axis, how the wire position detectors locate the wire relative to tooling balls without touching the wire, and how the tooling ball positions are all measured. The novel feature of this system is the vibrating wire, which we discuss in depth. We analyze the wire dynamics and calculate the expected sensitivity of the system. The note should also be an aid in debugging the system, by providing calculations against which measurements can be compared.
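The wire-dynamics analysis rests on the standard stretched-string resonance relation, f_n = (n / 2L) · sqrt(T / µ). A minimal sketch (the numbers used below are illustrative, not the LCLS system's):

```python
import math

def wire_resonance(n, length_m, tension_n, mass_per_m):
    """Resonant frequency (Hz) of mode n of a stretched wire:
    f_n = (n / 2L) * sqrt(T / mu), the standard string relation used in
    vibrating-wire magnetic-axis measurements."""
    return n / (2.0 * length_m) * math.sqrt(tension_n / mass_per_m)
```

Driving the wire near one of these frequencies with the quadrupole's field maximizes the vibration signal used to find the magnetic axis.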
A proportional integral estimator-based clock synchronization protocol for wireless sensor networks.
Yang, Wenlun; Fu, Minyue
2017-11-01
Clock synchronization is an issue of vital importance in applications of WSNs. This paper proposes a proportional-integral estimator-based protocol (EBP) to achieve clock synchronization for wireless sensor networks. As each local clock skew gradually drifts, synchronization accuracy declines over time. Compared with existing consensus-based approaches, the proposed synchronization protocol improves synchronization accuracy under time-varying clock skews. Moreover, by restricting the synchronization error of the clock skew to a relatively small quantity, it can reduce the frequency of periodic re-synchronization. Finally, a pseudo-synchronous implementation for skew compensation is introduced, since a fully synchronous protocol is unrealistic in practice. Numerical simulations illustrate the performance of the proposed protocol.
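A minimal sketch of the proportional-integral idea, tracking a constant clock skew from successive offset measurements; the gains and the offset-slope measurement model are illustrative assumptions, not the protocol's published design:

```python
# PI skew tracking for one node against a reference clock (illustrative
# sketch; kp, ki and the measurement model are assumptions).

def pi_skew_tracker(offsets, dt, kp=0.5, ki=0.1):
    """Track clock skew from offsets measured every `dt` seconds.

    offsets[i] is the measured offset (local minus reference) at step i.
    Returns the skew estimate after each update.
    """
    skew_hat = 0.0   # current skew estimate
    acc = 0.0        # accumulated error (integral term)
    estimates = []
    for i in range(1, len(offsets)):
        measured = (offsets[i] - offsets[i - 1]) / dt  # raw skew from offset slope
        err = measured - skew_hat
        acc += err
        skew_hat += kp * err + ki * acc * dt           # PI update
        estimates.append(skew_hat)
    return estimates
```

With these gains the discrete-time error dynamics are stable, so the estimate converges to the true skew for a constant-skew clock.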
The compensation of quadrupole errors and space charge effects by using trim quadrupoles
NASA Astrophysics Data System (ADS)
An, YuWen; Wang, Sheng
2011-12-01
The China Spallation Neutron Source (CSNS) accelerators consist of an H- linac and a proton Rapid Cycling Synchrotron (RCS). The RCS is designed to accumulate and accelerate the proton beam from 80 MeV to 1.6 GeV with a repetition rate of 25 Hz. The main dipole and quadrupole magnets will operate in AC mode. Because resonant power supplies are used, saturation errors of the magnetic field cannot be compensated by the power supplies. These saturation errors disturb the linear optics parameters, such as the tunes, beta function, and dispersion function, and the strong space charge effects cause emittance growth. The compensation of these effects by using trim quadrupoles is studied, and the corresponding results are presented.
Fast frequency domain method to detect skew in a document image
NASA Astrophysics Data System (ADS)
Mehta, Sunita; Walia, Ekta; Dutta, Maitreyee
2015-12-01
In this paper, a new fast frequency-domain method based on the Discrete Wavelet Transform and the Fast Fourier Transform is presented for determining the skew angle of a document image. First, the image size is reduced using the two-dimensional Discrete Wavelet Transform, and then the skew angle is computed using the Fast Fourier Transform. The skew angle error is almost negligible. The proposed method was tested on a large number of documents with skew between -90° and +90°, and the results are compared with the Moments with Discrete Wavelet Transform method and other commonly used existing methods. The method is found to work more efficiently than the existing methods. It also works with typed and pictorial documents of different fonts and resolutions, overcoming the drawback of the recently proposed Moments with Discrete Wavelet Transform method, which does not work with pictorial documents.
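The frequency-domain idea can be sketched as follows: text lines produce a dominant spectral peak whose wavevector is perpendicular to the lines, so the orientation of the peak gives the skew angle. Here the DWT reduction step is approximated by simple 2×2 block averaging, an assumption standing in for the paper's wavelet stage.

```python
import numpy as np

def estimate_skew(img):
    """Estimate skew (degrees) from the dominant 2D FFT peak.

    Sketch of the frequency-domain idea: periodic text lines give a strong
    spectral peak; its wavevector direction, relative to vertical, is the
    skew angle. The 2x2 block averaging below is a crude stand-in for the
    Haar DWT approximation subband.
    """
    h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2
    small = img[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
    spec = np.abs(np.fft.fft2(small - small.mean()))   # mean removal kills DC
    fy = np.fft.fftfreq(small.shape[0])
    fx = np.fft.fftfreq(small.shape[1])
    iy, ix = np.unravel_index(np.argmax(spec), spec.shape)
    ky, kx = fy[iy], fx[ix]
    if ky < 0:                     # use the peak in the upper half-plane
        ky, kx = -ky, -kx
    return np.degrees(np.arctan2(kx, ky))
```

A synthetic "line pattern" with a known orientation recovers its angle to within a fraction of a degree.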
Liu, Geng; Niu, Junjie; Zhang, Chao; Guo, Guanlin
2015-12-01
Data distributions are usually severely skewed by the presence of hot spots in contaminated sites, which makes accurate geostatistical data transformation difficult. Three typical normal-distribution transformation methods, the normal score, Johnson, and Box-Cox transformations, were applied to compare the effects of spatial interpolation on transformed benzo(b)fluoranthene data from a large-scale coking-plant-contaminated site in north China. All three methods decreased the skewness and kurtosis of the benzo(b)fluoranthene data, and all the transformed data passed the Kolmogorov-Smirnov test threshold. Cross validation showed that Johnson ordinary kriging had a minimum root-mean-square error of 1.17 and a mean error of 0.19, making it more accurate than the other two models. The areas with fewer sampling points and with high levels of contamination showed the largest prediction standard errors on the Johnson ordinary kriging prediction map. We introduce an ideal normal transformation method prior to geostatistical estimation for severely skewed data, which enhances the reliability of risk estimation and improves the accuracy of remediation-boundary determination.
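Two of the three transformations can be sketched with the standard library alone; the full Box-Cox and Johnson fits need a parameter optimizer, so the log transform (Box-Cox with λ = 0) stands in here as an illustrative special case.

```python
import math
from statistics import NormalDist

def skewness(x):
    """Biased moment estimator of skewness."""
    m = sum(x) / len(x)
    sd = math.sqrt(sum((v - m) ** 2 for v in x) / len(x))
    return sum(((v - m) / sd) ** 3 for v in x) / len(x)

def normal_score(x):
    """Rank-based normal-score transform: map empirical quantiles to N(0,1)."""
    order = sorted(range(len(x)), key=lambda i: x[i])
    out = [0.0] * len(x)
    nd = NormalDist()
    for rank, i in enumerate(order, start=1):
        out[i] = nd.inv_cdf(rank / (len(x) + 1.0))
    return out

def log_transform(x):
    """Box-Cox with lambda = 0 (the log), a common special case; the full
    Box-Cox and Johnson transformations require fitting parameters."""
    return [math.log(v) for v in x]
```

On right-skewed positive data (like concentrations), both transforms pull the skewness toward zero, which is the precondition for the kriging comparison in the study.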
The nuclear electric quadrupole moment of copper.
Santiago, Régis Tadeu; Teodoro, Tiago Quevedo; Haiduke, Roberto Luiz Andrade
2014-06-21
The nuclear electric quadrupole moment (NQM) of the (63)Cu nucleus was determined by an indirect approach, combining accurate experimental nuclear quadrupole coupling constants (NQCCs) with relativistic Dirac-Coulomb coupled-cluster calculations of the electric field gradient (EFG). The data obtained at the highest level of calculation, DC-CCSD-T, from 14 linear molecules containing the copper atom yield an NQM of -198(10) mbarn. This result deviates slightly from the previously accepted standard value given by the muonic method, -220(15) mbarn, although the error bars overlap.
Experimental studies on coherent synchrotron radiation at an emittance exchange beam line
NASA Astrophysics Data System (ADS)
Thangaraj, J. C. T.; Thurman-Keup, R.; Ruan, J.; Johnson, A. S.; Lumpkin, A. H.; Santucci, J.
2012-11-01
One of the goals of the Fermilab A0 photoinjector is to investigate experimentally the transverse to longitudinal emittance exchange (EEX) principle. Coherent synchrotron radiation in the emittance exchange line could limit the performance of the emittance exchanger at short bunch lengths. In this paper, we present experimental and simulation studies of the coherent synchrotron radiation (CSR) in the emittance exchange line at the A0 photoinjector. We report on time-resolved CSR studies using a skew-quadrupole technique. We also demonstrate the advantages of running the EEX with an energy-chirped beam.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Halavanau, A.; Hyun, J.; Mihalcea, D.
A photocathode immersed in a solenoidal magnetic field can produce canonical-angular-momentum (CAM) dominated or “magnetized” electron beams. Such beams have an application in electron cooling of hadron beams and can also be uncoupled to yield asymmetric-emittance (“flat”) beams. In the present paper we explore the possibilities of flat beam generation at Fermilab’s Accelerator Science and Technology (FAST) facility. We present optimization of the beam flatness and the four-dimensional transverse emittance, and investigate the mapping of the produced eigen-emittances to conventional emittances using a skew-quadrupole channel, along with its limitations. Possible applications of flat beams at the FAST facility are also discussed.
Generalized Skew Coefficients of Annual Peak Flows for Rural, Unregulated Streams in West Virginia
Atkins, John T.; Wiley, Jeffrey B.; Paybins, Katherine S.
2009-01-01
Generalized skew was determined from analysis of records from 147 streamflow-gaging stations in or near West Virginia. The analysis followed guidelines established by the Interagency Advisory Committee on Water Data described in Bulletin 17B, except that stations having 50 or more years of record were used instead of stations meeting the less restrictive recommendation of 25 or more years of record. The generalized-skew analysis included contouring, averaging, and regression of station skews. The best method was taken to be the one with the smallest mean square error (MSE), defined as the mean of the squared differences between the individual logarithms (base 10) of peak flow and the mean of all individual logarithms of peak flow. Contouring of station skews was the best method for determining generalized skew for West Virginia, with an MSE of about 0.2174. This MSE is an improvement over the MSE of about 0.3025 for the national map presented in Bulletin 17B.
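The MSE definition quoted above translates directly to code:

```python
import math

def mse_log_peaks(peaks):
    """MSE as defined in the abstract: the mean squared deviation of the
    base-10 logarithms of the peak flows from their own mean."""
    logs = [math.log10(q) for q in peaks]
    mu = sum(logs) / len(logs)
    return sum((v - mu) ** 2 for v in logs) / len(logs)
```

For peaks of 10, 100, and 1000 the log deviations are -1, 0, +1, giving an MSE of 2/3.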
Economic values under inappropriate normal distribution assumptions.
Sadeghi-Sefidmazgi, A; Nejati-Javaremi, A; Moradi-Shahrbabak, M; Miraei-Ashtiani, S R; Amer, P R
2012-08-01
The objectives of this study were to quantify the errors in economic values (EVs) for traits affected by cost or price thresholds when skewed or kurtotic distributions of varying degree are assumed to be normal, and when data with a normal distribution are subject to censoring. EVs were estimated for a continuous trait with dichotomous economic implications because of a price premium or penalty arising from a threshold ranging between -4 and 4 standard deviations from the mean. To evaluate the impacts of skewness and of positive and negative excess kurtosis, the standard skew-normal, Pearson, and raised-cosine distributions were used, respectively. For the various evaluable levels of skewness and kurtosis, the results showed that EVs can be underestimated or overestimated by more than 100% when price-determining thresholds fall within a range from the mean that might be expected in practice. Estimates of EVs were very sensitive to censoring or missing data. In contrast to practical genetic evaluation, economic evaluation is very sensitive to lack of normality and missing data. Although in some special situations the presence of multiple thresholds may attenuate the combined effect of errors at each threshold point, in practical situations there is a tendency for a few key thresholds to dominate the EV, and there are many situations where errors could be compounded across multiple thresholds. In the development of breeding objectives for non-normal continuous traits influenced by value thresholds, it is necessary to select a transformation that will resolve problems of non-normality or to consider alternative methods that are less sensitive to non-normality.
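The threshold mechanism can be sketched as follows: with a price premium above a threshold, the EV of a small mean shift is proportional to the trait density at the threshold, so treating a skew-normal trait as normal mis-states the EV by the density ratio at that point. The shape parameter below is an illustrative assumption, not the paper's parameterization.

```python
import math

def norm_pdf(x):
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def norm_cdf(x):
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def skewnorm_pdf(x, alpha):
    """Skew-normal density: 2 * phi(x) * Phi(alpha * x)."""
    return 2.0 * norm_pdf(x) * norm_cdf(alpha * x)

def ev_relative_error(threshold, alpha=4.0):
    """Relative EV error when a skew-normal trait (shape alpha,
    standardized to zero mean and unit variance) is wrongly treated as
    normal: the normal-to-true density ratio at the threshold, minus one.
    An illustrative sketch of the error mechanism."""
    delta = alpha / math.sqrt(1.0 + alpha * alpha)
    mu = delta * math.sqrt(2.0 / math.pi)   # mean of the skew-normal
    sd = math.sqrt(1.0 - mu * mu)           # its standard deviation
    true_pdf = skewnorm_pdf(threshold * sd + mu, alpha) * sd
    return norm_pdf(threshold) / true_pdf - 1.0
```

Near the mean the error is modest, but in the thin tail of the skewed distribution the normal assumption overstates the EV by far more than 100%, consistent with the abstract's finding.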
Choosing the Best Correction Formula for the Pearson r[superscript 2] Effect Size
ERIC Educational Resources Information Center
Skidmore, Susan Troncoso; Thompson, Bruce
2011-01-01
In the present Monte Carlo simulation study, the authors compared bias and precision of 7 sampling error corrections to the Pearson r[superscript 2] under 6 x 3 x 6 conditions (i.e., population ρ values of 0.0, 0.1, 0.3, 0.5, 0.7, and 0.9, respectively; population shapes normal, skewness = kurtosis = 1, and skewness = -1.5 with kurtosis =…
Measuring skewness of red blood cell deformability distribution by laser ektacytometry
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nikitin, S Yu; Priezzhev, A V; Lugovtsov, A E
An algorithm is proposed for measuring the parameters of red blood cell deformability distribution based on laser diffractometry of red blood cells in shear flow (ektacytometry). The algorithm is tested on specially prepared samples of rat blood. In these experiments we succeeded in measuring the mean deformability, deformability variance and skewness of red blood cell deformability distribution with errors of 10%, 15% and 35%, respectively. (laser biophotonics)
Leão, William L.; Abanto-Valle, Carlos A.; Chen, Ming-Hui
2017-01-01
A stochastic volatility-in-mean model with correlated errors using the generalized hyperbolic skew Student-t (GH-ST) distribution provides a robust alternative to the parameter estimation for daily stock returns in the absence of normality. An efficient Markov chain Monte Carlo (MCMC) sampling algorithm is developed for parameter estimation. The deviance information, the Bayesian predictive information, and the log-predictive score criterion are used to assess the fit of the proposed model. The proposed method is applied to an analysis of the daily stock return data from the Standard & Poor’s 500 index (S&P 500). The empirical results reveal that the stochastic volatility-in-mean model with correlated errors and the GH-ST distribution leads to a significant improvement in the goodness-of-fit for the S&P 500 index returns dataset over the usual normal model. PMID:29333210
NASA Astrophysics Data System (ADS)
Boes, Kelsey S.; Roberts, Michael S.; Vinueza, Nelson R.
2018-03-01
Complex mixture analysis is a costly and time-consuming task facing researchers with foci as varied as food science and fuel analysis. When faced with the task of quantifying oxygen-rich bio-oil molecules in a complex diesel mixture, we asked whether complex mixtures could be qualitatively and quantitatively analyzed on a single mass spectrometer with mid-range resolving power without the use of lengthy separations. To answer this question, we developed and evaluated a quantitation method that eliminated chromatography steps and expanded the use of quadrupole-time-of-flight mass spectrometry from primarily qualitative to quantitative as well. To account for mixture complexity, the method employed an ionization dopant, targeted tandem mass spectrometry, and an internal standard. This combination of three techniques achieved reliable quantitation of oxygen-rich eugenol in diesel from 300 to 2500 ng/mL with sufficient linearity (R2 = 0.97 ± 0.01) and excellent accuracy (percent error = 0% ± 5). To understand the limitations of the method, it was compared to quantitation attained on a triple quadrupole mass spectrometer, the gold standard for quantitation. The triple quadrupole quantified eugenol from 50 to 2500 ng/mL with stronger linearity (R2 = 0.996 ± 0.003) than the quadrupole-time-of-flight and comparable accuracy (percent error = 4% ± 5). This demonstrates that a quadrupole-time-of-flight can be used for not only qualitative analysis but also targeted quantitation of oxygen-rich lignin molecules in complex mixtures without extensive sample preparation. The rapid and cost-effective method presented here offers new possibilities for bio-oil research, including: (1) allowing for bio-oil studies that demand repetitive analysis as process parameters are changed and (2) making this research accessible to more laboratories.
Measuring the Magnetic Center Behavior of an ILC Superconducting Quadrupole Prototype
DOE Office of Scientific and Technical Information (OSTI.GOV)
Spencer, Cherrill M.; Adolphsen, Chris; Berndt, Martin
2011-02-07
The main linacs of the proposed International Linear Collider (ILC) consist of superconducting cavities operated at 2 K. The accelerating cavities are contained in a contiguous series of cryogenic modules that also house the main linac quadrupoles; thus the quadrupoles also need to be superconducting. In an early ILC design, these magnets are about 0.6 m long, have cos(2θ) coils, and operate at constant field gradients up to 60 T/m. In order to preserve the small beam emittances in the ILC linacs, the e+ and e- beams need to traverse the quadrupoles near their magnetic centers. A quadrupole shunting technique is used to measure the quadrupole alignment with the beams; this process requires that the magnetic centers move by no more than about 5 micrometers when their strength is changed. To determine if such tight stability is achievable in a superconducting quadrupole, we at SLAC measured the magnetic center motions in a prototype ILC quadrupole built at CIEMAT in Spain. A rotating coil technique was used, with better than 0.1 micrometer precision in the relative field center position and less than 2 micrometers systematic error over 30 minutes. This paper describes the warm-bore cryomodule that houses the quadrupole in its helium vessel, the magnetic center measurement system, the measured center data, and the strength and harmonics magnetic data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, X; Yang, F
Purpose: Knowing the MLC leaf positioning error over the course of treatment would be valuable for treatment planning, QA design, and patient safety. The objective of the current study was to quantify the MLC positioning accuracy for VMAT delivery of head and neck treatment plans. Methods: A total of 837 MLC log files were collected from 14 head and neck cancer patients undergoing full-arc VMAT treatment on one Varian Trilogy machine. The actual and planned leaf gaps were extracted from the retrieved MLC log files. For a given patient, the leaf gap error percentage (LGEP), defined as the ratio of the actual leaf gap to the planned, was evaluated for each leaf pair at all the gantry angles recorded over the course of the treatment. Statistics describing the distribution of the largest LGEP (LLGEP) of the 60 leaf pairs, including the maximum, minimum, mean, kurtosis, and skewness, were evaluated. Results: For the 14 studied patients, the PTVs were located at the tonsil, base of tongue, larynx, supraglottis, nasal cavity, and thyroid gland, with volumes ranging from 72.0 cm³ to 602.0 cm³. The identified LLGEP differed between patients, ranging from 183.9% to 457.7% with a mean of 368.6%. For the majority of the patients, the LLGEP distributions peaked at non-zero positions and showed no obvious dependence on gantry rotation. Kurtosis and skewness, with minimum/maximum of 66.6/217.9 and 6.5/12.6, respectively, suggested a relatively peaked and right-skewed leaf error distribution. Conclusion: The results indicate that the pattern of MLC leaf gap error differs between patients with lesions located at similar anatomic sites. Understanding the systemic mechanisms underlying these observed error patterns necessitates examining more patient-specific plan parameters in a large patient cohort setting.
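The LGEP/LLGEP statistics described above reduce to a simple array computation; the masking of near-closed leaf pairs is an assumption of this sketch, not the study's stated handling.

```python
import numpy as np

def llgep(actual_gaps, planned_gaps):
    """Largest leaf-gap error percentage (LLGEP) over all leaf pairs.

    Inputs are shaped (n_control_points, n_leaf_pairs). LGEP is the ratio
    of actual to planned leaf gap, expressed as a percentage; the maximum
    over control points is taken per pair, and the largest pair value is
    returned. Near-zero planned gaps are masked (an assumption here).
    """
    actual = np.asarray(actual_gaps, float)
    planned = np.asarray(planned_gaps, float)
    valid = planned > 1e-6                  # ignore effectively closed pairs
    lgep = np.where(valid, 100.0 * actual / np.where(valid, planned, 1.0), 0.0)
    per_pair_max = lgep.max(axis=0)         # largest LGEP for each leaf pair
    return float(per_pair_max.max())
```

Running this over a patient's log files yields the per-patient LLGEP whose distribution the study summarizes.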
Xing, Dongyuan; Huang, Yangxin; Chen, Henian; Zhu, Yiliang; Dagne, Getachew A; Baldwin, Julie
2017-08-01
Semicontinuous data featured with an excessive proportion of zeros and right-skewed continuous positive values arise frequently in practice. One example would be the substance abuse/dependence symptoms data for which a substantial proportion of subjects investigated may report zero. Two-part mixed-effects models have been developed to analyze repeated measures of semicontinuous data from longitudinal studies. In this paper, we propose a flexible two-part mixed-effects model with skew distributions for correlated semicontinuous alcohol data under the framework of a Bayesian approach. The proposed model specification consists of two mixed-effects models linked by the correlated random effects: (i) a model on the occurrence of positive values using a generalized logistic mixed-effects model (Part I); and (ii) a model on the intensity of positive values using a linear mixed-effects model where the model errors follow skew distributions including skew- t and skew-normal distributions (Part II). The proposed method is illustrated with an alcohol abuse/dependence symptoms data from a longitudinal observational study, and the analytic results are reported by comparing potential models under different random-effects structures. Simulation studies are conducted to assess the performance of the proposed models and method.
On the Yakhot-Orszag renormalization group method for deriving turbulence statistics and models
NASA Technical Reports Server (NTRS)
Smith, L. M.; Reynolds, W. C.
1992-01-01
An independent, comprehensive, critical review of the 'renormalization group' (RNG) theory of turbulence developed by Yakhot and Orszag (1986) is provided. Their basic theory for the Navier-Stokes equations is confirmed, and approximations in the scale removal procedure are discussed. The YO derivations of the velocity-derivative skewness and the transport equation for the energy dissipation rate are examined. An algebraic error in the derivation of the skewness is corrected. The corrected RNG skewness value of -0.59 is in agreement with experiments at moderate Reynolds numbers. Several problems are identified in the derivation of the energy dissipation rate equations which suggest that the derivation should be reformulated.
Search for Quadrupole Strength in the Electroexcitation of the Delta+ (1232)
DOE Office of Scientific and Technical Information (OSTI.GOV)
C. Mertz; C. Vellidis; Ricardo Alarcon
2001-04-01
High-precision 1H(e, e'p)pi0 measurements at Q² = 0.126 (GeV/c)² are reported, which allow the determination of quadrupole amplitudes in the gamma*N --> Delta transition; they simultaneously test the reliability of electroproduction models. The derived quadrupole-to-dipole (I = 3/2) amplitude ratios, RSM = (-6.5 ± 0.2stat+sys ± 2.5mod)% and REM = (-2.1 ± 0.2stat+sys ± 2.0mod)%, are dominated by model error. Previous RSM and REM results should be reconsidered after the model uncertainties associated with the method of their extraction are taken into account.
NASA Astrophysics Data System (ADS)
Xiao, C.; Groening, L.; Gerhard, P.; Maier, M.; Mickat, S.; Vormann, H.
2016-06-01
Knowledge of the transverse four-dimensional beam rms parameters is essential for applications involving lattice elements that couple the two transverse degrees of freedom (planes). Usually pepper-pots are used for measuring these beam parameters; however, for ions their application is limited to energies below 150 keV/u. This contribution reports measurements of the full transverse four-dimensional second-moments beam matrix of high-intensity uranium ions at an energy of 11.4 MeV/u. The combination of skew quadrupoles with a slit/grid emittance measurement device has been successfully applied.
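Once the 4×4 second-moments matrix Σ is measured, the four-dimensional rms emittance follows from the standard relation ε₄D = sqrt(det Σ), with the projected 2D emittances given by the diagonal blocks. A sketch of the quantities such a measurement yields:

```python
import numpy as np

def emittance_4d(sigma):
    """Emittances from the 4x4 second-moments beam matrix, with
    coordinates ordered (x, x', y, y'): the intrinsic 4D rms emittance
    sqrt(det Sigma) and the projected 2D emittances from the diagonal
    blocks. A standard relation, shown here as context."""
    sigma = np.asarray(sigma, float)
    eps4 = np.sqrt(np.linalg.det(sigma))
    eps_x = np.sqrt(np.linalg.det(sigma[:2, :2]))
    eps_y = np.sqrt(np.linalg.det(sigma[2:, 2:]))
    return eps4, eps_x, eps_y
```

For an uncoupled beam ε₄D equals the product of the projected emittances; cross-plane coupling (the off-diagonal blocks that skew quadrupoles probe) makes the product exceed ε₄D.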
Design of general apochromatic drift-quadrupole beam lines
NASA Astrophysics Data System (ADS)
Lindstrøm, C. A.; Adli, E.
2016-07-01
Chromatic errors are normally corrected using sextupoles in regions of large dispersion. In low-emittance linear accelerators, the use of sextupoles can be challenging. Apochromatic focusing is a lesser-known alternative approach, whereby chromatic errors of the Twiss parameters are corrected without the use of sextupoles; it has consequently been subject to renewed interest in advanced linear accelerator research. Proof-of-principle designs were first established by Montague and Ruggiero and developed more recently by Balandin et al. We describe a general method for designing drift-quadrupole beam lines of arbitrary order in apochromatic correction, including analytic expressions for emittance growth and other merit functions. Worked examples are shown for plasma wakefield accelerator staging optics and for a simple final-focus system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gilson, Erik P.; Davidson, Ronald C.; Efthimion, Philip C.
Transverse dipole and quadrupole modes have been excited in a one-component cesium ion plasma trapped in the Paul Trap Simulator Experiment (PTSX) in order to characterize their properties and understand the effect of their excitation on equivalent long-distance beam propagation. The PTSX device is a compact laboratory Paul trap that simulates the transverse dynamics of a long, intense charge bunch propagating through an alternating-gradient transport system by putting the physicist in the beam's frame of reference. A pair of arbitrary function generators was used to apply trapping voltage waveform perturbations with a range of frequencies and, by changing which electrodes were driven with the perturbation, with either a dipole or quadrupole spatial structure. The results presented in this paper explore the dependence of the perturbation voltage's effect on the perturbation duration and amplitude. Perturbations were also applied that simulate the effect of random lattice errors that exist in an accelerator with quadrupole magnets that are misaligned or have variance in their field strength. The experimental results quantify the growth in the equivalent transverse beam emittance that occurs due to the applied noise and demonstrate that the random lattice errors interact with the trapped plasma through the plasma's internal collective modes. Coherent periodic perturbations were applied to simulate the effects of magnet errors in circular machines such as storage rings. The trapped one-component plasma is strongly affected when the perturbation frequency is commensurate with a plasma mode frequency. The experimental results, which help to understand the physics of quiescent intense beam propagation over large distances, are compared with analytic models.
Dip and anisotropy effects on flow using a vertically skewed model grid.
Hoaglund, John R; Pollard, David
2003-01-01
Darcy flow equations relating vertical and bedding-parallel flow to vertical and bedding-parallel gradient components are derived for a skewed Cartesian grid in a vertical plane, correcting for structural dip given the principal hydraulic conductivities in bedding-parallel and bedding-orthogonal directions. Incorrect-minus-correct flow error results are presented for ranges of structural dip (0° ≤ θ ≤ 90°) and gradient directions (0° ≤ φ ≤ 360°). The equations can be coded into ground water models (e.g., MODFLOW) that can use a skewed Cartesian coordinate system to simulate flow in structural terrain with deformed bedding planes. Models modified with these equations will require input arrays of strike and dip, and a solver that can handle off-diagonal hydraulic conductivity terms.
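The underlying tensor rotation can be sketched directly: rotating the principal conductivities by the dip angle produces the off-diagonal conductivity terms the corrected grid equations must handle. This is the standard rotation, not the paper's MODFLOW-specific formulation.

```python
import numpy as np

def darcy_flux(k_parallel, k_normal, dip_deg, grad):
    """Darcy flux in the (x, z) vertical plane for anisotropic K with
    bedding dipping at `dip_deg`: K = R K' R^T, q = -K grad(h).

    k_parallel / k_normal are the bedding-parallel and bedding-normal
    principal conductivities; off-diagonal terms of K appear for any
    nonzero dip, which is what the skewed-grid corrections address."""
    th = np.radians(dip_deg)
    R = np.array([[np.cos(th), -np.sin(th)],
                  [np.sin(th),  np.cos(th)]])
    Kp = np.diag([k_parallel, k_normal])
    K = R @ Kp @ R.T
    return -K @ np.asarray(grad, float)
```

At zero dip the flux is aligned with the gradient and scaled by the parallel conductivity; at 90° dip the roles of the two principal values swap.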
Cain, Meghan K; Zhang, Zhiyong; Yuan, Ke-Hai
2017-10-01
Nonnormality of univariate data has been extensively examined previously (Blanca et al., Methodology: European Journal of Research Methods for the Behavioral and Social Sciences, 9(2), 78-84, 2013; Micceri, Psychological Bulletin, 105(1), 156, 1989). However, less is known of the potential nonnormality of multivariate data, although multivariate analysis is commonly used in psychological and educational research. Using univariate and multivariate skewness and kurtosis as measures of nonnormality, this study examined 1,567 univariate distributions and 254 multivariate distributions collected from authors of articles published in Psychological Science and the American Education Research Journal. We found that 74% of univariate distributions and 68% of multivariate distributions deviated from normal distributions. In a simulation study using typical values of skewness and kurtosis that we collected, we found that the resulting Type I error rates were 17% in a t-test and 30% in a factor analysis under some conditions. Hence, we argue that it is time to routinely report skewness and kurtosis along with other summary statistics such as means and variances. To facilitate future reporting of skewness and kurtosis, we provide a tutorial on how to compute univariate and multivariate skewness and kurtosis in SAS, SPSS, R, and a newly developed Web application.
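The quantities discussed above can be computed directly; the sketch below is not the authors' tutorial but an illustrative implementation using the standard univariate moment estimators and Mardia's coefficients as the multivariate skewness and kurtosis measures (the `mardia` helper and sample data are hypothetical):

```python
import numpy as np
from scipy import stats

def mardia(X):
    """Mardia's multivariate skewness b1p and kurtosis b2p for an n x p sample."""
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    S = Xc.T @ Xc / n                      # MLE covariance matrix
    D = Xc @ np.linalg.inv(S) @ Xc.T       # Mahalanobis cross-products
    b1p = (D ** 3).sum() / n ** 2          # multivariate skewness
    b2p = (np.diag(D) ** 2).sum() / n      # multivariate kurtosis
    return b1p, b2p

x = np.array([1.0, 2.0, 2.0, 3.0, 4.0, 12.0])   # right-skewed sample
print(stats.skew(x), stats.kurtosis(x))          # univariate measures
X = np.column_stack([x, np.array([0.5, 1.1, 0.9, 1.8, 2.2, 5.0])])
b1p, b2p = mardia(X)
print(b1p, b2p)
```

For normal data, b1p is near 0 and b2p is near p(p+2); large deviations flag the kind of nonnormality the study found to inflate Type I error rates.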
Westendorff, Stephanie; Kuang, Shenbing; Taghizadeh, Bahareh; Donchin, Opher; Gail, Alexander
2015-04-01
Different error signals can induce sensorimotor adaptation during visually guided reaching, possibly evoking different neural adaptation mechanisms. Here we investigate reach adaptation induced by visual target errors without perturbing the actual or sensed hand position. We analyzed the spatial generalization of adaptation to target error to compare it with other known generalization patterns and simulated our results with a neural network model trained to minimize target error independent of prediction errors. Subjects reached to different peripheral visual targets and had to adapt to a sudden fixed-amplitude displacement ("jump") consistently occurring for only one of the reach targets. Subjects simultaneously had to perform contralateral unperturbed saccades, which rendered the reach target jump unnoticeable. As a result, subjects adapted by gradually decreasing reach errors and showed negative aftereffects for the perturbed reach target. Reach errors generalized to unperturbed targets according to a translational rather than rotational generalization pattern, but locally, not globally. More importantly, reach errors generalized asymmetrically with a skewed generalization function in the direction of the target jump. Our neural network model reproduced the skewed generalization after adaptation to target jump without having been explicitly trained to produce a specific generalization pattern. Our combined psychophysical and simulation results suggest that target jump adaptation in reaching can be explained by gradual updating of spatial motor goal representations in sensorimotor association networks, independent of learning induced by a prediction-error about the hand position. The simulations make testable predictions about the underlying changes in the tuning of sensorimotor neurons during target jump adaptation. Copyright © 2015 the American Physiological Society.
Etzel, C J; Shete, S; Beasley, T M; Fernandez, J R; Allison, D B; Amos, C I
2003-01-01
Non-normality of the phenotypic distribution can affect power to detect quantitative trait loci in sib pair studies. Previously, we observed that Winsorizing the sib pair phenotypes increased the power of quantitative trait locus (QTL) detection for both Haseman-Elston (H-E) least-squares tests [Hum Hered 2002;53:59-67] and maximum likelihood-based variance components (MLVC) analysis [Behav Genet (in press)]. Winsorizing the phenotypes led to a slight increase in type I error in H-E tests and a slight decrease in type I error for MLVC analysis. Herein, we considered transforming the sib pair phenotypes using the Box-Cox family of transformations. Data were simulated for normal and non-normal (skewed and kurtic) distributions. Phenotypic values were replaced by Box-Cox transformed values. Twenty thousand replications were performed for three H-E tests of linkage and the likelihood ratio test (LRT), the Wald test and other robust versions based on the MLVC method. We calculated the relative nominal inflation rate as the ratio of observed empirical type I error divided by the set alpha level (5, 1 and 0.1% alpha levels). MLVC tests applied to non-normal data had inflated type I errors (rate ratio greater than 1.0), which were controlled best by Box-Cox transformation and to a lesser degree by Winsorizing. For example, for non-transformed, skewed phenotypes (derived from a chi-square distribution with 2 degrees of freedom), the rates of empirical type I error with respect to set alpha level = 0.01 were 0.80, 4.35 and 7.33 for the original H-E test, LRT and Wald test, respectively. For the same alpha level = 0.01, these rates were 1.12, 3.095 and 4.088 after Winsorizing and 0.723, 1.195 and 1.905 after Box-Cox transformation. Winsorizing reduced inflated error rates for the leptokurtic distribution (derived from a Laplace distribution with mean 0 and variance 8). Further, power (adjusted for empirical type I error) at the 0.01 alpha level ranged from 4.7 to 17.3% across all tests using the non-transformed, skewed phenotypes, from 7.5 to 20.1% after Winsorizing and from 12.6 to 33.2% after Box-Cox transformation. Likewise, power (adjusted for empirical type I error) using leptokurtic phenotypes at the 0.01 alpha level ranged from 4.4 to 12.5% across all tests with no transformation, from 7 to 19.2% after Winsorizing and from 4.5 to 13.8% after Box-Cox transformation. Thus the Box-Cox transformation apparently provided the best type I error control and maximal power among the procedures we considered for analyzing a non-normal, skewed distribution (chi-square), while Winsorizing worked best for the non-normal, kurtic distribution (Laplace). We repeated the same simulations using a larger sample size (200 sib pairs) and found similar results. Copyright 2003 S. Karger AG, Basel
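The two transformations compared in this study can be sketched in a few lines; this is an illustrative reconstruction applied to a simulated skewed (chi-square, 2 df) phenotype, with the sample size and Winsorizing limit chosen for the example rather than taken from the paper:

```python
import numpy as np
from scipy import stats
from scipy.stats.mstats import winsorize

rng = np.random.default_rng(0)
pheno = rng.chisquare(df=2, size=500)        # right-skewed phenotype

# Winsorizing: clip the top 5% of values down to the 95th percentile
wins = np.asarray(winsorize(pheno, limits=[0.0, 0.05]))

# Box-Cox: lambda chosen by maximum likelihood (requires positive data)
transformed, lam = stats.boxcox(pheno)

print(stats.skew(pheno), stats.skew(wins), stats.skew(transformed))
```

Both operations pull in the heavy right tail, but Box-Cox typically brings the skewness much closer to zero, consistent with its better type I error control for the chi-square case above.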
NASA Technical Reports Server (NTRS)
Wilson, Michael J.; Sherwin, Blake D.; Hill, J. Collin; Addison, Graeme; Battaglia, Nick; Bond, J. Richard; Das, Sudeep; Devlin, Mark J.; Dunkley, Joanna; Duenner, Rolando;
2012-01-01
We present a detection of the unnormalized skewness ⟨T³(n̂)⟩ induced by the thermal Sunyaev-Zel'dovich (tSZ) effect in filtered Atacama Cosmology Telescope (ACT) 148 GHz cosmic microwave background temperature maps. Contamination due to infrared and radio sources is minimized by template subtraction of resolved sources and by constructing a mask using outlying values in the 218 GHz (tSZ-null) ACT maps. We measure ⟨T³(n̂)⟩ = -31 ± 6 μK³ (measurement error only) or ± 14 μK³ (including cosmic variance error) in the filtered ACT data, a 5σ detection. We show that the skewness is a sensitive probe of σ₈, and use analytic calculations and tSZ simulations to obtain cosmological constraints from this measurement. From this signal alone we infer a value of σ₈ = 0.78 +0.03/-0.04 (68% C.L.) +0.05/-0.16. Our results demonstrate that measurements of non-Gaussianity can be a useful method for characterizing the tSZ effect and extracting the underlying cosmological information.
Analysis of field errors for LARP Nb3Sn HQ03 quadrupole magnet
Wang, Xiaorong; Ambrosio, Giorgio; Chlachidze, Guram; ...
2016-12-01
The U.S. LHC Accelerator Research Program, in close collaboration with CERN, has developed three generations of high-gradient quadrupole (HQ) Nb3Sn model magnets to support the development of the 150 mm aperture Nb3Sn quadrupole magnets for the High-Luminosity LHC. The latest generation, HQ03, featured coils with better uniformity of coil dimensions and properties than the earlier generations. We tested the HQ03 magnet at FNAL, including the field quality study. The profiles of low-order harmonics along the magnet aperture observed at 15 kA, 1.9 K can be traced back to the assembled coil pack before the magnet assembly. Based on the measured harmonics in the magnet center region, the coil block positioning tolerance was analyzed and compared with earlier HQ01 and HQ02 magnets to correlate with coil and magnet fabrication. To study the capability of correcting the low-order non-allowed field errors, magnetic shims were installed in HQ03, and the measured shim contribution agreed well with the calculation. For the persistent-current effect, the measured a4 can be related to 4% higher strand magnetization in one coil with respect to the other three coils. Lastly, we compare the field errors due to the inter-strand coupling currents between HQ03 and HQ02.
The structure of mode-locking regions of piecewise-linear continuous maps: II. Skew sawtooth maps
NASA Astrophysics Data System (ADS)
Simpson, D. J. W.
2018-05-01
In two-parameter bifurcation diagrams of piecewise-linear continuous maps on R^N, mode-locking regions typically have points of zero width known as shrinking points. Near any shrinking point, but outside the associated mode-locking region, a significant proportion of parameter space can be usefully partitioned into a two-dimensional array of annular sectors. The purpose of this paper is to show that in these sectors the dynamics is well approximated by a three-parameter family of skew sawtooth circle maps, where the relationship between the skew sawtooth maps and the N-dimensional map is fixed within each sector. The skew sawtooth maps are continuous, degree-one, and piecewise-linear, with two different slopes. They approximate the stable dynamics of the N-dimensional map with an error that goes to zero with the distance from the shrinking point. The results explain the complicated radial pattern of periodic, quasi-periodic, and chaotic dynamics that occurs near shrinking points.
The skewed weak lensing likelihood: why biases arise, despite data and theory being sound
NASA Astrophysics Data System (ADS)
Sellentin, Elena; Heymans, Catherine; Harnois-Déraps, Joachim
2018-07-01
We derive the essentials of the skewed weak lensing likelihood via a simple hierarchical forward model. Our likelihood passes four objective and cosmology-independent tests which a standard Gaussian likelihood fails. We demonstrate that sound weak lensing data are naturally biased low, since they are drawn from a skewed distribution. This occurs already in the framework of Lambda cold dark matter. Mathematically, the biases arise because noisy two-point functions follow skewed distributions. This form of bias is already known from cosmic microwave background analyses, where the low multipoles have asymmetric error bars. Weak lensing is more strongly affected by this asymmetry as galaxies form a discrete set of shear tracer particles, in contrast to a smooth shear field. We demonstrate that the biases can be up to 30 per cent of the standard deviation per data point, dependent on the properties of the weak lensing survey and the employed filter function. Our likelihood provides a versatile framework with which to address this bias in future weak lensing analyses.
Lu, Tao; Lu, Minggen; Wang, Min; Zhang, Jun; Dong, Guang-Hui; Xu, Yong
2017-12-18
Longitudinal competing risks data frequently arise in clinical studies. Skewness and missingness are commonly observed for these data in practice. However, most joint models do not account for these data features. In this article, we propose partially linear mixed-effects joint models to analyze skew longitudinal competing risks data with missingness. In particular, to account for skewness, we replace the commonly assumed symmetric distributions by asymmetric distribution for model errors. To deal with missingness, we employ an informative missing data model. The joint models that couple the partially linear mixed-effects model for the longitudinal process, the cause-specific proportional hazard model for competing risks process and missing data process are developed. To estimate the parameters in the joint models, we propose a fully Bayesian approach based on the joint likelihood. To illustrate the proposed model and method, we implement them to an AIDS clinical study. Some interesting findings are reported. We also conduct simulation studies to validate the proposed method.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Steier, C.; Marks, S.; Prestemon, Soren
For about five years, Apple-II type Elliptically Polarizing Undulators (EPUs) have been used very successfully at the ALS to generate high-brightness photon beams with arbitrary polarization. However, both EPUs installed so far cause significant changes of the vertical beam size, especially when the row phase is changed to change the polarization of the emitted photons. Detailed measurements indicate this is caused by a row-phase-dependent skew quadrupole term in the EPUs. Magnetic measurements revealed the same effect for the third EPU, to be installed later this year. All measurements to identify and quantify the effect with beam will be presented, as well as some results of magnetic bench measurements and numeric field simulations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Halavanau, A.; Piot, P.; Edstrom Jr., D.
Canonical-angular-momentum (CAM) dominated beams can be formed in photoinjectors by applying an axial magnetic field on the photocathode surface. Such a beam possesses asymmetric eigenemittances and is characterized by the measure of its magnetization. CAM removal with a set of skew-quadrupole magnets maps the beam eigenemittances to the conventional emittances along each transverse degree of freedom, thereby yielding a flat beam with asymmetric transverse emittances. In this paper, we report on the experimental generation of CAM-dominated beams and their subsequent transformation into flat beams at the Fermilab Accelerator Science and Technology (FAST) facility. Our results are compared with numerical simulations, and possible applications of the produced beams are discussed.
A Third Moment Adjusted Test Statistic for Small Sample Factor Analysis.
Lin, Johnny; Bentler, Peter M
2012-01-01
Goodness-of-fit testing in factor analysis is based on the assumption that the test statistic is asymptotically chi-square, but this property may not hold in small samples even when the factors and errors are normally distributed in the population. Robust methods such as Browne's asymptotically distribution-free method and Satorra-Bentler's mean scaling statistic were developed under the presumption of non-normality in the factors and errors. This paper finds a new application to the case where factors and errors are normally distributed in the population but the skewness of the obtained test statistic is still high due to sampling error in the observed indicators. An extension of Satorra-Bentler's statistic is proposed that not only scales the mean but also adjusts the degrees of freedom based on the skewness of the obtained test statistic in order to improve its robustness under small samples. A simple simulation study shows that this third moment adjusted statistic asymptotically performs on par with previously proposed methods, and at a very small sample size offers superior Type I error rates under a properly specified model. Data from Mardia, Kent and Bibby's study of students tested for their ability in five content areas that were either open or closed book were used to illustrate the real-world performance of this statistic.
Simonsohn, Uri; Simmons, Joseph P; Nelson, Leif D
2015-12-01
When studies examine true effects, they generate right-skewed p-curves: distributions of statistically significant results with more low (.01s) than high (.04s) p values. What else can cause a right-skewed p-curve? First, we consider the possibility that researchers report only the smallest significant p value (as conjectured by Ulrich & Miller, 2015), concluding that it is a very uncommon problem. We then consider more common problems, including (a) p-curvers selecting the wrong p values, (b) fake data, (c) honest errors, and (d) ambitiously p-hacked (beyond p < .05) results. We evaluate the impact of these common problems on the validity of p-curve analysis and provide practical solutions that substantially increase its robustness. (c) 2015 APA, all rights reserved.
Field Tolerances for the Triplet Quadrupoles of the LHC High Luminosity Lattice
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nosochkov, Yuri; Cai, Y.; Jiao, Y.
2012-06-25
It has been proposed to implement the so-called Achromatic Telescopic Squeezing (ATS) scheme in the LHC high luminosity (HL) lattice to reduce beta functions at the Interaction Points (IP) by up to a factor of 8. As a result, the nominal 4.5 km peak beta functions reached in the Inner Triplets (IT) at collision will be increased by the same factor. This, therefore, justifies the installation of new, larger aperture, superconducting IT quadrupoles. The higher beta functions will enhance the effects of the triplet quadrupole field errors, leading to smaller beam dynamic aperture (DA). To maintain the acceptable DA, the effects of the triplet field errors must be re-evaluated, thus specifying new tolerances. Such a study has been performed for the so-called '4444' collision option of the HL-LHC layout version SLHCV3.01, where the IP beta functions are reduced by a factor of 4 in both planes with respect to a pre-squeezed value of 60 cm at two collision points. The dynamic aperture calculations were performed using SixTrack. The impact on the triplet field quality is presented.
Refractive Status and Prevalence of Refractive Errors in Suburban School-age Children
Pi, Lian-Hong; Chen, Lin; Liu, Qin; Ke, Ning; Fang, Jing; Zhang, Shu; Xiao, Jun; Ye, Wei-Jiang; Xiong, Yan; Shi, Hui; Yin, Zheng-Qin
2010-01-01
Objective: This study investigated the distribution pattern of refractive status and prevalence of refractive errors in school-age children in Western China to determine the possible environmental factors. Methods: A random sampling strategy in geographically defined clusters was used to identify children aged 6-15 years in Yongchuan, a socio-economically representative area in Western China. We carried out a door-to-door survey and actual eye examinations, including visual acuity measurements, stereopsis examination, anterior segment and eyeball movements, fundus examinations, and cycloplegic retinoscopy with 1% cyclopentolate. Results: A total of 3469 children living in 2552 households were selected, and 3070 were examined. The distributions of refractive status were positively skewed for 6-8-year-olds, and negatively skewed for 9-12 and 13-15-year-olds. The prevalence of hyperopia (≥+2.00 D spherical equivalent [SE]), myopia (≤-0.50 D SE), and astigmatism (≥1.00 diopter of cylinder [DC]) were 3.26%, 13.75%, and 3.75%, respectively. As children's ages increased, the prevalence rate of hyperopia decreased (P<0.001) and that of myopia increased significantly (P<0.001). Children in academically challenging schools had a higher risk of myopia (P<0.001) and astigmatism (≥1.00 DC, P=0.04) than those in regular schools. Conclusion: The distribution of refractive status changes gradually from positively skewed to negatively skewed as age increases, with 9 years being the critical age for the change. Environmental factors and study intensity influence the occurrence and development of myopia. PMID:20975844
NASA Astrophysics Data System (ADS)
Lahmiri, S.; Boukadoum, M.
2015-10-01
Accurate forecasting of stock market volatility is an important issue in portfolio risk management. In this paper, an ensemble system for stock market volatility is presented. It is composed of three different models that hybridize the exponential generalized autoregressive conditional heteroscedasticity (EGARCH) process and the artificial neural network trained with the backpropagation algorithm (BPNN) to forecast stock market volatility under normal, t-Student, and generalized error distribution (GED) assumptions separately. The goal is to design an ensemble system where each single hybrid model is capable of capturing normality, excess skewness, or excess kurtosis in the data to achieve complementarity. The performance of each EGARCH-BPNN model and of the ensemble system is evaluated by the closeness of the volatility forecasts to realized volatility. Based on mean absolute error and mean squared error, the experimental results show that the proposed ensemble model used to capture normality, skewness, and kurtosis in data is more accurate than the individual EGARCH-BPNN models in forecasting the S&P 500 intra-day volatility based on one- and five-minute time horizon data.
Lu, Tao
2017-01-01
The joint modeling of mean and variance for longitudinal data is an active research area. This type of model has the advantage of accounting for the heteroscedasticity commonly observed in between- and within-subject variations. Most research focuses on improving estimation efficiency but ignores many data features frequently encountered in practice. In this article, we develop a mixed-effects location scale joint model that concurrently accounts for longitudinal data with multiple features. Specifically, our joint model handles heterogeneity, skewness, limit of detection, and measurement errors in covariates, which are typically observed in the collection of longitudinal data from many studies. We employ a Bayesian approach for making inference on the joint model. The proposed model and method are applied to an AIDS study. Simulation studies are performed to assess the performance of the proposed method. Alternative models under different conditions are compared.
Programmable Differential Delay Circuit With Fine Delay Adjustment
DeRyckere, John F.; Jenkins, Philip Nord; Cornett, Frank Nolan
2002-07-09
Circuitry that provides additional delay to early-arriving signals such that all data signals arrive at a receiving latch with the same path delay. The delay of a forwarded clock reference is also controlled such that the capturing clock edge will be optimally positioned near quadrature (depending on latch setup/hold requirements). The circuitry continuously adapts to data and clock path delay changes, and digital filtering of phase measurements reduces errors brought on by jittering data edges. The circuitry utilizes only the minimum amount of delay necessary to achieve the objective, thereby limiting any unintended jitter. In particular, this programmable differential delay circuit with fine delay adjustment is designed to allow the skew between ASICs to be minimized. This includes skew between data bits, between data bits and clocks, as well as minimizing the overall skew in a channel between ASICs.
Machine Imperfection Studies of the RAON Superconducting Linac
NASA Astrophysics Data System (ADS)
Jeon, D.; Jang, J.-H.; Jin, H.
2018-05-01
Studies of the machine imperfections in the RAON superconducting linac (SCL) that employs normal conducting (NC) quadrupoles were done to assess the tolerable error budgets of the machine imperfections that ensure operation of the beam. The studies show that the beam loss requirement is met even before the orbit correction and that the beam loss requirement is met even without the MHB (multi-harmonic buncher) and VE (velocity equalizer) thanks to the RAON's radio-frequency quadrupole (RFQ) design feature. For the low energy section of the linac (SCL3), a comparison is made between the two superconducting linac lattice types: one lattice that employs NC quadrupoles and the other that employs SC solenoids. The studies show that both lattices meet the beam loss requirement after the orbit correction. However, before the orbit correction, the lattice employing SC solenoids does not meet the beam loss requirement and can cause a significant beam loss, while the lattice employing NC quadrupoles meets the requirement. For the lattice employing SC solenoids, care must be taken during the beam commissioning.
Hughes, Paul; Deng, Wenjie; Olson, Scott C; Coombs, Robert W; Chung, Michael H; Frenkel, Lisa M
2016-03-01
Accurate analysis of minor populations of drug-resistant HIV requires analysis of a sufficient number of viral templates. We assessed the effect of experimental conditions on the analysis of HIV pol 454 pyrosequences generated from plasma using (1) the "Insertion-deletion (indel) and Carry Forward Correction" (ICC) pipeline, which clusters sequence reads using a nonsubstitution approach and can correct for indels and carry forward errors, and (2) the "Primer Identification (ID)" method, which facilitates construction of a consensus sequence to correct for sequencing errors and allelic skewing. The Primer ID and ICC methods produced similar estimates of viral diversity, but differed in the number of sequence variants generated. Sequence preparation for ICC was comparably simple, but was limited by an inability to assess the number of templates analyzed and allelic skewing. The more costly Primer ID method corrected for allelic skewing and provided the number of viral templates analyzed, which revealed that amplifiable HIV templates varied across specimens and did not correlate with clinical viral load. This latter observation highlights the value of the Primer ID method, which by determining the number of templates amplified, enables more accurate assessment of minority species in the virus population, which may be relevant to prescribing effective antiretroviral therapy.
Addressing Angular Single-Event Effects in the Estimation of On-Orbit Error Rates
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, David S.; Swift, Gary M.; Wirthlin, Michael J.
2015-12-01
Our study describes complications introduced by angular direct ionization events on space error rate predictions. In particular, prevalence of multiple-cell upsets and a breakdown in the application of effective linear energy transfer in modern-scale devices can skew error rates approximated from currently available estimation models. Moreover, this paper highlights the importance of angular testing and proposes a methodology to extend existing error estimation tools to properly consider angular strikes in modern-scale devices. Finally, these techniques are illustrated with test data provided from a modern 28 nm SRAM-based device.
Skew-t partially linear mixed-effects models for AIDS clinical studies.
Lu, Tao
2016-01-01
We propose partially linear mixed-effects models with asymmetry and missingness to investigate the relationship between two biomarkers in clinical studies. The proposed models take into account irregular time effects commonly observed in clinical studies under a semiparametric model framework. In addition, the commonly assumed symmetric distributions for model errors are substituted by an asymmetric distribution to account for skewness. Further, an informative missing-data mechanism is accounted for. A Bayesian approach is developed to perform parameter estimation simultaneously. The proposed model and method are applied to an AIDS dataset, and comparisons with alternative models are performed.
Method of estimating flood-frequency parameters for streams in Idaho
Kjelstrom, L.C.; Moffatt, R.L.
1981-01-01
Skew coefficients for the log-Pearson Type III distribution are generalized on the basis of the similarity of floods in the Snake River basin and other parts of Idaho. Generalized skew coefficients aid in shaping flood-frequency curves because skew coefficients computed from gaging stations having relatively short periods of peak-flow record can be unreliable. Generalized skew coefficients can be obtained for a gaging station from one of three maps in this report. The map to be used depends on whether (1) snowmelt floods are dominant (generally when more than 20 percent of the drainage area is above 6,000 feet altitude), (2) rainstorm floods are dominant (generally when the mean altitude is less than 3,000 feet), or (3) either snowmelt or rainstorm floods can be the annual maximum discharge. For the latter case, frequency curves constructed using separate arrays of each type of runoff can be combined into one curve, which, for some stations, is significantly different from the frequency curve constructed using only annual maximum discharges. For 269 gaging stations, flood-frequency curves that include the generalized skew coefficients in the computation of the log-Pearson Type III equation tend to fit the data better than previous analyses. Frequency curves for ungaged sites can be derived by estimating three statistics of the log-Pearson Type III distribution. The mean and standard deviation of the logarithms of annual maximum discharges are estimated by regression equations that use basin characteristics as independent variables. Skew coefficient estimates are the generalized skews. The log-Pearson Type III equation is then applied with the three estimated statistics to compute the discharge at selected exceedance probabilities. Standard errors at the 2-percent exceedance probability range from 41 to 90 percent. (USGS)
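The quantile computation described above (mean, standard deviation, and generalized skew of the log-discharges fed into the log-Pearson Type III equation) can be sketched as follows. The statistics are hypothetical illustration values, not figures from the report.

```python
from scipy.stats import pearson3

def lp3_discharge(mean_log, sd_log, skew, exceed_prob):
    """Discharge exceeded with probability exceed_prob under a
    log-Pearson Type III fit to log10 annual maximum discharges."""
    q_log = pearson3.ppf(1.0 - exceed_prob, skew, loc=mean_log, scale=sd_log)
    return 10.0 ** q_log

# hypothetical statistics for an ungaged site (log10 of discharge in cfs)
q50 = lp3_discharge(mean_log=3.2, sd_log=0.25, skew=0.3, exceed_prob=0.02)
```

Here `skew` would be the generalized skew from the report's maps; smaller exceedance probabilities yield larger discharges, as expected for the upper tail of the fitted distribution.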
Di Lecce, Giuseppe; Arranz, Sara; Jáuregui, Olga; Tresserra-Rimbau, Anna; Quifer-Rada, Paola; Lamuela-Raventós, Rosa M
2014-02-15
This paper describes for the first time a complete characterisation of the phenolic compounds in different anatomical parts of the Albariño grape. The application of high-performance liquid chromatography coupled with two complementary techniques, hybrid quadrupole time-of-flight and triple-quadrupole mass spectrometry, allowed the phenolic composition of the Albariño grape to be unambiguously identified and quantified. A more complete phenolic profile was obtained by product ion and precursor ion scans, while a neutral loss scan at 152 u enabled a fast screening of procyanidin dimers, trimers and their galloylated derivatives. The compounds were confirmed by accurate mass measurements in QqToF-MS and QqToF-MS/MS modes at high resolution, and good fits were obtained for all investigated ions, with errors ranging from 0.2 to 4.5 mDa. To the best of our knowledge, two flavanol monomer hexosides were detected in the grape berry for the first time. Copyright © 2013 Elsevier Ltd. All rights reserved.
Tolerance analyses of a quadrupole magnet for advanced photon source upgrade
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, J., E-mail: Jieliu@aps.anl.gov; Jaski, M., E-mail: jaski@aps.anl.gov; Borland, M., E-mail: borland@aps.anl.gov
2016-07-27
Given physics requirements, the mechanical fabrication and assembly tolerances for storage ring magnets can be calculated using analytical methods [1, 2]. However, these methods are difficult to apply to complicated magnet designs [1]. In this paper, a novel method is proposed to determine fabrication and assembly tolerances consistent with physics requirements through a combination of magnetic and mechanical tolerance analyses. In this study, finite element analysis using OPERA is conducted to estimate the effect of fabrication and assembly errors on the magnetic field of a quadrupole magnet and to determine the allowable tolerances to achieve the specified magnetic performance. Based on the study, allowable fabrication and assembly tolerances for the quadrupole assembly are specified for the mechanical design of the quadrupole magnet. Next, to achieve the required assembly-level tolerances, mechanical tolerance stackup analyses using a 3D tolerance analysis package are carried out to determine the part- and subassembly-level fabrication tolerances. This method can be used to determine the tolerances for the design of other individual magnets and of magnet strings.
NASA Astrophysics Data System (ADS)
Cheng, Qin-Bo; Chen, Xi; Xu, Chong-Yu; Reinhardt-Imjela, Christian; Schulte, Achim
2014-11-01
In this study, the likelihood functions for uncertainty analysis of hydrological models are compared and improved through the following steps: (1) the equivalence between the Nash-Sutcliffe efficiency coefficient (NSE) and the likelihood function with Gaussian independent and identically distributed residuals is proved; (2) a new estimation method for the Box-Cox transformation (BC) parameter is developed to improve the elimination of heteroscedasticity in the model residuals; and (3) three likelihood functions-NSE, Generalized Error Distribution with BC (BC-GED), and Skew Generalized Error Distribution with BC (BC-SGED)-are applied for SWAT-WB-VSA (Soil and Water Assessment Tool - Water Balance - Variable Source Area) model calibration in the Baocun watershed, Eastern China. The performances of the calibrated models are compared using observed river discharges and groundwater levels. The results show that the minimum variance constraint can effectively estimate the BC parameter. The form of the likelihood function significantly impacts the calibrated parameters and the simulated high- and low-flow components. SWAT-WB-VSA with the NSE approach simulates floods well but baseflow poorly, owing to the assumption of a Gaussian error distribution, under which large errors have low probability but small errors around zero are nearly equiprobable. By contrast, SWAT-WB-VSA with the BC-GED or BC-SGED approach mimics baseflow well, as confirmed by the groundwater level simulation. The assumption of skewness in the error distribution may be unnecessary, because all the results of the BC-SGED approach are nearly the same as those of the BC-GED approach.
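The two ingredients named above, the NSE objective (equivalent to a Gaussian iid residual likelihood) and the Box-Cox transformation used to tame heteroscedasticity, can be sketched generically. This is an illustrative sketch, not the SWAT-WB-VSA implementation; the data values are invented.

```python
import numpy as np

def nse(obs, sim):
    """Nash-Sutcliffe efficiency; maximizing it over model parameters is
    equivalent to maximizing a Gaussian iid likelihood of the residuals."""
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def box_cox(y, lam):
    """Box-Cox transform, applied to flows to reduce heteroscedasticity
    before forming the (S)GED error model."""
    return np.log(y) if abs(lam) < 1e-8 else (y ** lam - 1.0) / lam

obs = np.array([1.0, 2.0, 4.0, 8.0])   # hypothetical observed flows
sim = np.array([1.1, 1.9, 4.2, 7.5])   # hypothetical simulated flows
score = nse(obs, sim)
z = box_cox(obs, 0.3)                  # transformed flows for the error model
```

In the BC-GED/BC-SGED approaches, the residuals are formed on the transformed scale `z` rather than on raw flows, which down-weights the large flood errors that dominate the plain NSE objective.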
Precision PEP-II optics measurement with an SVD-enhanced Least-Square fitting
NASA Astrophysics Data System (ADS)
Yan, Y. T.; Cai, Y.
2006-03-01
A singular value decomposition (SVD)-enhanced least-squares fitting technique is discussed. By automatically identifying, ordering, and selecting dominant SVD modes of the derivative matrix that respond to the variations of the variables, the convergence of the least-squares fitting is significantly enhanced, so the fitting can be fast enough for a fairly large system. This technique has been successfully applied to precision PEP-II optics measurement, in which we determine all quadrupole strengths (both normal and skew components) and sextupole feed-downs, as well as all BPM gains and BPM cross-plane couplings, through least-squares fitting of the phase advances, the local Green's functions, and the coupling ellipses among BPMs. The local Green's functions are specified by four local transfer matrix components: R12, R34, R32, and R14. These measurable quantities (the Green's functions, the phase advances, and the coupling ellipse tilt angles and axis ratios) are obtained by analyzing turn-by-turn Beam Position Monitor (BPM) data with a high-resolution model-independent analysis (MIA). Once all of the quadrupoles and sextupole feed-downs are determined, we obtain a computer virtual accelerator that matches the real accelerator in linear optics. Thus, beta functions, linear coupling parameters, and interaction point (IP) optics characteristics can be measured and displayed.
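The core of an SVD-enhanced least-squares step, keeping only the dominant modes of the derivative (response) matrix, can be sketched generically. This is not the MIA code; the mode selection here is a simple rank cut rather than the automatic ordering described above, and the toy system is invented.

```python
import numpy as np

def svd_least_squares(A, b, n_modes):
    """Solve A x ~= b using only the n_modes dominant SVD modes,
    which suppresses poorly determined directions of the fit."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)   # s is sorted descending
    k = min(n_modes, len(s))
    return Vt[:k].T @ ((U[:, :k].T @ b) / s[:k])

# usage: for a well-conditioned toy system with all modes kept,
# the fit recovers the true variables
A = np.array([[2.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
x_true = np.array([1.0, -2.0])
x_fit = svd_least_squares(A, A @ x_true, n_modes=2)
```

Dropping the weakest modes (small singular values) trades a little bias for much better conditioning, which is what makes iterated fits of large optics systems converge quickly.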
A Bayesian estimate of the concordance correlation coefficient with skewed data.
Feng, Dai; Baumgartner, Richard; Svetnik, Vladimir
2015-01-01
The concordance correlation coefficient (CCC) is one of the most popular scaled indices used to evaluate agreement. Most commonly, it is used under the assumption that the data are normally distributed. This assumption, however, does not apply to skewed data sets. While methods for estimating the CCC of skewed data sets have been introduced and studied, a Bayesian approach and its comparison with the previous methods have been lacking. In this study, we propose a Bayesian method for estimating the CCC of skewed data sets and compare it with the best method previously investigated. The proposed method has certain advantages: it tends to outperform the best previously studied method when the variation of the data comes mainly from the random subject effect rather than from error, and it allows for greater flexibility in application by enabling the incorporation of missing data, confounding covariates, and replications, which was not considered previously. The superiority of this new approach is demonstrated using simulation as well as real-life biomarker data sets from an electroencephalography clinical study. The implementation of the Bayesian method is accessible through the Comprehensive R Archive Network. Copyright © 2015 John Wiley & Sons, Ltd.
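For reference, the sample (non-Bayesian) concordance correlation coefficient that these methods estimate can be computed directly from paired measurements; a minimal sketch with invented data:

```python
import numpy as np

def ccc(x, y):
    """Lin's concordance correlation coefficient between paired measurements:
    penalizes both lack of correlation and departures from the identity line."""
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    sxy = np.mean((x - mx) * (y - my))
    return 2.0 * sxy / (vx + vy + (mx - my) ** 2)

x = np.array([1.0, 2.0, 3.0, 4.0])
perfect = ccc(x, x)          # identical measurements give CCC of 1
shifted = ccc(x, x + 1.0)    # a constant bias lowers the CCC below 1
```

Unlike the Pearson correlation, a constant shift between the two raters reduces the CCC, which is why it is used as an agreement index rather than an association index.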
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wagner, Bob
Electron-ion colliders (EIC) have been identified as an ideal tool to study the next frontier of nuclear physics – the gluon force that holds the building blocks of matter together, and which is a fundamental component of the theory of Quantum Chromodynamics (QCD). Future electron-ion colliders under consideration can be based on the Energy Recovery Linac (ERL) architecture. The beam lines for this architecture could be built of the newly developed Non-Scaling Fixed Field Alternating Gradient (NS FFAG) structure, so that they can transfer multiple energies within the same aperture. This structure allows for the use of compact, economical quadrupole permanent magnets. In this SBIR, we propose to design and to manufacture prototype quadrupole permanent magnets of focusing/defocusing combined function for use in this beam line. For our SBIR project, we proposed to design and build the focusing/defocusing quadrupole with a gradient strength of 50 T/m and a beam gap of 16 mm. The proposed permanent magnet material is SmCo because of its higher radiation resistance compared to NdFeB. The use of permanent magnets will reduce the overall cost. For Phase I, we took a recent design by Dr. Dejan Trbojevic and reran the Tosca code on the design to optimize the iron yoke with respect to the thickness of SmCo. We then fabricated one prototype focusing/defocusing combined-function quadrupole and measured the field quality dG/Go. Our plan for Phase II is that, based on our Phase I prototype experience, we shall improve the design and fabricate a production quadrupole, and design and incorporate coils for skew dipoles, normal quadrupole correctors, etc. In addition, we shall fabricate enough quadrupoles for one cell. The development of quadrupole permanent magnets is of fundamental importance for their application in future electron-ion colliders.
This accelerator structure will also advance the development of muon accelerators and allow for the development of compact, simplified, less expensive proton accelerators, promoting their use in areas such as proton cancer therapy, high-power proton drivers for tritium and neutron production, waste transmutation, driving a sub-critical nuclear reactor to produce energy, cargo container inspection, and radioisotope production. Proton cancer therapy has been identified as a particularly attractive and viable commercial application for the immediate future.
Derivation and experimental verification of clock synchronization theory
NASA Technical Reports Server (NTRS)
Palumbo, Daniel L.
1994-01-01
The objective of this work is to validate mathematically derived clock synchronization theories and their associated algorithms through experiment. Two theories are considered, the Interactive Convergence Clock Synchronization Algorithm and the Mid-Point Algorithm. Special clock circuitry was designed and built so that several operating conditions and failure modes (including malicious failures) could be tested. Both theories are shown to predict conservative upper bounds (i.e., measured values of clock skew were always less than the theory prediction). Insight gained during experimentation led to alternative derivations of the theories. These new theories accurately predict the clock system's behavior. It is found that a 100% penalty is paid to tolerate worst-case failures. It is also shown that under optimal conditions (with minimum error and no failures) the clock skew can be as much as 3 clock ticks. Clock skew grows to 6 clock ticks when failures are present. Finally, it is concluded that one cannot rely solely on test procedures or theoretical analysis to predict worst-case conditions.
Experimental validation of clock synchronization algorithms
NASA Technical Reports Server (NTRS)
Palumbo, Daniel L.; Graham, R. Lynn
1992-01-01
The objective of this work is to validate mathematically derived clock synchronization theories and their associated algorithms through experiment. Two theories are considered, the Interactive Convergence Clock Synchronization Algorithm and the Midpoint Algorithm. Special clock circuitry was designed and built so that several operating conditions and failure modes (including malicious failures) could be tested. Both theories are shown to predict conservative upper bounds (i.e., measured values of clock skew were always less than the theory prediction). Insight gained during experimentation led to alternative derivations of the theories. These new theories accurately predict the behavior of the clock system. It is found that a 100 percent penalty is paid to tolerate worst-case failures. It is also shown that under optimal conditions (with minimum error and no failures) the clock skew can be as much as three clock ticks. Clock skew grows to six clock ticks when failures are present. Finally, it is concluded that one cannot rely solely on test procedures or theoretical analysis to predict worst-case conditions.
Research on the optimal structure configuration of dither RLG used in skewed redundant INS
NASA Astrophysics Data System (ADS)
Gao, Chunfeng; Wang, Qi; Wei, Guo; Long, Xingwu
2016-05-01
The actual combat effectiveness of weapon equipment is restricted by the performance of the Inertial Navigation System (INS), especially where high reliability is required, as in fighters, satellites, and submarines. Through the use of skewed sensor geometries, redundancy techniques have been applied to reduce the cost and improve the reliability of the INS. In this paper, the structure configuration and the inertial sensor characteristics of a Skewed Redundant Strapdown Inertial Navigation System (SRSINS) using dithered Ring Laser Gyroscopes (RLGs) are analyzed. Because of dither coupling effects, the system measurement errors can be amplified if the individual gyro dither frequencies are close to one another or if the structure of the SRSINS is poorly chosen. Based on the characteristics of the RLG, research on the coupled vibration of dithered RLGs in the SRSINS is carried out. On the principles of optimal navigation performance, optimal reliability, and optimal cost-effectiveness, a comprehensive evaluation scheme for the inertial sensor configuration of the SRSINS is given.
A Skew-Normal Mixture Regression Model
ERIC Educational Resources Information Center
Liu, Min; Lin, Tsung-I
2014-01-01
A challenge associated with traditional mixture regression models (MRMs), which rest on the assumption of normally distributed errors, is determining the number of unobserved groups. Specifically, even slight deviations from normality can lead to the detection of spurious classes. The current work aims to (a) examine how sensitive the commonly…
ERIC Educational Resources Information Center
Browning, Mark; Lehman, James D.
1991-01-01
Authors respond to criticisms by Smith in the same issue and defend their use of the term "gene" and "misconception." Authors indicate that they did not believe that the use of computers significantly skewed their data concerning student errors. (PR)
Design of an rf quadrupole for Landau damping
NASA Astrophysics Data System (ADS)
Papke, K.; Grudiev, A.
2017-08-01
The recently proposed superconducting quadrupole resonator for Landau damping in accelerators is subjected to a detailed design study. The optimization process for two different cavity types is presented following the requirements of the High Luminosity Large Hadron Collider (HL-LHC), with the main focus on quadrupolar strength, surface peak fields, and impedance. The lower-order and higher-order mode (LOM and HOM) spectrum of the optimized cavities is investigated, and different approaches for their damping are proposed. For an example cavity, the first two higher-order multipole errors are calculated; for the same example, the required rf power and the optimal external quality factor for the input coupler are derived.
Chronopoulos, D
2017-01-01
A systematic expression quantifying the wave energy skewing phenomenon as a function of the mechanical characteristics of a non-isotropic structure is derived in this study. A structure of arbitrary anisotropy, layering and geometric complexity is modelled through Finite Elements (FEs) coupled to a periodic structure wave scheme. A generic approach for efficiently computing the angular sensitivity of the wave slowness for each wave type, direction and frequency is presented. The approach does not involve any finite differentiation scheme and is therefore computationally efficient and not prone to the associated numerical errors. Copyright © 2016 Elsevier B.V. All rights reserved.
The sound of moving bodies. Ph.D. Thesis - Cambridge Univ.
NASA Technical Reports Server (NTRS)
Brentner, Kenneth Steven
1990-01-01
The importance of the quadrupole source term in the Ffowcs Williams and Hawkings (FWH) equation was addressed. The quadrupole source contains fundamental components of the complete fluid mechanics problem, which are ignored only at the risk of error. The results make it clear that any application of the acoustic analogy should begin with all of the source terms in the FWH theory. The direct calculation of the acoustic field as part of the complete unsteady fluid mechanics problem using CFD is considered. It is shown that aeroacoustic calculations can indeed be made with CFD codes. The results indicate that the acoustic field is the component of the computation most susceptible to numerical error; therefore, the ability to measure the damping of acoustic waves is essential to the development of acoustic computations. Essential groundwork for a new approach to the problem of sound generation by moving bodies is presented. This new computational acoustic approach holds the promise of solving many problems hitherto pushed aside.
Cross sections for H(-) and Cl(-) production from HCl by dissociative electron attachment
NASA Technical Reports Server (NTRS)
Orient, O. J.; Srivastava, S. K.
1985-01-01
A crossed target beam-electron beam collision geometry and a quadrupole mass spectrometer have been used to conduct dissociative electron attachment cross section measurements for the case of H(-) and Cl(-) production from HCl. The relative flow technique is used to determine the absolute values of cross sections. A tabulation is given of the attachment energies corresponding to various cross section maxima. Error sources contributing to total errors are also estimated.
NASA Astrophysics Data System (ADS)
Guelton, Nicolas; Lopès, Catherine; Sordini, Henri
2016-08-01
In hot dip galvanizing lines, strip bending around the sink roll generates a flatness defect called crossbow. This defect affects the cross coating weight distribution by changing the knife-to-strip distance along the strip width and requires a significant increase in coating target to prevent any risk of undercoating. The already-existing coating weight control system succeeds in eliminating both average and skew coating errors but cannot do anything against crossbow coating errors. It has therefore been upgraded with a flatness correction function which takes advantage of the possibility of controlling the electromagnetic stabilizer. The basic principle is to split, for every gage scan, the coating weight cross profile of the top and bottom sides into two, respectively, linear and non-linear components. The linear component is used to correct the skew error by realigning the knives with the strip, while the non-linear component is used to distort the strip in the stabilizer in such a way that the strip is kept flat between the knives. Industrial evaluation is currently in progress but the first results have already shown that the strip can be significantly flattened between the knives and the production tolerances subsequently tightened without compromising quality.
Nixon, Richard M; Wonderling, David; Grieve, Richard D
2010-03-01
Cost-effectiveness analyses (CEA) alongside randomised controlled trials commonly estimate incremental net benefits (INB) with 95% confidence intervals and compute cost-effectiveness acceptability curves and confidence ellipses. Two alternative non-parametric methods for estimating INB are to apply the central limit theorem (CLT) or to use the non-parametric bootstrap method, although it is unclear which method is preferable. This paper describes the statistical rationale underlying each of these methods and illustrates their application with a trial-based CEA. It compares the sampling uncertainty from using either technique in a Monte Carlo simulation. The experiments are repeated varying the sample size and the skewness of costs in the population. The results showed that, even when data were highly skewed, both methods accurately estimated the true standard errors (SEs) when sample sizes were moderate to large (n>50), and also gave good estimates for small data sets with low skewness. However, when sample sizes were relatively small and the data highly skewed, using the CLT rather than the bootstrap led to slightly more accurate SEs. We conclude that, while in general either method is appropriate, the CLT is easier to implement and provides SEs that are at least as accurate as the bootstrap. (c) 2009 John Wiley & Sons, Ltd.
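The two standard-error estimators compared above can be sketched on simulated skewed cost data. This is a generic illustration, not the paper's experiment; the lognormal parameters and sample size are invented.

```python
import numpy as np

rng = np.random.default_rng(0)
costs = rng.lognormal(mean=7.0, sigma=1.0, size=200)   # highly skewed costs

# CLT-based standard error of the mean cost
se_clt = costs.std(ddof=1) / np.sqrt(len(costs))

# non-parametric bootstrap standard error (resample with replacement)
boot_means = np.array([rng.choice(costs, size=len(costs)).mean()
                       for _ in range(2000)])
se_boot = boot_means.std(ddof=1)
```

With a moderate sample size like this, the two estimates agree closely even for skewed data, consistent with the paper's conclusion; the divergence of interest appears only for small n with high skewness.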
Micro-Bunched Beam Production at FAST for Narrow Band THz Generation Using a Slit-Mask
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hyun, J.; Crawford, D.; Edstrom Jr, D.
We discuss simulations and experiments on creating micro-bunch beams for generating narrow-band THz radiation at the Fermilab Accelerator Science and Technology (FAST) facility. The low-energy electron beamline at FAST consists of a photoinjector-based RF gun, two L-band superconducting accelerating cavities, a chicane, and a beam dump. The electron bunches are lengthened with cavity phases set off-crest for better longitudinal separation and then micro-bunched with a slit-mask installed in the chicane. We carried out the experiments with 30 MeV electron beams and detected signals of the micro-bunching using a skew quadrupole magnet in the chicane. In this paper, the details of micro-bunch beam production, the detection of micro-bunching, and comparison with simulations are described.
Olson, Scott A.; with a section by Veilleux, Andrea G.
2014-01-01
This report provides estimates of flood discharges at selected annual exceedance probabilities (AEPs) for streamgages in and adjacent to Vermont and equations for estimating flood discharges at AEPs of 50-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent (recurrence intervals of 2-, 5-, 10-, 25-, 50-, 100-, 200-, and 500-years, respectively) for ungaged, unregulated, rural streams in Vermont. The equations were developed using generalized least-squares regression. Flood-frequency and drainage-basin characteristics from 145 streamgages were used in developing the equations. The drainage-basin characteristics used as explanatory variables in the regression equations include drainage area, percentage of wetland area, and the basin-wide mean of the average annual precipitation. The average standard errors of prediction for estimating the flood discharges at the 50-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent AEP with these equations are 34.9, 36.0, 38.7, 42.4, 44.9, 47.3, 50.7, and 55.1 percent, respectively. Flood discharges at selected AEPs for streamgages were computed by using the Expected Moments Algorithm. To improve estimates of the flood discharges for given exceedance probabilities at streamgages in Vermont, a new generalized skew coefficient was developed. The new generalized skew for the region is a constant, 0.44. The mean square error of the generalized skew coefficient is 0.078. This report describes a technique for using results from the regression equations to adjust an AEP discharge computed from a streamgage record. This report also describes a technique for using a drainage-area adjustment to estimate flood discharge at a selected AEP for an ungaged site upstream or downstream from a streamgage. The final regression equations and the flood-discharge frequency data used in this study will be available in StreamStats. StreamStats is a World Wide Web application providing automated regression-equation solutions for user-selected sites on streams.
Lamontagne, Jonathan R.; Stedinger, Jery R.; Berenbrock, Charles; Veilleux, Andrea G.; Ferris, Justin C.; Knifong, Donna L.
2012-01-01
Flood-frequency information is important in the Central Valley region of California because of the high risk of catastrophic flooding. Most traditional flood-frequency studies focus on peak flows, but for the assessment of the adequacy of reservoirs, levees, and other flood control structures, sustained flood-flow (flood-duration) frequency data are needed. This study focuses on rainfall or rain-on-snow floods, rather than the annual maximum, because rain events produce the largest floods in the region. A key to estimating flood-duration frequency is determining the regional skew for such data. Of the 50 sites used in this study to determine regional skew, 28 sites were considered to have little to no significant regulated flows, and for the 22 sites considered significantly regulated, unregulated daily flow data were synthesized by using reservoir storage changes and diversion records. The unregulated, annual maximum rainfall flood flows for selected durations (1-day, 3-day, 7-day, 15-day, and 30-day) for all 50 sites were furnished by the U.S. Army Corps of Engineers. Station skew was determined by using the expected moments algorithm program for fitting the Pearson Type 3 flood-frequency distribution to the logarithms of annual flood-duration data. Bayesian generalized least squares regression procedures used in earlier studies were modified to address problems caused by large cross correlations among concurrent rainfall floods in California and to address the extensive censoring of low outliers at some sites, by using the new expected moments algorithm for fitting the LP3 distribution to rainfall flood-duration data. To properly account for these problems and to develop suitable regional-skew regression models and regression diagnostics, a combination of ordinary least squares, weighted least squares, and Bayesian generalized least squares regressions was adopted.
This new methodology determined that a nonlinear model relating regional skew to mean basin elevation was the best model for each flood duration. The regional-skew values ranged from -0.74 for a flood duration of 1-day and a mean basin elevation less than 2,500 feet to values near 0 for a flood duration of 7-days and a mean basin elevation greater than 4,500 feet. This relation between skew and elevation reflects the interaction of snow and rain, which increases with increased elevation. The regional skews are more accurate, and the mean squared errors are less than in the Interagency Advisory Committee on Water Data's National skew map of Bulletin 17B.
NASA Technical Reports Server (NTRS)
Li, C. J.; Devries, W. R.; Ludema, K. C.
1983-01-01
Measurements made with a stylus surface tracer, which provides a digitized representation of a surface profile, are discussed. Parameters are defined to characterize the height (e.g., RMS roughness, skewness, and kurtosis) and length (e.g., autocorrelation) of the surface topography. These are applied to the characterization of crankshaft journals manufactured by different grinding and lapping procedures known to give significant differences in crankshaft bearing life. It was found that three parameters (RMS roughness, skewness, and kurtosis) are necessary to adequately distinguish the character of these surfaces. Each surface specimen has a set of values for these three parameters, which can be regarded as a point in a space spanned by three characteristic axes. The various journal surfaces can be classified, along with the determination of a proper wavelength cutoff (0.25 mm), by using a method of separated subspaces. The finite radius of the stylus used for profile tracing gives an inherent measurement error as it passes over the fine structure of the surface. A mathematical model is derived to compensate for this error.
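The three height parameters the study found necessary (RMS roughness, skewness, and kurtosis) can be computed directly from a digitized profile. This is a generic sketch, not the authors' code; the sine profile is purely illustrative (a pure sine has zero skewness and kurtosis 1.5).

```python
import numpy as np

def height_params(z):
    """RMS roughness Rq, skewness Rsk, and kurtosis Rku of a digitized profile."""
    z = z - z.mean()                     # height deviations from the mean line
    rq = np.sqrt(np.mean(z ** 2))        # RMS roughness
    rsk = np.mean(z ** 3) / rq ** 3      # skewness: asymmetry of peaks vs valleys
    rku = np.mean(z ** 4) / rq ** 4      # kurtosis: spikiness of the profile
    return rq, rsk, rku

# illustrative profile sampled over full periods
x = np.linspace(0.0, 2 * np.pi, 1000, endpoint=False)
rq, rsk, rku = height_params(np.sin(x))
```

Plotting each specimen at the point (Rq, Rsk, Rku) gives the three-axis classification space described in the abstract.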
Inference of median difference based on the Box-Cox model in randomized clinical trials.
Maruo, K; Isogawa, N; Gosho, M
2015-05-10
In randomized clinical trials, many medical and biological measurements are not normally distributed and are often skewed. The Box-Cox transformation is a powerful procedure for comparing two treatment groups for skewed continuous variables in terms of a statistical test. However, it is difficult to directly estimate and interpret the location difference between the two groups on the original scale of the measurement. We propose a helpful method that infers the difference of the treatment effect on the original scale in a more easily interpretable form. We also provide statistical analysis packages that consistently include an estimate of the treatment effect, covariance adjustments, standard errors, and statistical hypothesis tests. A simulation study focusing on randomized parallel-group clinical trials with two treatment groups indicates that the performance of the proposed method is equivalent to or better than that of the existing non-parametric approaches in terms of the type-I error rate and power. We illustrate our method with cluster of differentiation 4 (CD4) data from an acquired immune deficiency syndrome clinical trial. Copyright © 2015 John Wiley & Sons, Ltd.
Multiple electrokinetic actuators for feedback control of colloidal crystal size.
Juárez, Jaime J; Mathai, Pramod P; Liddle, J Alexander; Bevan, Michael A
2012-10-21
We report a feedback control method to precisely target the number of colloidal particles in quasi-2D ensembles and their subsequent assembly into crystals in a quadrupole electrode. Our approach relies on tracking the number of particles within a quadrupole electrode, which is used in a real-time feedback control algorithm to dynamically actuate competing electrokinetic transport mechanisms. Particles are removed from the quadrupole using DC-field mediated electrophoretic-electroosmotic transport, while high-frequency AC-field mediated dielectrophoretic transport is used to concentrate and assemble colloidal crystals. Our results show successful control of the size of crystals containing 20 to 250 colloidal particles with less than 10% error. Assembled crystals are characterized by their radius of gyration, crystallinity, and number of edge particles, and demonstrate the expected size-dependent properties. Our findings demonstrate successful ensemble feedback control of the assembly of different sized colloidal crystals using multiple actuators, which has broad implications for control over nano- and micro- scale assembly processes involving colloidal components.
ACCELERATORS: Beam based alignment of the SSRF storage ring
NASA Astrophysics Data System (ADS)
Zhang, Man-Zhou; Li, Hao-Hu; Jiang, Bo-Cheng; Liu, Gui-Min; Li, De-Ming
2009-04-01
There are 140 beam position monitors (BPMs) in the Shanghai Synchrotron Radiation Facility (SSRF) storage ring used for measuring the closed orbit. As the BPM pickup electrodes are assembled directly on the vacuum chamber, it is important to calibrate the electrical center offset of the BPM to an adjacent quadrupole magnetic center. A beam based alignment (BBA) method which varies individual quadrupole magnet strength and observes its effects on the orbit is used to measure the BPM offsets in both the horizontal and vertical planes. It is a completely automated technique with various data processing methods. There are several parameters such as the strength change of the correctors and the quadrupoles which should be chosen carefully in real measurement. After several rounds of BBA measurement and closed orbit correction, these offsets are set to an accuracy better than 10 μm. In this paper we present the method of beam based calibration of BPMs, the experimental results of the SSRF storage ring, and the error analysis.
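The beam-based alignment idea described above (vary a quadrupole's strength, observe the orbit response, and find the beam position at which that response vanishes) can be sketched with a toy linear model. All numbers are invented; this is not the SSRF analysis code.

```python
import numpy as np

# Assumed linear model: the orbit response to a quadrupole strength change
# is proportional to the beam offset in that quadrupole.
true_offset = 0.35   # mm, BPM electrical center relative to the quad center
c = 2.0              # arbitrary response coefficient

rng = np.random.default_rng(5)
bpm_readings = np.linspace(-1.0, 1.0, 9)   # beam moved stepwise by correctors
response = c * (bpm_readings - true_offset) + rng.normal(scale=0.01, size=9)

# The BPM offset is the reading at which the orbit response crosses zero
slope, intercept = np.polyfit(bpm_readings, response, 1)
estimated_offset = -intercept / slope
```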
Statistical analysis of the 70 meter antenna surface distortions
NASA Technical Reports Server (NTRS)
Kiedron, K.; Chian, C. T.; Chuang, K. L.
1987-01-01
Statistical analysis of surface distortions of the 70 meter NASA/JPL antenna, located at Goldstone, was performed. The purpose of this analysis is to verify whether deviations due to gravity loading can be treated as quasi-random variables with normal distribution. Histograms of the RF pathlength error distribution for several antenna elevation positions were generated. The results indicate that the deviations from the ideal antenna surface are not normally distributed. The observed density distribution for all antenna elevation angles is taller and narrower than the normal density, which results in large positive values of kurtosis and a significant amount of skewness. The skewness of the distribution changes from positive to negative as the antenna elevation changes from zenith to horizon.
Error Correction for the JLEIC Ion Collider Ring
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wei, Guohui; Morozov, Vasiliy; Lin, Fanglei
2016-05-01
The sensitivity to misalignment, magnet strength error, and BPM noise is investigated in order to specify design tolerances for the ion collider ring of the Jefferson Lab Electron Ion Collider (JLEIC) project. These errors, including horizontal, vertical, and longitudinal displacement, roll error in the transverse plane, strength error of the main magnets (dipole, quadrupole, and sextupole), BPM noise, and strength jitter of correctors, cause closed orbit distortion, tune change, beta-beat, coupling, and chromaticity problems. These problems generally reduce the dynamic aperture at the Interaction Point (IP). Following real commissioning experience at other machines, closed orbit correction, tune matching, beta-beat correction, decoupling, and chromaticity correction have been performed in the study. Finally, we find that the dynamic aperture at the IP is restored. This paper describes that work.
NASA Technical Reports Server (NTRS)
Hruby, R. J.; Bjorkman, W. S.; Schmidt, S. F.; Carestia, R. A.
1979-01-01
Algorithms were developed that attempt to identify which sensor in a tetrad configuration has experienced a step failure. An algorithm is also described that provides a measure of the confidence with which the correct identification was made. Experimental results are presented from real-time tests conducted on a three-axis motion facility utilizing an ortho-skew tetrad strapdown inertial sensor package. The effects of prediction errors and of quantization on correct failure identification are discussed as well as an algorithm for detecting second failures through prediction.
Agogo, George O.
2017-01-01
Measurement error in exposure variables is a serious impediment in epidemiological studies that relate exposures to health outcomes. In nutritional studies, interest could be in the association between long-term dietary intake and disease occurrence. Long-term intake is usually assessed with food frequency questionnaire (FFQ), which is prone to recall bias. Measurement error in FFQ-reported intakes leads to bias in parameter estimate that quantifies the association. To adjust for bias in the association, a calibration study is required to obtain unbiased intake measurements using a short-term instrument such as 24-hour recall (24HR). The 24HR intakes are used as response in regression calibration to adjust for bias in the association. For foods not consumed daily, 24HR-reported intakes are usually characterized by excess zeroes, right skewness, and heteroscedasticity posing serious challenge in regression calibration modeling. We proposed a zero-augmented calibration model to adjust for measurement error in reported intake, while handling excess zeroes, skewness, and heteroscedasticity simultaneously without transforming 24HR intake values. We compared the proposed calibration method with the standard method and with methods that ignore measurement error by estimating long-term intake with 24HR and FFQ-reported intakes. The comparison was done in real and simulated datasets. With the 24HR, the mean increase in mercury level per ounce fish intake was about 0.4; with the FFQ intake, the increase was about 1.2. With both calibration methods, the mean increase was about 2.0. Similar trend was observed in the simulation study. In conclusion, the proposed calibration method performs at least as good as the standard method. PMID:27704599
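The standard regression-calibration step mentioned in the abstract (regress the unbiased 24HR intake on the FFQ intake, then use the calibrated intake in the outcome model) can be sketched as follows. The data-generating model and parameters are invented, and the zero-augmented extension the authors propose is not shown.

```python
import numpy as np

rng = np.random.default_rng(6)
n = 5000
true_intake = rng.gamma(shape=2.0, scale=1.0, size=n)   # long-term intake
ffq = true_intake + rng.normal(scale=1.0, size=n)       # error-prone FFQ
hr24 = true_intake + rng.normal(scale=0.5, size=n)      # unbiased 24HR
outcome = 2.0 * true_intake + rng.normal(scale=0.5, size=n)

# Naive analysis: regressing the outcome on the FFQ attenuates the slope
naive_slope = np.polyfit(ffq, outcome, 1)[0]

# Regression calibration: estimate E[intake | FFQ] from the calibration
# data, then regress the outcome on the calibrated intake
cal_slope, cal_intercept = np.polyfit(ffq, hr24, 1)
calibrated = cal_intercept + cal_slope * ffq
rc_slope = np.polyfit(calibrated, outcome, 1)[0]
```

The naive slope is biased toward zero, while the calibrated slope recovers the true association of 2.0 in expectation.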
NASA Astrophysics Data System (ADS)
You, Yue; Zhang, Wenjia; Sun, Lin; Du, Jiangbing; Liang, Chenyu; Yang, Fan; He, Zuyuan
2018-03-01
The vertical cavity surface emitting laser (VCSEL)-based multimode optical transceivers enabled by pulse amplitude modulation (PAM)-4 will be commercialized in the near future to meet the 400-Gbps standard for short-reach optical interconnects. It is still challenging to achieve over 56/112-Gbps with multilevel signaling, as the multimode property of the device and link introduces a nonlinear temporal response for the different levels. In this work, we scrutinize the distortions that relate to the multilevel feature of PAM-4 modulation and propose an effective feedback equalization scheme for a 56-Gbps VCSEL-based PAM-4 optical interconnect system to mitigate the distortions caused by eye timing skew and nonlinear power-dependent noise. Level redistribution at the Tx side is theoretically modeled and constructed to achieve equivalent symbol error ratios (SERs) for the four levels and improved BER performance. The cause of the eye skewing and the mitigation approach are also simulated at 100 Gbps and experimentally investigated at 56 Gbps. The results indicate that more than 2 dB of power penalty improvement has been achieved by using such a distortion-aware equalizer.
Temperature-Compensated Clock Skew Adjustment
Castillo-Secilla, Jose María; Palomares, Jose Manuel; Olivares, Joaquín
2013-01-01
This work analyzes several drift compensation mechanisms in wireless sensor networks (WSN). Temperature is an environmental factor that greatly affects the oscillators shipped in every WSN mote. This behavior creates the need to improve drift compensation mechanisms in synchronization protocols. Using the Flooding Time Synchronization Protocol (FTSP), this work demonstrates that crystal oscillators are affected by temperature variations, and that the influence of temperature degrades the performance of FTSP under changing temperature conditions. This article proposes an innovative correction factor that minimizes the impact of temperature on the clock skew. By means of this factor, two new mechanisms are proposed in this paper: the Adjusted Temperature (AT) and the Advanced Adjusted Temperature (A2T). These mechanisms have been combined with FTSP to produce AT-FTSP and A2T-FTSP. Both have been tested in a network of TelosB motes running TinyOS. Results show that both AT-FTSP and A2T-FTSP improve the average synchronization error compared to FTSP and other temperature-compensated protocols (Environment-Aware Clock Skew Estimation and Synchronization for WSN (EACS) and Temperature Compensated Time Synchronization (TCTS)). PMID:23966192
Clark, Jeremy S C; Kaczmarczyk, Mariusz; Mongiało, Zbigniew; Ignaczak, Paweł; Czajkowski, Andrzej A; Klęsk, Przemysław; Ciechanowicz, Andrzej
2013-08-01
Gompertz-related distributions have dominated mortality studies for 187 years. However, unrelated distributions also fit well to mortality data. These compete with the Gompertz and Gompertz-Makeham distributions when applied to data with varying extents of truncation, with no consensus as to preference. In contrast, Gaussian-related distributions are rarely applied, despite the fact that Lexis in 1879 suggested that the normal distribution itself fits well to the right of the mode. The aims of this study were therefore to compare skew-t fits of Human Mortality Database data with Gompertz-nested distributions, by implementing maximum likelihood estimation functions (mle2, R package bbmle; coding given). Results showed that skew-t fits obtained lower Bayesian information criterion values than Gompertz-nested distributions when applied to low-mortality country data, including the 1711 and 1810 cohorts. As Gaussian-related distributions have now been found to have almost universal application in error theory, one conclusion could be that a Gaussian-related distribution might replace Gompertz-related distributions as the basis for mortality studies.
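The model comparison by Bayesian information criterion described in this abstract can be sketched in Python, substituting a skew-normal for the skew-t (SciPy has no standard skew-t distribution) and synthetic data for the Human Mortality Database; the parameters are invented.

```python
import numpy as np
from scipy import stats

# Synthetic "age at death" sample from a left-skewed distribution
rng = np.random.default_rng(4)
ages = stats.skewnorm.rvs(a=-5, loc=88, scale=10, size=2000, random_state=rng)

def bic(loglik, k, n):
    """Bayesian information criterion: lower is better."""
    return k * np.log(n) - 2.0 * loglik

n = len(ages)
# Fit both candidate models by maximum likelihood and compare BIC
sn_params = stats.skewnorm.fit(ages)
norm_params = stats.norm.fit(ages)
bic_sn = bic(stats.skewnorm.logpdf(ages, *sn_params).sum(), 3, n)
bic_norm = bic(stats.norm.logpdf(ages, *norm_params).sum(), 2, n)
```

Because the data are genuinely skewed, the skew-normal fit attains the lower BIC despite its extra parameter, mirroring the comparison logic in the study.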
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tocchini-Valentini, Domenico; Barnard, Michael; Bennett, Charles L.
2012-10-01
We present a method to extract the redshift-space distortion β parameter in configuration space with a minimal set of cosmological assumptions. We show that a novel combination of the observed monopole and quadrupole correlation functions can efficiently remove the impact of mild nonlinearities and redshift errors. The method offers a series of convenient properties: it does not depend on the theoretical linear correlation function, the mean galaxy density is irrelevant, only convolutions are used, and there is no explicit dependence on linear bias. Analyses based on dark matter N-body simulations and Fisher matrix demonstrate that errors of a few percent on β are possible with a full-sky, 1 (h^-1 Gpc)^3 survey centered at a redshift of unity and with negligible shot noise. We also find a baryonic feature in the normalized quadrupole in configuration space that should complicate the extraction of the growth parameter from the linear theory asymptote, but that does not have a major impact on our method.
Beam dynamics and electromagnetic studies of a 3 MeV, 325 MHz radio frequency quadrupole accelerator
NASA Astrophysics Data System (ADS)
Gaur, Rahul; Kumar, Vinit
2018-05-01
We present the beam dynamics and electromagnetic studies of a 3 MeV, 325 MHz H- radio frequency quadrupole (RFQ) accelerator for the proposed Indian Spallation Neutron Source project. We have followed a design approach, where the emittance growth and the losses are minimized by keeping the tune depression ratio larger than 0.5. The transverse cross-section of RFQ is designed at a frequency lower than the operating frequency, so that the tuners have their nominal position inside the RFQ cavity. This has resulted in an improvement of the tuning range, and the efficiency of tuners to correct the field errors in the RFQ. The vane-tip modulations have been modelled in CST-MWS code, and its effect on the field flatness and the resonant frequency has been studied. The deterioration in the field flatness due to vane-tip modulations is reduced to an acceptable level with the help of tuners. Details of the error study and the higher order mode study along with mode stabilization technique are also described in the paper.
A concept for canceling the leakage field inside the stored beam chamber of a septum magnet
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abliz, M.; Jaski, M.; Xiao, A.
Here, the Advanced Photon Source is in the process of upgrading its storage ring from a double-bend to a multi-bend lattice as part of the APS Upgrade Project (APS-U). A swap-out injection scheme is planned for the APS-U to keep a constant beam current and to enable a small dynamic aperture. A novel concept that cancels out the effect of the leakage field inside the stored beam chamber was introduced in the design of the septum magnet. As a result, the horizontal deflecting angle of the stored beam was reduced to below 1 µrad with a 2 mm septum thickness and 1.06 T normal injection field. The concept also helped to minimize the integrated skew quadrupole and normal sextupole fields inside the stored beam chamber.
Beam echoes in the presence of coupling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gross, Axel
2017-10-03
Transverse beam echoes could provide a new technique of measuring diffusion characteristics orders of magnitude faster than the current methods; however, their interaction with many accelerator parameters is poorly understood. Using a program written in C, we explored the relationship between coupling and echo strength. We found that echoes could be generated in both dimensions, even with a dipole kick in only one dimension. We found that the echo effects are not destroyed even when there is strong coupling, falling off only at extremely high coupling values. We found that at intermediate values of skew quadrupole strength, the decoherence time of the beam is greatly increased, causing a destruction of the echo effects. We found that this is caused by a narrowing of the tune width of the particles. Results from this study will help to provide recommendations to IOTA (Integrable Optics Test Accelerator) for their upcoming echo experiment.
Correction of clock errors in seismic data using noise cross-correlations
NASA Astrophysics Data System (ADS)
Hable, Sarah; Sigloch, Karin; Barruol, Guilhem; Hadziioannou, Céline
2017-04-01
Correct and verifiable timing of seismic records is crucial for most seismological applications. For seismic land stations, frequent synchronization of the internal station clock with a GPS signal should ensure accurate timing, but loss of GPS synchronization is a common occurrence, especially for remote, temporary stations. In such cases, retrieval of clock timing has been a long-standing problem. The same timing problem applies to Ocean Bottom Seismometers (OBS), where no GPS signal can be received during deployment and only two GPS synchronizations can be attempted upon deployment and recovery. If successful, a skew correction is usually applied, where the final timing deviation is interpolated linearly across the entire operation period. If GPS synchronization upon recovery fails, then even this simple and unverified, first-order correction is not possible. In recent years, the usage of cross-correlation functions (CCFs) of ambient seismic noise has been demonstrated as a clock-correction method for certain network geometries. We demonstrate the great potential of this technique for island stations and OBS that were installed in the course of the Réunion Hotspot and Upper Mantle - Réunions Unterer Mantel (RHUM-RUM) project in the western Indian Ocean. Four stations on the island La Réunion were affected by clock errors of up to several minutes due to a missing GPS signal. CCFs are calculated for each day and compared with a reference cross-correlation function (RCF), which is usually the average of all CCFs. The clock error of each day is then determined from the measured shift between the daily CCFs and the RCF. To improve the accuracy of the method, CCFs are computed for several land stations and all three seismic components. Averaging over these station pairs and their 9 component pairs reduces the standard deviation of the clock errors by a factor of 4 (from 80 ms to 20 ms). 
This procedure permits a continuous monitoring of clock errors where small clock drifts (1 ms/day) as well as large clock jumps (6 min) are identified. The same method is applied to records of five OBS stations deployed within a radius of 150 km around La Réunion. The assumption of a linear clock drift is verified by correlating OBS for which GPS-based skew corrections were available with land stations. For two OBS stations without skew estimates, we find clock drifts of 0.9 ms/day and 0.4 ms/day. This study salvages expensive seismic records from remote regions that would be otherwise lost for seismicity or tomography studies.
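The core step of the clock-correction technique above (measuring the shift between a daily cross-correlation function and a reference CCF) can be sketched with synthetic Gaussian CCFs. The sampling rate, shapes, and shift are invented; this is not the RHUM-RUM processing code.

```python
import numpy as np

fs = 10.0                      # assumed sampling rate in Hz
t = np.arange(-60, 60, 1 / fs)  # lag axis of the cross-correlation functions
ref_ccf = np.exp(-0.5 * ((t - 5.0) / 2.0) ** 2)   # synthetic reference CCF

true_shift = 1.2               # seconds of clock error to recover
daily_ccf = np.exp(-0.5 * ((t - 5.0 - true_shift) / 2.0) ** 2)

# The clock error is the lag that best aligns the daily CCF with the reference
corr = np.correlate(daily_ccf, ref_ccf, mode="full")
lags = np.arange(-len(t) + 1, len(t)) / fs
estimated_shift = lags[np.argmax(corr)]
```

In practice the daily CCFs are noisy, which is why the study averages over many station pairs and all nine component pairs before measuring the shift.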
Unconventional Rotor Power Response to Yaw Error Variations
Schreck, S. J.; Schepers, J. G.
2014-12-16
Continued inquiry into rotor and blade aerodynamics remains crucial for achieving accurate, reliable prediction of wind turbine power performance under yawed conditions. To exploit key advantages conferred by controlled inflow conditions, we used EU-JOULE DATA Project and UAE Phase VI experimental data to characterize rotor power production under yawed conditions. Anomalies in rotor power variation with yaw error were observed, and the underlying fluid dynamic interactions were isolated. Unlike currently recognized influences caused by angled inflow and skewed wake, which may be considered potential flow interactions, these anomalies were linked to pronounced viscous and unsteady effects.
Diagnostics for insufficiencies of posterior calculations in Bayesian signal inference.
Dorn, Sebastian; Oppermann, Niels; Ensslin, Torsten A
2013-11-01
We present an error-diagnostic validation method for posterior distributions in Bayesian signal inference, an advancement of a previous work. It transfers deviations from the correct posterior into characteristic deviations from a uniform distribution of a quantity constructed for this purpose. We show that this method is able to reveal and discriminate several kinds of numerical and approximation errors, as well as their impact on the posterior distribution. For this we present four typical analytical examples of posteriors with incorrect variance, skewness, position of the maximum, or normalization. We show further how this test can be applied to multidimensional signals.
Friedrich, Joachim; Coriani, Sonia; Helgaker, Trygve; Dolg, Michael
2009-10-21
A fully automated parallelized implementation of the incremental scheme for coupled-cluster singles-and-doubles (CCSD) energies has been extended to treat molecular (unrelaxed) first-order one-electron properties such as the electric dipole and quadrupole moments. The convergence and accuracy of the incremental approach for the dipole and quadrupole moments have been studied for a variety of chemically interesting systems. It is found that the electric dipole moment can be obtained to within 5% and 0.5% accuracy with respect to the exact CCSD value at the third and fourth orders of the expansion, respectively. Furthermore, we find that the incremental expansion of the quadrupole moment converges to the exact result with increasing order of the expansion: the convergence of nonaromatic compounds is fast, with errors of less than 16 mau and less than 1 mau at third and fourth orders, respectively (1 mau = 10^-3 e a_0^2); the aromatic compounds converge slowly, with maximum absolute deviations of 174 and 72 mau at third and fourth orders, respectively.
Is Coefficient Alpha Robust to Non-Normal Data?
Sheng, Yanyan; Sheng, Zhaohui
2011-01-01
Coefficient alpha has been a widely used measure by which internal consistency reliability is assessed. In addition to essential tau-equivalence and uncorrelated errors, normality has been noted as another important assumption for alpha. Earlier work on evaluating this assumption considered either exclusively non-normal error score distributions, or limited conditions. In view of this and the availability of advanced methods for generating univariate non-normal data, Monte Carlo simulations were conducted to show that non-normal distributions for true or error scores do create problems for using alpha to estimate the internal consistency reliability. The sample coefficient alpha is affected by leptokurtic true score distributions, or skewed and/or kurtotic error score distributions. Increased sample sizes, not test lengths, help improve the accuracy, bias, or precision of using it with non-normal data. PMID:22363306
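Coefficient alpha itself, the statistic whose robustness this abstract examines, can be computed directly from an item-score matrix; the parallel-items simulation below is invented for illustration.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Coefficient alpha for an (n_respondents, k_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

# Six parallel items: a common true score plus independent unit-variance
# error, so the theoretical alpha is 6*0.5 / (1 + 5*0.5) ~ 0.857
rng = np.random.default_rng(1)
true_score = rng.normal(size=(500, 1))
errors = rng.normal(size=(500, 6))
alpha = cronbach_alpha(true_score + errors)
```

Replacing the normal true-score or error draws with skewed or heavy-tailed ones is how a simulation like the one in this study probes alpha's behavior under non-normality.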
A log-sinh transformation for data normalization and variance stabilization
NASA Astrophysics Data System (ADS)
Wang, Q. J.; Shrestha, D. L.; Robertson, D. E.; Pokhrel, P.
2012-05-01
When quantifying model prediction uncertainty, it is statistically convenient to represent model errors that are normally distributed with a constant variance. The Box-Cox transformation is the most widely used technique to normalize data and stabilize variance, but it is not without limitations. In this paper, a log-sinh transformation is derived based on a pattern of errors commonly seen in hydrological model predictions. It is suited to applications where prediction variables are positively skewed and the spread of errors is seen to first increase rapidly, then slowly, and eventually approach a constant as the prediction variable becomes greater. The log-sinh transformation is applied in two case studies, and the results are compared with one- and two-parameter Box-Cox transformations.
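A minimal sketch of the log-sinh transform and its inverse, assuming the commonly cited form z = ln(sinh(a + b·y))/b; the parameter values here are illustrative, not taken from the case studies.

```python
import numpy as np

def log_sinh(y, a, b):
    """Forward transform: z = ln(sinh(a + b*y)) / b."""
    return np.log(np.sinh(a + b * y)) / b

def inv_log_sinh(z, a, b):
    """Inverse transform: y = (arcsinh(exp(b*z)) - a) / b."""
    return (np.arcsinh(np.exp(b * z)) - a) / b

y = np.linspace(0.1, 50.0, 200)   # positive, skewed-scale prediction variable
a, b = 0.01, 0.1                  # illustrative parameters only
z = log_sinh(y, a, b)
y_back = inv_log_sinh(z, a, b)
```

For small arguments the transform behaves logarithmically (compressing the spread), while for large y it approaches a linear shift, which is what lets the error spread approach a constant as the prediction variable grows.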
A Posteriori Correction of Forecast and Observation Error Variances
NASA Technical Reports Server (NTRS)
Rukhovets, Leonid
2005-01-01
The proposed method of total observation and forecast error variance correction is based on the assumption of a normal distribution of "observed-minus-forecast" residuals (O-F), where O is an observed value and F is usually a short-term model forecast. This assumption can be accepted for several types of observations (except humidity) which are not grossly in error. The degree of nearness to a normal distribution can be estimated by the skewness (lack of symmetry) a_3 = mu_3/sigma^3 and the kurtosis a_4 = mu_4/sigma^4 - 3, where mu_i is the i-th order central moment and sigma is the standard deviation. It is well known that for a normal distribution a_3 = a_4 = 0.
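The skewness and kurtosis statistics above can be computed directly from a sample of O-F residuals; a minimal sketch with synthetic normal residuals, for which both statistics should be near zero.

```python
import numpy as np

def skewness(x):
    mu, s = x.mean(), x.std()
    return ((x - mu) ** 3).mean() / s ** 3      # a_3 = mu_3 / sigma^3

def excess_kurtosis(x):
    mu, s = x.mean(), x.std()
    return ((x - mu) ** 4).mean() / s ** 4 - 3  # a_4 = mu_4 / sigma^4 - 3

# Synthetic O-F residuals drawn from a normal distribution
rng = np.random.default_rng(2)
omf = rng.normal(loc=0.0, scale=1.5, size=100_000)
a3, a4 = skewness(omf), excess_kurtosis(omf)
```

Large values of a3 or a4 computed this way would flag observation types (such as humidity) for which the normality assumption behind the variance correction is not tenable.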
Jamali, Jamshid; Ayatollahi, Seyyed Mohammad Taghi; Jafari, Peyman
2017-01-01
Evaluating measurement equivalence (also known as differential item functioning (DIF)) is an important part of the process of validating psychometric questionnaires. This study aimed at evaluating the multiple indicators multiple causes (MIMIC) model for DIF detection when latent construct distribution is nonnormal and the focal group sample size is small. In this simulation-based study, Type I error rates and power of MIMIC model for detecting uniform-DIF were investigated under different combinations of reference to focal group sample size ratio, magnitude of the uniform-DIF effect, scale length, the number of response categories, and latent trait distribution. Moderate and high skewness in the latent trait distribution led to a decrease of 0.33% and 0.47% power of MIMIC model for detecting uniform-DIF, respectively. The findings indicated that, by increasing the scale length, the number of response categories and magnitude DIF improved the power of MIMIC model, by 3.47%, 4.83%, and 20.35%, respectively; it also decreased Type I error of MIMIC approach by 2.81%, 5.66%, and 0.04%, respectively. This study revealed that power of MIMIC model was at an acceptable level when latent trait distributions were skewed. However, empirical Type I error rate was slightly greater than nominal significance level. Consequently, the MIMIC was recommended for detection of uniform-DIF when latent construct distribution is nonnormal and the focal group sample size is small.
Beam-based calibrations of the BPM offset at C-ADS Injector II
NASA Astrophysics Data System (ADS)
Chen, Wei-Long; Wang, Zhi-Jun; Feng, Chi; Dou, Wei-Ping; Tao, Yue; Jia, Huan; Wang, Wang-Sheng; Liu, Shu-Hui; He, Yuan
2016-07-01
Beam-based BPM offset calibration was carried out for Injector II at the C-ADS demonstration facility at the Institute of Modern Physics (IMP), Chinese Academy of Science (CAS). By using the steering coils integrated in the quadrupoles, the beam orbit can be effectively adjusted and BPM positions recorded at the Medium Energy Beam Transport of the Injector II Linac. The studies were done with a 2 mA, 2.1 MeV proton beam in pulsed mode. During the studies, the “null comparison method” was applied for the calibration. This method is less sensitive to errors compared with the traditional transmission matrix method. In addition, the quadrupole magnet’s center can also be calibrated with this method. Supported by National Natural Science Foundation of China (91426303, 11525523)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Krishnaswamy, J.; Kalsi, S.; Hsieh, H.
1991-01-01
Magnetic measurements performed on the 12-pole trim magnets are described, including Hall probe measurements to verify the symmetry of the field and rotating coil measurements to map the multipoles. The rotating coil measurements were carried out using an HP Dynamic Signal Analyzer. Excited as a quadrupole, the dominant error multipole is the 20th pole; excited as a sextupole, the dominant error multipole is the 18th pole. Reasonable agreement was found between the Hall probe measurements and the rotating coil measurements. 2 refs., 5 figs.
Arneson, Michael R [Chippewa Falls, WI; Bowman, Terrance L [Sumner, WA; Cornett, Frank N [Chippewa Falls, WI; DeRyckere, John F [Eau Claire, WI; Hillert, Brian T [Chippewa Falls, WI; Jenkins, Philip N [Eau Claire, WI; Ma, Nan [Chippewa Falls, WI; Placek, Joseph M [Chippewa Falls, WI; Ruesch, Rodney [Eau Claire, WI; Thorson, Gregory M [Altoona, WI
2007-07-24
The present invention is directed toward a communications channel comprising a link level protocol, a driver, a receiver, and a canceller/equalizer. The link level protocol provides logic for DC-free signal encoding and recovery as well as supporting many features including CRC error detection and message resend to accommodate infrequent bit errors across the medium. The canceller/equalizer provides equalization for destabilized data signals and also provides simultaneous bi-directional data transfer. The receiver provides bit deskewing by removing synchronization error, or skewing, between data signals. The driver provides impedance controlling by monitoring the characteristics of the communications medium, like voltage or temperature, and providing a matching output impedance in the signal driver so that fewer distortions occur while the data travels across the communications medium.
Introduction to total- and partial-pressure measurements in vacuum systems
NASA Technical Reports Server (NTRS)
Outlaw, R. A.; Kern, F. A.
1989-01-01
An introduction to the fundamentals of total- and partial-pressure measurement in the vacuum regime (760 to 10^-16 Torr) is presented. The instruments most often used in scientific fields requiring vacuum measurement are discussed, with special emphasis on ionization-type gauges and quadrupole mass spectrometers. Some attention is also given to potential errors in measurement as well as calibration techniques.
COSMIC SHEAR MEASUREMENT USING AUTO-CONVOLVED IMAGES
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Xiangchong; Zhang, Jun, E-mail: betajzhang@sjtu.edu.cn
2016-10-20
We study the possibility of using quadrupole moments of auto-convolved galaxy images to measure cosmic shear. The auto-convolution of an image corresponds to the inverse Fourier transformation of its power spectrum. The new method has the following advantages: the smearing effect due to the point-spread function (PSF) can be corrected by subtracting the quadrupole moments of the auto-convolved PSF; the centroid of the auto-convolved image is trivially identified; the systematic error due to noise can be directly removed in Fourier space; and the PSF image can also contain noise, the effect of which can be similarly removed. With a large ensemble of simulated galaxy images, we show that the new method can reach a sub-percent level accuracy under general conditions, albeit with increasingly large stamp size for galaxies of less compact profiles.
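The auto-convolution step described in this abstract (inverse Fourier transform of the image power spectrum) and the quadrupole moments can be sketched as follows; the round Gaussian "galaxy" and the unweighted moments are illustrative simplifications, not the authors' pipeline.

```python
import numpy as np

def auto_convolve(img):
    """Per the abstract: inverse Fourier transform of the image power spectrum."""
    power = np.abs(np.fft.fft2(img)) ** 2
    return np.fft.fftshift(np.fft.ifft2(power).real)

def quadrupole_moments(img):
    """Unweighted second moments about the image centroid."""
    ny, nx = img.shape
    y, x = np.mgrid[0:ny, 0:nx]
    total = img.sum()
    xc, yc = (x * img).sum() / total, (y * img).sum() / total
    qxx = ((x - xc) ** 2 * img).sum() / total
    qyy = ((y - yc) ** 2 * img).sum() / total
    qxy = ((x - xc) * (y - yc) * img).sum() / total
    return qxx, qyy, qxy

# Synthetic round Gaussian "galaxy": expect qxx = qyy and qxy = 0
n = 64
y, x = np.mgrid[0:n, 0:n]
gal = np.exp(-((x - n / 2) ** 2 + (y - n / 2) ** 2) / (2 * 3.0 ** 2))
qxx, qyy, qxy = quadrupole_moments(auto_convolve(gal))
```

For a Gaussian, auto-convolution doubles the second moments (here from 9 to about 18 pixels squared), and its centroid sits at the fftshift center regardless of where the galaxy was in the stamp, which is the "trivially identified centroid" advantage the abstract mentions.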
Point Charges Optimally Placed to Represent the Multipole Expansion of Charge Distributions
Onufriev, Alexey V.
2013-01-01
We propose an approach for approximating electrostatic charge distributions with a small number of point charges that optimally represent the original charge distribution. By construction, the proposed optimal point charge approximation (OPCA) retains many of the useful properties of the point multipole expansion, including the same far-field asymptotic behavior of the approximate potential. A general framework for numerically computing the OPCA, for any given number of approximating charges, is described. We then derive a 2-charge practical point charge approximation, PPCA, which approximates the 2-charge OPCA via closed-form analytical expressions, and test the PPCA on a set of charge distributions relevant to biomolecular modeling. We measure the accuracy of the new approximations as the RMS error in the electrostatic potential relative to that produced by the original charge distribution, at a distance equal to the extent of the charge distribution (the mid-field). The error for the 2-charge PPCA is found to be on average 23% smaller than that of the optimally placed point dipole approximation, and comparable to that of the point quadrupole approximation. The standard deviation in RMS error for the 2-charge PPCA is 53% lower than that of the optimal point dipole approximation, and comparable to that of the point quadrupole approximation. We also calculate the 3-charge OPCA for representing the gas-phase quantum mechanical charge distribution of a water molecule. The electrostatic potential calculated by the 3-charge OPCA for water, in the mid-field (2.8 Å from the oxygen atom), is on average 33.3% more accurate than the potential due to the point multipole expansion up to the octupole order. Compared to a 3-point-charge approximation in which the charges are placed on the atom centers, the 3-charge OPCA is seven times more accurate, by RMS error. The maximum error at the oxygen-Na distance (2.23 Å) is half that of the point multipole expansion up to the octupole order. PMID:23861790
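The accuracy metric used in this abstract (RMS error of the electrostatic potential sampled on a mid-field sphere) can be sketched as follows; the source charges and the slightly perturbed "approximation" are invented stand-ins, not the OPCA construction itself.

```python
import numpy as np

def potential(points, charges, grid):
    """Coulomb potential (Gaussian units) of point charges at grid positions."""
    v = np.zeros(len(grid))
    for p, q in zip(points, charges):
        v += q / np.linalg.norm(grid - p, axis=1)
    return v

# Hypothetical source: a simple dipole-like pair of charges
src_pos = np.array([[0.0, 0.0, 0.5], [0.0, 0.0, -0.5]])
src_q = np.array([0.4, -0.4])

# "Approximation": the same charges slightly displaced, standing in for
# an optimized point-charge placement
appr_pos = np.array([[0.0, 0.0, 0.45], [0.0, 0.0, -0.45]])
appr_q = src_q.copy()

# Mid-field test sphere with radius comparable to the charge separation
rng = np.random.default_rng(3)
dirs = rng.normal(size=(500, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
grid = 2.8 * dirs

rms_err = np.sqrt(np.mean((potential(appr_pos, appr_q, grid)
                           - potential(src_pos, src_q, grid)) ** 2))
```

Minimizing this RMS error over the positions and magnitudes of the approximating charges is, in spirit, the optimization that defines the OPCA.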
Superconducting focusing lenses for the SSR-1 cryomodule of PXIE test stand at Fermilab
DOE Office of Scientific and Technical Information (OSTI.GOV)
DiMarco, J.; Tartaglia, M.; Terechkine, I.
2016-12-05
Five solenoid-based focusing lenses designed for use inside the SSR1 cryomodule of the PXIE test stand at Fermilab have been fabricated and tested. In addition to a focusing solenoid, each lens is equipped with a set of windings that generate magnetic field in the transverse plane and can be used in the steering dipole mode or as a skew quadrupole corrector. The lenses will be installed between superconducting cavities in the cryomodule, so getting sufficiently low fringe magnetic field was one of the main design requirements. Beam dynamics simulations indicated a need for high accuracy positioning of the lenses in the cryomodule, which triggered a study towards understanding uncertainties of the magnetic axis position relative to the geometric features of the lens. Furthermore, this report summarizes the efforts towards certification of the lenses, including results of performance tests, fringe field data, and uncertainty of the magnetic axis position.
Image Augmentation for Object Image Classification Based On Combination of Pre-Trained CNN and SVM
NASA Astrophysics Data System (ADS)
Shima, Yoshihiro
2018-04-01
Neural networks are a powerful means of classifying object images. The proposed image category classification method for object images combines convolutional neural networks (CNNs) and support vector machines (SVMs). A pre-trained CNN, called Alex-Net, is used as a pattern-feature extractor. Alex-Net is pre-trained on the large-scale object-image dataset ImageNet and is used without further training. An SVM is used as the trainable classifier, and the feature vectors from Alex-Net are passed to the SVM. The STL-10 dataset is used as the object images; the number of classes is ten, and training and test samples are clearly split. The STL-10 object images are trained by the SVM with data augmentation. We use a pattern transformation method based on the cosine function, and also apply other augmentation methods such as rotation, skewing, and elastic distortion. Using the cosine function, the original patterns are left-justified, right-justified, top-justified, or bottom-justified; patterns are also center-justified and enlarged. The test error rate is decreased by 0.435 percentage points from 16.055% by augmentation with the cosine transformation, whereas error rates are increased by the other augmentation methods (rotation, skewing, and elastic distortion) compared with no augmentation. The number of augmented samples is 30 times that of the original 5,000 STL-10 training samples. The experimental test error rate for the 8,000 STL-10 test object images was 15.620%, which shows that image augmentation is effective for image category classification.
Pulse sequences for suppressing leakage in single-qubit gate operations
NASA Astrophysics Data System (ADS)
Ghosh, Joydip; Coppersmith, S. N.; Friesen, Mark
2017-06-01
Many realizations of solid-state qubits involve couplings to leakage states lying outside the computational subspace, posing a threat to high-fidelity quantum gate operations. Mitigating leakage errors is especially challenging when the coupling strength is unknown, e.g., when it is caused by noise. Here we show that simple pulse sequences can be used to strongly suppress leakage errors for a qubit embedded in a three-level system. As an example, we apply our scheme to the recently proposed charge quadrupole (CQ) qubit for quantum dots. These results provide a solution to a key challenge for fault-tolerant quantum computing with solid-state elements.
Curran, Janet H.; Barth, Nancy A.; Veilleux, Andrea G.; Ourso, Robert T.
2016-03-16
Estimates of the magnitude and frequency of floods are needed across Alaska for engineering design of transportation and water-conveyance structures, flood-insurance studies, flood-plain management, and other water-resource purposes. This report updates methods for estimating flood magnitude and frequency in Alaska and conterminous basins in Canada. Annual peak-flow data through water year 2012 were compiled from 387 streamgages on unregulated streams with at least 10 years of record. Flood-frequency estimates were computed for each streamgage using the Expected Moments Algorithm to fit a Pearson Type III distribution to the logarithms of annual peak flows. A multiple Grubbs-Beck test was used to identify potentially influential low floods in the time series of peak flows for censoring in the flood frequency analysis. For two new regional skew areas, flood-frequency estimates using station skew were computed for stations with at least 25 years of record for use in a Bayesian least-squares regression analysis to determine a regional skew value. The consideration of basin characteristics as explanatory variables for regional skew resulted in improvements in precision too small to warrant the additional model complexity, and a constant model was adopted. Regional Skew Area 1 in eastern-central Alaska had a regional skew of 0.54 and an average variance of prediction of 0.45, corresponding to an effective record length of 22 years. Regional Skew Area 2, encompassing coastal areas bordering the Gulf of Alaska, had a regional skew of 0.18 and an average variance of prediction of 0.12, corresponding to an effective record length of 59 years. Station flood-frequency estimates for study sites in regional skew areas were then recomputed using a weighted skew incorporating the station skew and regional skew.
In a new regional skew exclusion area outside the regional skew areas, the density of long-record streamgages was too sparse for regional analysis and station skew was used for all estimates. Final station flood frequency estimates for all study streamgages are presented for the 50-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent annual exceedance probabilities. Regional multiple-regression analysis was used to produce equations for estimating flood frequency statistics from explanatory basin characteristics. Basin characteristics, including physical and climatic variables, were updated for all study streamgages using a geographical information system and geospatial source data. Screening for similar-sized nested basins eliminated hydrologically redundant sites, and screening for eligibility for analysis of explanatory variables eliminated regulated peaks, outburst peaks, and sites with indeterminate basin characteristics. An ordinary least-squares regression used flood-frequency statistics and basin characteristics for 341 streamgages (284 in Alaska and 57 in Canada) to determine the most suitable combination of basin characteristics for a flood-frequency regression model and to explore regional grouping of streamgages for explaining variability in flood-frequency statistics across the study area. The most suitable model for explaining flood frequency used drainage area and mean annual precipitation as explanatory variables for the entire study area as a region. Final regression equations for estimating the 50-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent annual exceedance probability discharge in Alaska and conterminous basins in Canada were developed using a generalized least-squares regression.
The average standard error of prediction for the regression equations for the various annual exceedance probabilities ranged from 69 to 82 percent, and the pseudo-coefficient of determination (pseudo-R2) ranged from 85 to 91 percent. The regional regression equations from this study were incorporated into the U.S. Geological Survey StreamStats program for a limited area of the State—the Cook Inlet Basin. StreamStats is a national web-based geographic information system application that facilitates retrieval of streamflow statistics and associated information. StreamStats retrieves published data for gaged sites and, for user-selected ungaged sites, delineates drainage areas from topographic and hydrographic data, computes basin characteristics, and computes flood frequency estimates using the regional regression equations.
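The station-level computation described above (a Pearson Type III fit to the log peaks, with a skew weighted between station and regional values) can be sketched as follows. The peak-flow record and the MSE values used for the skew weighting are illustrative stand-ins, not the report's fitted numbers, and the simple moment fit below omits the Expected Moments Algorithm and low-outlier censoring.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Synthetic annual peak flows (cfs) for an illustrative streamgage.
peaks = np.exp(rng.normal(np.log(5000), 0.5, size=40))

logq = np.log10(peaks)
mean, sd = logq.mean(), logq.std(ddof=1)
g_station = stats.skew(logq, bias=False)

# Weighted skew: inverse-MSE weighting of station and regional skew
# (Bulletin 17 style). Regional skew 0.54 and variance 0.45 are the
# Regional Skew Area 1 values quoted above; the station MSE is assumed.
g_regional, mse_regional = 0.54, 0.45
mse_station = 0.30
g_w = (mse_regional * g_station + mse_station * g_regional) / (mse_regional + mse_station)

def aep_quantile(p_exceed, g):
    """Discharge with annual exceedance probability p_exceed under a
    Pearson Type III fit to the log10 peaks (log-Pearson III)."""
    z = stats.pearson3.ppf(1.0 - p_exceed, g, loc=mean, scale=sd)
    return 10.0 ** z

q100 = aep_quantile(0.01, g_w)   # 1-percent AEP ("100-year") flood
```

The same `aep_quantile` call with 0.5, 0.2, ..., 0.002 reproduces the report's list of annual exceedance probabilities.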
Approximate median regression for complex survey data with skewed response.
Fraser, Raphael André; Lipsitz, Stuart R; Sinha, Debajyoti; Fitzmaurice, Garrett M; Pan, Yi
2016-12-01
The ready availability of public-use data from various large national complex surveys has immense potential for the assessment of population characteristics using regression models. Complex surveys can be used to identify risk factors for important diseases such as cancer. Existing statistical methods based on estimating equations and/or utilizing resampling methods are often not valid with survey data due to complex survey design features, that is, stratification, multistage sampling, and weighting. In this article, we accommodate these design features in the analysis of highly skewed response variables arising from large complex surveys. Specifically, we propose a double-transform-both-sides (DTBS)-based estimating equations approach to estimate the median regression parameters of the highly skewed response; the DTBS approach applies the same Box-Cox type transformation twice to both the outcome and regression function. The usual sandwich variance estimate can be used in our approach, whereas a resampling approach would be needed for a pseudo-likelihood based on minimizing absolute deviations (MAD). Furthermore, the approach is relatively robust to the true underlying distribution, and has much smaller mean square error than a MAD approach. The method is motivated by an analysis of laboratory data on urinary iodine (UI) concentration from the National Health and Nutrition Examination Survey. © 2016, The International Biometric Society.
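The transform-both-sides idea behind DTBS can be illustrated in its simplest special case: with a log transform (the Box-Cox lambda = 0 case, applied once rather than twice, and with no survey weights) applied to both the outcome and the regression function, symmetric transformed errors make a least-squares fit target the conditional median of a skewed response. All data and parameters below are synthetic.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(2)
n = 2000
x = rng.uniform(0, 1, n)
median_y = np.exp(1.0 + 2.0 * x)                 # true conditional median
y = median_y * rng.lognormal(0.0, 0.8, n)        # heavily right-skewed response

# Transform both sides: apply the same monotone transform h to the
# outcome and to the regression function mu(x) = exp(b0 + b1 x).
# Here h = log; the paper's DTBS applies a Box-Cox transform twice.
def resid(b):
    return np.log(y) - np.log(np.exp(b[0] + b[1] * x))

fit = least_squares(resid, x0=[0.0, 0.0])
b0, b1 = fit.x
```

Because the multiplicative error is symmetric on the log scale, `b0, b1` estimate the median regression parameters (true values 1 and 2) despite the skewness of `y`.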
2010-10-01
bodies becomes greater as surface asperities wear down (Hutchings, 1992). We characterize friction damage by a change in the friction coefficient... points are such a set, and satisfy an additional constraint in which the skew (third moment) is minimized, which reduces the average error for a... On sequential Monte Carlo sampling methods for Bayesian filtering. Statistics and Computing, 10, 197-208. Hutchings, I. M. (1992). Tribology: friction
Errors and optics study of a permanent magnet quadrupole system
NASA Astrophysics Data System (ADS)
Schillaci, F.; Maggiore, M.; Rifuggiato, D.; Cirrone, G. A. P.; Cuttone, G.; Giove, D.
2015-05-01
Laser-based accelerators have gained interest in recent years as an alternative to conventional machines [1]. Nowadays, the energy and angular spread of laser-driven beams are the main issues in applications, and different solutions for dedicated beam-transport lines have been proposed [2,3]. In this context, a system of permanent magnet quadrupoles (PMQs) is being realized by INFN [2] researchers, in collaboration with the SIGMAPHI [3] company in France, to be used as a collection and pre-selection system for laser-driven proton beams. The definition of well-specified characteristics of the magnetic lenses, both in terms of performance and field quality, is crucial for the system realization, for an accurate study of the beam dynamics, and for proper matching with a magnetic selection system already realized [6,7]. Hence, different series of simulations have been used to study the PMQ harmonic contents and to state the mechanical and magnetic tolerances needed for reasonably good beam quality downstream of the system. This paper reports the method used for the analysis of the PMQ errors and its validation. A preliminary optics characterization is also presented, comparing the effects of an ideal PMQ system and a perturbed system on a monochromatic proton beam.
A map of the cosmic background radiation at 3 millimeters
NASA Technical Reports Server (NTRS)
Lubin, P.; Villela, T.; Epstein, G.; Smoot, G.
1985-01-01
Data from a series of balloon flights covering both the Northern and Southern Hemispheres, measuring the large angular scale anisotropy in the cosmic background radiation at 3.3 mm wavelength, are presented. The data cover 85 percent of the sky to a limiting sensitivity of 0.7 mK per 7 deg field of view. The data show a 50-sigma (statistical error only) dipole anisotropy with an amplitude of 3.44 ± 0.17 mK and a direction of alpha = 11.2 h ± 0.1 h and delta = -6.0 deg ± 1.5 deg. A 90 percent confidence level upper limit of 0.00007 is obtained for the rms quadrupole amplitude. Flights separated by 6 months show the motion of the earth around the sun. Galactic contamination is very small, with less than 0.1 mK contribution to the dipole and quadrupole terms. A map of the sky has been generated from the data.
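A dipole anisotropy of the kind measured above is linear in the sky map, so its amplitude and direction can be recovered by ordinary least squares. The sketch below simulates pixels with the quoted 3.44 mK amplitude and 0.7 mK per-pixel noise; the pixelization and the dipole direction are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(9)

# Simulated sky: a pure dipole of amplitude 3.44 mK along an arbitrary
# direction, plus 0.7 mK per-pixel noise (both numbers from the abstract).
npix = 2000
n_hat = rng.normal(size=(npix, 3))
n_hat /= np.linalg.norm(n_hat, axis=1, keepdims=True)   # unit pixel directions
d_true = np.array([1.0, 2.0, -0.5])
d_true *= 3.44 / np.linalg.norm(d_true)                 # dipole vector, mK
T = n_hat @ d_true + 0.7 * rng.standard_normal(npix)

# The data model T_i = n_i . d is linear in d, so least squares recovers
# the dipole vector; its norm is the amplitude and its direction would
# convert to (alpha, delta) on the sky.
d_fit, *_ = np.linalg.lstsq(n_hat, T, rcond=None)
amplitude = np.linalg.norm(d_fit)
```

A quadrupole term would enter the same linear model as five additional basis functions of `n_hat`.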
DOE Office of Scientific and Technical Information (OSTI.GOV)
Belov, Mikhail E.; Anderson, Gordon A.; Smith, Richard D.
Data-dependent selective external ion ejection with improved resolution is demonstrated with a 3.5 tesla FTICR instrument employing DREAMS (Dynamic Range Enhancement Applied to Mass Spectrometry) technology. To correct for the fringing rf-field aberrations, each rod of the selection quadrupole has been segmented into three sections, so that ion excitation and ejection was performed by applying auxiliary rf-only waveforms in the region of the middle segments. Two different modes of external ion trapping and ejection were studied with mixtures of model peptides and a tryptic digest of bovine serum albumin. A mass resolution of about 100 has been attained for rf-only dipolar ejection in a quadrupole operating at a Mathieu parameter q of ~0.45. LC-ESI-DREAMS-FTICR analysis of a 0.1 mg/mL solution of bovine serum albumin digest resulted in detection of 82 unique tryptic peptides with mass measurement errors lower than 5 ppm, providing 100% sequence coverage of the protein.
Zhang, Shuai; Li, PeiPei; Yan, Zhongyong; Long, Ju; Zhang, Xiaojun
2017-03-01
An ultraperformance liquid chromatography-quadrupole time-of-flight high-resolution mass spectrometry method was developed and validated for the determination of nitrofurazone metabolites. Precolumn derivatization with 2,4-dinitrophenylhydrazine, with p-dimethylaminobenzaldehyde as an internal standard, was used successfully to determine the biomarker 5-nitro-2-furaldehyde. In negative electrospray ionization mode, the precise molecular weights of the derivatives were 320.0372 for the biomarker and 328.1060 for the internal standard (relative error 1.08 ppm). The matrix effect was evaluated, and the analytical characteristics of the method and the derivatization reaction conditions were validated. For comparison purposes, spiked samples were tested by both internal and external standard methods. The results show that high precision can be obtained with p-dimethylaminobenzaldehyde as an internal standard for the identification and quantification of nitrofurazone metabolites in complex biological samples.
Field Quality from Tolerance Stack-up In R&D Quadrupoles for the Advanced Photon Source Upgrade
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, J.; Jaski, M.; Dejus, R.
2016-10-01
The Advanced Photon Source (APS) at Argonne National Laboratory (ANL) is considering upgrading the current double-bend, 7-GeV, 3rd generation storage ring to a 6-GeV, 4th generation storage ring with a Multibend Achromat (MBA) lattice. In this study, a novel method is proposed to determine fabrication and assembly tolerances through a combination of magnetic and mechanical tolerance analyses. Mechanical tolerance stackup analyses using Teamcenter Variation Analysis are carried out to determine the part and assembly level fabrication tolerances. Finite element analyses using OPERA are conducted to estimate the effect of fabrication and assembly errors on the magnetic field of a quadrupole magnet and to determine the allowable tolerances to achieve the desired magnetic performance. Finally, results of measurements in R&D quadrupole prototypes are compared with the analysis results.
Qi, Yulin; Geib, Timon; Schorr, Pascal; Meier, Florian; Volmer, Dietrich A
2015-01-15
Isobaric interferences in human serum can potentially influence the measured concentration levels of 25-hydroxyvitamin D [25(OH)D], when low resolving power liquid chromatography/tandem mass spectrometry (LC/MS/MS) instruments and non-specific MS/MS product ions are employed for analysis. In this study, we provide a detailed characterization of these interferences and a technical solution to reduce the associated systematic errors. Detailed electrospray ionization Fourier transform ion cyclotron resonance (FTICR) high-resolution mass spectrometry (HRMS) experiments were used to characterize co-extracted isobaric components of 25(OH)D from human serum. Differential ion mobility spectrometry (DMS), as a gas-phase ion filter, was implemented on a triple quadrupole mass spectrometer for separation of the isobars. HRMS revealed the presence of multiple isobaric compounds in extracts of human serum for different sample preparation methods. Several of these isobars had the potential to increase the peak areas measured for 25(OH)D on low-resolution MS instruments. A major isobaric component was identified as pentaerythritol oleate, a technical lubricant, which was probably an artifact from the analytical instrumentation. DMS was able to remove several of these isobars prior to MS/MS, when implemented on the low-resolution triple quadrupole mass spectrometer. It was shown in this proof-of-concept study that DMS-MS has the potential to significantly decrease systematic errors, and thus improve accuracy of vitamin D measurements using LC/MS/MS. Copyright © 2014 John Wiley & Sons, Ltd.
Replication-associated mutational asymmetry in the human genome.
Chen, Chun-Long; Duquenne, Lauranne; Audit, Benjamin; Guilbaud, Guillaume; Rappailles, Aurélien; Baker, Antoine; Huvet, Maxime; d'Aubenton-Carafa, Yves; Hyrien, Olivier; Arneodo, Alain; Thermes, Claude
2011-08-01
During evolution, mutations occur at rates that can differ between the two DNA strands. In the human genome, nucleotide substitutions occur at different rates on the transcribed and non-transcribed strands that may result from transcription-coupled repair. These mutational asymmetries generate transcription-associated compositional skews. To date, the existence of such asymmetries associated with replication has not yet been established. Here, we compute the nucleotide substitution matrices around replication initiation zones identified as sharp peaks in replication timing profiles and associated with abrupt jumps in the compositional skew profile. We show that the substitution matrices computed in these regions fully explain the jumps in the compositional skew profile when crossing initiation zones. In intergenic regions, we observe mutational asymmetries measured as differences between complementary substitution rates; their sign changes when crossing initiation zones. These mutational asymmetries are unlikely to result from cryptic transcription but can be explained by a model based on replication errors and strand-biased repair. In transcribed regions, mutational asymmetries associated with replication superimpose on the previously described mutational asymmetries associated with transcription. We separate the substitution asymmetries associated with both mechanisms, which allows us to determine for the first time in eukaryotes, the mutational asymmetries associated with replication and to reevaluate those associated with transcription. Replication-associated mutational asymmetry may result from unequal rates of complementary base misincorporation by the DNA polymerases coupled with DNA mismatch repair (MMR) acting with different efficiencies on the leading and lagging strands. Replication, acting in germ line cells during long evolutionary times, contributed equally with transcription to produce the present abrupt jumps in the compositional skew. 
These results demonstrate that DNA replication is one of the major processes that shape human genome composition.
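The compositional skew profiles discussed above are simple window statistics over a sequence: S_TA = (T - A)/(T + A) and S_GC = (G - C)/(G + C), whose sign changes when crossing a replication initiation zone. The toy genome below is an assumption built to mimic that sign change; no real genomic data are used.

```python
import numpy as np

rng = np.random.default_rng(3)

def strand_skews(seq, window):
    """Non-overlapping sliding-window compositional skews:
    S_TA = (T - A)/(T + A), S_GC = (G - C)/(G + C)."""
    s = np.frombuffer(seq.encode(), dtype=np.uint8)
    out = []
    for i in range(0, len(s) - window + 1, window):
        w = s[i:i + window]
        a, t = (w == ord("A")).sum(), (w == ord("T")).sum()
        g, c = (w == ord("G")).sum(), (w == ord("C")).sum()
        out.append(((t - a) / max(t + a, 1), (g - c) / max(g + c, 1)))
    return np.array(out)

# Toy genome: T-rich left of a putative initiation zone, A-rich right of
# it, mimicking the abrupt jump in the skew profile at an origin.
left = "".join(rng.choice(list("ACGT"), size=5000, p=[0.2, 0.25, 0.25, 0.3]))
right = "".join(rng.choice(list("ACGT"), size=5000, p=[0.3, 0.25, 0.25, 0.2]))
skews = strand_skews(left + right, window=1000)
ta = skews[:, 0]                     # TA skew per window
```

On this construction the TA skew is positive on the left half and negative on the right, with the jump at the window containing the boundary.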
A Test Strategy for High Resolution Image Scanners.
1983-10-01
for multivariate analysis. Holt, Rinehart and Winston, Inc., New York. Graybill, F.A., 1961: An Introduction to Linear Statistical Models, Volume I... [equations (7) and (8): garbled in OCR] The linear estimation model for the polynomial coefficients can be set up as in equation (8)... High Resolution Image Scanner MTF; geometrical and radiometric performance; dynamic range, linearity, noise; dynamic scanning errors; response uniformity; skewness of
Water quality management using statistical analysis and time-series prediction model
NASA Astrophysics Data System (ADS)
Parmar, Kulwinder Singh; Bhardwaj, Rashmi
2014-12-01
This paper deals with water quality management using statistical analysis and a time-series prediction model. The monthly variation of water quality standards has been used to compare the statistical mean, median, mode, standard deviation, kurtosis, skewness, and coefficient of variation at the Yamuna River. The model was validated using R-squared, root mean square error, mean absolute percentage error, maximum absolute percentage error, mean absolute error, maximum absolute error, normalized Bayesian information criterion, Ljung-Box analysis, predicted values, and confidence limits. Using an autoregressive integrated moving average model, future values of the water quality parameters have been estimated. It is observed that the predictive model is useful at the 95% confidence limits, and the curve is platykurtic for potential of hydrogen (pH), free ammonia, total Kjeldahl nitrogen, dissolved oxygen, and water temperature (WT), and leptokurtic for chemical oxygen demand and biochemical oxygen demand. Also, it is observed that the predicted series is close to the original series, which provides a good fit. All parameters except pH and WT cross the prescribed limits of the World Health Organization/United States Environmental Protection Agency, and thus the water is not fit for drinking, agriculture, and industrial use.
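The descriptive statistics and validation metrics named above are all one-liners over the observed and predicted series. The sketch below uses synthetic pH data and a synthetic "prediction" in place of the paper's ARIMA output; the values are illustrative only.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
# Illustrative monthly pH observations and a stand-in model prediction
# (the paper fits an ARIMA model; here the prediction is synthetic).
observed = 7.4 + 0.3 * rng.standard_normal(60)
predicted = observed + 0.05 * rng.standard_normal(60)

summary = {
    "mean": observed.mean(),
    "median": np.median(observed),
    "std": observed.std(ddof=1),
    "skewness": stats.skew(observed),
    # Fisher kurtosis: < 0 platykurtic, > 0 leptokurtic.
    "kurtosis": stats.kurtosis(observed),
    "cv_percent": 100 * observed.std(ddof=1) / observed.mean(),
}

err = predicted - observed
rmse = np.sqrt(np.mean(err ** 2))               # root mean square error
mae = np.mean(np.abs(err))                      # mean absolute error
mape = 100 * np.mean(np.abs(err / observed))    # mean absolute % error
max_ape = 100 * np.max(np.abs(err / observed))  # maximum absolute % error
```

Note the "platykurtic/leptokurtic" labels in the abstract correspond directly to the sign of the Fisher kurtosis computed here.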
Solar-System Tests of Gravitational Theories
NASA Technical Reports Server (NTRS)
Shapiro, Irwin I.
2005-01-01
We are engaged in testing gravitational theory, mainly using observations of objects in the solar system and mainly on the interplanetary scale. Our goal is either to detect departures from the standard model (general relativity) - if any exist within the level of sensitivity of our data - or to support this model by placing tighter bounds on any departure from it. For this project, we have analyzed a combination of observational data with our model of the solar system, including planetary radar ranging, lunar laser ranging, and spacecraft tracking, as well as pulsar timing and pulsar VLBI measurements. In the past year, we have added to our data, primarily lunar laser ranging measurements, but also supplementary data concerning the physical properties of solar-system objects, such as the solar quadrupole moment, planetary masses, and asteroid radii. Because the solar quadrupole moment contributes to the classical precession of planetary perihelia, but with a dependence on distance from the Sun that differs from that of the relativistic precession, it is possible to estimate both effects simultaneously. However, our interest is mainly in the relativistic effect, and we find that imposing a constraint on the quadrupole moment from helioseismology studies gives us a dramatic (about ten-fold) decrease in the standard error of our estimate of the relativistic component of the perihelion advance.
Emmetropisation and the aetiology of refractive errors
Flitcroft, D I
2014-01-01
The distribution of human refractive errors displays features that are not commonly seen in other biological variables. Compared with the more typical Gaussian distribution, adult refraction within a population typically has a negative skew and increased kurtosis (ie is leptokurtotic). This distribution arises from two apparently conflicting tendencies, first, the existence of a mechanism to control eye growth during infancy so as to bring refraction towards emmetropia/low hyperopia (ie emmetropisation) and second, the tendency of many human populations to develop myopia during later childhood and into adulthood. The distribution of refraction therefore changes significantly with age. Analysis of the processes involved in shaping refractive development allows for the creation of a life course model of refractive development. Monte Carlo simulations based on such a model can recreate the variation of refractive distributions seen from birth to adulthood and the impact of increasing myopia prevalence on refractive error distributions in Asia. PMID:24406411
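The two-process life course model described above can be mimicked with a minimal Monte Carlo: emmetropisation concentrates refraction near low hyperopia, and a myopia-developing subpopulation adds a heavy left tail, producing negative skew and leptokurtosis. All parameters below are illustrative assumptions, not fitted to any cohort.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
n = 100_000

# Stage 1: emmetropisation concentrates refraction near +0.5 D.
refraction = rng.normal(0.5, 0.75, n)

# Stage 2: an assumed 25% of the population develops myopia of varying
# degree (gamma-distributed myopic shift, in diopters).
myope = rng.random(n) < 0.25
refraction[myope] -= rng.gamma(shape=2.0, scale=1.5, size=myope.sum())

skewness = stats.skew(refraction)
kurt = stats.kurtosis(refraction)   # Fisher kurtosis: normal = 0
```

Increasing the myopia fraction (as in the Asian populations mentioned above) makes the skew more negative and shifts the mode, which is exactly the distributional change the Monte Carlo simulations in the paper explore.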
Candela, L.; Olea, R.A.; Custodio, E.
1988-01-01
Groundwater quality observation networks are examples of discontinuous sampling on variables presenting spatial continuity and highly skewed frequency distributions. Anywhere in the aquifer, lognormal kriging provides estimates of the variable being sampled and a standard error of the estimate. The average and the maximum standard error within the network can be used to dynamically improve the network sampling efficiency or find a design able to assure a given reliability level. The approach does not require the formulation of any physical model for the aquifer or any actual sampling of hypothetical configurations. A case study is presented using the network monitoring salty water intrusion into the Llobregat delta confined aquifer, Barcelona, Spain. The variable chloride concentration used to trace the intrusion exhibits sudden changes within short distances which make the standard error fairly invariable to changes in sampling pattern and to substantial fluctuations in the number of wells. © 1988.
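Lognormal kriging as used above amounts to ordinary kriging on the log of the concentration, with the kriging standard error driving the network-design decisions. The sketch below is a minimal ordinary kriging solver with an exponential variogram; the variogram parameters and well locations are illustrative, not fitted to the Llobregat data.

```python
import numpy as np

rng = np.random.default_rng(6)

def variogram(h, sill=1.0, a=2.0):
    # Exponential variogram model (illustrative sill and range).
    return sill * (1.0 - np.exp(-h / a))

def ok_estimate(xy, z, x0):
    """Ordinary kriging estimate and kriging standard error at x0."""
    n = len(z)
    d = np.linalg.norm(xy[:, None] - xy[None, :], axis=2)
    A = np.empty((n + 1, n + 1))
    A[:n, :n] = variogram(d)
    A[n, :], A[:, n], A[n, n] = 1.0, 1.0, 0.0      # unbiasedness constraint
    b = np.append(variogram(np.linalg.norm(xy - x0, axis=1)), 1.0)
    w = np.linalg.solve(A, b)                       # weights + Lagrange mult.
    est = w[:n] @ z
    var = w @ b            # kriging variance = sum w_i gamma_i0 + mu
    return est, np.sqrt(max(var, 0.0))

# Lognormal kriging: krige the log of chloride concentration; the
# standard error on the log scale is what network design would monitor.
xy = rng.uniform(0, 10, size=(25, 2))               # 25 well locations
logc = np.log(rng.lognormal(mean=5.0, sigma=0.6, size=25))
est, se = ok_estimate(xy, logc, np.array([5.0, 5.0]))
```

Evaluating `ok_estimate` on a grid and taking the average and maximum of `se` gives exactly the two network-design criteria the abstract mentions.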
Distribution of distances between DNA barcode labels in nanochannels close to the persistence length
NASA Astrophysics Data System (ADS)
Reinhart, Wesley F.; Reifenberger, Jeff G.; Gupta, Damini; Muralidhar, Abhiram; Sheats, Julian; Cao, Han; Dorfman, Kevin D.
2015-02-01
We obtained experimental extension data for barcoded E. coli genomic DNA molecules confined in nanochannels from 40 nm to 51 nm in width. The resulting data set consists of 1 627 779 measurements of the distance between fluorescent probes on 25 407 individual molecules. The probability density for the extension between labels is negatively skewed, and the magnitude of the skewness is relatively insensitive to the distance between labels. The two Odijk theories for DNA confinement bracket the mean extension and its variance, consistent with the scaling arguments underlying the theories. We also find that a harmonic approximation to the free energy, obtained directly from the probability density for the distance between barcode labels, leads to substantial quantitative error in the variance of the extension data. These results suggest that a theory for DNA confinement in such channels must account for the anharmonic nature of the free energy as a function of chain extension.
Extremal optimization for Sherrington-Kirkpatrick spin glasses
NASA Astrophysics Data System (ADS)
Boettcher, S.
2005-08-01
Extremal Optimization (EO), a new local search heuristic, is used to approximate ground states of the mean-field spin glass model introduced by Sherrington and Kirkpatrick. The implementation extends the applicability of EO to systems with highly connected variables. Approximate ground states of sufficient accuracy and with statistical significance are obtained for systems with more than N=1000 variables using ±J bonds. The data reproduce the well-known Parisi solution for the average ground state energy of the model to about 0.01%, providing a high degree of confidence in the heuristic. The results support, to better than 1% accuracy, rational values of ω=2/3 for the finite-size correction exponent and of ρ=3/4 for the fluctuation exponent of the ground state energies, neither of which has yet been obtained analytically. The probability density function for ground state energies is highly skewed and identical within numerical error to the one found for Gaussian bonds. But comparison with infinite-range models of finite connectivity shows that the skewness is connectivity-dependent.
Robustness of S1 statistic with Hodges-Lehmann for skewed distributions
NASA Astrophysics Data System (ADS)
Ahad, Nor Aishah; Yahaya, Sharipah Soaad Syed; Yin, Lee Ping
2016-10-01
Analysis of variance (ANOVA) is a commonly used parametric method to test for differences in means across more than two groups when the populations are normally distributed. ANOVA is highly inefficient under non-normal and heteroscedastic settings. When the assumptions are violated, researchers look for alternatives such as the nonparametric Kruskal-Wallis test or robust methods. This study focuses on a flexible method, the S1 statistic, for comparing groups using the median as the location estimator. The S1 statistic was modified by substituting the median with the Hodges-Lehmann estimator and the default scale estimator with the variance of Hodges-Lehmann and with MADn, producing two different test statistics for comparing groups. The bootstrap method was used for testing the hypotheses, since the sampling distributions of these modified S1 statistics are unknown. The performance of the proposed statistics in terms of Type I error was measured and compared against the original S1 statistic, ANOVA, and Kruskal-Wallis. The proposed procedures show improvement over the original statistic, especially under extremely skewed distributions.
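To make the substitution concrete: the Hodges-Lehmann estimator is the median of all pairwise Walsh averages. A minimal sketch (the sample values are hypothetical; this is not code from the study):

```python
import itertools
import statistics

def hodges_lehmann(x):
    """Hodges-Lehmann location estimate: the median of all Walsh
    averages (x[i] + x[j]) / 2 with i <= j."""
    walsh = [(a + b) / 2
             for a, b in itertools.combinations_with_replacement(x, 2)]
    return statistics.median(walsh)

sample = [1.0, 2.0, 3.0, 4.0, 100.0]   # one extreme value inflates the mean
print(hodges_lehmann(sample))           # 3.0 -- barely moved by the outlier
```

Like the median, the estimator resists extreme observations, which is why it suits the skewed settings the study targets.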
DOE Office of Scientific and Technical Information (OSTI.GOV)
Qin, Hong; Davidson, Ronald C.; Burby, Joshua W.
2014-04-08
The dynamics of charged particles in general linear focusing lattices with quadrupole, skew-quadrupole, dipole, and solenoidal components, as well as torsion of the fiducial orbit and variation of beam energy, is parametrized using a generalized Courant-Snyder (CS) theory, which extends the original CS theory for one degree of freedom to higher dimensions. The envelope function is generalized into an envelope matrix, and the phase advance is generalized into a 4D symplectic rotation, or a U(2) element. The 1D envelope equation, also known as the Ermakov-Milne-Pinney equation in quantum mechanics, is generalized to an envelope matrix equation in higher dimensions. Other components of the original CS theory, such as the transfer matrix, Twiss functions, and CS invariant (also known as the Lewis invariant) all have their counterparts, with remarkably similar expressions, in the generalized theory. The gauge group structure of the generalized theory is analyzed. By fixing the gauge freedom with a desired symmetry, the generalized CS parametrization assumes the form of the modified Iwasawa decomposition, whose importance in phase space optics and phase space quantum mechanics has been recently realized. This gauge fixing also symmetrizes the generalized envelope equation and expresses the theory using only the generalized Twiss function β. The generalized phase advance completely determines the spectral and structural stability properties of a general focusing lattice. For structural stability, the generalized CS theory enables application of the Krein-Moser theory to greatly simplify the stability analysis. The generalized CS theory provides an effective tool to study coupled dynamics and to discover more optimized lattice designs in the larger parameter space of general focusing lattices.
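For reference, the 1D envelope (Ermakov-Milne-Pinney) equation that the generalized theory promotes to a matrix equation reads, in standard accelerator notation (a sketch from the general Courant-Snyder literature, not an equation reproduced from this record):

```latex
w''(s) + \kappa(s)\,w(s) = \frac{1}{w^{3}(s)}, \qquad \beta(s) = w^{2}(s),
```

where \kappa(s) is the focusing strength of the lattice and \beta(s) the Twiss function.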
Inferring climate variability from skewed proxy records
NASA Astrophysics Data System (ADS)
Emile-Geay, J.; Tingley, M.
2013-12-01
Many paleoclimate analyses assume a linear relationship between the proxy and the target climate variable, and that both the climate quantity and the errors follow normal distributions. An ever-increasing number of proxy records, however, are better modeled using distributions that are heavy-tailed, skewed, or otherwise non-normal, on account of the proxies reflecting non-normally distributed climate variables, or having non-linear relationships with a normally distributed climate variable. The analysis of such proxies requires a different set of tools, and this work serves as a cautionary tale on the danger of making conclusions about the underlying climate from applications of classic statistical procedures to heavily skewed proxy records. Inspired by runoff proxies, we consider an idealized proxy characterized by a nonlinear, thresholded relationship with climate, and describe three approaches to using such a record to infer past climate: (i) applying standard methods commonly used in the paleoclimate literature, without considering the non-linearities inherent to the proxy record; (ii) applying a power transform prior to using these standard methods; (iii) constructing a Bayesian model to invert the mechanistic relationship between the climate and the proxy. We find that neglecting the skewness in the proxy leads to erroneous conclusions and often exaggerates changes in climate variability between different time intervals. In contrast, an explicit treatment of the skewness, using either power transforms or a Bayesian inversion of the mechanistic model for the proxy, yields significantly better estimates of past climate variations. We apply these insights in two paleoclimate settings: (1) a classical sedimentary record from Laguna Pallcacocha, Ecuador (Moy et al., 2002). 
Our results agree with the qualitative aspects of previous analyses of this record, but quantitative departures are evident and hold implications for how such records are interpreted and compared to other proxy records. (2) a multiproxy reconstruction of temperature over the Common Era (Mann et al., 2009), where we find that about one third of the records display significant departures from normality. Accordingly, accounting for skewness in proxy predictors has a notable influence on both reconstructed global mean and spatial patterns of temperature change. Inferring climate variability from skewed proxy records thus requires care, but can be done with relatively simple tools. References - Mann, M. E., Z. Zhang, S. Rutherford, R. S. Bradley, M. K. Hughes, D. Shindell, C. Ammann, G. Faluvegi, and F. Ni (2009), Global signatures and dynamical origins of the little ice age and medieval climate anomaly, Science, 326(5957), 1256-1260, doi:10.1126/science.1177303. - Moy, C., G. Seltzer, D. Rodbell, and D. Anderson (2002), Variability of El Niño/Southern Oscillation activity at millennial timescales during the Holocene epoch, Nature, 420(6912), 162-165.
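The thresholding effect described above is easy to reproduce numerically. The sketch below builds an idealized runoff-style proxy from a normally distributed climate variable and shows that a log-type power transform reduces, but does not eliminate, the skew; the threshold and sample size are arbitrary assumptions, not values from the study:

```python
import math
import random

def skewness(xs):
    """Standardized third sample moment."""
    n = len(xs)
    m = sum(xs) / n
    s = (sum((x - m) ** 2 for x in xs) / n) ** 0.5
    return sum(((x - m) / s) ** 3 for x in xs) / n

random.seed(0)
climate = [random.gauss(0.0, 1.0) for _ in range(50000)]
# idealized proxy: only climate excursions above a threshold are recorded
proxy = [max(0.0, c - 1.0) for c in climate]
raw_skew = skewness(proxy)
log_skew = skewness([math.log1p(p) for p in proxy])
print(raw_skew, log_skew)  # the transform reduces, but does not remove, the skew
```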
Algae Tile Data: 2004-2007, BPA-51; Preliminary Report, October 28, 2008.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Holderman, Charles
Multiple files containing 2004 through 2007 Tile Chlorophyll data for the Kootenai River sites designated as: KR1, KR2, KR3, KR4 (Downriver) and KR6, KR7, KR9, KR9.1, KR10, KR11, KR12, KR13, KR14 (Upriver) were received by SCS. For a complete description of the sites covered, please refer to http://ktoi.scsnetw.com. To maintain consistency with the previous SCS algae reports, all analyses were carried out separately for the Upriver and Downriver categories, as defined in the aforementioned paragraph. The Upriver designation, however, now includes three additional sites, KR11, KR12, and the nutrient addition site, KR9.1. Summary statistics and information on the four responses, chlorophyll a, chlorophyll a Accrual Rate, Total Chlorophyll, and Total Chlorophyll Accrual Rate, are presented in Print Out 2. Computations were carried out separately for each river position (Upriver and Downriver) and year. For example, the Downriver position in 2004 showed an average Chlorophyll a level of 25.5 mg with a standard deviation of 21.4 and minimum and maximum values of 3.1 and 196 mg, respectively. The Upriver data in 2004 showed a lower overall average chlorophyll a level at 2.23 mg with a lower standard deviation (3.6) and minimum and maximum values of 0.13 and 28.7, respectively. A more comprehensive summary of each variable and position is given in Print Out 3. This lists the information above as well as other summary information such as the variance, standard error, various percentiles and extreme values. Using the 2004 Downriver Chlorophyll a as an example again, the variance of this data was 459.3 and the standard error of the mean was 1.55. The median value or 50th percentile was 21.3, meaning 50% of the data fell above and below this value. It should be noted that this value is somewhat different from the mean of 25.5. This is an indication that the frequency distribution of the data is not symmetrical (skewed).
The skewness statistic, listed as part of the first section of each analysis, quantifies this. In a symmetric distribution, such as a Normal distribution, the skewness value would be 0. The tile chlorophyll data, however, shows larger values. Chlorophyll a, in the 2004 Downriver example, has a skewness statistic of 3.54, which is quite high. In the last section of the summary analysis, the stem and leaf plot graphically demonstrates the asymmetry, showing most of the data centered around 25 with a large value at 196. The final plot is referred to as a normal probability plot and graphically compares the data to a theoretical normal distribution. For chlorophyll a, the data (asterisks) deviate substantially from the theoretical normal distribution (diagonal reference line of pluses), indicating that the data is non-normal. Other response variables in both the Downriver and Upriver categories also indicated skewed distributions. Because the sample size and mean comparison procedures below require symmetrical, normally distributed data, each response in the data set was logarithmically transformed. The logarithmic transformation, in this case, can help mitigate skewness problems. The summary statistics for the four transformed responses (log-ChlorA, log-TotChlor, and log-accrual) are given in Print Out 4. For the 2004 Downriver Chlorophyll a data, the logarithmic transformation reduced the skewness value to -0.36 and produced a more bell-shaped symmetric frequency distribution. Similar improvements are shown for the remaining variables and river categories. Hence, all subsequent analyses given below are based on logarithmic transformations of the original responses.
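The mean-median gap and the effect of the log transformation described in this record can be sketched as follows (the lognormal sample is a hypothetical stand-in for the chlorophyll a data, not the actual measurements):

```python
import math
import random
import statistics

def skewness(xs):
    """Standardized third sample moment."""
    m = statistics.fmean(xs)
    s = statistics.pstdev(xs)
    return statistics.fmean([((x - m) / s) ** 3 for x in xs])

random.seed(1)
# hypothetical positively skewed sample (lognormal)
chla = [math.exp(random.gauss(3.0, 0.8)) for _ in range(10000)]
print(statistics.fmean(chla), statistics.median(chla))  # mean pulled above median
log_chla = [math.log(x) for x in chla]
print(skewness(chla), skewness(log_chla))  # strongly positive vs. near zero
```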
Scheinker, Alexander; Huang, Xiaobiao; Wu, Juhao
2017-02-20
Here, we report on a beam-based experiment performed at the SPEAR3 storage ring of the Stanford Synchrotron Radiation Lightsource at the SLAC National Accelerator Laboratory, in which a model-independent extremum-seeking optimization algorithm was utilized to minimize betatron oscillations in the presence of a time-varying kicker magnetic field, by automatically tuning the pulsewidth, voltage, and delay of two other kicker magnets, and the current of two skew quadrupole magnets, simultaneously, in order to optimize injection kick matching. Adaptive tuning was performed on eight parameters simultaneously. The scheme was able to continuously maintain the match of a five-magnet lattice while the field strength of a kicker magnet was continuously varied at a rate much higher (±6% sinusoidal voltage change over 1.5 h) than typically experienced in operation. Lastly, the ability to quickly tune or compensate for time variation of coupled components, as demonstrated here, is very important for the more general, more difficult problem of global accelerator tuning to quickly switch between various experimental setups.
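A single-parameter sketch of model-independent extremum seeking, the class of algorithm used here: the parameter is dithered sinusoidally and drifts, on average, down the gradient of the measured cost. The gains, cost function, and drifting target below are illustrative assumptions, not the SPEAR3 settings:

```python
import math

def extremum_seek(cost, p0, steps=20000, dt=0.01, omega=20.0, k=1.0, alpha=0.5):
    """Dither-based extremum seeking: uses only measured cost values,
    no model of the system; on average p follows -(k*alpha/2) * dC/dp."""
    p = p0
    for i in range(steps):
        c = cost(p, i * dt)
        p += dt * math.sqrt(alpha * omega) * math.cos(omega * i * dt + k * c)
    return p

# time-varying optimum, drifting slowly around 2.0 (hypothetical)
def target(t):
    return 2.0 + 0.2 * math.sin(0.01 * t)

p_final = extremum_seek(lambda p, t: (p - target(t)) ** 2, p0=0.0)
print(p_final)  # settles near the moving minimum, up to the dither amplitude
```

The same update applied per-parameter is what allows many coupled knobs to be tuned simultaneously from a single scalar cost.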
Tips and Tricks for Successful Application of Statistical Methods to Biological Data.
Schlenker, Evelyn
2016-01-01
This chapter discusses experimental design and use of statistics to describe characteristics of data (descriptive statistics) and inferential statistics that test the hypothesis posed by the investigator. Inferential statistics, based on probability distributions, depend upon the type and distribution of the data. For data that are continuous, randomly and independently selected, and normally distributed, more powerful parametric tests such as Student's t test and analysis of variance (ANOVA) can be used. For non-normally distributed or skewed data, transformation of the data (using logarithms) may normalize the data, allowing use of parametric tests. Alternatively, with skewed data nonparametric tests can be utilized, some of which rely on data that are ranked prior to statistical analysis. Experimental designs and analyses need to balance between committing type 1 errors (false positives) and type 2 errors (false negatives). For a variety of clinical studies that determine risk or benefit, relative risk ratios (random clinical trials and cohort studies) or odds ratios (case-control studies) are utilized. Although both use 2 × 2 tables, their premise and calculations differ. Finally, special statistical methods are applied to microarray and proteomics data, since the large number of genes or proteins evaluated increases the likelihood of false discoveries. Additional studies in separate samples are used to verify microarray and proteomic data. Examples in this chapter and references are available to help continued investigation of experimental designs and appropriate data analysis.
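The distinction between the two 2 × 2-table measures mentioned above can be made concrete (the counts are hypothetical):

```python
def relative_risk(a, b, c, d):
    """2x2 table rows: exposed (a events, b non-events) and unexposed
    (c events, d non-events). RR compares risks (cohort studies, RCTs)."""
    return (a / (a + b)) / (c / (c + d))

def odds_ratio(a, b, c, d):
    """OR compares odds; it is the valid measure for case-control studies."""
    return (a * d) / (b * c)

# hypothetical trial: 20/100 events in the treated arm, 40/100 in controls
print(relative_risk(20, 80, 40, 60))  # 0.5
print(odds_ratio(20, 80, 40, 60))     # 0.375
```

The two measures agree only when the event is rare; for common events the odds ratio lies further from 1 than the relative risk, which is one reason the study design dictates which measure applies.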
NASA Astrophysics Data System (ADS)
Ma, Wei; Lu, Liang; Xu, Xianbo; Sun, Liepeng; Zhang, Zhouli; Dou, Weiping; Li, Chenxing; Shi, Longbo; He, Yuan; Zhao, Hongwei
2017-03-01
An 81.25 MHz continuous wave (CW) radio frequency quadrupole (RFQ) accelerator has been designed for the Low Energy Accelerator Facility (LEAF) at the Institute of Modern Physics (IMP) of the Chinese Academy of Sciences (CAS). In the CW operating mode, the proposed RFQ design adopted the conventional four-vane structure. The main design goals are providing high shunt impedance with low power losses. In the electromagnetic (EM) design, the π-mode stabilizing loops (PISLs) were optimized to produce a good mode separation. The tuners were also designed and optimized to tune the frequency and field flatness of the operating mode. The vane undercuts were optimized to provide a flat field along the RFQ cavity. Additionally, a full length model with modulations was set up for the final EM simulations. Following the EM design, thermal analysis of the structure was carried out. In this paper, detailed EM design and thermal simulations of the LEAF-RFQ will be presented and discussed. Structure error analysis was also studied.
Read disturb errors in a CMOS static RAM chip [radiation hardened for spacecraft]
NASA Technical Reports Server (NTRS)
Wood, Steven H.; Marr, James C., IV; Nguyen, Tien T.; Padgett, Dwayne J.; Tran, Joe C.; Griswold, Thomas W.; Lebowitz, Daniel C.
1989-01-01
Results are reported from an extensive investigation into pattern-sensitive soft errors (read disturb errors) in the TCC244 CMOS static RAM chip. The TCC244, also known as the SA2838, is a radiation-hard single-event-upset-resistant 4 x 256 memory chip. This device is being used by the Jet Propulsion Laboratory in the Galileo and Magellan spacecraft, which will have encounters with Jupiter and Venus, respectively. Two aspects of the part's design are shown to result in the occurrence of read disturb errors: the transparency of the signal path from the address pins to the array of cells, and the large resistance in the Vdd and Vss lines of the cells in the center of the array. Probe measurements taken during a read disturb failure illustrate how address skews and the data pattern in the chip combine to produce a bit flip. A capacitive charge pump formed by the individual cell capacitances and the resistance in the supply lines pumps down both the internal cell voltage and the local supply voltage until a bit flip occurs.
Minimizing finite-volume discretization errors on polyhedral meshes
NASA Astrophysics Data System (ADS)
Mouly, Quentin; Evrard, Fabien; van Wachem, Berend; Denner, Fabian
2017-11-01
Tetrahedral meshes are widely used in CFD to simulate flows in and around complex geometries, as automatic generation tools now allow tetrahedral meshes to represent arbitrary domains in a relatively accessible manner. Polyhedral meshes, however, are an increasingly popular alternative. While tetrahedra have at most four neighbours, the higher number of neighbours per polyhedral cell leads to a more accurate evaluation of gradients, essential for the numerical resolution of PDEs. The use of polyhedral meshes, nonetheless, introduces discretization errors for finite-volume methods: skewness and non-orthogonality, which occur with all sorts of unstructured meshes, as well as errors due to non-planar faces, specific to polygonal faces with more than three vertices. Indeed, polyhedral mesh generation algorithms cannot, in general, guarantee to produce meshes free of non-planar faces. The presented work focuses on the quantification and optimization of discretization errors on polyhedral meshes in the context of finite-volume methods. A quasi-Newton method is employed to optimize the relevant mesh quality measures. Various meshes are optimized and CFD results of cases with known solutions are presented to assess the improvements the optimization approach can provide.
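Of the error sources listed, non-orthogonality is the simplest to quantify: it is the angle between the face normal and the line joining the adjacent cell centroids. A minimal sketch (the geometry values are hypothetical):

```python
import math

def non_orthogonality_deg(c_own, c_nbr, face_normal):
    """Angle (degrees) between the owner->neighbour centroid vector and
    the face normal; 0 on a perfectly orthogonal mesh."""
    d = [b - a for a, b in zip(c_own, c_nbr)]
    dot = sum(x * y for x, y in zip(d, face_normal))
    nd = math.sqrt(sum(x * x for x in d))
    nn = math.sqrt(sum(x * x for x in face_normal))
    return math.degrees(math.acos(dot / (nd * nn)))

# two unit cells sharing the x = 1 face: perfectly orthogonal
ortho = non_orthogonality_deg((0.5, 0.5, 0.5), (1.5, 0.5, 0.5), (1, 0, 0))
# neighbour centroid shifted off-axis: the centroid line tilts 45 degrees
skewed = non_orthogonality_deg((0.5, 0.5, 0.5), (1.5, 1.5, 0.5), (1, 0, 0))
print(ortho, skewed)  # 0.0, ~45.0
```

Gradient corrections in finite-volume codes grow with this angle, which is why it is a natural target for the mesh optimization described.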
Adaptation to Skew Distortions of Natural Scenes and Retinal Specificity of Its Aftereffects
Habtegiorgis, Selam W.; Rifai, Katharina; Lappe, Markus; Wahl, Siegfried
2017-01-01
Image skew is one of the prominent distortions that exist in optical elements, such as in spectacle lenses. The present study evaluates adaptation to image skew in dynamic natural images. Moreover, the cortical levels involved in skew coding were probed using retinal specificity of skew adaptation aftereffects. Left and right skewed natural image sequences were shown to observers as adapting stimuli. The point of subjective equality (PSE), i.e., the skew amplitude in simple geometrical patterns that is perceived to be unskewed, was used to quantify the aftereffect of each adapting skew direction. The PSE, in a two-alternative forced choice paradigm, shifted toward the adapting skew direction. Moreover, significant adaptation aftereffects were obtained not only at adapted, but also at non-adapted retinal locations during fixation. Skew adaptation information was transferred partially to non-adapted retinal locations. Thus, adaptation to skewed natural scenes induces coordinated plasticity in lower and higher cortical areas of the visual pathway. PMID:28751870
Oberg, Kevin A.; Mades, Dean M.
1987-01-01
Four techniques for estimating generalized skew in Illinois were evaluated: (1) a generalized skew map of the US; (2) an isoline map; (3) a prediction equation; and (4) a regional-mean skew. Peak-flow records at 730 gaging stations having 10 or more annual peaks were selected for computing station skews. Station skew values ranged from -3.55 to 2.95, with a mean of -0.11. Frequency curves computed for 30 gaging stations in Illinois using the variations of the regional-mean skew technique are similar to frequency curves computed using a skew map developed by the US Water Resources Council (WRC). Estimates of the 50-, 100-, and 500-yr floods computed for 29 of these gaging stations using the regional-mean skew techniques are within the 50% confidence limits of frequency curves computed using the WRC skew map. Although the three variations of the regional-mean skew technique were slightly more accurate than the WRC map, there is no appreciable difference between flood estimates computed using the variations of the regional-mean technique and flood estimates computed using the WRC skew map. (Peters-PTT)
The Affective Impact of Financial Skewness on Neural Activity and Choice
Wu, Charlene C.; Bossaerts, Peter; Knutson, Brian
2011-01-01
Few finance theories consider the influence of “skewness” (or large and asymmetric but unlikely outcomes) on financial choice. We investigated the impact of skewed gambles on subjects' neural activity, self-reported affective responses, and subsequent preferences using functional magnetic resonance imaging (FMRI). Neurally, skewed gambles elicited more anterior insula activation than symmetric gambles equated for expected value and variance, and positively skewed gambles also specifically elicited more nucleus accumbens (NAcc) activation than negatively skewed gambles. Affectively, positively skewed gambles elicited more positive arousal and negatively skewed gambles elicited more negative arousal than symmetric gambles equated for expected value and variance. Subjects also preferred positively skewed gambles more, but negatively skewed gambles less than symmetric gambles of equal expected value. Individual differences in both NAcc activity and positive arousal predicted preferences for positively skewed gambles. These findings support an anticipatory affect account in which statistical properties of gambles—including skewness—can influence neural activity, affective responses, and ultimately, choice. PMID:21347239
NASA Astrophysics Data System (ADS)
Blind, Barbara; Jason, Andrew J.
1997-05-01
We describe the new injection line to be implemented for the Los Alamos Proton Storage Ring in the change from a two-step process to direct H- injection. While obeying all geometrical constraints imposed by the existing structures, the new line has properties not found in the present injection line. In particular, it features decoupled transverse phase spaces downstream of the skew bend and a high degree of tunability of the beam at the injection foil. A comprehensive set of error studies has dictated the component tolerances imposed and has indicated the expected performance of the system.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schreck, S. J.; Schepers, J. G.
Continued inquiry into rotor and blade aerodynamics remains crucial for achieving accurate, reliable prediction of wind turbine power performance under yawed conditions. To exploit key advantages conferred by controlled inflow conditions, we used EU-JOULE DATA Project and UAE Phase VI experimental data to characterize rotor power production under yawed conditions. Anomalies in rotor power variation with yaw error were observed, and the underlying fluid dynamic interactions were isolated. Unlike currently recognized influences caused by angled inflow and skewed wake, which may be considered potential flow interactions, these anomalies were linked to pronounced viscous and unsteady effects.
Partial pressure analysis in space testing
NASA Technical Reports Server (NTRS)
Tilford, Charles R.
1994-01-01
For vacuum-system or test-article analysis it is often desirable to know the species and partial pressures of the vacuum gases. Residual gas or Partial Pressure Analyzers (PPA's) are commonly used for this purpose. These are mass spectrometer-type instruments, most commonly employing quadrupole filters. These instruments can be extremely useful, but they should be used with caution. Depending on the instrument design, calibration procedures, and conditions of use, measurements made with these instruments can be accurate to within a few percent, or in error by two or more orders of magnitude. Significant sources of error can include relative gas sensitivities that differ from handbook values by an order of magnitude, changes in sensitivity with pressure by as much as two orders of magnitude, changes in sensitivity with time after exposure to chemically active gases, and the dependence of the sensitivity for one gas on the pressures of other gases. However, for most instruments, these errors can be greatly reduced with proper operating procedures and conditions of use. In this paper, data are presented illustrating performance characteristics for different instruments and gases, operating parameters are recommended to minimize some errors, and calibrations procedures are described that can detect and/or correct other errors.
Gietzelt, Matthias; Schnabel, Stephan; Wolf, Klaus-Hendrik; Büsching, Felix; Song, Bianying; Rust, Stefan; Marschollek, Michael
2012-05-01
One of the key problems in accelerometry-based gait analysis is that it may not be possible to attach an accelerometer to the lower trunk so that its axes are perfectly aligned with the axes of the subject. In this paper we present an algorithm that was designed to virtually align the axes of the accelerometer with the axes of the subject during walking sections. This algorithm is based on a physically reasonable approach and built for measurements in unsupervised settings, where the test persons apply the sensors by themselves. For evaluation purposes we conducted a study with 6 healthy subjects and measured their gait with a manually aligned and a skewed accelerometer attached to the subject's lower trunk. After applying the algorithm the intra-axis correlation of both sensors was on average 0.89±0.1 with a mean absolute error of 0.05 g. We concluded that the algorithm was able to adjust the skewed sensor node virtually to the coordinate system of the subject. Copyright © 2011 Elsevier Ireland Ltd. All rights reserved.
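One physically reasonable way to realize such a virtual alignment, sketched here under the assumption that the mean acceleration over a walking section points along gravity, is to rotate the measured axes so that this mean vector maps onto the vertical axis (Rodrigues' formula). This is an illustrative reconstruction, not the paper's exact algorithm:

```python
import math

def rotation_to_vertical(mean_acc):
    """Rodrigues rotation taking the measured mean gravity direction
    onto the z-axis (degenerate only if the sensor is exactly upside
    down, i.e. when the cosine c below equals -1)."""
    norm = math.sqrt(sum(a * a for a in mean_acc))
    g = [a / norm for a in mean_acc]
    v = [g[1], -g[0], 0.0]   # rotation axis g x z, with z = (0, 0, 1)
    c = g[2]                 # cos(angle) = g . z
    k = 1.0 / (1.0 + c)
    return [[v[0] * v[0] * k + c, v[0] * v[1] * k - v[2], v[0] * v[2] * k + v[1]],
            [v[0] * v[1] * k + v[2], v[1] * v[1] * k + c, v[1] * v[2] * k - v[0]],
            [v[0] * v[2] * k - v[1], v[1] * v[2] * k + v[0], v[2] * v[2] * k + c]]

def apply_rotation(R, x):
    return [sum(R[i][j] * x[j] for j in range(3)) for i in range(3)]

# hypothetical mean acceleration of a skewed sensor: gravity leaks onto x
g_tilted = [0.3, 0.0, 0.95]
R = rotation_to_vertical(g_tilted)
aligned = apply_rotation(R, g_tilted)
print(aligned)  # ~[0, 0, |g|]: gravity is back on the vertical axis
```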
DOT National Transportation Integrated Search
2017-08-01
Skewed bridges in Kansas are often designed such that the cross-frames are carried parallel to the skew angle up to 40°, while many other states place cross-frames perpendicular to the girder for skew angles greater than 20°. Skewed-parallel cross-...
Portfolio optimization with skewness and kurtosis
NASA Astrophysics Data System (ADS)
Lam, Weng Hoe; Jaaman, Saiful Hafizah Hj.; Isa, Zaidi
2013-04-01
Mean and variance of return distributions are two important parameters of the mean-variance model in portfolio optimization. However, the mean-variance model will become inadequate if the returns of assets are not normally distributed. Therefore, higher moments such as skewness and kurtosis cannot be ignored. Risk averse investors prefer portfolios with high skewness and low kurtosis so that the probability of getting negative rates of return will be reduced. The objective of this study is to compare the portfolio compositions as well as performances between the mean-variance model and mean-variance-skewness-kurtosis model by using the polynomial goal programming approach. The results show that the incorporation of skewness and kurtosis will change the optimal portfolio compositions. The mean-variance-skewness-kurtosis model outperforms the mean-variance model because the mean-variance-skewness-kurtosis model takes skewness and kurtosis into consideration. Therefore, the mean-variance-skewness-kurtosis model is more appropriate for the investors of Malaysia in portfolio optimization.
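The four moments entering the mean-variance-skewness-kurtosis model can be computed per candidate portfolio as below; the return history and weights are hypothetical, and the polynomial goal programming step itself is omitted:

```python
import statistics

def portfolio_moments(returns, weights):
    """Mean, variance, skewness and kurtosis of the portfolio return
    series implied by per-period asset returns and fixed weights."""
    port = [sum(w * r for w, r in zip(weights, period)) for period in returns]
    m = statistics.fmean(port)
    s = statistics.pstdev(port)
    skew = statistics.fmean([((x - m) / s) ** 3 for x in port])
    kurt = statistics.fmean([((x - m) / s) ** 4 for x in port])
    return m, s * s, skew, kurt

# two hypothetical assets over six periods
history = [(0.02, -0.01), (0.01, 0.03), (-0.02, 0.02),
           (0.04, 0.00), (0.00, 0.01), (0.03, -0.02)]
m, v, skew, kurt = portfolio_moments(history, (0.5, 0.5))
print(m, v, skew, kurt)
```

In the goal programming step these four moments become competing objectives: maximize the mean and skewness while minimizing the variance and kurtosis.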
Estimating the magnitude of peak flows for streams in Kentucky for selected recurrence intervals
Hodgkins, Glenn A.; Martin, Gary R.
2003-01-01
This report gives estimates of, and presents techniques for estimating, the magnitude of peak flows for streams in Kentucky for recurrence intervals of 2, 5, 10, 25, 50, 100, 200, and 500 years. A flowchart in this report guides the user to the appropriate estimates and (or) estimating techniques for a site on a specific stream. Estimates of peak flows are given for 222 U.S. Geological Survey streamflow-gaging stations in Kentucky. In the development of the peak-flow estimates at gaging stations, a new generalized skew coefficient was calculated for the State. This single statewide value of 0.011 (with a standard error of prediction of 0.520) is more appropriate for Kentucky than the national skew isoline map in Bulletin 17B of the Interagency Advisory Committee on Water Data. Regression equations are presented for estimating the peak flows on ungaged, unregulated streams in rural drainage basins. The equations were developed by use of generalized-least-squares regression procedures at 187 U.S. Geological Survey gaging stations in Kentucky and 51 stations in surrounding States. Kentucky was divided into seven flood regions. Total drainage area is used in the final regression equations as the sole explanatory variable, except in Regions 1 and 4 where main-channel slope also was used. The smallest average standard errors of prediction were in Region 3 (from -13.1 to +15.0 percent) and the largest average standard errors of prediction were in Region 5 (from -37.6 to +60.3 percent). One section of this report describes techniques for estimating peak flows for ungaged sites on gaged, unregulated streams in rural drainage basins. Another section references two previous U.S. Geological Survey reports for peak-flow estimates on ungaged, unregulated, urban streams. Estimating peak flows at ungaged sites on regulated streams is beyond the scope of this report, because peak flows on regulated streams are dependent upon variable human activities.
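Bulletin 17B combines a station skew with a generalized skew by weighting each inversely to its mean square error. A sketch of that weighting (the station values are hypothetical; the generalized skew 0.011 and standard error of prediction 0.520 are the ones quoted in the report):

```python
def weighted_skew(station_skew, mse_station, gen_skew, mse_gen):
    """Bulletin 17B-style weighted skew: each estimate is weighted
    inversely to its mean square error (MSE)."""
    return ((mse_gen * station_skew + mse_station * gen_skew)
            / (mse_gen + mse_station))

# generalized skew 0.011 with standard error 0.520 -> MSE ~ 0.520**2
mse_gen = 0.520 ** 2
print(weighted_skew(-0.25, 0.302, 0.011, mse_gen))  # a compromise between the two
```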
Fitting Photometry of Blended Microlensing Events
NASA Astrophysics Data System (ADS)
Thomas, Christian L.; Griest, Kim
2006-03-01
We reexamine the usefulness of fitting blended light-curve models to microlensing photometric data. We find agreement with previous workers (e.g., Woźniak & Paczyński) that this is a difficult proposition because of the degeneracy of the blend fraction with other fit parameters. We show that follow-up observations at specific points along the light curve (peak region and wings) of high-magnification events are the most helpful in removing degeneracies. We also show that very small errors in the baseline magnitude can result in problems in measuring the blend fraction, and we study the importance of non-Gaussian errors in the fit results. The biases and skewness in the distribution of the recovered blend fraction are discussed. We also find a new approximation formula relating the blend fraction and the unblended fit parameters to the underlying event duration needed to estimate the microlensing optical depth.
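The degeneracy discussed above follows from the form of a blended light curve: only a fraction f of the baseline flux is magnified. A sketch using the standard point-lens magnification (the parameter values are hypothetical):

```python
import math

def point_lens_amp(u):
    """Standard point-source point-lens magnification."""
    return (u * u + 2.0) / (u * math.sqrt(u * u + 4.0))

def blended_amp(t, t0, tE, u0, f):
    """Observed magnification when only a fraction f of the baseline
    flux belongs to the lensed source; the rest is blend light."""
    u = math.hypot(u0, (t - t0) / tE)
    return f * point_lens_amp(u) + (1.0 - f)

# same underlying high-magnification event, with and without blend light
peak_unblended = blended_amp(0.0, 0.0, 20.0, 0.1, 1.0)
peak_blended = blended_amp(0.0, 0.0, 20.0, 0.1, 0.5)
print(peak_unblended, peak_blended)  # blending suppresses the apparent peak
```

Because f trades off against u0 and tE when fitting the observed curve, the blend fraction is degenerate with the other parameters; the peak region and wings are where the curves with different f differ most, which is why follow-up observations there help.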
Error reduction program: A progress report
NASA Technical Reports Server (NTRS)
Syed, S. A.
1984-01-01
Five finite-difference schemes were evaluated for minimum numerical diffusion in an effort to identify and incorporate the best error reduction scheme into a 3D combustor performance code. Based on this evaluation, two finite volume method schemes were selected for further study. Both the quadratic upstream differencing scheme (QUDS) and the bounded skew upstream differencing scheme two (BSUDS2) were coded into a two-dimensional computer code, and their accuracy and stability were determined by running several test cases. It was found that BSUDS2 was more stable than QUDS. It was also found that the accuracy of both schemes depends on the angle that the streamlines make with the mesh, with QUDS being more accurate at smaller angles and BSUDS2 more accurate at larger angles. The BSUDS2 scheme was selected for extension into three dimensions.
Increased skewing of X chromosome inactivation in Rett syndrome patients and their mothers.
Knudsen, Gun Peggy S; Neilson, Tracey C S; Pedersen, June; Kerr, Alison; Schwartz, Marianne; Hulten, Maj; Bailey, Mark E S; Orstavik, Karen Helene
2006-11-01
Rett syndrome is a largely sporadic, X-linked neurological disorder with a characteristic phenotype, but which exhibits substantial phenotypic variability. This variability has been partly attributed to an effect of X chromosome inactivation (XCI). There have been conflicting reports regarding the incidence of skewed X inactivation in Rett syndrome. In rare familial cases of Rett syndrome, favourably skewed X inactivation has been found in phenotypically normal carrier mothers. We have investigated the X inactivation pattern in DNA from blood and buccal cells of sporadic Rett patients (n=96) and their mothers (n=84). The mean degree of skewing in blood was higher in patients (70.7%) than controls (64.9%). Unexpectedly, the mothers of these patients also had a higher mean degree of skewing in blood (70.8%) than controls. In accordance with these findings, the frequency of skewed (XCI ≥ 80%) X inactivation in blood was also higher in both patients (25%) and mothers (30%) than in controls (11%). To test whether the Rett patients with skewed X inactivation were daughters of skewed mothers, 49 mother-daughter pairs were analysed. Of 14 patients with skewed X inactivation, only three had a mother with skewed X inactivation. Among patients, mildly affected cases were shown to be more skewed than more severely affected cases, and there was a trend towards preferential inactivation of the paternally inherited X chromosome in skewed cases. These findings, particularly the greater degree of X inactivation skewing in Rett syndrome patients, are of potential significance in the analysis of genotype-phenotype correlations in Rett syndrome.
Sex differences in the drivers of reproductive skew in a cooperative breeder.
Nelson-Flower, Martha J; Flower, Tom P; Ridley, Amanda R
2018-04-16
Many cooperatively breeding societies are characterized by high reproductive skew, such that some socially dominant individuals breed, while socially subordinate individuals provide help. Inbreeding avoidance serves as a source of reproductive skew in many high-skew societies, but few empirical studies have examined sources of skew operating alongside inbreeding avoidance or compared individual attempts to reproduce (reproductive competition) with individual reproductive success. Here, we use long-term genetic and observational data to examine factors affecting reproductive skew in the high-skew cooperatively breeding southern pied babbler (Turdoides bicolor). When subordinates can breed, skew remains high, suggesting factors additional to inbreeding avoidance drive skew. Subordinate females are more likely to compete to breed when older or when ecological constraints on dispersal are high, but heavy subordinate females are more likely to successfully breed. Subordinate males are more likely to compete when they are older, during high ecological constraints, or when they are related to the dominant male, but only the presence of within-group unrelated subordinate females predicts subordinate male breeding success. Reproductive skew is not driven by reproductive effort, but by forces such as intrinsic physical limitations and intrasexual conflict (for females) or female mate choice, male mate-guarding and potentially reproductive restraint (for males). Ecological conditions or "outside options" affect the occurrence of reproductive conflict, supporting predictions of recent synthetic skew models. Inbreeding avoidance together with competition for access to reproduction may generate high skew in animal societies, and disparate processes may be operating to maintain male vs. female reproductive skew in the same species. © 2018 John Wiley & Sons Ltd.
Analytical N beam position monitor method
NASA Astrophysics Data System (ADS)
Wegscheider, A.; Langner, A.; Tomás, R.; Franchi, A.
2017-11-01
Measurement and correction of focusing errors is of great importance for the performance and machine protection of circular accelerators. Furthermore, the LHC needs to provide equal luminosities to the experiments ATLAS and CMS. High demands are also set on the speed of optics commissioning, as the foreseen operation with β*-leveling on luminosity will require many operational optics. A fast measurement of the β-function around a storage ring is usually done by using the measured phase advance between three consecutive beam position monitors (BPMs). A recent extension of this established technique, called the N-BPM method, was successfully applied for optics measurements at CERN, ALBA, and ESRF. We present here an improved algorithm that uses analytical calculations for both random and systematic errors and takes into account the presence of quadrupole, sextupole, and BPM misalignments, in addition to quadrupolar field errors. This new scheme, called the analytical N-BPM method, is much faster, further improves the measurement accuracy, and is applicable to very pushed beam optics where the existing numerical N-BPM method tends to fail.
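As a concrete illustration of the underlying three-BPM technique (not the analytical N-BPM machinery itself), here is a minimal sketch of the classical phase-advance formula for the β-function at the first of three BPMs; the phase values are invented for illustration:

```python
import math

def beta_from_phase(beta_model, phi12, phi13, phi12_mdl, phi13_mdl):
    """Three-BPM estimate of the beta function at the first BPM.

    phi12, phi13 are measured phase advances (radians) from BPM 1 to
    BPMs 2 and 3; *_mdl are the corresponding model values.
    """
    cot = lambda x: 1.0 / math.tan(x)
    return beta_model * (cot(phi12) - cot(phi13)) / (cot(phi12_mdl) - cot(phi13_mdl))

# Sanity check: measured phase advances equal to the model ones
# reproduce the model beta exactly.
print(beta_from_phase(30.0, 0.8, 2.1, 0.8, 2.1))  # -> 30.0
```

A measured phase advance that deviates from the model shifts the estimate away from the model β, which is what optics measurement exploits.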
Electron Beam Focusing in the Linear Accelerator (linac)
NASA Astrophysics Data System (ADS)
Jauregui, Luis
2015-10-01
To produce consistent data with an electron accelerator, it is critical to have a well-focused beam. To keep the beam focused, quadrupoles (quads) are employed. Quads are magnets that focus the beam in one direction (x or y) and defocus it in the other. When two or more quads are used in series, a net focusing effect is achieved in both the vertical and horizontal directions. At startup there is a 5% calibration error in the linac at the Thomas Jefferson National Accelerator Facility. This means that the momentum of particles passing through the quads isn't always what is expected, which affects the focusing of the beam. The objective is to find exactly how sensitive the focusing in the linac is to this 5% error. A linac was simulated containing 290 RF cavities with random electric fields (to simulate the 5% calibration error) and a total momentum kick of 1090 MeV. National Science Foundation, Department of Energy, Jefferson Lab, Old Dominion University.
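The net-focusing claim can be sketched with standard thin-lens transfer matrices; the focal length and drift length below are illustrative values, not Jefferson Lab parameters:

```python
def matmul(a, b):
    """2x2 matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def quad(f):          # thin-lens quadrupole, focal length f (f < 0: defocusing)
    return [[1.0, 0.0], [-1.0 / f, 1.0]]

def drift(length):    # field-free drift space
    return [[1.0, length], [0.0, 1.0]]

f, L = 2.0, 1.0
# Horizontal plane: focus, drift, defocus; vertical plane: the reverse.
horiz = matmul(quad(-f), matmul(drift(L), quad(f)))
vert  = matmul(quad(f),  matmul(drift(L), quad(-f)))

# The (2,1) element is -1/f_eff of the combined system; it comes out
# negative (net focusing, f_eff = f**2 / L) in BOTH planes.
print(horiz[1][0], vert[1][0])  # -> -0.25 -0.25
```

This is the textbook doublet result behind the abstract's "net focusing effect in both directions": each plane sees -L/f² regardless of which quad comes first.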
Hybrid excited claw pole generator with skewed and non-skewed permanent magnets
NASA Astrophysics Data System (ADS)
Wardach, Marcin
2017-12-01
This article contains simulation results for a hybrid excited claw pole generator with skewed and non-skewed permanent magnets on the rotor. The experimental machine has claw poles on two rotor sections, between which an excitation control coil is located. The novelty of this machine is the existence of non-skewed permanent magnets on the claws of one part of the rotor and skewed permanent magnets on the other. The paper presents the construction of the machine and an analysis of the influence of PM skewing on the cogging torque and back-EMF. Simulation studies enabled the determination of the cogging torque and the back-EMF rms for both the strengthening and the weakening of the magnetic field. The influence of magnet skewing on the cogging torque and the back-EMF rms has also been analyzed.
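The harmonic attenuation that magnet skewing provides is conventionally quantified by the classical skew factor from electric machine theory; the harmonic orders and skew angle below are assumptions for illustration, not values from this paper:

```python
import math

def skew_factor(nu, gamma):
    """Classical skew factor for harmonic order nu and skew angle
    gamma in electrical radians: sin(nu*gamma/2) / (nu*gamma/2)."""
    x = nu * gamma / 2.0
    return 1.0 if x == 0 else math.sin(x) / x

gamma = math.pi / 6  # e.g. one slot pitch of skew (assumed)
for nu in (1, 5, 7):
    print(nu, round(skew_factor(nu, gamma), 3))
```

The fundamental (nu=1) is barely reduced while higher-order harmonics, which drive cogging torque and back-EMF ripple, are attenuated much more strongly, which is the usual motivation for skewing.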
Design of 3x3 Focusing Array for Heavy Ion Driver Final Report on CRADA TC-02082-04
DOE Office of Scientific and Technical Information (OSTI.GOV)
Martovetsky, N.
This memo presents a design of a 3×3 quadrupole array for HIF. It contains 3D magnetic field computations of the array built with racetrack coils, with and without different shields. It is shown that it is possible to have a low-error magnetic field in the cells and to shield the stray fields to acceptable levels. The array design seems to be a practical solution for any size array for future multi-beam heavy ion fusion drivers.
On the Effects of Wind Turbine Wake Skew Caused by Wind Veer: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Churchfield, Matthew J; Sirnivas, Senu
Because of Coriolis forces caused by the Earth's rotation, the structure of the atmospheric boundary layer often contains wind-direction change with height, also known as wind-direction veer. Under low-turbulence conditions, such as in stably stratified atmospheric conditions, this veer can be significant, even across the vertical extent of a wind turbine's rotor disk. The veer then causes the wind turbine wake to skew as it advects downstream. This wake skew has been observed both experimentally and numerically. In this work, we examine the wake skewing process in some detail and quantify how differently a skewed wake versus a non-skewed wake affects a downstream turbine. We do this by performing atmospheric large-eddy simulations to create turbulent inflow winds with and without veer. In the veer case, there is a roughly 8-degree wind-direction change across the turbine rotor. We then perform subsequent large-eddy simulations using these inflow data with an actuator line rotor model to create wakes. The turbine modeled is a large, modern, offshore, multimegawatt turbine. We examine the unsteady wake data in detail and show that the skewed wake recovers faster than the non-skewed wake. We also show that the wake deficit does not skew to the same degree that a passive tracer would if subject to veered inflow. Last, we use the wake data to place a hypothetical turbine 9 rotor diameters downstream by running aeroelastic simulations with the simulated wake data. We see differences in power and loads if this downstream turbine is subject to a skewed or non-skewed wake. We feel that the differences observed between the skewed and non-skewed wakes are important enough that the skewing effect should be included in engineering wake models.
New approaches to probing Minkowski functionals
NASA Astrophysics Data System (ADS)
Munshi, D.; Smidt, J.; Cooray, A.; Renzi, A.; Heavens, A.; Coles, P.
2013-10-01
We generalize the concept of the ordinary skew-spectrum to probe the effect of non-Gaussianity on the morphology of cosmic microwave background (CMB) maps in several domains: in real space (where they are commonly known as cumulant correlators), and in harmonic and needlet bases. The essential aim is to retain more information than is normally contained in these statistics, in order to assist in determining the source of any measured non-Gaussianity, in the same spirit in which the Munshi & Heavens skew-spectra were used to identify foreground contaminants to the CMB bispectrum in Planck data. Using a perturbative series to construct the Minkowski functionals (MFs), we provide a pseudo-C_ℓ based approach in both harmonic and needlet representations to estimate these spectra in the presence of a mask and inhomogeneous noise. Assuming homogeneous noise, we present approximate expressions for the error covariance for the purpose of joint estimation of these spectra. We present specific results for four different models of primordial non-Gaussianity: the local, equilateral, orthogonal, and enfolded models, as well as for non-Gaussianity caused by unsubtracted point sources. Closed-form results for next-order corrections to the MFs are also obtained in terms of a quadruplet of kurt-spectra. We also use the method of modal decomposition of the bispectrum and trispectrum to reconstruct the MFs as an alternative way of recovering the morphological properties of CMB maps. Finally, we introduce the odd-parity skew-spectra to probe the odd-parity bispectrum and its impact on the morphology of the CMB sky. Although developed for the CMB, the generic results obtained here can be useful in other areas of cosmology.
Are neutron stars crushed? Gravitomagnetic tidal fields as a mechanism for binary-induced collapse
DOE Office of Scientific and Technical Information (OSTI.GOV)
Favata, Marc
Numerical simulations of binary neutron stars by Wilson, Mathews, and Marronetti indicated that neutron stars that are stable in isolation can be made to collapse to black holes when placed in a binary. This claim was surprising, as it ran counter to the Newtonian expectation that a neutron star in a binary should be more stable, not less. After correcting an error found by Flanagan, Wilson and Mathews found that the compression of the neutron stars was significantly reduced but not eliminated. This has motivated us to ask the following general question: Under what circumstances can general-relativistic tidal interactions cause an otherwise stable neutron star to be compressed? We have found that if a nonrotating neutron star possesses a current-quadrupole moment, interactions with a gravitomagnetic tidal field can lead to a compressive force on the star. If this current quadrupole is induced by the gravitomagnetic tidal field, it is related to the tidal field by an equation-of-state-dependent constant called the gravitomagnetic Love number. This is analogous to the Newtonian Love number that relates the strength of a Newtonian tidal field to the induced mass quadrupole moment of a star. The compressive force is almost never larger than the Newtonian tidal interaction that stabilizes the neutron star against collapse. In the case in which a current quadrupole is already present in the star (perhaps as an artifact of a numerical simulation), the compressive force can exceed the stabilizing one, leading to a net increase in the central density of the star. This increase is small (≲1%) but could, in principle, cause gravitational collapse in a star that is close to its maximum mass. This paper also reviews the history of the Wilson-Mathews-Marronetti controversy and, in an appendix, extends the discussion of tidally induced changes in the central density to rotating stars.
A Model of Self-Monitoring Blood Glucose Measurement Error.
Vettoretti, Martina; Facchinetti, Andrea; Sparacino, Giovanni; Cobelli, Claudio
2017-07-01
A reliable model of the probability density function (PDF) of self-monitoring of blood glucose (SMBG) measurement error would be important for several applications in diabetes, such as testing insulin therapies in silico. In the literature, the PDF of SMBG error is usually described by a Gaussian function, whose symmetry and simplicity are unable to properly describe the variability of experimental data. Here, we propose a new methodology to derive more realistic models of the SMBG error PDF. The blood glucose range is divided into zones where the error (absolute or relative) presents a constant standard deviation (SD). In each zone, a suitable PDF model is fitted by maximum likelihood to experimental data. Model validation is performed by goodness-of-fit tests. The method is tested on two databases collected by the One Touch Ultra 2 (OTU2; Lifescan Inc, Milpitas, CA) and the Bayer Contour Next USB (BCN; Bayer HealthCare LLC, Diabetes Care, Whippany, NJ). In both cases, skew-normal and exponential models are used to describe the distribution of errors and outliers, respectively. Two zones were identified: zone 1 with constant-SD absolute error; zone 2 with constant-SD relative error. Goodness-of-fit tests confirmed that the identified PDF models are valid and superior to the Gaussian models used so far in the literature. The proposed methodology makes it possible to derive realistic models of the SMBG error PDF. These models can be used in several investigations of present interest to the scientific community, for example, to perform in silico clinical trials comparing SMBG-based with nonadjunctive CGM-based insulin treatments.
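The zone-splitting step can be sketched as follows; the 75 mg/dL boundary, the noise levels, and the use of plain Gaussian noise (rather than the paper's skew-normal model) are all illustrative assumptions:

```python
import random
import statistics

random.seed(1)
# Hypothetical paired (reference, SMBG) glucose readings in mg/dL.
ref = [random.uniform(40, 400) for _ in range(2000)]
meas = [r + random.gauss(0, 8) if r < 75 else r * (1 + random.gauss(0, 0.05))
        for r in ref]

# Zone 1 (low glucose): error modelled with constant-SD ABSOLUTE error.
abs_err = [m - r for r, m in zip(ref, meas) if r < 75]
# Zone 2 (higher glucose): constant-SD RELATIVE error.
rel_err = [(m - r) / r for r, m in zip(ref, meas) if r >= 75]

print(round(statistics.stdev(abs_err), 1))   # close to the simulated 8 mg/dL
print(round(statistics.stdev(rel_err), 3))   # close to the simulated 5%
```

In the paper's method, a skew-normal PDF (plus an exponential tail for outliers) would then be fitted by maximum likelihood within each zone instead of assuming Gaussian noise.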
Measurement of the electric quadrupole moments of CO2 , CO, N2 , Cl2 and BF3
NASA Astrophysics Data System (ADS)
Graham, C.; Imrie, D. A.; Raab, R. E.
The electric quadrupole moments of a number of molecules have been determined from measurements of the birefringence induced in a gas by an electric field gradient. The values obtained are: carbon dioxide (−14.27 ± 0.61) × 10⁻⁴⁰ C m², carbon monoxide (−9.47 ± 0.15) × 10⁻⁴⁰ C m², nitrogen (−4.65 ± 0.08) × 10⁻⁴⁰ C m², and boron trifluoride (12.6 ± 0.7) × 10⁻⁴⁰ C m². In the calculation of the moments for carbon monoxide and boron trifluoride, the small hyperpolarizability contribution was neglected in the absence of known values. By means of the Jones calculus, a detailed analysis was made of the effects of strain birefringence in the cell windows and of imperfect orientation of polarizing components in the light path. This analysis led to a measurement procedure which yielded results significantly different from previously reported ones obtained with essentially the same apparatus. The probable error in the earlier procedure is identified.
Network Skewness Measures Resilience in Lake Ecosystems
NASA Astrophysics Data System (ADS)
Langdon, P. G.; Wang, R.; Dearing, J.; Zhang, E.; Doncaster, P.; Yang, X.; Yang, H.; Dong, X.; Hu, Z.; Xu, M.; Yanjie, Z.; Shen, J.
2017-12-01
Changes in ecosystem resilience defy straightforward quantification from biodiversity metrics, which ignore influences of community structure. Naturally self-organized network structures show positive skewness in the distribution of node connections. Here we test for skewness reduction in lake diatom communities facing anthropogenic stressors, across a network of 273 lakes in China containing 452 diatom species. Species connections show positively skewed distributions in little-impacted lakes, switching to negative skewness in lakes associated with human settlement, surrounding land-use change, and higher phosphorus concentration. Dated sediment cores reveal a down-shifting of network skewness as human impacts intensify, and reversal with recovery from disturbance. The appearance and degree of negative skew presents a new diagnostic for quantifying system resilience and impacts from exogenous forcing on ecosystem communities.
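The diagnostic described above is, at bottom, the third standardized moment of the node-connection distribution. A minimal sketch on two made-up degree lists (the lake diatom data and the network construction itself are not reproduced here):

```python
import statistics

def skewness(xs):
    """Population skewness: third central moment over the cubed SD."""
    m = statistics.fmean(xs)
    sd = statistics.pstdev(xs)
    n = len(xs)
    return sum((x - m) ** 3 for x in xs) / (n * sd ** 3)

# Hypothetical node-degree lists for two association networks.
healthy  = [1, 1, 1, 2, 2, 2, 2, 3, 3, 4, 5, 8]   # few hubs: right tail
degraded = [1, 4, 5, 6, 6, 7, 7, 7, 8, 8, 8, 8]   # left tail instead

print(skewness(healthy) > 0, skewness(degraded) < 0)  # -> True True
```

The sign flip, from positive skew in the self-organized case to negative skew under stress, is the resilience signal the abstract proposes.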
Continuity diaphragm for skewed continuous span precast prestressed concrete girder bridges.
DOT National Transportation Integrated Search
2004-10-01
Continuity diaphragms used on skewed bents in prestressed girder bridges cause difficulties in detailing and construction. Details for bridges with large diaphragm skew angles (>30°) have not been a problem for LA DOTD. However, as the skew angl...
NASA Technical Reports Server (NTRS)
Bryant, W. H.; Morrell, F. R.
1981-01-01
Attention is given to a redundant strapdown inertial measurement unit for integrated avionics. The system consists of four two-degree-of-freedom tuned-rotor gyros and four two-degree-of-freedom accelerometers in a skewed and separable semi-octahedral array. The unit is coupled through instrument electronics to two flight computers, which compensate for sensor errors. The flight computers are interfaced to the microprocessors and process failure detection, isolation, redundancy management, and flight control/navigation algorithms. The unit provides dual fail-operational performance and has data-processing frequencies consistent with currently planned integrated avionics concepts.
Paretti, Nicholas V.; Kennedy, Jeffrey R.; Cohn, Timothy A.
2014-01-01
Flooding is among the costliest natural disasters in terms of loss of life and property in Arizona, which is why accurate estimation of flood frequency and magnitude is crucial for proper structural design and accurate floodplain mapping. Current guidelines for flood frequency analysis in the United States are described in Bulletin 17B (B17B), yet since B17B's publication in 1982 (Interagency Advisory Committee on Water Data, 1982), several improvements have been proposed as updates for future guidelines. Two proposed updates are the Expected Moments Algorithm (EMA), which accommodates historical and censored data, and a generalized multiple Grubbs-Beck (MGB) low-outlier test. The current guidelines use a standard Grubbs-Beck (GB) method to identify low outliers; the two approaches differ in the determination of the moment estimators because B17B uses a conditional probability adjustment to handle low outliers while EMA censors them. B17B and EMA estimates are identical if no historical information, censored data, or low outliers are present in the peak-flow data. The EMA with MGB test (EMA-MGB) was compared to the standard B17B (B17B-GB) method for flood frequency analysis at 328 streamgaging stations in Arizona. The methods were compared using the relative percent difference (RPD) between annual exceedance probabilities (AEPs), goodness-of-fit assessments, random resampling procedures, and Monte Carlo simulations. The AEPs were calculated and compared using both station skew and weighted skew. Streamgaging stations were classified by U.S. Geological Survey (USGS) National Water Information System (NWIS) qualification codes, used to denote historical and censored peak-flow data, to better understand the effect that nonstandard flood information has on the flood frequency analysis for each method. Streamgaging stations were also grouped according to geographic flood regions and analyzed separately to better understand regional differences caused by physiography and climate.
The B17B-GB and EMA-MGB RPD-boxplot results showed that the median RPDs across all streamgaging stations for the 10-, 1-, and 0.2-percent AEPs, computed using station skew, were approximately zero. As the AEP flow estimates decreased (that is, from 10 to 0.2 percent AEP) the variability in the RPDs increased, indicating that the AEP flow estimate was greater for EMA-MGB when compared to B17B-GB. There was only one RPD greater than 100 percent for the 10- and 1-percent AEP estimates, whereas 19 RPDs exceeded 100 percent for the 0.2-percent AEP. At streamgaging stations with low-outlier data, historical peak-flow data, or both, RPDs ranged from −84 to 262 percent for the 0.2-percent AEP flow estimate. When streamgaging stations were separated by the presence of historical peak-flow data (that is, no low outliers or censored peaks) or by low outlier peak-flow data (no historical data), the results showed that RPD variability was greatest for the 0.2-AEP flow estimates, indicating that the treatment of historical and (or) low-outlier data was different between methods and that method differences were most influential when estimating the less probable AEP flows (1, 0.5, and 0.2 percent). When regional skew information was weighted with the station skew, B17B-GB estimates were generally higher than the EMA-MGB estimates for any given AEP. This was related to the different regional skews and mean square error used in the weighting procedure for each flood frequency analysis. The B17B-GB weighted skew analysis used a more positive regional skew determined in USGS Water Supply Paper 2433 (Thomas and others, 1997), while the EMA-MGB analysis used a more negative regional skew with a lower mean square error determined from a Bayesian generalized least squares analysis. Regional groupings of streamgaging stations reflected differences in physiographic and climatic characteristics. 
Potentially influential low flows (PILFs) were more prevalent in arid regions of the State, and generally AEP flows were larger with EMA-MGB than with B17B-GB for gaging stations with PILFs. In most cases EMA-MGB curves would fit the largest floods more accurately than B17B-GB. In areas of the State with more baseflow, such as along the Mogollon Rim and the White Mountains, streamgaging stations generally had fewer PILFs and more positive skews, causing estimated AEP flows to be larger with B17B-GB than with EMA-MGB. The effect of including regional skew was similar for all regions, and the observed pattern was increasingly greater B17B-GB flows (more negative RPDs) with each decreasing AEP quantile. A variation on a goodness-of-fit test statistic was used to describe each method’s ability to fit the largest floods. The mean absolute percent difference between the measured peak flows and the log-Pearson Type 3 (LP3)-estimated flows, for each method, was averaged over the 90th, 75th, and 50th percentiles of peak-flow data at each site. In most percentile subsets, EMA-MGB on average had smaller differences (1 to 3 percent) between the observed and fitted value, suggesting that the EMA-MGB-LP3 distribution is fitting the observed peak-flow data more precisely than B17B-GB. The smallest EMA-MGB percent differences occurred for the greatest 10 percent (90th percentile) of the peak-flow data. When stations were analyzed by USGS NWIS peak flow qualification code groups, the stations with historical peak flows and no low outliers had average percent differences as high as 11 percent greater for B17B-GB, indicating that EMA-MGB utilized the historical information to fit the largest observed floods more accurately. A resampling procedure was used in which 1,000 random subsamples were drawn, each comprising one-half of the observed data. 
An LP3 distribution was fit to each subsample using B17B-GB and EMA-MGB methods, and the predicted 1-percent AEP flows were compared to those generated from distributions fit to the entire dataset. With station skew, the two methods were similar in the median percent difference, but with weighted skew EMA-MGB estimates were generally better. At two gages where B17B-GB appeared to perform better, a large number of peak flows were deemed to be PILFs by the MGB test, although they did not appear to depart significantly from the trend of the data (step or dogleg appearance). At two gages where EMA-MGB performed better, the MGB identified several PILFs that were affecting the fitted distribution of the B17B-GB method. Monte Carlo simulations were run for the LP3 distribution using different skews and with different assumptions about the expected number of historical peaks. The primary benefit of running Monte Carlo simulations is that the underlying distribution statistics are known, meaning that the true 1-percent AEP is known. The results showed that EMA-MGB performed as well or better in situations where the LP3 distribution had a zero or positive skew and historical information. When the skew for the LP3 distribution was negative, EMA-MGB performed significantly better than B17B-GB and EMA-MGB estimates were less biased by more closely estimating the true 1-percent AEP for 1, 2, and 10 historical flood scenarios.
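The RPD statistic used throughout the comparison can be sketched as follows; the symmetric-mean denominator is an assumption about the report's definition, and the flow values are invented:

```python
def relative_percent_difference(ema, b17b):
    """RPD between an EMA-MGB and a B17B-GB flow estimate, in percent.

    Positive values mean the EMA-MGB estimate is larger.  The symmetric
    denominator (mean of the two estimates) is an assumed convention.
    """
    return 100.0 * (ema - b17b) / ((ema + b17b) / 2.0)

# Hypothetical 1-percent AEP flow estimates (cubic feet per second).
print(round(relative_percent_difference(1200.0, 1000.0), 1))  # -> 18.2
```

The symmetric form is bounded at ±200 percent, which is consistent with the report quoting RPDs such as 262 percent only for the most divergent low-probability estimates.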
Directionality volatility in electroencephalogram time series
NASA Astrophysics Data System (ADS)
Mansor, Mahayaudin M.; Green, David A.; Metcalfe, Andrew V.
2016-06-01
We compare time series of electroencephalograms (EEGs) from healthy volunteers with EEGs from subjects diagnosed with epilepsy. The EEG time series from the healthy group are recorded during awake state with their eyes open and eyes closed, and the records from subjects with epilepsy are taken from three different recording regions of pre-surgical diagnosis: hippocampal, epileptogenic and seizure zone. The comparisons for these 5 categories are in terms of deviations from linear time series models with constant variance Gaussian white noise error inputs. One feature investigated is directionality, and how this can be modelled by either non-linear threshold autoregressive models or non-Gaussian errors. A second feature is volatility, which is modelled by Generalized AutoRegressive Conditional Heteroskedasticity (GARCH) processes. Other features include the proportion of variability accounted for by time series models, and the skewness and the kurtosis of the residuals. The results suggest these comparisons may have diagnostic potential for epilepsy and provide early warning of seizures.
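A minimal GARCH(1,1) simulation showing the volatility clustering such models capture; the parameter values are illustrative, not fitted to any EEG record:

```python
import random

random.seed(0)

def garch11(n, omega=0.1, alpha=0.2, beta=0.7):
    """Simulate a GARCH(1,1) process with conditional variance
    sigma2[t] = omega + alpha*eps[t-1]**2 + beta*sigma2[t-1]."""
    sigma2 = omega / (1 - alpha - beta)   # start at the unconditional variance
    eps = []
    for _ in range(n):
        e = random.gauss(0, 1) * sigma2 ** 0.5
        eps.append(e)
        sigma2 = omega + alpha * e * e + beta * sigma2
    return eps

series = garch11(5000)
# Volatility clustering shows up as positive autocorrelation of the
# SQUARED series even though the series itself is uncorrelated.
sq = [e * e for e in series]
m = sum(sq) / len(sq)
num = sum((a - m) * (b - m) for a, b in zip(sq, sq[1:]))
den = sum((a - m) ** 2 for a in sq)
print(round(num / den, 3))  # lag-1 autocorrelation of the squares
```

Fitting alpha and beta to EEG residuals, as the paper does, quantifies how strongly volatility clusters in each recording category.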
Kamneva, Olga K; Rosenberg, Noah A
2017-01-01
Hybridization events generate reticulate species relationships, giving rise to species networks rather than species trees. We report a comparative study of consensus, maximum parsimony, and maximum likelihood methods of species network reconstruction using gene trees simulated assuming a known species history. We evaluate the role of the divergence time between species involved in a hybridization event, the relative contributions of the hybridizing species, and the error in gene tree estimation. When gene tree discordance is mostly due to hybridization and not due to incomplete lineage sorting (ILS), most of the methods can detect even highly skewed hybridization events between highly divergent species. For recent divergences between hybridizing species, when the influence of ILS is sufficiently high, likelihood methods outperform parsimony and consensus methods, which erroneously identify extra hybridizations. The more sophisticated likelihood methods, however, are affected by gene tree errors to a greater extent than are consensus and parsimony. PMID:28469378
Fault-tolerant clock synchronization validation methodology. [in computer systems
NASA Technical Reports Server (NTRS)
Butler, Ricky W.; Palumbo, Daniel L.; Johnson, Sally C.
1987-01-01
A validation method for the synchronization subsystem of a fault-tolerant computer system is presented. The high reliability requirement of flight-crucial systems precludes the use of most traditional validation methods. The method presented utilizes formal design proof to uncover design and coding errors and experimentation to validate the assumptions of the design proof. The experimental method is described and illustrated by validating the clock synchronization system of the Software Implemented Fault Tolerance computer. The design proof of the algorithm includes a theorem that defines the maximum skew between any two nonfaulty clocks in the system in terms of specific system parameters. Most of these parameters are deterministic. One crucial parameter is the upper bound on the clock read error, which is stochastic. The probability that this upper bound is exceeded is calculated from data obtained by the measurement of system parameters. This probability is then included in a detailed reliability analysis of the system.
NASA Astrophysics Data System (ADS)
Wei, Jun; Zhong, Fangyuan
Based on comparative experiments, this paper investigates the use of tangentially skewed rotor blades in an axial-flow fan. Comparison of the overall performance of the fan with a skew-bladed rotor and a radial-bladed rotor shows that the skewed blades operate more efficiently than the radial blades, especially at low volume flows. Meanwhile, a decrease in the pressure rise and flow rate of the axial-flow fan with skewed rotor blades is found. The rotor-stator interaction noise and broadband noise of the axial-flow fan are reduced with skewed rotor blades. Forward-skewed blades tend to reduce the accumulation of the blade boundary layer in the tip region resulting from the effect of centrifugal forces. The turning of streamlines from the outer radius region into the inner radius region in the blade passages, due to the radial component of the blade forces of skewed blades, is the main reason for the decrease in pressure rise and flow rate.
Zhang, Jiyang; Ma, Jie; Dou, Lei; Wu, Songfeng; Qian, Xiaohong; Xie, Hongwei; Zhu, Yunping; He, Fuchu
2009-02-01
The hybrid linear trap quadrupole Fourier-transform (LTQ-FT) ion cyclotron resonance mass spectrometer, an instrument with high accuracy and resolution, is widely used in the identification and quantification of peptides and proteins. However, time-dependent errors in the system may lead to deterioration of the accuracy of these instruments, negatively influencing the determination of the mass error tolerance (MET) in database searches. Here, a comprehensive discussion of LTQ/FT precursor ion mass error is provided. On the basis of an investigation of the mass error distribution, we propose an improved recalibration formula and introduce a new tool, FTDR (Fourier-transform data recalibration), that employs a graphical user interface (GUI) for automatic calibration. It was found that the calibration could adjust the mass error distribution to more closely approximate a normal distribution and reduce the standard deviation (SD). Consequently, we present a new strategy, LDSF (large MET database search and small MET filtration), for database search MET specification and validation of database search results. As the name implies, a large-MET database search is conducted and the search results are then filtered using the statistical MET estimated from high-confidence results. By applying this strategy to a standard protein data set and a complex data set, we demonstrate that LDSF can significantly improve the sensitivity of the result validation procedure.
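The LDSF filtering idea can be sketched as follows; the masses, the two-SD cutoff, and the choice of "high-confidence" subset are all invented for illustration and are not the paper's actual formula or data:

```python
import statistics

def ppm_error(measured, theoretical):
    """Precursor mass error in parts per million."""
    return (measured - theoretical) / theoretical * 1e6

# Hypothetical (measured, theoretical) precursor masses in Da; the last
# pair is a deliberate ~-20 ppm outlier.
hits = [(1500.007, 1500.0), (2300.013, 2300.0), (1800.004, 1800.0),
        (1200.009, 1200.0), (980.0, 980.02)]
errs = [ppm_error(m, t) for m, t in hits]

# LDSF idea: search with a LARGE tolerance, estimate a tight MET from
# high-confidence matches, then filter everything against mean +/- 2*SD.
mu = statistics.fmean(errs[:4])     # pretend the first 4 are high-confidence
sd = statistics.stdev(errs[:4])
kept = [e for e in errs if abs(e - mu) <= 2 * sd]
print(len(kept))  # -> 4 (the ~-20 ppm outlier is filtered out)
```

The point of the two-step strategy is that the wide first-pass search loses no true matches, while the statistically estimated MET removes random hits afterwards.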
DNA Asymmetric Strand Bias Affects the Amino Acid Composition of Mitochondrial Proteins
Min, Xiang Jia; Hickey, Donal A.
2007-01-01
Abstract Variations in GC content between genomes have been extensively documented. Genomes with comparable GC contents can, however, still differ in the apportionment of the G and C nucleotides between the two DNA strands. This asymmetric strand bias is known as GC skew. Here, we have investigated the impact of differences in nucleotide skew on the amino acid composition of the encoded proteins. We compared orthologous genes between animal mitochondrial genomes that show large differences in GC and AT skews. Specifically, we compared the mitochondrial genomes of mammals, which are characterized by a negative GC skew and a positive AT skew, to those of flatworms, which show the opposite skews for both GC and AT base pairs. We found that the mammalian proteins are highly enriched in amino acids encoded by CA-rich codons (as predicted by their negative GC and positive AT skews), whereas their flatworm orthologs were enriched in amino acids encoded by GT-rich codons (also as predicted from their skews). We found that these differences in mitochondrial strand asymmetry (measured as GC and AT skews) can have very large, predictable effects on the composition of the encoded proteins. PMID:17974594
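The skew measures discussed above have standard definitions: GC skew is (G − C)/(G + C) and AT skew is (A − T)/(A + T), computed on one strand. A minimal sketch:

```python
def strand_skews(seq):
    """Return (GC skew, AT skew) for one DNA strand.

    GC skew = (G - C) / (G + C); AT skew = (A - T) / (A + T).
    A negative GC skew with positive AT skew (as in mammalian mtDNA)
    predicts enrichment of amino acids encoded by CA-rich codons.
    """
    s = seq.upper()
    g, c, a, t = (s.count(b) for b in "GCAT")
    return (g - c) / (g + c), (a - t) / (a + t)
```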
Selection on skewed characters and the paradox of stasis
Bonamour, Suzanne; Teplitsky, Céline; Charmantier, Anne; Crochet, Pierre-André; Chevin, Luis-Miguel
2018-01-01
Observed phenotypic responses to selection in the wild often differ from predictions based on measurements of selection and genetic variance. An overlooked hypothesis to explain this paradox of stasis is that a skewed phenotypic distribution affects natural selection and evolution. We show through mathematical modelling that, when a trait selected for an optimum phenotype has a skewed distribution, directional selection is detected even at evolutionary equilibrium, where it causes no change in the mean phenotype. When environmental effects are skewed, Lande and Arnold's (1983) directional gradient is in the direction opposite to the skew. In contrast, skewed breeding values can displace the mean phenotype from the optimum, causing directional selection in the direction of the skew. These effects can be partitioned out using alternative selection estimates based on average derivatives of individual relative fitness, or additive genetic covariances between relative fitness and trait (Robertson-Price identity). We assess the validity of these predictions using simulations of selection estimation under moderate sample sizes. Ecologically relevant traits may commonly have skewed distributions, as we here exemplify with avian laying date, repeatedly described as more evolutionarily stable than expected, so this skewness should be accounted for when investigating evolutionary dynamics in the wild. PMID:28921508
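The central claim can be checked in a few lines of simulation (a sketch, not the paper's model): give a trait a right-skewed distribution whose mean sits exactly at the optimum, apply Gaussian stabilizing selection, and compute the Lande-Arnold directional gradient. The distribution and fitness-peak width below are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(42)

# Right-skewed phenotypes whose MEAN sits exactly at the optimum (0):
# a centred lognormal, so the trait is skewed but not displaced.
z = rng.lognormal(mean=0.0, sigma=0.5, size=200_000)
z -= z.mean()                      # mean phenotype at the optimum

theta, omega = 0.0, 2.0            # optimum and width of the fitness peak
w = np.exp(-(z - theta) ** 2 / (2 * omega ** 2))   # Gaussian fitness
rel_w = w / w.mean()

# Lande-Arnold directional gradient: beta = cov(relative fitness, z) / var(z)
beta = np.cov(rel_w, z)[0, 1] / z.var()
```

Despite the mean being at the optimum (so selection causes no change in the mean), `beta` comes out negative, i.e. directional selection is detected in the direction opposite to the positive skew, as the abstract predicts.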
Design of automatic leveling and centering system of theodolite
NASA Astrophysics Data System (ADS)
Liu, Chun-tong; He, Zhen-Xin; Huang, Xian-xiang; Zhan, Ying
2012-09-01
To automate the theodolite and improve azimuth angle measurement, an automatic leveling and centering system with leveling error compensation is designed, covering the overall system solution, selection of key components, the mechanical structure for leveling and centering, and the system software. The redesigned leveling feet are driven by DC servo motors, and an electronically controlled centering device is installed. High-precision tilt sensors are used as horizontal skew detection sensors to ensure the effectiveness of the leveling error compensation. The center of the aiming mark is located by digital image processing of the surface-array CCD image, and the centering measurement precision can reach the pixel level, which makes accurate centering of the theodolite possible. Finally, experiments are conducted with the automatic leveling and centering system of the theodolite. The results show that the system can operate automatically with a centering accuracy of 0.04 mm. The measurement precision of the orientation angle after leveling error compensation is improved compared with the traditional method. The automatic leveling and centering system of the theodolite can satisfy the requirements of measuring precision and automation.
Peak-flow characteristics of Wyoming streams
Miller, Kirk A.
2003-01-01
Peak-flow characteristics for unregulated streams in Wyoming are described in this report. Frequency relations for annual peak flows through water year 2000 at 364 streamflow-gaging stations in and near Wyoming were evaluated and revised or updated as needed. Analyses of historical floods, temporal trends, and generalized skew were included in the evaluation. Physical and climatic basin characteristics were determined for each gaging station using a geographic information system. Gaging stations with similar peak-flow and basin characteristics were grouped into six hydrologic regions. Regional statistical relations between peak-flow and basin characteristics were explored using multiple-regression techniques. Generalized least squares regression equations for estimating magnitudes of annual peak flows with selected recurrence intervals from 1.5 to 500 years were developed for each region. Average standard errors of estimate range from 34 to 131 percent. Average standard errors of prediction range from 35 to 135 percent. Several statistics for evaluating and comparing the errors in these estimates are described. Limitations of the equations are described. Methods for applying the regional equations for various circumstances are listed and examples are given.
Plasma Electrolyte Distributions in Humans-Normal or Skewed?
Feldman, Mark; Dickson, Beverly
2017-11-01
It is widely believed that plasma electrolyte levels are normally distributed. Statistical tests and calculations using plasma electrolyte data are often reported based on this assumption of normality. Examples include t tests, analysis of variance, correlations and confidence intervals. The purpose of our study was to determine whether plasma sodium (Na+), potassium (K+), chloride (Cl-) and bicarbonate [Formula: see text] distributions are indeed normally distributed. We analyzed plasma electrolyte data from 237 consecutive adults (137 women and 100 men) who had normal results on a standard basic metabolic panel which included plasma electrolyte measurements. The skewness of each distribution (as a measure of its asymmetry) was compared to the zero skewness of a normal (Gaussian) distribution. The plasma Na+ distribution was skewed slightly to the right, but the skew was not significantly different from zero skew. The plasma Cl- distribution was skewed slightly to the left, but again the skew was not significantly different from zero skew. In contrast, both the plasma K+ and [Formula: see text] distributions were significantly skewed to the right (P < 0.01 vs. zero skew). There was also a suggestion from examining frequency distribution curves that the K+ and [Formula: see text] distributions were bimodal. In adults with a normal basic metabolic panel, plasma potassium and bicarbonate levels are not normally distributed and may be bimodal. Thus, statistical methods used to evaluate these 2 plasma electrolytes should be nonparametric tests and not parametric ones that require a normal distribution. Copyright © 2017 Southern Society for Clinical Investigation. Published by Elsevier Inc. All rights reserved.
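The comparison of a sample's skewness against the zero skew of a Gaussian is a standard test (D'Agostino's skewness test). A sketch with synthetic "sodium-like" and "potassium-like" samples, where the distributions and parameters are illustrative, not the study's data:

```python
import numpy as np
from scipy.stats import skew, skewtest

rng = np.random.default_rng(1)

# Symmetric "sodium-like" values (mmol/L): sample skew should be near zero.
na = rng.normal(140, 2, size=237)
# Right-skewed "potassium-like" values: gamma noise on a 3.5 mmol/L floor.
k = 3.5 + rng.gamma(shape=2.0, scale=0.25, size=237)

for name, x in (("Na+", na), ("K+", k)):
    stat, p = skewtest(x)   # D'Agostino test of H0: skewness = 0
    print(name, round(skew(x), 2), round(p, 4))
```

With n = 237, the right-skewed sample rejects zero skew decisively, while the symmetric one does not, mirroring the K+ vs. Na+ contrast reported above.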
Barth, Nancy A.; Veilleux, Andrea G.
2012-01-01
The U.S. Geological Survey (USGS) is currently updating at-site flood frequency estimates for USGS streamflow-gaging stations in the desert region of California. The at-site flood-frequency analysis is complicated by short record lengths (less than 20 years is common) and numerous zero flows/low outliers at many sites. Estimates of the three parameters (mean, standard deviation, and skew) required for fitting the log Pearson Type 3 (LP3) distribution are likely to be highly unreliable based on the limited and heavily censored at-site data. In a generalization of the recommendations in Bulletin 17B, a regional analysis was used to develop regional estimates of all three parameters (mean, standard deviation, and skew) of the LP3 distribution. A regional skew value of zero from a previously published report was used with a new estimated mean squared error (MSE) of 0.20. A weighted least squares (WLS) regression method was used to develop both a regional standard deviation and a mean model based on annual peak-discharge data for 33 USGS stations throughout California’s desert region. At-site standard deviation and mean values were determined by using an expected moments algorithm (EMA) method for fitting the LP3 distribution to the logarithms of annual peak-discharge data. Additionally, a multiple Grubbs-Beck (MGB) test, a generalization of the test recommended in Bulletin 17B, was used for detecting multiple potentially influential low outliers in a flood series. The WLS regression found that no basin characteristics could explain the variability of standard deviation. Consequently, a constant regional standard deviation model was selected, resulting in a log-space value of 0.91 with a MSE of 0.03 log units. Yet drainage area was found to be statistically significant at explaining the site-to-site variability in mean. The linear WLS regional mean model based on drainage area had a pseudo-R2 of 51 percent and a MSE of 0.32 log units.
The regional parameter estimates were then used to develop a set of equations for estimating flows with 50-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent annual exceedance probabilities for ungaged basins. The final equations are functions of drainage area. Average standard errors of prediction for these regression equations range from 214.2 to 856.2 percent.
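The LP3 fit described above reduces, in its simplest moment-based form, to computing the mean, standard deviation, and skew of the log peaks and reading quantiles off a Pearson Type III distribution. A sketch (not the report's EMA/MGB procedure, and with a hypothetical peak-flow record) using the regional skew of zero:

```python
import numpy as np
from scipy.stats import pearson3

# Hypothetical annual peak flows (cfs) for one gaged desert site.
peaks = np.array([12., 30., 8., 150., 45., 22., 95., 18., 60., 7.,
                  200., 33., 15., 80., 26., 11., 55., 40., 9., 120.])
logq = np.log10(peaks)

# Method-of-moments LP3 fit on the logarithms; the at-site skew is
# replaced by the regional skew value of zero, as in the report.
m, s, g = logq.mean(), logq.std(ddof=1), 0.0

# 1-percent annual exceedance probability (100-year) flow:
q100 = 10 ** pearson3.ppf(0.99, skew=g, loc=m, scale=s)
```

With zero skew the Pearson III quantile collapses to the normal quantile (about 2.326 standard deviations above the mean of the logs), which is a useful sanity check on the fit.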
Optimized multiple quantum MAS lineshape simulations in solid state NMR
NASA Astrophysics Data System (ADS)
Brouwer, William J.; Davis, Michael C.; Mueller, Karl T.
2009-10-01
The majority of nuclei available for study in solid state Nuclear Magnetic Resonance have half-integer spin I>1/2, with corresponding electric quadrupole moment. As such, they may couple with a surrounding electric field gradient. This effect introduces anisotropic line broadening to spectra, arising from distinct chemical species within polycrystalline solids. In Multiple Quantum Magic Angle Spinning (MQMAS) experiments, a second frequency dimension is created, devoid of quadrupolar anisotropy. As a result, the center of gravity of peaks in the high resolution dimension is a function of isotropic second order quadrupole and chemical shift alone. However, for complex materials, these parameters take on a stochastic nature due in turn to structural and chemical disorder. Lineshapes may still overlap in the isotropic dimension, complicating the task of assignment and interpretation. A distributed computational approach is presented here which permits simulation of the two-dimensional MQMAS spectrum, generated by random variates from model distributions of isotropic chemical and quadrupole shifts. Owing to the non-convex nature of the residual sum of squares (RSS) function between experimental and simulated spectra, simulated annealing is used to optimize the simulation parameters. In this manner, local chemical environments for disordered materials may be characterized, and via a re-sampling approach, error estimates for parameters produced. Program summary Program title: mqmasOPT Catalogue identifier: AEEC_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEEC_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 3650 No. of bytes in distributed program, including test data, etc.: 73 853 Distribution format: tar.gz Programming language: C, OCTAVE Computer: UNIX/Linux Operating system: UNIX/Linux Has the code been vectorised or parallelized?: Yes RAM: Example: (1597 powder angles) × (200 Samples) × (81 F2 frequency pts) × (31 F1 frequency points) = 3.5M, SMP AMD opteron Classification: 2.3 External routines: OCTAVE (http://www.gnu.org/software/octave/), GNU Scientific Library (http://www.gnu.org/software/gsl/), OPENMP (http://openmp.org/wp/) Nature of problem: The optimal simulation and modeling of multiple quantum magic angle spinning NMR spectra, for general systems, especially those with mild to significant disorder. The approach outlined and implemented in C and OCTAVE also produces model parameter error estimates. Solution method: A model for each distinct chemical site is first proposed, for the individual contribution of crystallite orientations to the spectrum. This model is averaged over all powder angles [1], as well as the (stochastic) parameters: isotropic chemical shift and quadrupole coupling constant. The latter is accomplished via sampling from a bi-variate Gaussian distribution, using the Box-Muller algorithm to transform Sobol (quasi) random numbers [2]. A simulated annealing optimization is performed, and finally the non-linear jackknife [3] is applied in developing model parameter error estimates. Additional comments: The distribution contains a script, mqmasOpt.m, which runs in the OCTAVE language workspace. Running time: Example: (1597 powder angles) × (200 Samples) × (81 F2 frequency pts) × (31 F1 frequency points) = 58.35 seconds, SMP AMD opteron. References: S.K. Zaremba, Annali di Matematica Pura ed Applicata 73 (1966) 293. H. Niederreiter, Random Number Generation and Quasi-Monte Carlo Methods, SIAM, 1992. T. Fox, D. Hinkley, K. Larntz, Technometrics 22 (1980) 29.
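The sampling step named in the solution method (Box-Muller applied to Sobol quasi-random numbers to draw Gaussian parameter pairs) can be sketched as follows; the shift means and widths are illustrative placeholders, not values from the program.

```python
import numpy as np
from scipy.stats import qmc

# Box-Muller transform applied to Sobol quasi-random points, as a sketch
# of drawing (isotropic shift, quadrupole coupling) pairs from a
# bivariate Gaussian.
sob = qmc.Sobol(d=2, scramble=True, seed=7)
u = sob.random(2 ** 10)                     # 1024 points in [0, 1)^2
u = np.clip(u, 1e-12, 1.0)                  # guard against log(0)
z0 = np.sqrt(-2 * np.log(u[:, 0])) * np.cos(2 * np.pi * u[:, 1])
z1 = np.sqrt(-2 * np.log(u[:, 0])) * np.sin(2 * np.pi * u[:, 1])

d_iso = -85.0 + 3.0 * z0                    # isotropic shift (ppm), illustrative
cq = 3.2e6 + 0.4e6 * z1                     # quadrupole coupling (Hz), illustrative
```

Quasi-random (rather than pseudo-random) points fill the unit square more evenly, which is why the program uses them for the parameter average.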
Moran, John L; Solomon, Patricia J
2012-05-16
For the analysis of length-of-stay (LOS) data, which is characteristically right-skewed, a number of statistical estimators have been proposed as alternatives to the traditional ordinary least squares (OLS) regression with log dependent variable. Using a cohort of patients identified in the Australian and New Zealand Intensive Care Society Adult Patient Database, 2008-2009, 12 different methods were used for estimation of intensive care (ICU) length of stay. These encompassed risk-adjusted regression analysis of firstly: log LOS using OLS, linear mixed model [LMM], treatment effects, skew-normal and skew-t models; and secondly: unmodified (raw) LOS via OLS, generalised linear models [GLMs] with log-link and 4 different distributions [Poisson, gamma, negative binomial and inverse-Gaussian], extended estimating equations [EEE] and a finite mixture model including a gamma distribution. A fixed covariate list and ICU-site clustering with robust variance were utilised for model fitting with split-sample determination (80%) and validation (20%) data sets, and model simulation was undertaken to establish over-fitting (Copas test). Indices of model specification using Bayesian information criterion [BIC: lower values preferred] and residual analysis as well as predictive performance (R2, concordance correlation coefficient (CCC), mean absolute error [MAE]) were established for each estimator. The data-set consisted of 111663 patients from 131 ICUs; with mean(SD) age 60.6(18.8) years, 43.0% were female, 40.7% were mechanically ventilated and ICU mortality was 7.8%. ICU length-of-stay was 3.4(5.1) (median 1.8, range (0.17-60)) days and demonstrated marked kurtosis and right skew (29.4 and 4.4 respectively). BIC showed considerable spread, from a maximum of 509801 (OLS-raw scale) to a minimum of 210286 (LMM). R2 ranged from 0.22 (LMM) to 0.17 and the CCC from 0.334 (LMM) to 0.149, with MAE 2.2-2.4. Superior residual behaviour was established for the log-scale estimators. 
There was a general tendency for over-prediction (negative residuals) and for over-fitting, the exception being the GLM negative binomial estimator. The mean-variance function was best approximated by a quadratic function, consistent with log-scale estimation; the link function was estimated (EEE) as 0.152(0.019, 0.285), consistent with a fractional-root function. For ICU length of stay, log-scale estimation, in particular the LMM, appeared to be the most consistently performing estimator(s). Neither the GLM variants nor the skew-regression estimators dominated.
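A core difficulty with the log-scale estimators compared above is retransformation: exponentiating a log-scale mean yields the geometric mean, which understates the arithmetic mean of right-skewed LOS. A minimal sketch (an intercept-only model on simulated lognormal stays, not the paper's risk-adjusted LMM) showing the bias and Duan's smearing correction:

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated right-skewed lengths of stay (days).
los = rng.lognormal(mean=0.6, sigma=1.0, size=100_000)

# Naive retransformation of a log-scale fit gives the geometric mean,
# which underestimates the arithmetic mean for skewed data.
log_fit = np.log(los).mean()
naive = np.exp(log_fit)

# Duan's smearing estimator rescales by the mean exponentiated residual;
# for this intercept-only model it recovers the arithmetic mean.
smear = naive * np.mean(np.exp(np.log(los) - log_fit))
```

The same issue is why GLMs with a log link (which model the mean directly) are natural competitors to OLS on log LOS.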
Selection on skewed characters and the paradox of stasis.
Bonamour, Suzanne; Teplitsky, Céline; Charmantier, Anne; Crochet, Pierre-André; Chevin, Luis-Miguel
2017-11-01
Observed phenotypic responses to selection in the wild often differ from predictions based on measurements of selection and genetic variance. An overlooked hypothesis to explain this paradox of stasis is that a skewed phenotypic distribution affects natural selection and evolution. We show through mathematical modeling that, when a trait selected for an optimum phenotype has a skewed distribution, directional selection is detected even at evolutionary equilibrium, where it causes no change in the mean phenotype. When environmental effects are skewed, Lande and Arnold's (1983) directional gradient is in the direction opposite to the skew. In contrast, skewed breeding values can displace the mean phenotype from the optimum, causing directional selection in the direction of the skew. These effects can be partitioned out using alternative selection estimates based on average derivatives of individual relative fitness, or additive genetic covariances between relative fitness and trait (Robertson-Price identity). We assess the validity of these predictions using simulations of selection estimation under moderate sample sizes. Ecologically relevant traits may commonly have skewed distributions, as we here exemplify with avian laying date - repeatedly described as more evolutionarily stable than expected - so this skewness should be accounted for when investigating evolutionary dynamics in the wild. © 2017 The Author(s). Evolution © 2017 The Society for the Study of Evolution.
Reward skewness coding in the insula independent of probability and loss
Tobler, Philippe N.
2011-01-01
Rewards in the natural environment are rarely predicted with complete certainty. Uncertainty relating to future rewards has typically been defined as the variance of the potential outcomes. However, the asymmetry of predicted reward distributions, known as skewness, constitutes a distinct but neuroscientifically underexplored risk term that may also have an impact on preference. By changing only reward magnitudes, we study skewness processing in equiprobable ternary lotteries involving only gains and constant probabilities, thus excluding probability distortion or loss aversion as mechanisms for skewness preference formation. We show that individual preferences are sensitive to not only the mean and variance but also to the skewness of predicted reward distributions. Using neuroimaging, we show that the insula, a structure previously implicated in the processing of reward-related uncertainty, responds to the skewness of predicted reward distributions. Some insula responses increased in a monotonic fashion with skewness (irrespective of individual skewness preferences), whereas others were similarly elevated to both negative and positive as opposed to no reward skew. These data support the notion that the asymmetry of reward distributions is processed in the brain and, taken together with replicated findings of mean coding in the striatum and variance coding in the cingulate, suggest that the brain codes distinct aspects of reward distributions in a distributed fashion. PMID:21849610
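The risk terms above have simple definitions for an equiprobable lottery: variance is the second central moment and skewness the standardized third. A sketch with hypothetical reward magnitudes (not the study's lotteries), showing that changing only magnitudes alters skewness:

```python
import numpy as np

def lottery_moments(outcomes):
    """Mean, variance and standardized skewness of an equiprobable lottery."""
    x = np.asarray(outcomes, dtype=float)
    mu = x.mean()
    var = ((x - mu) ** 2).mean()
    skew = ((x - mu) ** 3).mean() / var ** 1.5
    return mu, var, skew

# A symmetric ternary gamble and a positively skewed one (illustrative).
print(lottery_moments([100, 150, 200]))   # skewness 0
print(lottery_moments([100, 100, 250]))   # positive skewness
```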
No evidence that skewing of X chromosome inactivation patterns is transmitted to offspring in humans
Bolduc, Véronique; Chagnon, Pierre; Provost, Sylvie; Dubé, Marie-Pierre; Belisle, Claude; Gingras, Marianne; Mollica, Luigina; Busque, Lambert
2007-01-01
Skewing of X chromosome inactivation (XCI) can occur in normal females and increases in tissues with age. The mechanisms underlying skewing in normal females, however, remain controversial. To better understand the phenomenon of XCI in nondisease states, we evaluated XCI patterns in epithelial and hematopoietic cells of over 500 healthy female mother-neonate pairs. The incidence of skewing observed in mothers was twice that observed in neonates, and in both cohorts, the incidence of XCI was lower in epithelial cells than hematopoietic cells. These results suggest that XCI incidence varies by tissue type and that age-dependent mechanisms can influence skewing in both epithelial and hematopoietic cells. In both cohorts, a correlation was identified in the direction of skewing in epithelial and hematopoietic cells, suggesting common underlying skewing mechanisms across tissues. However, there was no correlation between the XCI patterns of mothers and their respective neonates, and skewed mothers gave birth to skewed neonates at the same frequency as nonskewed mothers. Taken together, our data suggest that in humans, the XCI pattern observed at birth does not reflect a single heritable genetic locus, but rather corresponds to a complex trait determined, at least in part, by selection biases occurring after XCI. PMID:18097474
NASA Technical Reports Server (NTRS)
Shapiro, I. I.; Counselman, C. C., III
1975-01-01
The uses of radar observations of planets and very-long-baseline radio interferometric observations of extragalactic objects to test theories of gravitation are described in detail with special emphasis on sources of error. The accuracy achievable in these tests with data already obtained can be summarized in terms of: retardation of signal propagation (radar), deflection of radio waves (interferometry), advance of planetary perihelia (radar), gravitational quadrupole moment of the sun (radar), and time variation of the gravitational constant (radar). The analyses completed to date have yielded no significant disagreement with the predictions of general relativity.
Linear optics measurements and corrections using an AC dipole in RHIC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, G.; Bai, M.; Yang, L.
2010-05-23
We report recent experimental results on linear optics measurements and corrections using an ac dipole. In the RHIC 2009 run, the concept of the SVD correction algorithm was tested at injection energy, both for identifying artificial gradient errors and for correcting them using the trim quadrupoles. The measured phase beatings were reduced by 30% and 40%, respectively, in two dedicated experiments. In the RHIC 2010 run, the ac dipole was used to measure {beta}* and the chromatic {beta} function. For the 0.65 m {beta}* lattice, we observed a factor of 3 discrepancy between the model and the measured chromatic {beta} function in the yellow ring.
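The SVD correction idea can be sketched in a few lines: given a (model-derived) response matrix from trim-quadrupole strengths to phase beating at the BPMs, the pseudo-inverse yields the corrector settings that best cancel the measured beating. The matrix dimensions and error magnitudes below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical response matrix R: phase beating at 40 BPMs per unit
# strength change of 8 trim quadrupoles (model-derived in practice).
R = rng.normal(size=(40, 8))
true_dk = rng.normal(scale=0.01, size=8)      # artificial gradient errors
beating = R @ true_dk                         # "measured" phase beating

# SVD pseudo-inverse gives the trim settings that best cancel it.
U, s, Vt = np.linalg.svd(R, full_matrices=False)
s_inv = np.where(s > 1e-10 * s.max(), 1 / s, 0.0)  # drop tiny singular values
dk_fit = -(Vt.T * s_inv) @ (U.T @ beating)         # corrector strengths

residual = beating + R @ dk_fit
```

In this noise-free sketch the fitted corrections exactly cancel the injected errors; with real BPM noise the singular-value cutoff controls how aggressively the correction follows the data.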
Using Perturbative Least Action to Reconstruct Redshift-Space Distortions
NASA Astrophysics Data System (ADS)
Goldberg, David M.
2001-05-01
In this paper, we present a redshift-space reconstruction scheme that is analogous to and extends the perturbative least action (PLA) method described by Goldberg & Spergel. We first show that this scheme is effective in reconstructing even nonlinear observations. We then suggest that by varying the cosmology to minimize the quadrupole moment of a reconstructed density field, it may be possible to lower the error bars on the redshift distortion parameter, β, as well as to break the degeneracy between the linear bias parameter, b, and ΩM. Finally, we discuss how PLA might be applied to realistic redshift surveys.
Error baseline rates of five sample preparation methods used to characterize RNA virus populations.
Kugelman, Jeffrey R; Wiley, Michael R; Nagle, Elyse R; Reyes, Daniel; Pfeffer, Brad P; Kuhn, Jens H; Sanchez-Lockhart, Mariano; Palacios, Gustavo F
2017-01-01
Individual RNA viruses typically occur as populations of genomes that differ slightly from each other due to mutations introduced by the error-prone viral polymerase. Understanding the variability of RNA virus genome populations is critical for understanding virus evolution because individual mutant genomes may gain evolutionary selective advantages and give rise to dominant subpopulations, possibly even leading to the emergence of viruses resistant to medical countermeasures. Reverse transcription of virus genome populations followed by next-generation sequencing is the only available method to characterize variation for RNA viruses. However, both steps may lead to the introduction of artificial mutations, thereby skewing the data. To better understand how such errors are introduced during sample preparation, we determined and compared error baseline rates of five different sample preparation methods by analyzing in vitro transcribed Ebola virus RNA from an artificial plasmid-based system. These methods included: shotgun sequencing from plasmid DNA or in vitro transcribed RNA as a basic "no amplification" method, amplicon sequencing from the plasmid DNA or in vitro transcribed RNA as a "targeted" amplification method, sequence-independent single-primer amplification (SISPA) as a "random" amplification method, rolling circle reverse transcription sequencing (CirSeq) as an advanced "no amplification" method, and Illumina TruSeq RNA Access as a "targeted" enrichment method. The measured error frequencies indicate that RNA Access offers the best tradeoff between sensitivity and sample preparation error (1.4 × 10^-5) of all compared methods.
Error baseline rates of five sample preparation methods used to characterize RNA virus populations
Kugelman, Jeffrey R.; Wiley, Michael R.; Nagle, Elyse R.; Reyes, Daniel; Pfeffer, Brad P.; Kuhn, Jens H.; Sanchez-Lockhart, Mariano; Palacios, Gustavo F.
2017-01-01
Individual RNA viruses typically occur as populations of genomes that differ slightly from each other due to mutations introduced by the error-prone viral polymerase. Understanding the variability of RNA virus genome populations is critical for understanding virus evolution because individual mutant genomes may gain evolutionary selective advantages and give rise to dominant subpopulations, possibly even leading to the emergence of viruses resistant to medical countermeasures. Reverse transcription of virus genome populations followed by next-generation sequencing is the only available method to characterize variation for RNA viruses. However, both steps may lead to the introduction of artificial mutations, thereby skewing the data. To better understand how such errors are introduced during sample preparation, we determined and compared error baseline rates of five different sample preparation methods by analyzing in vitro transcribed Ebola virus RNA from an artificial plasmid-based system. These methods included: shotgun sequencing from plasmid DNA or in vitro transcribed RNA as a basic “no amplification” method, amplicon sequencing from the plasmid DNA or in vitro transcribed RNA as a “targeted” amplification method, sequence-independent single-primer amplification (SISPA) as a “random” amplification method, rolling circle reverse transcription sequencing (CirSeq) as an advanced “no amplification” method, and Illumina TruSeq RNA Access as a “targeted” enrichment method. The measured error frequencies indicate that RNA Access offers the best tradeoff between sensitivity and sample preparation error (1.4 × 10^-5) of all compared methods. PMID:28182717
Lattice Commissioning Strategy Simulation for the B Factory
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lee, M.; Whittum, D.; Yan, Y.
2011-08-26
To prepare for the PEP-II turn on, we have studied one commissioning strategy with simulated lattice errors. Features such as difference and absolute orbit analysis and correction are discussed. To prepare for the commissioning of the PEP-II injection line and high energy ring (HER), we have developed a system for on-line orbit analysis by merging two existing codes: LEGO and RESOLVE. With the LEGO-RESOLVE system, we can study the problem of finding quadrupole alignment and beam position monitor (BPM) offset errors with simulated data. We have increased the speed and versatility of the orbit analysis process by using a command file written in a script language designed specifically for RESOLVE. In addition, we have interfaced the LEGO-RESOLVE system to the control system of the B Factory. In this paper, we describe online analysis features of the LEGO-RESOLVE system and present examples of practical applications.
Model studies of the beam-filling error for rain-rate retrieval with microwave radiometers
NASA Technical Reports Server (NTRS)
Ha, Eunho; North, Gerald R.
1995-01-01
Low-frequency (less than 20 GHz) single-channel microwave retrievals of rain rate encounter the problem of beam-filling error. This error stems from the fact that the relationship between microwave brightness temperature and rain rate is nonlinear, coupled with the fact that the field of view is large or comparable to important scales of variability of the rain field. This means that one may not simply insert the area average of the brightness temperature into the formula for rain rate without incurring both bias and random error. The statistical heterogeneity of the rain-rate field in the footprint of the instrument is key to determining the nature of these errors. This paper makes use of a series of random rain-rate fields to study the size of the bias and random error associated with beam filling. A number of examples are analyzed in detail: the binomially distributed field, the gamma, the Gaussian, the mixed gamma, the lognormal, and the mixed lognormal ('mixed' here means there is a finite probability of no rain rate at a point of space-time). Of particular interest are the applicability of a simple error formula due to Chiu and collaborators and a formula that might hold in the large field of view limit. It is found that the simple formula holds for Gaussian rain-rate fields but begins to fail for highly skewed fields such as the mixed lognormal. While not conclusively demonstrated here, it is suggested that the notion of climatologically adjusting the retrievals to remove the beam-filling bias is a reasonable proposition.
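The beam-filling bias is a Jensen's-inequality effect, and a Monte Carlo sketch makes it concrete: for a nonlinear brightness-temperature/rain-rate relation, the Tb of the footprint-average rain rate differs from the footprint-average Tb. The mixed-lognormal field below matches the paper's most skewed case; the Tb relation is a toy concave curve, not the paper's radiative-transfer model.

```python
import numpy as np

rng = np.random.default_rng(8)

# Rain rates within one footprint: mixed lognormal (finite probability
# of no rain at a point), the most skewed case considered.
n = 200_000
raining = rng.random(n) < 0.4
rain = np.where(raining, rng.lognormal(mean=1.0, sigma=1.0, size=n), 0.0)

def tb(r):
    """Toy concave brightness-temperature relation (K), illustrative only."""
    return 150.0 + 100.0 * (1.0 - np.exp(-0.2 * r))

# Beam-filling error: Tb of the mean rain rate vs. mean of the Tb field.
tb_of_mean = tb(rain.mean())
mean_of_tb = tb(rain).mean()
bias = tb_of_mean - mean_of_tb   # positive for a concave relation
```

Inverting `tb_of_mean` through the single-pixel retrieval formula would therefore overestimate rain rate, which is the bias the paper proposes adjusting away climatologically.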
Utility functions predict variance and skewness risk preferences in monkeys
Genest, Wilfried; Stauffer, William R.; Schultz, Wolfram
2016-01-01
Utility is the fundamental variable thought to underlie economic choices. In particular, utility functions are believed to reflect preferences toward risk, a key decision variable in many real-life situations. To assess the validity of utility representations, it is therefore important to examine risk preferences. In turn, this approach requires formal definitions of risk. A standard approach is to focus on the variance of reward distributions (variance-risk). In this study, we also examined a form of risk related to the skewness of reward distributions (skewness-risk). Thus, we tested the extent to which empirically derived utility functions predicted preferences for variance-risk and skewness-risk in macaques. The expected utilities calculated for various symmetrical and skewed gambles served to define formally the direction of stochastic dominance between gambles. In direct choices, the animals’ preferences followed both second-order (variance) and third-order (skewness) stochastic dominance. Specifically, for gambles with different variance but identical expected values (EVs), the monkeys preferred high-variance gambles at low EVs and low-variance gambles at high EVs; in gambles with different skewness but identical EVs and variances, the animals preferred positively over symmetrical and negatively skewed gambles in a strongly transitive fashion. Thus, the utility functions predicted the animals’ preferences for variance-risk and skewness-risk. Using these well-defined forms of risk, this study shows that monkeys’ choices conform to the internal reward valuations suggested by their utility functions. This result implies a representation of utility in monkeys that accounts for both variance-risk and skewness-risk preferences. PMID:27402743
ERIC Educational Resources Information Center
Tabor, Josh
2010-01-01
On the 2009 AP Statistics Exam, students were asked to create a statistic to measure skewness in a distribution. This paper explores several of the most popular student responses and evaluates which statistic performs best when sampling from various skewed populations. (Contains 8 figures, 3 tables, and 4 footnotes.)
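Two statistics of the kind students might propose are sketched below, one moment-based and one quartile-based; these are standard textbook definitions offered for illustration, not the specific exam responses analyzed in the paper:

```python
import numpy as np

def moment_skew(x):
    # Classical g1: standardized third central moment
    x = np.asarray(x, dtype=float)
    return np.mean((x - x.mean()) ** 3) / x.std() ** 3

def quartile_skew(x):
    # Bowley's quartile skewness: robust to outliers, bounded in [-1, 1]
    q1, q2, q3 = np.percentile(x, [25, 50, 75])
    return ((q3 - q2) - (q2 - q1)) / (q3 - q1)

rng = np.random.default_rng(1)
right_skewed = rng.lognormal(0.0, 1.0, 100_000)  # strongly right-skewed
symmetric = rng.normal(0.0, 1.0, 100_000)        # skewness ~ 0

print(moment_skew(right_skewed), quartile_skew(right_skewed))
print(moment_skew(symmetric), quartile_skew(symmetric))
```

Both statistics are clearly positive for the lognormal sample and near zero for the symmetric one, but they trade efficiency against outlier robustness differently, which is exactly the comparison the paper carries out.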
Field, J; Solís, C R; Queller, D C; Strassmann, J E
1998-06-01
Recent models postulate that the members of a social group assess their ecological and social environments and agree on a "social contract" of reproductive partitioning (skew). We tested social contracts theory by using DNA microsatellites to measure skew in 24 cofoundress associations of paper wasps, Polistes bellicosus. In contrast to theoretical predictions, there was little variation in cofoundress relatedness, and relatedness either did not predict skew or was negatively correlated with it; the dominant/subordinate size ratio, assumed to reflect relative fighting ability, did not predict skew; and high skew was associated with decreased aggression by the rank 2 subordinate toward the dominant. High skew was associated with increased group size. A difficulty with measuring skew in real systems is the frequent changes in group composition that occur in social animals. In P. bellicosus, 61% of egg layers and an unknown number of non-egg layers were absent by the time nests were collected. The social contracts models provide an attractive general framework linking genetics, ecology, and behavior, but there have been few direct tests of their predictions. We question assumptions underlying the models and suggest directions for future research.
Metric adjusted skew information
Hansen, Frank
2008-01-01
We extend the concept of Wigner–Yanase–Dyson skew information to something we call "metric adjusted skew information" (of a state with respect to a conserved observable). This "skew information" is intended to be a non-negative quantity bounded by the variance (of an observable in a state) that vanishes for observables commuting with the state. We show that the skew information is a convex function on the manifold of states. It also satisfies other requirements, proposed by Wigner and Yanase, for an effective measure of the information content of a state relative to a conserved observable. We establish a connection between the geometrical formulation of quantum statistics as proposed by Chentsov and Morozova and measures of quantum information as introduced by Wigner and Yanase and extended in this article. We show that the set of normalized Morozova–Chentsov functions describing the possible quantum statistics is a Bauer simplex and determine its extreme points. We determine a particularly simple skew information, the "λ-skew information," parametrized by λ ∈ (0, 1], and show that the convex cone this family generates coincides with the set of all metric adjusted skew informations. PMID:18635683
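The defining properties claimed above (bounded by the variance; vanishing for commuting observables) can be checked numerically for the original Wigner-Yanase case, I(ρ, A) = -½ Tr([√ρ, A]²), which is the λ = ½ member of the family; the qubit state below is an arbitrary example:

```python
import numpy as np
from scipy.linalg import sqrtm

def wy_skew_information(rho, A):
    """Wigner-Yanase skew information I(rho, A) = -1/2 Tr([sqrt(rho), A]^2)."""
    s = sqrtm(rho).real
    c = s @ A - A @ s          # the commutator [sqrt(rho), A]
    return float(-0.5 * np.trace(c @ c).real)

rho = np.diag([0.9, 0.1])                 # an example qubit state
sx = np.array([[0.0, 1.0], [1.0, 0.0]])   # does not commute with rho
sz = np.diag([1.0, -1.0])                 # commutes with rho

variance = float(np.trace(rho @ sx @ sx).real - np.trace(rho @ sx).real ** 2)
print(wy_skew_information(rho, sx), variance)  # 1 - 2*sqrt(0.9*0.1) = 0.4 vs 1.0
print(wy_skew_information(rho, sz))            # vanishes for commuting observables
```

For this diagonal state the closed form is I(ρ, σx) = 1 - 2√(p(1-p)), which equals 0.4 here and is indeed sandwiched between 0 and the variance.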
NASA Astrophysics Data System (ADS)
Bayat, M.; Daneshjoo, F.; Nisticò, N.
2017-01-01
In this study, the probable seismic behavior of skewed bridges with continuous decks under earthquake excitations from different directions is investigated. A 45° skewed bridge is studied. A suite of 20 records is used to perform an Incremental Dynamic Analysis (IDA) for fragility curves. Four different earthquake directions have been considered: -45°, 0°, 22.5°, and 45°. A sensitivity analysis on different spectral intensity measures is presented; the efficiency and practicality of different intensity measures have been studied. The fragility curves obtained indicate that the critical directions for skewed bridges are the skew direction and the longitudinal direction. The study shows the importance of finding the most critical earthquake direction in understanding and predicting the behavior of skewed bridges.
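Fragility curves of the kind derived from IDA results are conventionally parametrized as a lognormal CDF in the intensity measure; the median capacity and dispersion below are illustrative placeholders, not values fitted in the study:

```python
import numpy as np
from scipy.stats import norm

def fragility(im, theta, beta):
    """Lognormal fragility: P(damage state exceeded | intensity measure im).
    theta is the median capacity, beta the lognormal dispersion
    (both illustrative values, not from the bridge study)."""
    return norm.cdf(np.log(im / theta) / beta)

im = np.linspace(0.05, 2.0, 40)      # e.g. spectral acceleration in g
p = fragility(im, theta=0.6, beta=0.5)

print(fragility(0.6, 0.6, 0.5))      # exceedance probability at the median
```

Comparing such curves fitted per excitation direction is what identifies the skew and longitudinal directions as critical: their curves shift toward lower median capacity.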
NASA Technical Reports Server (NTRS)
Bryant, W. H.; Morrell, F. R.
1981-01-01
An experimental redundant strapdown inertial measurement unit (RSDIMU) is developed as a link to satisfy safety and reliability considerations in the integrated avionics concept. The unit includes four two degree-of-freedom tuned rotor gyros, and four accelerometers in a skewed and separable semioctahedral array. These sensors are coupled to four microprocessors which compensate sensor errors. These microprocessors are interfaced with two flight computers which process failure detection, isolation, redundancy management, and general flight control/navigation algorithms. Since the RSDIMU is a developmental unit, it is imperative that the flight computers provide special visibility and facility in algorithm modification.
System and method for adaptively deskewing parallel data signals relative to a clock
Jenkins, Philip Nord; Cornett, Frank N.
2006-04-18
A system and method of reducing skew between a plurality of signals transmitted with a transmit clock is described. Skew is detected between the received transmit clock and each of received data signals. Delay is added to the clock or to one or more of the plurality of data signals to compensate for the detected skew. Each of the plurality of delayed signals is compared to a reference signal to detect changes in the skew. The delay added to each of the plurality of delayed signals is updated to adapt to changes in the detected skew.
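The detect-compensate-update loop described in the claim can be sketched as a simple feedback iteration; the lane count, skew values, and update gain are illustrative assumptions, not parameters from the patent:

```python
# Per-lane compensating delays are nudged toward the detected skew each
# iteration until the residual skew vanishes (units: delay-line taps).
true_skew = [3.2, -1.5, 0.7]   # fixed skew of each data lane vs the clock
delays = [0.0, 0.0, 0.0]       # adaptive compensating delays
gain = 0.5                     # fraction of the measured error applied per step

for _ in range(30):
    # Detected skew after compensation (comparison against the reference)
    residual = [s - d for s, d in zip(true_skew, delays)]
    # Update each lane's delay to adapt to the detected skew
    delays = [d + gain * r for d, r in zip(delays, residual)]

residual = [s - d for s, d in zip(true_skew, delays)]
print(residual)   # all lanes converge toward zero residual skew
```

Because each step removes a fixed fraction of the remaining error, the loop also tracks slow drift in the skew, which is the "adaptively" in the title.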
NASA Astrophysics Data System (ADS)
Lackenby, B. G. C.; Flambaum, V. V.
2018-07-01
We introduce the weak quadrupole moment (WQM) of nuclei, related to the quadrupole distribution of the weak charge in the nucleus. The WQM produces a tensor weak interaction between the nucleus and electrons and can be observed in atomic and molecular experiments measuring parity nonconservation. The dominating contribution to the weak quadrupole is given by the quadrupole moment of the neutron distribution; therefore, corresponding experiments should allow one to measure the neutron quadrupoles. Using the deformed oscillator model and the Schmidt model we calculate the quadrupole distributions of neutrons, Q_n, the WQMs, Q_W^(2), and the Lorentz invariance violating energy shifts in 9Be, 21Ne, 27Al, 131Xe, 133Cs, 151Eu, 153Eu, 163Dy, 167Er, 173Yb, 177Hf, 179Hf, 181Ta, 201Hg and 229Th.
Kennedy, Joseph H; Wiseman, Justin M
2010-02-01
The present work describes the methodology and investigates the performance of desorption electrospray ionization (DESI) combined with a triple quadrupole mass spectrometer for the quantitation of small drug molecules in human plasma. Amoxepine, atenolol, carbamazepine, clozapine, prazosin, propranolol and verapamil were selected as target analytes while terfenadine was selected as the internal standard common to each of the analytes. Protein precipitation of human plasma using acetonitrile was utilized for all samples. Limits of detection were determined for all analytes in plasma and shown to be in the range 0.2-40 ng/mL. Quantitative analysis of amoxepine, prazosin and verapamil was performed over the range 20-7400 ng/mL and shown to be linear in all cases with R^2 > 0.99. In most cases, the precision (relative standard deviation) and accuracy (relative error) of each method were less than or equal to 20%. The performance of the combined techniques made it possible to analyze each sample in 15 s, illustrating DESI tandem mass spectrometry (MS/MS) as a powerful tool for the quantitation of analytes in deproteinized human plasma. Copyright 2010 John Wiley & Sons, Ltd.
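The linearity and detection-limit figures quoted above come from a standard calibration-curve workflow, which can be sketched as follows; the concentrations, responses, blank noise, and the 3.3·σ/slope convention (ICH-style) are illustrative assumptions, not the paper's raw data:

```python
import numpy as np

# Hypothetical calibration points spanning the quoted 20-7400 ng/mL range
conc = np.array([20.0, 100.0, 500.0, 1000.0, 3000.0, 7400.0])      # ng/mL
resp = 0.05 * conc + np.array([0.3, -0.4, 0.2, -0.5, 0.4, -0.2])    # response

slope, intercept = np.polyfit(conc, resp, 1)   # least-squares calibration line
pred = slope * conc + intercept
r2 = 1.0 - np.sum((resp - pred) ** 2) / np.sum((resp - resp.mean()) ** 2)

sigma_blank = 0.01                 # assumed blank noise (response units)
lod = 3.3 * sigma_blank / slope    # ICH-style limit of detection

print(f"slope={slope:.4f}  R^2={r2:.5f}  LOD={lod:.2f} ng/mL")
```

Sub-ng/mL detection limits like the paper's 0.2-40 ng/mL range follow directly from a steep calibration slope relative to the blank noise.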
Eight-piece quadrupole magnet, method for aligning quadrupole magnet pole tips
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jaski, Mark S.; Liu, Jie; Donnelly, Aric T.
The invention provides an alternative to the standard 2-piece or 4-piece quadrupole. For example, an 8-piece and a 10-piece quadrupole are provided in which the tips of each pole may be adjustable. Also provided is a method for producing a quadrupole using standard machining techniques that results in a final tolerance accuracy better than that obtained using standard machining techniques alone.
Investigating the detection of multi-homed devices independent of operating systems
2017-09-01
Timestamp data was used to estimate clock skews using linear regression and linear optimization methods. Analysis revealed that detection depends on the consistency of the estimated clock skew. Through vertical testing, it was also shown that clock skew consistency depends on the installed operating system.
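The linear-regression step is simply a least-squares line through clock offset as a function of time; the drift rate, sampling interval, and noise level below are illustrative, not values from the thesis:

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulated offsets between a remote clock and a reference clock:
# offset(t) = skew * t + initial_offset + measurement noise
true_skew = 50e-6                                 # 50 ppm drift (illustrative)
t = np.arange(0.0, 1000.0, 10.0)                  # sample times, seconds
offset = true_skew * t + 0.002 + rng.normal(0.0, 1e-6, t.size)

est_skew, est_offset = np.polyfit(t, offset, 1)   # least-squares slope = skew
print(f"estimated skew: {est_skew * 1e6:.2f} ppm")
```

Device fingerprinting then hinges on whether this slope is consistent across interfaces of the same physical machine, which is the detection criterion examined above.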
Meng, Liang; Zhu, Binling; Zheng, Kefang; Fu, Shanlin
2017-05-15
A novel microextraction technique based on ultrasound-assisted low-density solvent dispersive liquid-liquid microextraction (UA-LDS-DLLME) was applied for the determination of 4 designer benzodiazepines (phenazepam, diclazepam, flubromazepam and etizolam) in urine samples by gas chromatography-triple quadrupole mass spectrometry (GC-QQQ-MS). Ethyl acetate (168 μL) was added to the urine samples after adjusting the pH to 11.3. The samples were sonicated in an ultrasonic bath for 5.5 min to form a cloudy suspension. After centrifugation at 10,000 rpm for 3 min, the supernatant extractant was withdrawn and injected into the GC-QQQ-MS for analysis. Parameters affecting the extraction efficiency were investigated and optimized by means of single-factor experiments and response surface methodology (Box-Behnken design). Under the optimum extraction conditions, recoveries of 73.8-85.5% were obtained for all analytes. The analytical method was linear for all analytes in the range from 0.003 to 10 μg/mL, with correlation coefficients ranging from 0.9978 to 0.9990. The LODs were estimated to be 1-3 ng/mL. The accuracy (expressed as mean relative error, MRE) was within ±5.8% and the precision (expressed as relative standard deviation, RSD) was less than 5.9%. The UA-LDS-DLLME technique has the advantage of a shorter extraction time and is suitable for simultaneous pretreatment of samples in batches. The combination of UA-LDS-DLLME with GC-QQQ-MS offers an alternative analytical approach for the sensitive detection of these designer benzodiazepines in urine matrix for clinical and medico-legal purposes. Copyright © 2017 Elsevier B.V. All rights reserved.
Supersonic Quadrupole Noise Theory for High-Speed Helicopter Rotors
NASA Technical Reports Server (NTRS)
Farassat, F.; Brentner, Kenneth S.
1997-01-01
High-speed helicopter rotor impulsive noise prediction is an important problem of aeroacoustics. The deterministic quadrupoles have been shown to contribute significantly to high-speed impulsive (HSI) noise of rotors, particularly when the phenomenon of delocalization occurs. At high rotor-tip speeds, some of the quadrupole sources lie outside the sonic circle and move at supersonic speed. Brentner has given a formulation suitable for efficient prediction of quadrupole noise inside the sonic circle. In this paper, we give a simple formulation based on the acoustic analogy that is valid for both subsonic and supersonic quadrupole noise prediction. Like the formulation of Brentner, the model is exact for an observer in the far field and in the rotor plane and is approximate elsewhere. We give the full analytic derivation of this formulation in the paper. We present the method of implementation on a computer for supersonic quadrupoles using marching cubes for constructing the influence surface (Sigma surface) of an observer space-time variable (x, t). We then present several examples of noise prediction for both subsonic and supersonic quadrupoles. It is shown that in the case of transonic flow over rotor blades, the inclusion of the supersonic quadrupoles improves the prediction of the acoustic pressure signature. We show the equivalence of the new formulation to that of Brentner for subsonic quadrupoles. It is shown that the regions of high quadrupole source strength are primarily produced by the shock surface and the flow over the leading edge of the rotor. The primary role of the supersonic quadrupoles is to increase the width of a strong acoustic signal.
Image statistics and the perception of surface gloss and lightness.
Kim, Juno; Anderson, Barton L
2010-07-01
Despite previous data demonstrating the critical importance of 3D surface geometry in the perception of gloss and lightness, I. Motoyoshi, S. Nishida, L. Sharan, and E. H. Adelson (2007) recently proposed that a simple image statistic--histogram or sub-band skew--is computed by the visual system to infer the gloss and albedo of surfaces. One key source of evidence used to support this claim was an experiment in which adaptation to skewed image statistics resulted in opponent aftereffects in observers' judgments of gloss and lightness. We report a series of adaptation experiments that were designed to assess the cause of these aftereffects. We replicated their original aftereffects in gloss but found no consistent aftereffect in lightness. We report that adaptation to zero-skew adaptors produced similar aftereffects as positively skewed adaptors, and that negatively skewed adaptors induced no reliable aftereffects. We further find that the adaptation effect observed with positively skewed adaptors is not robust to changes in mean luminance that diminish the intensity of the luminance extrema. Finally, we show that adaptation to positive skew reduces (rather than increases) the apparent lightness of light pigmentation on non-uniform albedo surfaces. These results challenge the view that the adaptation results reported by Motoyoshi et al. (2007) provide evidence that skew is explicitly computed by the visual system.
Log Pearson type 3 quantile estimators with regional skew information and low outlier adjustments
Griffis, V.W.; Stedinger, Jery R.; Cohn, T.A.
2004-01-01
The recently developed expected moments algorithm (EMA) [Cohn et al., 1997] performs as well as maximum likelihood estimation at estimating log‐Pearson type 3 (LP3) flood quantiles using systematic and historical flood information. Needed extensions include use of a regional skewness estimator and its precision to be consistent with Bulletin 17B. Another issue addressed by Bulletin 17B is the treatment of low outliers. A Monte Carlo study compares the performance of Bulletin 17B using the entire sample with and without regional skew with estimators that use regional skew and censor low outliers, including an extended EMA estimator, the conditional probability adjustment (CPA) from Bulletin 17B, and an estimator that uses probability plot regression (PPR) to compute substitute values for low outliers. Estimators that neglect regional skew information do much worse than estimators that use an informative regional skewness estimator. For LP3 data the low outlier rejection procedure generally results in no loss of overall accuracy, and the differences between the MSEs of the estimators that used an informative regional skew are generally modest in the skewness range of real interest. Samples contaminated to model actual flood data demonstrate that estimators which give special treatment to low outliers significantly outperform estimators that make no such adjustment.
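The LP3 quantile computation at the heart of this comparison can be sketched with SciPy's Pearson III distribution; the log-space moments and the skew mean-square errors below are made-up numbers, and the inverse-MSE skew weighting is the Bulletin 17B-style convention rather than the paper's EMA machinery:

```python
from scipy.stats import pearson3

# Hypothetical log10-space moments of an annual peak-flow record (cfs)
log_mean, log_std = 3.0, 0.25
station_skew, regional_skew = 0.45, 0.1
mse_station, mse_regional = 0.30, 0.12     # assumed mean-square errors

# Bulletin 17B-style weighting: each skew weighted by the other's MSE
weighted_skew = (mse_regional * station_skew + mse_station * regional_skew) / (
    mse_regional + mse_station)

# 100-year flood: frequency factor from the Pearson III distribution,
# applied in log space and transformed back
k100 = pearson3.ppf(0.99, weighted_skew)
q100 = 10.0 ** (log_mean + k100 * log_std)
print(f"weighted skew {weighted_skew:.3f}, Q100 ~ {q100:.0f} cfs")
```

With zero skew the frequency factor reduces to the standard normal quantile (about 2.326 at the 99th percentile); positive skew pushes it higher, which is why an informative regional skew matters so much for upper-tail quantiles.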
Individual differences in loss aversion and preferences for skewed risks across adulthood.
Seaman, Kendra L; Green, Mikella A; Shu, Stephen; Samanez-Larkin, Gregory R
2018-06-01
In a previous study, we found adult age differences in the tendency to accept more positively skewed gambles (with a small chance of a large win) than other equivalent risks, or an age-related positive-skew bias. In the present study, we examined whether loss aversion explained this bias. A total of 508 healthy participants (ages 21-82) completed measures of loss aversion and skew preference. Age was not related to loss aversion. Although loss aversion was a significant predictor of gamble acceptance, it did not influence the age-related positive-skew bias. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
Tidal Love numbers and moment-Love relations of polytropic stars
NASA Astrophysics Data System (ADS)
Yip, Kenny L. S.; Leung, P. T.
2017-12-01
The physical significance of tidal deformation in astronomical systems has long been known. The recently discovered universal I-Love-Q relations, which connect moment of inertia, quadrupole tidal Love number and spin-induced quadrupole moment of compact stars, also underscore the special role of tidal deformation in gravitational wave astronomy. Motivated by the observation that such relations also prevail in Newtonian stars and crucially depend on the stiffness of a star, we consider the tidal Love numbers of Newtonian polytropic stars whose stiffness is characterized by a polytropic index n. We first perturbatively solve the Lane-Emden equation governing the profile of polytropic stars through the application of the scaled delta expansion method and then formulate perturbation series for the multipolar tidal Love number about the two exactly solvable cases with n = 0 and n = 1, respectively. Making use of these two series to form a two-point Padé approximant, we find an approximate expression of the quadrupole tidal Love number, whose error is less than 2.5 × 10^-5 per cent (0.39 per cent) for n ∈ [0, 1] (n ∈ [0, 3]). Similarly, we also determine the mass moments for polytropic stars accurately. Based on these findings, we are able to show that the I-Love-Q relations are in general stationary about the incompressible limit irrespective of the equation of state of a star. Moreover, for the I-Love-Q relations, there is a secondary stationary point near n ≈ 0.4444, thus showing the insensitivity to n for n ∈ [0, 1]. Our investigation clearly tracks the universality of the I-Love-Q relations from their validity for stiff stars such as neutron stars to their breakdown for soft stars.
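The two exactly solvable expansion points used above can be checked with a minimal Lane-Emden integrator; this sketch (a plain RK4 march started from the regular series at the center, not the paper's scaled delta expansion) reproduces θ = 1 - ξ²/6 for n = 0 and θ = sin ξ/ξ for n = 1:

```python
import math

def lane_emden_theta(n, xi_end, h=1e-3):
    """Integrate theta'' + (2/xi) theta' + theta^n = 0, theta(0) = 1,
    theta'(0) = 0, with RK4, starting from the series near xi = 0."""
    xi = h
    theta = 1.0 - xi * xi / 6.0     # series: theta ~ 1 - xi^2/6 + ...
    v = -xi / 3.0                   # theta'

    def f(x, th, vv):
        return vv, -max(th, 0.0) ** n - 2.0 * vv / x

    while xi < xi_end - 1e-12:
        s = min(h, xi_end - xi)
        k1 = f(xi, theta, v)
        k2 = f(xi + s / 2, theta + s / 2 * k1[0], v + s / 2 * k1[1])
        k3 = f(xi + s / 2, theta + s / 2 * k2[0], v + s / 2 * k2[1])
        k4 = f(xi + s, theta + s * k3[0], v + s * k3[1])
        theta += s * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6
        v += s * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6
        xi += s
    return theta

print(lane_emden_theta(0, 1.0), 1.0 - 1.0 / 6.0)      # n = 0: 1 - xi^2/6
print(lane_emden_theta(1, 1.0), math.sin(1.0) / 1.0)  # n = 1: sin(xi)/xi
```

Having the n = 0 and n = 1 profiles in closed form is what makes them natural anchor points for the two-point Padé approximant of the tidal Love number.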
NASA Astrophysics Data System (ADS)
Virtanen, Ilpo; Mursula, Kalevi
2016-06-01
Aims: We study the long-term evolution of photospheric and coronal magnetic fields and the heliospheric current sheet (HCS), especially its north-south asymmetry. Special attention is paid to the reliability of the six data sets used in this study and to the consistency of the results based on these data sets. Methods: We use synoptic maps constructed from Wilcox Solar Observatory (WSO), Mount Wilson Observatory (MWO), Kitt Peak (KP), SOLIS, SOHO/MDI, and SDO/HMI measurements of the photospheric field and the potential field source surface (PFSS) model. Results: The six data sets depict a fairly similar long-term evolution of magnetic fields and the heliospheric current sheet, including polarity reversals and hemispheric asymmetry. However, there are intervals, several years long, when first the KP measurements in the 1970s and 1980s, and later the WSO measurements in the 1990s and early 2000s, significantly deviate from the other simultaneous data sets, likely reflecting errors at these times. All six magnetographs agree on the southward shift of the heliospheric current sheet (the so-called bashful ballerina phenomenon) in the declining-to-minimum phase of the solar cycle for a few years in each of the five cycles included. We show that during solar cycles 20-22, the southward shift of the HCS is mainly due to the axial quadrupole term, reflecting the stronger magnetic field intensity at the southern pole during these times. During cycle 23 the asymmetry is less persistent and mainly due to harmonics higher than the quadrupole term. Currently, in the early declining phase of cycle 24, the HCS is again shifted southward, mainly due to the axial quadrupole as in most earlier cycles. This further emphasizes the special character of the global solar field during cycle 23.
Flow in Rotating Serpentine Coolant Passages With Skewed Trip Strips
NASA Technical Reports Server (NTRS)
Tse, David G.N.; Steuber, Gary
1996-01-01
Laser velocimetry was utilized to map the velocity field in serpentine turbine blade cooling passages with skewed trip strips. The measurements were obtained at Reynolds and Rotation numbers of 25,000 and 0.24 to assess the influence of trips, passage curvature and Coriolis force on the flow field. The interaction of the secondary flows induced by skewed trips with the passage rotation produces a swirling vortex and a corner recirculation zone. With trips skewed at +45 deg, the secondary flows remain unaltered as the cross-flow proceeds from the passage to the turn. However, the flow characteristics at these locations differ when trips are skewed at -45 deg. Changes in the flow structure are expected to augment heat transfer, in agreement with the heat transfer measurements of Johnson et al. The present results show that trips skewed at -45 deg in the outward-flow passage and at +45 deg in the inward-flow passage maximize heat transfer. Details of the present measurements were related to the heat transfer data of Johnson et al. to connect the flow field with the heat transfer behavior.
NASA Technical Reports Server (NTRS)
Farassat, F.; Brentner, Kenneth S.
1991-01-01
It is presently noted that, for an observer in or near the plane containing a helicopter rotor disk, and in the far field, part of the volume quadrupole sources and the blade and wake surface quadrupole sources completely cancel out. This suggests a novel quadrupole source description for the Ffowcs Williams-Hawkings equation which retains quadrupoles with axes parallel to the rotor disk; in this case, the volume and shock surface source terms are dominant.
Parrett, Charles; Veilleux, Andrea; Stedinger, J.R.; Barth, N.A.; Knifong, Donna L.; Ferris, J.C.
2011-01-01
Improved flood-frequency information is important throughout California in general and in the Sacramento-San Joaquin River Basin in particular, because of an extensive network of flood-control levees and the risk of catastrophic flooding. A key first step in updating flood-frequency information is determining regional skew. A Bayesian generalized least squares (GLS) regression method was used to derive a regional-skew model based on annual peak-discharge data for 158 long-term (30 or more years of record) stations throughout most of California. The desert areas in southeastern California had too few long-term stations to reliably determine regional skew for that hydrologically distinct region; therefore, the desert areas were excluded from the regional skew analysis for California. Of the 158 long-term stations used to determine regional skew, 145 have minimally regulated annual-peak discharges, and 13 stations are dam sites for which unregulated peak discharges were estimated from unregulated daily maximum discharge data furnished by the U.S. Army Corps of Engineers. Station skew was determined by using an expected moments algorithm (EMA) program for fitting the Pearson Type 3 flood-frequency distribution to the logarithms of annual peak-discharge data. The Bayesian GLS regression method previously developed was modified because of the large cross correlations among concurrent recorded peak discharges in California and the use of censored data and historical flood information with the new expected moments algorithm. In particular, to properly account for these cross-correlation problems and develop a suitable regression model and regression diagnostics, a combination of Bayesian weighted least squares and generalized least squares regression was adopted. This new methodology identified a nonlinear function relating regional skew to mean basin elevation.
The regional skew values ranged from -0.62 for a mean basin elevation of zero to 0.61 for a mean basin elevation of 11,000 feet. This relation between skew and elevation reflects the interaction of snow with rain, which increases with increased elevation. The equivalent record length for the new regional skew ranges from 52 to 65 years of record, depending upon mean basin elevation. The old regional skew map in Bulletin 17B, published by the Hydrology Subcommittee of the Interagency Advisory Committee on Water Data (1982), reported an equivalent record length of only 17 years. The newly developed regional skew relation for California was used to update flood frequency for the 158 sites used in the regional skew analysis as well as 206 selected sites in the Sacramento-San Joaquin River Basin. For these sites, annual-peak discharges having recurrence intervals of 2, 5, 10, 25, 50, 100, 200, and 500 years were determined on the basis of data through water year 2006. The expected moments algorithm was used for determining the magnitude and frequency of floods at gaged sites by using regional skew values and the basic approach outlined in Bulletin 17B.
NASA Astrophysics Data System (ADS)
Sun, Ruochen; Yuan, Huiling; Liu, Xiaoli
2017-11-01
The heteroscedasticity treatment in residual error models directly impacts the model calibration and prediction uncertainty estimation. This study compares three methods to deal with the heteroscedasticity, including the explicit linear modeling (LM) method and nonlinear modeling (NL) method using hyperbolic tangent function, as well as the implicit Box-Cox transformation (BC). Then a combined approach (CA) combining the advantages of both LM and BC methods has been proposed. In conjunction with the first order autoregressive model and the skew exponential power (SEP) distribution, four residual error models are generated, namely LM-SEP, NL-SEP, BC-SEP and CA-SEP, and their corresponding likelihood functions are applied to the Variable Infiltration Capacity (VIC) hydrologic model over the Huaihe River basin, China. Results show that the LM-SEP yields the poorest streamflow predictions with the widest uncertainty band and unrealistic negative flows. The NL and BC methods can better deal with the heteroscedasticity and hence their corresponding predictive performances are improved, yet the negative flows cannot be avoided. The CA-SEP produces the most accurate predictions with the highest reliability and effectively avoids the negative flows, because the CA approach is capable of addressing the complicated heteroscedasticity over the study basin.
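The heteroscedasticity these residual-error models target can be illustrated with a synthetic example; the multiplicative error structure and the use of the log transform (the λ = 0 limit of Box-Cox) are illustrative simplifications of the BC method described above:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic flow "observations" whose error scale grows with the flow
# itself, i.e. heteroscedastic residuals (illustrative, not VIC output)
pred = np.linspace(1.0, 100.0, 500)                      # predicted flow
obs = pred * (1.0 + 0.1 * rng.normal(size=pred.size))    # multiplicative error

raw_resid = obs - pred                  # spread grows with flow level
log_resid = np.log(obs) - np.log(pred)  # Box-Cox with lambda = 0

def spread_vs_level(resid):
    # Correlation of |residual| with flow level: ~0 means homoscedastic
    return np.corrcoef(np.abs(resid), pred)[0, 1]

print(spread_vs_level(raw_resid), spread_vs_level(log_resid))
```

Ignoring this structure (as the poorest LM-SEP configuration effectively does at the extremes) misstates the predictive uncertainty at high and low flows, which is what widens the uncertainty band and admits negative flows.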
Bumps in river profiles: uncertainty assessment and smoothing using quantile regression techniques
NASA Astrophysics Data System (ADS)
Schwanghart, Wolfgang; Scherler, Dirk
2017-12-01
The analysis of longitudinal river profiles is an important tool for studying landscape evolution. However, characterizing river profiles based on digital elevation models (DEMs) suffers from errors and artifacts that particularly prevail along valley bottoms. The aim of this study is to characterize uncertainties that arise from the analysis of river profiles derived from different, near-globally available DEMs. We devised new algorithms - quantile carving and the CRS algorithm - that rely on quantile regression to enable hydrological correction and the uncertainty quantification of river profiles. We find that globally available DEMs commonly overestimate river elevations in steep topography. The distributions of elevation errors become increasingly wider and right skewed if adjacent hillslope gradients are steep. Our analysis indicates that the AW3D DEM has the highest precision and lowest bias for the analysis of river profiles in mountainous topography. The new 12 m resolution TanDEM-X DEM has a very low precision, most likely due to the combined effect of steep valley walls and the presence of water surfaces in valley bottoms. Compared to the conventional approaches of carving and filling, we find that our new approach is able to reduce the elevation bias and errors in longitudinal river profiles.
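The core idea behind quantile carving, fitting a low quantile so that the profile hugs the valley floor and ignores upward-biased artifacts, can be sketched with the pinball (quantile) loss; the elevations and spike model below are synthetic stand-ins for DEM-derived river profiles:

```python
import numpy as np

def pinball_loss(c, y, tau):
    """Mean pinball loss of constant c at quantile level tau."""
    r = y - c
    return np.mean(np.maximum(tau * r, (tau - 1.0) * r))

rng = np.random.default_rng(5)
elev = 100.0 + rng.normal(0.0, 0.5, 101)  # noisy valley-bottom elevations
elev[::10] += 20.0                        # upward-biased artifact spikes

tau = 0.1
candidates = np.sort(elev)
losses = [pinball_loss(c, elev, tau) for c in candidates]
c_star = candidates[int(np.argmin(losses))]  # tau-quantile minimizes the loss

print(c_star, np.mean(elev))  # the fit sits below the spike-inflated mean
```

Minimizing the same loss along the flow direction, subject to a downstream-decreasing constraint, is what turns this one-number fit into a hydrologically corrected profile.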
A proof for Rhiel's range estimator of the coefficient of variation for skewed distributions.
Rhiel, G Steven
2007-02-01
This research study proves that the coefficient of variation (CV(high-low)) calculated from the highest and lowest values in a set of data is applicable to specific skewed distributions with varying means and standard deviations. Earlier, Rhiel provided values for d(n), the standardized mean range, and a(n), an adjustment for bias in the range estimator of μ. These values are used in estimating the coefficient of variation from the range for skewed distributions. The d(n) and a(n) values were specified for specific skewed distributions with a fixed mean and standard deviation. In this proof it is shown that the d(n) and a(n) values are applicable for the specific skewed distributions when the mean and standard deviation can take on differing values. This gives the researcher confidence in using this statistic for skewed distributions regardless of the mean and standard deviation.
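The range-based estimation idea can be demonstrated for the normal case, where the standardized mean range is the familiar control-chart constant (d2 ≈ 2.326 for samples of five); the skewed cases in the paper use Rhiel's tabulated d(n) and a(n) values instead, which are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(9)

# d2 = E[range]/sigma for n = 5 standard normal observations;
# 2.326 is the standard control-chart constant for this sample size.
d5 = 2.326
samples = rng.normal(loc=10.0, scale=2.0, size=(200_000, 5))
ranges = samples.max(axis=1) - samples.min(axis=1)

sigma_hat = ranges.mean() / d5       # range-based estimate of sigma
cv_hat = sigma_hat / samples.mean()  # range-based coefficient of variation
print(sigma_hat, cv_hat)             # close to sigma = 2 and CV = 0.2
```

The proof above is what licenses using the same recipe, with distribution-specific constants, when the underlying population is skewed rather than normal.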
Ab initio correlated calculations of rare-gas dimer quadrupoles
DOE Office of Scientific and Technical Information (OSTI.GOV)
Donchev, Alexander G.
2007-10-15
This paper reports ab initio calculations of rare-gas (RG = Kr, Ar, Ne, and He) dimer quadrupoles at the second order of Møller-Plesset perturbation theory (MP2). The study reveals the crucial role of the dispersion contribution to the RG₂ quadrupole in the neighborhood of the equilibrium dimer separation. The magnitude of the dispersion quadrupole is found to be much larger than that predicted by the approximate model of Hunt. As a result, the total MP2 quadrupole moment is significantly smaller than was assumed in virtually all previous related studies. An analytical model for the distance dependence of the RG₂ quadrupole is proposed. The model is based on the effective-electron approach of Jansen, but replaces the original Gaussian approximation to the electron density in an RG atom by an exponential one. The role of the nonadditive contribution in RG₃ quadrupoles is discussed.
SEPTUM MAGNET DESIGN FOR THE APS-U
DOE Office of Scientific and Technical Information (OSTI.GOV)
Abliz, M.; Jaski, M.; Xiao, A.
2017-06-25
The Advanced Photon Source is in the process of upgrading its storage ring from a double-bend to a multi-bend lattice as part of the APS Upgrade Project (APS-U). A swap-out injection scheme is planned for the APS-U to keep a constant beam current and to enable a small dynamic aperture. A septum magnet with a minimum thickness of 2 mm and an injection field of 1.06 T has been designed, delivering the required total deflecting angle of 89 mrad at a ring energy of 6 GeV. The stored beam chamber has an 8 mm x 6 mm super-ellipsoidal aperture. The magnet is straight; however, it is tilted in yaw, roll, and pitch with respect to the stored beam chamber to meet the on-axis swap-out injection requirements of the APS-U lattice. In order to minimize the leakage field inside the stored beam chamber, four different techniques were utilized in the design. As a result, the horizontal deflecting angle of the stored beam was held to only 5 µrad, and the integrated skew quadrupole inside the stored beam chamber was held to 0.09 T. The detailed techniques applied to the design, the field multipoles, and the resulting trajectories of the injected and stored beams are reported.
Robust functional statistics applied to Probability Density Function shape screening of sEMG data.
Boudaoud, S; Rix, H; Al Harrach, M; Marin, F
2014-01-01
Recent studies have pointed out possible shape modifications of the probability density function (PDF) of surface electromyographic (sEMG) data in several contexts, such as fatigue and increasing muscle force. Following this idea, criteria have been proposed to monitor these shape modifications, mainly using higher-order statistics (HOS) parameters such as skewness and kurtosis. In experimental conditions, the estimation of these parameters is confronted with small sample sizes, which induce errors in the estimated HOS parameters and hinder real-time, precise sEMG PDF shape monitoring. Recently, a functional formalism, the Core Shape Model (CSM), has been used to analyse shape modifications of PDF curves. In this work, taking inspiration from the CSM method, robust functional statistics are proposed to emulate the behavior of both skewness and kurtosis. These functional statistics combine kernel density estimation and PDF shape distances to evaluate shape modifications even with small sample sizes. The proposed statistics are then tested, using Monte Carlo simulations, on both normal and log-normal PDFs that mimic the observed sEMG PDF shape behavior during muscle contraction. According to the results, the functional statistics are more robust than HOS parameters to small-sample-size effects and more accurate for sEMG PDF shape screening applications.
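The core idea, comparing an estimated PDF shape rather than raw sample moments, can be sketched as follows. The L1 distance between a kernel density estimate and its mirror image about the median is an illustrative asymmetry index, not the CSM shape distance of the paper:

```python
import numpy as np
from scipy.stats import skew, kurtosis, gaussian_kde

def kde_asymmetry(sample, grid_size=512):
    """Illustrative functional asymmetry index: L1 distance between the
    kernel density estimate and its reflection about the sample median.
    Zero for a perfectly symmetric density, positive for a skewed one."""
    x = np.asarray(sample, float)
    c = np.median(x)
    kde = gaussian_kde(x)
    grid = np.linspace(x.min() - 1.0, x.max() + 1.0, grid_size)
    f = kde(grid)
    f_mirror = kde(2.0 * c - grid)          # density reflected about the median
    return float(np.sum(np.abs(f - f_mirror)) * (grid[1] - grid[0]))

rng = np.random.default_rng(2)
small_normal = rng.normal(size=30)                  # symmetric, small sample
small_lognorm = rng.lognormal(sigma=0.8, size=30)   # skewed, small sample
print(kde_asymmetry(small_normal), skew(small_normal), kurtosis(small_normal))
print(kde_asymmetry(small_lognorm), skew(small_lognorm), kurtosis(small_lognorm))
```

Because the index integrates over the whole smoothed density rather than relying on third- and fourth-power moments, it is less sensitive to single extreme observations at small n.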
NASA Astrophysics Data System (ADS)
Chen, Wei; Zhang, Junfeng; Gao, Mingyi; Shen, Gangxiang
2018-03-01
High-order modulation signals are suited to high-capacity communication systems because of their high spectral efficiency, but they are more vulnerable to various impairments. For degraded signals whose symbol points overlap on the constellation diagram, the original linear decision boundaries cannot distinguish the symbol classes. Therefore, it is advantageous to create an optimum symbol decision boundary for the degraded signals. In this work, we experimentally demonstrated a 64-quadrature-amplitude-modulation (64-QAM) coherent optical communication system that uses a support-vector machine (SVM) algorithm to create the optimum symbol decision boundary and improve system performance. We investigated the influence of various impairments on 64-QAM coherent optical communication systems, such as those caused by modulator nonlinearity, phase skew between the in-phase (I) and quadrature-phase (Q) arms of the modulator, fiber Kerr nonlinearity, and amplified spontaneous emission (ASE) noise. We measured the bit-error-ratio (BER) performance of 75-Gb/s 64-QAM signals in back-to-back and 50-km transmission configurations. By using the SVM to optimize the symbol decision boundary, the impairments caused by the I/Q phase skew of the modulator, fiber Kerr nonlinearity, and ASE noise are greatly mitigated.
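A minimal sketch of the decision-boundary idea, not the paper's experimental pipeline: an RBF-kernel SVM learns curved per-symbol decision regions for a toy 4-QAM constellation distorted by an I/Q-skew-like impairment plus additive noise (scikit-learn is assumed available; 4-QAM stands in for the 64-QAM signals of the experiment):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
ideal = np.array([[1, 1], [1, -1], [-1, 1], [-1, -1]], float)  # 4-QAM points

def make_symbols(n, noise=0.4, iq_skew=0.2):
    """Generate labeled received symbols with a crude I/Q-skew impairment
    (Q contaminated by I) and AWGN standing in for ASE noise."""
    labels = rng.integers(0, 4, size=n)
    pts = ideal[labels].copy()
    pts[:, 1] += iq_skew * pts[:, 0]
    pts += rng.normal(scale=noise, size=pts.shape)
    return pts, labels

X_train, y_train = make_symbols(2000)
X_test, y_test = make_symbols(2000)

# The RBF kernel lets the learned decision regions bend around the
# skewed, noisy symbol clouds instead of using a fixed rectangular grid.
clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_train, y_train)
acc = (clf.predict(X_test) == y_test).mean()
print("SVM symbol-decision accuracy:", acc)
```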
NASA Astrophysics Data System (ADS)
Chen, Tsing-Chang; Yen, Ming-Cheng; Wu, Kuang-Der; Ng, Thomas
1992-08-01
The time evolution of the Indian monsoon is closely related to the locations of the northward-migrating monsoon troughs and ridges, which can be well depicted with the 30-60-day filtered 850-mb streamfunction. Thus, long-range forecasts of the large-scale low-level monsoon can be obtained from forecasts of the filtered 850-mb streamfunction. In this study, these long-range forecasts were made in terms of an autoregressive (AR) moving-average process. The historical series of the AR model were constructed with 4-month time series of the 30-60-day filtered 850-mb streamfunction, ψ̃(850 mb). However, the phase of the last low-frequency cycle in the ψ̃(850 mb) time series can be skewed by the bandpass filtering. To reduce this phase skewness, a simple scheme is introduced. With this phase modification of the filtered 850-mb streamfunction, we performed pilot forecast experiments for three summers with the AR forecast process. The forecast errors in the positions of the northward-propagating monsoon troughs and ridges at day 20 are generally within 1-2 days behind the observed, except in some extreme cases.
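The forecasting step can be illustrated with a generic autoregressive fit. This is a plain AR least-squares sketch on a synthetic low-frequency oscillation, not the paper's ARMA setup for the filtered streamfunction:

```python
import numpy as np

def fit_ar(series, p):
    """Fit AR(p) coefficients by ordinary least squares:
    x_t ~ c1*x_{t-1} + ... + cp*x_{t-p}."""
    y = series[p:]
    X = np.column_stack([series[p - k: len(series) - k] for k in range(1, p + 1)])
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

def forecast(series, coef, steps):
    """Iterate the fitted AR recursion forward 'steps' days."""
    hist = list(series)
    for _ in range(steps):
        lags = hist[-1: -len(coef) - 1: -1]   # most recent p values, newest first
        hist.append(float(np.dot(coef, lags)))
    return np.array(hist[-steps:])

# Synthetic 45-day oscillation sampled daily (a 30-60-day-band stand-in).
t = np.arange(240)
sig = np.sin(2 * np.pi * t / 45.0)
coef = fit_ar(sig, p=4)
pred = forecast(sig, coef, steps=20)
true = np.sin(2 * np.pi * (t[-1] + 1 + np.arange(20)) / 45.0)
print("max 20-day forecast error:", np.max(np.abs(pred - true)))
```

For a clean oscillation the AR recursion extrapolates the cycle almost exactly; the paper's phase-modification scheme addresses the additional phase skew that bandpass filtering introduces near the end of a real series.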
A framework for WRF to WRF-IBM grid nesting to enable multiscale simulations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wiersema, David John; Lundquist, Katherine A.; Chow, Fotini Katapodes
With advances in computational power, mesoscale models, such as the Weather Research and Forecasting (WRF) model, are often pushed to higher resolutions. As the model's horizontal resolution is refined, the maximum resolved terrain slope increases. Because WRF uses a terrain-following coordinate, this increase in resolved terrain slopes introduces additional grid skewness. At high resolutions and over complex terrain, this grid skewness can introduce large numerical errors that require methods, such as the immersed boundary method, to keep the model accurate and stable. Our implementation of the immersed boundary method in the WRF model, WRF-IBM, has proven effective at microscale simulations over complex terrain. WRF-IBM uses a non-conforming grid that extends beneath the model's terrain. Boundary conditions at the immersed boundary, the terrain, are enforced by introducing a body-force term to the governing equations at points directly beneath the immersed boundary. Nesting between a WRF parent grid and a WRF-IBM child grid requires a new framework for initialization and forcing of the child WRF-IBM grid. This framework will enable concurrent multiscale simulations within the WRF model, improving the accuracy of high-resolution simulations and enabling simulations across a wide range of scales.
Non-Gaussian Distribution of DNA Barcode Extension In Nanochannels Using High-throughput Imaging
NASA Astrophysics Data System (ADS)
Sheats, Julian; Reinhart, Wesley; Reifenberger, Jeff; Gupta, Damini; Muralidhar, Abhiram; Cao, Han; Dorfman, Kevin
2015-03-01
We present experimental data for the extension of internal segments of highly confined DNA using a high-throughput experimental setup. Barcode-labeled E. coli genomic DNA molecules were imaged at a high areal density in square nanochannels ranging from 40 nm to 51 nm in width. Over 25,000 molecules were used to obtain more than 1,000,000 measurements of genomic distances between 2,500 bp and 100,000 bp. The distribution of extensions has positive excess kurtosis and is skewed left due to weak backfolding in the channel. As a result, the two Odijk theories for the chain extension and variance bracket the experimental data. We also compared the data to predictions of a harmonic approximation for the confinement free energy and show that this approximation produces a substantial error in the variance. These results suggest an inherent error associated with any statistical analysis of barcoded DNA that relies on harmonic models for chain extension. Present address: Department of Chemical and Biological Engineering, Princeton University.
Aircraft to aircraft intercomparison during SEMAPHORE
NASA Astrophysics Data System (ADS)
Lambert, Dominique; Durand, Pierre
1998-10-01
During the Structure des Echanges Mer-Atmosphère, Propriétés des Hétérogénéités Océaniques: Recherche Expérimentale (SEMAPHORE) experiment, performed in the Azores region in 1993, two French research aircraft were simultaneously used for in situ measurements in the atmospheric boundary layer. We present the results obtained from one intercomparison flight between the two aircraft. The mean parameters generally agree well, although the temperature has to be slightly shifted in order to be in agreement for the two aircraft. A detailed comparison of the turbulence parameters revealed no bias. The agreement is good for variances and is satisfactory for fluxes and skewness. A thorough study of the errors involved in flux computation revealed that the greatest accuracy is obtained for latent heat flux. Errors in sensible heat flux are considerably greater, and the worst results are obtained for momentum flux. The latter parameter, however, is more accurate than expected from previous parameterizations.
Uclés, S; Lozano, A; Sosa, A; Parrilla Vázquez, P; Valverde, A; Fernández-Alba, A R
2017-11-01
Gas and liquid chromatography coupled to triple quadrupole tandem mass spectrometry are currently the most powerful tools employed for the routine analysis of pesticide residues in food control laboratories. However, whatever the multiresidue extraction method, there will be a residual matrix effect making it difficult to identify/quantify some specific compounds in certain cases. Two main effects stand out: (i) co-elution with isobaric matrix interferents, which can be a major drawback for unequivocal identification and can therefore lead to false negative detections, and (ii) signal suppression/enhancement, commonly called the "matrix effect", which may cause serious problems including inaccurate quantitation, low analyte detectability and increased method uncertainty. The aim of this analytical study is to provide a framework for evaluating the maximum expected errors associated with matrix effects. The worst-case study was contrived to give an estimation of the extreme errors caused by matrix effects when extraction/determination protocols are applied in routine multiresidue analysis. Twenty-five different blank matrices extracted with the four most common extraction methods used in routine analysis (citrate QuEChERS with/without PSA clean-up, ethyl acetate and the Dutch mini-Luke "NL" methods) were evaluated by both GC-QqQ-MS/MS and LC-QqQ-MS/MS. The results showed that the presence of matrix compounds with isobaric transitions to target pesticides was higher in GC than in LC under the experimental conditions tested. In a second study, the number of "potential" false negatives was evaluated. For that, ten matrices with higher percentages of natural interfering components were checked. Additionally, the results showed that in more than 90% of the cases, pesticide quantification was not affected by matrix-matched standard calibration when an interferent was kept constant along the calibration curve. The error in quantification depended on the concentration level. In a third study, the "matrix effect" was evaluated for each commodity/extraction method. Results showed 44% of cases with suppression/enhancement for LC and 93% of cases with enhancement for GC.
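The suppression/enhancement figure reported above is conventionally computed from calibration slopes, comparing a matrix-matched calibration curve against a solvent-standard curve; a sketch with invented numbers:

```python
import numpy as np

def slope(conc, response):
    """Least-squares slope of the calibration curve."""
    return np.polyfit(np.asarray(conc, float), np.asarray(response, float), 1)[0]

def matrix_effect_percent(solvent_slope, matrix_slope):
    """ME% = (matrix slope / solvent slope - 1) * 100.
    Negative values indicate suppression, positive values enhancement."""
    return (matrix_slope / solvent_slope - 1.0) * 100.0

conc = [5, 10, 25, 50, 100]                  # ng/mL, hypothetical levels
solvent = [520, 1040, 2600, 5200, 10400]     # solvent-standard responses
matrix = [390, 780, 1950, 3900, 7800]        # matrix-matched responses

me = matrix_effect_percent(slope(conc, solvent), slope(conc, matrix))
print(f"matrix effect: {me:+.1f}%")   # prints: matrix effect: -25.0%
```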
A preliminary design of the collinear dielectric wakefield accelerator
NASA Astrophysics Data System (ADS)
Zholents, A.; Gai, W.; Doran, S.; Lindberg, R.; Power, J. G.; Strelnikov, N.; Sun, Y.; Trakhtenberg, E.; Vasserman, I.; Jing, C.; Kanareykin, A.; Li, Y.; Gao, Q.; Shchegolkov, D. Y.; Simakov, E. I.
2016-09-01
A preliminary design of the multi-meter-long collinear dielectric wakefield accelerator that achieves a highly efficient transfer of the drive bunch energy to the wakefields and to the witness bunch is considered. It is made from 0.5-m-long accelerator modules containing a vacuum chamber with dielectric-lined walls, a quadrupole wiggler, an rf coupler, and a BPM assembly. The single bunch breakup instability is a major limiting factor for accelerator efficiency, and BNS damping is applied to obtain stable multi-meter-long propagation of a drive bunch. Numerical simulations using a 6D particle tracking computer code are performed and tolerances to various errors are defined.
Investigation of beam self-polarization in the future e+e- circular collider
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gianfelice-Wendt, E.
The use of resonant depolarization has been suggested for precise beam energy measurements (better than 100 keV) in the e+e- Future Circular Collider (FCC-e+e-) for Z and WW physics at 45 and 80 GeV beam energy, respectively. Longitudinal beam polarization would benefit the Z peak physics program; however, it is not essential and therefore will not be investigated here. In this paper the possibility of self-polarized leptons is considered. Preliminary results of simulations in the presence of quadrupole misalignments and beam position monitor (BPM) errors for a simplified FCC-e+e- ring are presented.
Comparing interval estimates for small sample ordinal CFA models
Natesan, Prathiba
2015-01-01
Robust maximum likelihood (RML) and asymptotically generalized least squares (AGLS) methods have been recommended for fitting ordinal structural equation models. Studies show that some of these methods underestimate standard errors. However, these studies have not investigated the coverage and bias of interval estimates. An estimate with a reasonable standard error could still be severely biased. This can only be known by systematically investigating the interval estimates. The present study compares Bayesian, RML, and AGLS interval estimates of factor correlations in ordinal confirmatory factor analysis (CFA) models for small sample data. Six sample sizes, 3 factor correlations, and 2 factor score distributions (multivariate normal and multivariate mildly skewed) were studied. Two Bayesian prior specifications, informative and relatively less informative, were studied. Undercoverage of confidence intervals and underestimation of standard errors were common in non-Bayesian methods. Underestimated standard errors may lead to inflated Type-I error rates. Non-Bayesian intervals were more often positively than negatively biased; that is, most intervals that did not contain the true value were greater than the true value. Some non-Bayesian methods had non-converging and inadmissible solutions for small samples and non-normal data. Bayesian empirical standard error estimates for informative and relatively less informative priors were closer to the average standard errors of the estimates. The coverage of Bayesian credibility intervals was closer to what was expected, with overcoverage in a few cases. Although some Bayesian credibility intervals were wider, they reflected the nature of statistical uncertainty that comes with the data (e.g., small sample). Bayesian point estimates were also more accurate than non-Bayesian estimates. The results illustrate the importance of analyzing coverage and bias of interval estimates, and how ignoring interval estimates can be misleading.
Therefore, editors and policymakers should continue to emphasize the inclusion of interval estimates in research. PMID:26579002
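Why coverage and the direction of interval misses matter can be illustrated with a generic Monte Carlo experiment (not the paper's ordinal-CFA design): nominal 95% normal-theory intervals for the mean of a skewed population undercover at small n, and the intervals that miss do so predominantly on one side:

```python
import numpy as np

rng = np.random.default_rng(3)
n, reps = 20, 5000
true_mean = np.exp(0.5)                  # mean of a lognormal(0, 1) population

covered, above, below = 0, 0, 0
for _ in range(reps):
    x = rng.lognormal(0.0, 1.0, size=n)
    se = x.std(ddof=1) / np.sqrt(n)
    lo, hi = x.mean() - 1.96 * se, x.mean() + 1.96 * se
    if lo <= true_mean <= hi:
        covered += 1
    elif lo > true_mean:
        above += 1                        # whole interval above the truth
    else:
        below += 1                        # whole interval below the truth

print("coverage:", covered / reps)        # well under the nominal 0.95
print("misses above vs below:", above, below)
```

Reporting only the average standard error would hide both the undercoverage and this directional bias, which is the study's argument for examining interval estimates directly.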
NASA Astrophysics Data System (ADS)
McInerney, David; Thyer, Mark; Kavetski, Dmitri; Kuczera, George
2017-04-01
This study provides guidance that enables hydrological researchers to deliver probabilistic predictions of daily streamflow with the best reliability and precision for different catchment types (e.g., high/low degree of ephemerality). Reliable and precise probabilistic prediction of daily catchment-scale streamflow requires statistical characterization of residual errors of hydrological models. It is well known that hydrological model residual errors are heteroscedastic, i.e., there is a pattern of larger errors in higher streamflow predictions. Although multiple approaches exist for representing this heteroscedasticity, few studies have undertaken a comprehensive evaluation and comparison of these approaches. This study fills this research gap by evaluating 8 common residual error schemes, including standard and weighted least squares, the Box-Cox transformation (with fixed and calibrated power parameter, lambda) and the log-sinh transformation. Case studies include 17 perennial and 6 ephemeral catchments in Australia and the USA, and two lumped hydrological models. We find the choice of heteroscedastic error modelling approach significantly impacts predictive performance, though no single scheme simultaneously optimizes all performance metrics. The set of Pareto optimal schemes, reflecting performance trade-offs, comprises Box-Cox schemes with lambda of 0.2 and 0.5, and the log scheme (lambda=0, perennial catchments only). These schemes significantly outperform even the average-performing remaining schemes (e.g., across ephemeral catchments, median precision tightens from 105% to 40% of observed streamflow, and median biases decrease from 25% to 4%). Theoretical interpretations of the empirical results highlight the importance of capturing the skew/kurtosis of raw residuals and reproducing zero flows. Recommendations for researchers and practitioners seeking robust residual error schemes for practical work are provided.
NASA Astrophysics Data System (ADS)
McInerney, David; Thyer, Mark; Kavetski, Dmitri; Lerat, Julien; Kuczera, George
2017-03-01
Reliable and precise probabilistic prediction of daily catchment-scale streamflow requires statistical characterization of residual errors of hydrological models. This study focuses on approaches for representing error heteroscedasticity with respect to simulated streamflow, i.e., the pattern of larger errors in higher streamflow predictions. We evaluate eight common residual error schemes, including standard and weighted least squares, the Box-Cox transformation (with fixed and calibrated power parameter λ) and the log-sinh transformation. Case studies include 17 perennial and 6 ephemeral catchments in Australia and the United States, and two lumped hydrological models. Performance is quantified using predictive reliability, precision, and volumetric bias metrics. We find the choice of heteroscedastic error modeling approach significantly impacts predictive performance, though no single scheme simultaneously optimizes all performance metrics. The set of Pareto optimal schemes, reflecting performance trade-offs, comprises Box-Cox schemes with λ of 0.2 and 0.5, and the log scheme (λ = 0, perennial catchments only). These schemes significantly outperform even the average-performing remaining schemes (e.g., across ephemeral catchments, median precision tightens from 105% to 40% of observed streamflow, and median biases decrease from 25% to 4%). Theoretical interpretations of empirical results highlight the importance of capturing the skew/kurtosis of raw residuals and reproducing zero flows. Paradoxically, calibration of λ is often counterproductive: in perennial catchments, it tends to overfit low flows at the expense of abysmal precision in high flows. The log-sinh transformation is dominated by the simpler Pareto optimal schemes listed above. Recommendations for researchers and practitioners seeking robust residual error schemes for practical work are provided.
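The Box-Cox scheme referred to above has a compact closed form, z = (y^λ - 1)/λ, with λ = 0 reducing to log(y); transforming flows before computing residuals shrinks high-flow errors relative to low-flow errors, i.e., it absorbs heteroscedasticity. A sketch with hypothetical flow values:

```python
import numpy as np

def box_cox(y, lam):
    """Box-Cox transform: (y**lam - 1)/lam, with the lam = 0 limit log(y)."""
    y = np.asarray(y, float)
    if lam == 0.0:
        return np.log(y)
    return (y ** lam - 1.0) / lam

obs = np.array([0.5, 2.0, 10.0, 80.0])    # hypothetical observed flows (m^3/s)
sim = np.array([0.6, 1.5, 12.0, 100.0])   # hypothetical simulated flows

# Smaller lam compresses the large-flow residuals more aggressively.
for lam in (1.0, 0.5, 0.2, 0.0):
    resid = box_cox(obs, lam) - box_cox(sim, lam)
    print(lam, np.round(resid, 3))
```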
Giuliani, Manuel; Mirnig, Nicole; Stollnberger, Gerald; Stadler, Susanne; Buchner, Roland; Tscheligi, Manfred
2015-01-01
Human-robot interactions are often affected by error situations that are caused by either the robot or the human. Therefore, robots would profit from the ability to recognize when error situations occur. We investigated the verbal and non-verbal social signals that humans show when error situations occur in human-robot interaction experiments. For that, we analyzed 201 videos of five human-robot interaction user studies with varying tasks from four independent projects. The analysis shows that there are two types of error situations: social norm violations and technical failures. Social norm violations are situations in which the robot does not adhere to the underlying social script of the interaction. Technical failures are caused by technical shortcomings of the robot. The results of the video analysis show that the study participants use many head movements and very few gestures, but they often smile, when in an error situation with the robot. Another result is that the participants sometimes stop moving at the beginning of error situations. We also found that the participants talked more in the case of social norm violations and less during technical failures. Finally, the participants use fewer non-verbal social signals (for example smiling, nodding, and head shaking) when they are interacting with the robot alone and no experimenter or other human is present. The results suggest that participants do not see the robot as a social interaction partner with comparable communication skills. Our findings have implications for builders and evaluators of human-robot interaction systems. The builders should consider adding modules for the recognition and classification of head movements to the robot's input channels. The evaluators need to make sure that the presence of an experimenter does not skew the results of their user studies.
McGregor, Heather R.; Pun, Henry C. H.; Buckingham, Gavin; Gribble, Paul L.
2016-01-01
The human sensorimotor system is routinely capable of making accurate predictions about an object's weight, which allows for energetically efficient lifts and prevents objects from being dropped. Often, however, poor predictions arise when the weight of an object can vary and sensory cues about object weight are sparse (e.g., picking up an opaque water bottle). The question arises, what strategies does the sensorimotor system use to make weight predictions when one is dealing with an object whose weight may vary? For example, does the sensorimotor system use a strategy that minimizes prediction error (minimal squared error) or one that selects the weight that is most likely to be correct (maximum a posteriori)? In this study we dissociated the predictions of these two strategies by having participants lift an object whose weight varied according to a skewed probability distribution. We found, using a small range of weight uncertainty, that four indexes of sensorimotor prediction (grip force rate, grip force, load force rate, and load force) were consistent with a feedforward strategy that minimizes the square of prediction errors. These findings match research in the visuomotor system, suggesting parallels in underlying processes. We interpret our findings within a Bayesian framework and discuss the potential benefits of using a minimal squared error strategy. NEW & NOTEWORTHY Using a novel experimental model of object lifting, we tested whether the sensorimotor system models the weight of objects by minimizing lifting errors or by selecting the statistically most likely weight. We found that the sensorimotor system minimizes the square of prediction errors for object lifting. This parallels the results of studies that investigated visually guided reaching, suggesting an overlap in the underlying mechanisms between tasks that involve different sensory systems. PMID:27760821
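The two strategies dissociated by the study make different point predictions for a skewed weight distribution; a toy numerical illustration with invented weights and probabilities (not the study's stimuli):

```python
import numpy as np

# Hypothetical skewed distribution of possible object weights.
weights = np.array([200.0, 300.0, 600.0])   # grams, invented values
probs = np.array([0.1, 0.7, 0.2])           # right-skewed probabilities

def expected_sq_error(guess):
    """Expected squared prediction error for a given weight guess."""
    return float(np.sum(probs * (weights - guess) ** 2))

mean_w = float(np.sum(probs * weights))   # minimal-squared-error prediction
mode_w = float(weights[np.argmax(probs)]) # maximum a posteriori prediction

print("MSE-minimizing guess (mean):", mean_w)
print("MAP guess (mode):", mode_w)
print(expected_sq_error(mean_w), expected_sq_error(mode_w))
```

Because the mean and mode differ under skew, lifting forces scaled to one or the other are distinguishable, which is the experimental lever the authors used.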
Weak interactions, omnivory and emergent food-web properties.
Emmerson, Mark; Yearsley, Jon M
2004-02-22
Empirical studies have shown that, in real ecosystems, species-interaction strengths are generally skewed in their distribution towards weak interactions. Some theoretical work also suggests that weak interactions, especially in omnivorous links, are important for the local stability of a community at equilibrium. However, the majority of theoretical studies use uniform distributions of interaction strengths to generate artificial communities for study. We investigate the effects of the underlying interaction-strength distribution upon the return time, permanence and feasibility of simple Lotka-Volterra equilibrium communities. We show that a skew towards weak interactions promotes local and global stability only when omnivory is present. It is found that skewed interaction strengths are an emergent property of stable omnivorous communities, and that this skew towards weak interactions creates a dynamic constraint maintaining omnivory. Omnivory is more likely to occur when omnivorous interactions are skewed towards weak interactions. However, a skew towards weak interactions increases the return time to equilibrium, delays the recovery of ecosystems and hence decreases the stability of a community. When no skew is imposed, the set of stable omnivorous communities shows an emergent distribution of skewed interaction strengths. Our results apply to both local and global concepts of stability and are robust to the definition of a feasible community. These results are discussed in the light of empirical data and other theoretical studies, in conjunction with their broader implications for community assembly.
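The kind of numerical experiment described can be sketched generically. The code below draws random community (Jacobian) matrices whose off-diagonal interaction strengths come either from a uniform distribution or from a right-skewed (exponential) one with the same mean, and records how often the community is locally stable; the sampler settings are illustrative, and this is not the authors' Lotka-Volterra permanence analysis:

```python
import numpy as np

rng = np.random.default_rng(4)

def stable_fraction(sampler, n_species=8, trials=500):
    """Fraction of random communities whose Jacobian has all eigenvalues
    with negative real part (local asymptotic stability)."""
    stable = 0
    for _ in range(trials):
        A = sampler((n_species, n_species))
        A *= rng.choice([-1.0, 1.0], size=A.shape)   # random interaction signs
        np.fill_diagonal(A, -1.0)                    # self-regulation
        if np.max(np.linalg.eigvals(A).real) < 0:
            stable += 1
    return stable / trials

uniform = lambda shape: rng.uniform(0.0, 0.4, size=shape)    # mean 0.2
skewed = lambda shape: rng.exponential(0.2, size=shape)      # mean 0.2, skewed

print("uniform interaction strengths:", stable_fraction(uniform))
print("skewed interaction strengths: ", stable_fraction(skewed))
```

The paper's point is that such comparisons depend on whether omnivorous links are present, which a full study would encode in the sign structure of A rather than in fully random signs.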
Muers, Mary R; Sharpe, Jacqueline A; Garrick, David; Sloane-Stanley, Jacqueline; Nolan, Patrick M; Hacker, Terry; Wood, William G; Higgs, Douglas R; Gibbons, Richard J
2007-06-01
Extreme skewing of X-chromosome inactivation (XCI) is rare in the normal female population but is observed frequently in carriers of some X-linked mutations. Recently, it has been shown that various forms of X-linked mental retardation (XLMR) have a strong association with skewed XCI in female carriers, but the mechanisms underlying this skewing are unknown. ATR-X syndrome, caused by mutations in a ubiquitously expressed, chromatin-associated protein, provides a clear example of XLMR in which phenotypically normal female carriers virtually all have highly skewed XCI biased against the X chromosome that harbors the mutant allele. Here, we have used a mouse model to understand the processes causing skewed XCI. In female mice heterozygous for a null Atrx allele, we found that XCI is balanced early in embryogenesis but becomes skewed over the course of development, because of selection favoring cells expressing the wild-type Atrx allele. Unexpectedly, selection does not appear to be the result of general cellular-viability defects in Atrx-deficient cells, since it is restricted to specific stages of development and is not ongoing throughout the life of the animal. Instead, there is evidence that selection results from independent tissue-specific effects. This illustrates an important mechanism by which skewed XCI may occur in carriers of XLMR and provides insight into the normal role of ATRX in regulating cell fate.
Sociality, mating system and reproductive skew in marmots: evidence and hypotheses.
Allainé
2000-10-05
Marmot species exhibit a great diversity of social structure, mating systems and reproductive skew. In particular, among the social species (i.e. all except Marmota monax), the yellow-bellied marmot appears quite different from the others. The yellow-bellied marmot is primarily polygynous with an intermediate level of sociality and low reproductive skew among females. In contrast, all other social marmot species are mainly monogamous, highly social and with marked reproductive skew among females. To understand the evolution of this difference in reproductive skew, I examined four possible explanations identified from reproductive skew theory. From the literature, I then reviewed evidence to investigate if marmot species differ in: (1) the ability of dominants to control the reproduction of subordinates; (2) the degree of relatedness between group members; (3) the benefit for subordinates of remaining in the social group; and (4) the benefit for dominants of retaining subordinates. I found that the optimal skew hypothesis may apply for both sets of species. I suggest that yellow-bellied marmot females may benefit from retaining subordinate females and in return have to concede them reproduction. On the contrary, monogamous marmot species may gain by suppressing the reproduction of subordinate females to maximise the efficiency of social thermoregulation, even at the risk of departure of subordinate females from the family group. Finally, I discuss scenarios for the simultaneous evolution of sociality, monogamy and reproductive skew in marmots.
Comparison of conventional and novel quadrupole drift tube magnets inspired by Klaus Halbach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Feinberg, B.
1995-02-01
Quadrupole drift tube magnets for a heavy-ion linac provide a demanding application of magnet technology. A comparison is made of three different solutions to the problem of providing an adjustable high-field-strength quadrupole magnet in a small volume. A conventional tape-wound electromagnet quadrupole magnet (conventional) is compared with an adjustable permanent-magnet/iron quadrupole magnet (hybrid) and a laced permanent-magnet/iron/electromagnet (laced). Data is presented from magnets constructed for the SuperHILAC heavy-ion linear accelerator, and conclusions are drawn for various applications.
Earthquake fragility assessment of curved and skewed bridges in Mountain West region.
DOT National Transportation Integrated Search
2016-09-01
Reinforced concrete (RC) bridges with both skew and curvature are common in areas with complex terrains. Skewed and/or curved bridges were found in existing studies to exhibit more complicated seismic performance than straight bridges, however th...
DOT National Transportation Integrated Search
2016-09-01
Earthquake Fragility Assessment of Curved and Skewed Bridges in Mountain West Region: Reinforced concrete bridges with both skew and curvature are common in areas with complex terrains. These bridges are irregular ...
Mapping of quantitative trait loci using the skew-normal distribution.
Fernandes, Elisabete; Pacheco, António; Penha-Gonçalves, Carlos
2007-11-01
In standard interval mapping (IM) of quantitative trait loci (QTL), the QTL effect is described by a normal mixture model. When this assumption of normality is violated, the most commonly adopted strategy is to use the previous model after data transformation. However, an appropriate transformation may not exist or may be difficult to find. Also this approach can raise interpretation issues. An interesting alternative is to consider a skew-normal mixture model in standard IM, and the resulting method is here denoted as skew-normal IM. This flexible model that includes the usual symmetric normal distribution as a special case is important, allowing continuous variation from normality to non-normality. In this paper we briefly introduce the main peculiarities of the skew-normal distribution. The maximum likelihood estimates of parameters of the skew-normal distribution are obtained by the expectation-maximization (EM) algorithm. The proposed model is illustrated with real data from an intercross experiment that shows a significant departure from the normality assumption. The performance of the skew-normal IM is assessed via stochastic simulation. The results indicate that the skew-normal IM has higher power for QTL detection and better precision of QTL location as compared to standard IM and nonparametric IM.
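A minimal sketch of the density underlying skew-normal IM: the Azzalini skew-normal with location, scale, and shape parameters (this standard parametrization is assumed here, not taken from the paper). Shape 0 recovers the symmetric normal as a special case, allowing continuous variation from normality to non-normality.

```python
import math

def skew_normal_pdf(x, loc=0.0, scale=1.0, shape=0.0):
    """Azzalini density: (2/scale) * phi(z) * Phi(shape * z), z = (x-loc)/scale."""
    z = (x - loc) / scale
    phi = math.exp(-0.5 * z * z) / math.sqrt(2.0 * math.pi)   # standard normal pdf
    big_phi = 0.5 * (1.0 + math.erf(shape * z / math.sqrt(2.0)))  # normal cdf
    return 2.0 / scale * phi * big_phi

# shape = 0 gives the standard normal density at 0: 1/sqrt(2*pi) ~ 0.3989
print(skew_normal_pdf(0.0))
# A positive shape pushes mass to the right: more density at +1 than at -1.
print(skew_normal_pdf(1.0, shape=4.0) > skew_normal_pdf(-1.0, shape=4.0))
```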
Arc voltage distribution skewness as an indicator of electrode gap during vacuum arc remelting
Williamson, Rodney L.; Zanner, Frank J.; Grose, Stephen M.
1998-01-01
The electrode gap of a VAR is monitored by determining the skewness of a distribution of gap voltage measurements. A decrease in skewness indicates an increase in gap and may be used to control the gap.
System and method for adaptively deskewing parallel data signals relative to a clock
Jenkins, Philip Nord [Eau Claire, WI; Cornett, Frank N [Chippewa Falls, WI
2008-10-07
A system and method of reducing skew between a plurality of signals transmitted with a transmit clock is described. Skew is detected between the received transmit clock and each of the received data signals. Delay is added to the clock or to one or more of the plurality of data signals to compensate for the detected skew. The delay added to each of the plurality of delayed signals is updated to adapt to changes in detected skew.
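A hypothetical software analogue of the claimed method (the patent describes hardware; the training pattern and the list rotation standing in for a programmable delay line are illustrative assumptions): detect the lag of a received lane against a known reference pattern, then delay the lane by the opposite amount to realign it.

```python
# Barker-13 code: its cyclic autocorrelation has a sharp, unique peak,
# which makes the lag estimate unambiguous.
BARKER13 = [1, 1, 1, 1, 1, -1, -1, 1, 1, -1, 1, -1, 1]

def detect_skew(reference, lane, max_lag=6):
    """Lag (in samples) at which the lane best correlates with the reference."""
    n = len(reference)
    def score(lag):
        return sum(reference[i] * lane[(i + lag) % n] for i in range(n))
    return max(range(-max_lag, max_lag + 1), key=score)

def compensate(lane, lag):
    """Shift the lane by the detected lag so it realigns with the reference."""
    n = len(lane)
    return [lane[(i + lag) % n] for i in range(n)]

skewed = BARKER13[3:] + BARKER13[:3]          # lane rotated by 3 samples
lag = detect_skew(BARKER13, skewed)
realigned = compensate(skewed, lag)
print(lag, realigned == BARKER13)
```

Repeating the detect/compensate cycle on live data corresponds to the adaptive updating the abstract describes.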
System and method for adaptively deskewing parallel data signals relative to a clock
Jenkins, Philip Nord [Redwood Shores, CA; Cornett, Frank N [Chippewa Falls, WI
2011-10-04
A system and method of reducing skew between a plurality of signals transmitted with a transmit clock is described. Skew is detected between the received transmit clock and each of the received data signals. Delay is added to the clock or to one or more of the plurality of data signals to compensate for the detected skew. The delay added to each of the plurality of delayed signals is updated to adapt to changes in detected skew.
Currie, L A
2001-07-01
Three general classes of skewed data distributions have been encountered in research on background radiation, chemical and radiochemical blanks, and low levels of 85Kr and 14C in the atmosphere and the cryosphere. The first class of skewed data can be considered to be theoretically, or fundamentally, skewed. It is typified by the exponential distribution of inter-arrival times for nuclear counting events for a Poisson process. As part of a study of the nature of low-level (anti-coincidence) Geiger-Muller counter background radiation, tests were performed on the Poisson distribution of counts, the uniform distribution of arrival times, and the exponential distribution of inter-arrival times. The real laboratory system, of course, failed the (inter-arrival time) test, for very interesting reasons linked to the physics of the measurement process. The second, computationally skewed, class relates to skewness induced by non-linear transformations. It is illustrated by non-linear concentration estimates from inverse calibration, and bivariate blank corrections for low-level 14C-12C aerosol data that led to highly asymmetric uncertainty intervals for the biomass carbon contribution to urban "soot". The third, environmentally skewed, class relates to a universal problem for the detection of excursions above blank or baseline levels: namely, the widespread occurrence of abnormal distributions of environmental and laboratory blanks. This is illustrated by the search for fundamental factors that lurk behind skewed frequency distributions of sulfur laboratory blanks and 85Kr environmental baselines, and the application of robust statistical procedures for reliable detection decisions in the face of skewed isotopic carbon procedural blanks with few degrees of freedom.
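The first, "fundamentally skewed" class can be illustrated with a short simulation (the count rate and sample size are invented): inter-arrival times of a Poisson process follow an exponential distribution, which is strongly right-skewed (theoretical skewness 2) even when the measurement itself is flawless.

```python
import random
import statistics

random.seed(20010701)
rate = 5.0                                       # mean count rate, events/s
gaps = [random.expovariate(rate) for _ in range(100_000)]  # inter-arrival times

mean = statistics.fmean(gaps)                    # expect 1/rate = 0.2 s
std = statistics.pstdev(gaps)                    # exponential: std = mean
m3 = sum((g - mean) ** 3 for g in gaps) / len(gaps)
skewness = m3 / std ** 3                         # expect 2 for an exponential

print(round(mean, 3), round(std, 3), round(skewness, 2))
```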
Asymmetric skew Bessel processes and their applications to finance
NASA Astrophysics Data System (ADS)
Decamps, Marc; Goovaerts, Marc; Schoutens, Wim
2006-02-01
In this paper, we extend Harrison and Shepp's (1981) construction of the skew Brownian motion and obtain a diffusion similar to the two-dimensional Bessel process with speed and scale densities discontinuous at one point. Natural generalizations to multi-dimensional and fractional order Bessel processes are then discussed, as well as invariance properties. We call this family of diffusions asymmetric skew Bessel processes, in opposition to skew Bessel processes as defined in Barlow et al. [On Walsh's Brownian motions, Séminaire de Probabilités XXIII, Lecture Notes in Mathematics, vol. 1372, Springer, Berlin, New York, 1989, pp. 275-293]. We present factorizations involving (asymmetric skew) Bessel processes with random time. Finally, applications to the valuation of perpetuities and Asian options are proposed.
Miniature micromachined quadrupole mass spectrometer array and method of making the same
NASA Technical Reports Server (NTRS)
Chutjian, Ara (Inventor); Brennen, Reid A. (Inventor); Hecht, Michael (Inventor); Wiberg, Dean (Inventor); Orient, Otto (Inventor)
2001-01-01
The present invention provides a quadrupole mass spectrometer and an ion filter for use in the quadrupole mass spectrometer. The ion filter includes a thin patterned layer including a two-dimensional array of poles forming one or more quadrupoles. The patterned layer design permits the use of very short poles with a very dense spacing of the poles, so that the ion filter may be made very small. Also provided is a method for making the ion filter and the quadrupole mass spectrometer. The method involves forming the patterned layer of the ion filter in such a way that as the poles of the patterned layer are formed, they have the relative positioning and alignment for use in a final quadrupole mass spectrometer device.
Miniature micromachined quadrupole mass spectrometer array and method of making the same
NASA Technical Reports Server (NTRS)
Fuerstenau, Stephen D. (Inventor); Yee, Karl Y. (Inventor); Chutjian, Ara (Inventor); Orient, Otto J. (Inventor); Rice, John T. (Inventor)
2002-01-01
The present invention provides a quadrupole mass spectrometer and an ion filter, or pole array, for use in the quadrupole mass spectrometer. The ion filter includes a thin patterned layer including a two-dimensional array of poles forming one or more quadrupoles. The patterned layer design permits the use of very short poles with a very dense spacing of the poles, so that the ion filter may be made very small. Also provided is a method for making the ion filter and the quadrupole mass spectrometer. The method involves forming the patterned layer of the ion filter in such a way that as the poles of the patterned layer are formed, they have the relative positioning and alignment for use in a final quadrupole mass spectrometer device.
Miniature micromachined quadrupole mass spectrometer array and method of making the same
NASA Technical Reports Server (NTRS)
Chutjian, Ara (Inventor); Rice, John T. (Inventor); Fuerstenau, Stephen D. (Inventor); Orient, Otto J. (Inventor); Yee, Karl Y. (Inventor)
2000-01-01
The present invention provides a quadrupole mass spectrometer and an ion filter, or pole array, for use in the quadrupole mass spectrometer. The ion filter includes a thin patterned layer including a two-dimensional array of poles forming one or more quadrupoles. The patterned layer design permits the use of very short poles with a very dense spacing of the poles, so that the ion filter may be made very small. Also provided is a method for making the ion filter and the quadrupole mass spectrometer. The method involves forming the patterned layer of the ion filter in such a way that as the poles of the patterned layer are formed, they have the relative positioning and alignment for use in a final quadrupole mass spectrometer device.
Miniature micromachined quadrupole mass spectrometer array and method of making the same
NASA Technical Reports Server (NTRS)
Yee, Karl Y. (Inventor); Fuerstenau, Stephen D. (Inventor); Orient, Otto J. (Inventor); Rice, John T. (Inventor); Chutjian, Ara (Inventor)
2001-01-01
The present invention provides a quadrupole mass spectrometer and an ion filter, or pole array, for use in the quadrupole mass spectrometer. The ion filter includes a thin patterned layer including a two-dimensional array of poles forming one or more quadrupoles. The patterned layer design permits the use of very short poles with a very dense spacing of the poles, so that the ion filter may be made very small. Also provided is a method for making the ion filter and the quadrupole mass spectrometer. The method involves forming the patterned layer of the ion filter in such a way that as the poles of the patterned layer are formed, they have the relative positioning and alignment for use in a final quadrupole mass spectrometer device.
Miniature micromachined quadrupole mass spectrometer array and method of making the same
NASA Technical Reports Server (NTRS)
Hecht, Michael (Inventor); Wiberg, Dean (Inventor); Orient, Otto (Inventor); Brennen, Reid A. (Inventor); Chutjian, Ara (Inventor)
2001-01-01
The present invention provides a quadrupole mass spectrometer and an ion filter for use in the quadrupole mass spectrometer. The ion filter includes a thin patterned layer including a two-dimensional array of poles forming one or more quadrupoles. The patterned layer design permits the use of very short poles with a very dense spacing of the poles, so that the ion filter may be made very small. Also provided is a method for making the ion filter and the quadrupole mass spectrometer. The method involves forming the patterned layer of the ion filter in such a way that as the poles of the patterned layer are formed, they have the relative positioning and alignment for use in a final quadrupole mass spectrometer device.
Miniature micromachined quadrupole mass spectrometer array and method of making the same
NASA Technical Reports Server (NTRS)
Orient, Otto (Inventor); Wiberg, Dean (Inventor); Brennen, Reid A. (Inventor); Hecht, Michael (Inventor); Chutjian, Ara (Inventor)
2000-01-01
The present invention provides a quadrupole mass spectrometer and an ion filter for use in the quadrupole mass spectrometer. The ion filter includes a thin patterned layer including a two-dimensional array of poles forming one or more quadrupoles. The patterned layer design permits the use of very short poles with a very dense spacing of the poles, so that the ion filter may be made very small. Also provided is a method for making the ion filter and the quadrupole mass spectrometer. The method involves forming the patterned layer of the ion filter in such a way that as the poles of the patterned layer are formed, they have the relative positioning and alignment for use in a final quadrupole mass spectrometer device.
Arc voltage distribution skewness as an indicator of electrode gap during vacuum arc remelting
Williamson, R.L.; Zanner, F.J.; Grose, S.M.
1998-01-13
The electrode gap of a VAR is monitored by determining the skewness of a distribution of gap voltage measurements. A decrease in skewness indicates an increase in gap and may be used to control the gap.
DOT National Transportation Integrated Search
2014-05-01
Different problems in straight skewed steel I-girder bridges are often associated with the methods used for detailing the cross-frames. Use of theoretical terms to describe these detailing methods and absence of complete and simplified design approac...
NASA Astrophysics Data System (ADS)
Sabato, L.; Arpaia, P.; Cianchi, A.; Liccardo, A.; Mostacci, A.; Palumbo, L.; Variola, A.
2018-02-01
In high-brightness LINear ACcelerators (LINACs), electron bunch length can be measured indirectly by a radio frequency deflector (RFD). In this paper, the accuracy loss arising from non-negligible correlations between particle longitudinal positions and the transverse plane (in particular the vertical one) at the RFD entrance is analytically assessed. Theoretical predictions are compared with simulation results, obtained by means of the ELEctron Generation ANd Tracking (ELEGANT) code, in the case study of the Gamma Beam System (GBS) at the Extreme Light Infrastructure - Nuclear Physics (ELI-NP). In particular, the relative error affecting the bunch length measurement, for bunches characterized by both energy chirp and fixed correlation coefficients between longitudinal particle positions and the vertical plane, is reported. Moreover, the relative error versus the correlation coefficients is shown for fixed RFD phases of 0 rad and π rad. The relationship between the relative error and the correlation coefficients can inform the decision to use the bunch length measurement technique with one or two vertical spot size measurements in order to cancel the correlation contribution. In the case of the GBS electron LINAC, a misalignment of one of the quadrupoles before the RFD between -2 mm and 2 mm leads to a relative error of less than 5%. A misalignment of the first C-band accelerating section between -2 mm and 2 mm could lead to a relative error of up to 10%.
INJECTION OPTICS FOR THE JLEIC ION COLLIDER RING
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morozov, Vasiliy; Derbenev, Yaroslav; Lin, Fanglei
2016-05-01
The Jefferson Lab Electron-Ion Collider (JLEIC) will accelerate protons and ions from 8 GeV to 100 GeV. A very low beta function at the Interaction Point (IP) is needed to achieve the required luminosity. One consequence of the low beta optics is that the beta function in the final focusing (FF) quadrupoles is extremely high. This leads to a large beam size in these magnets as well as strong sensitivity to errors which limits the dynamic aperture. These effects are stronger at injection energy where the beam size is maximum, and therefore very large aperture FF magnets are required to allow a large dynamic aperture. A standard solution is a relaxed injection optics with an IP beta function large enough to provide a reasonable FF aperture. This also reduces the effects of FF errors, resulting in a larger dynamic aperture at injection. We describe the ion ring injection optics design as well as a beta-squeeze transition from the injection to collision optics.
NASA Astrophysics Data System (ADS)
Kettler, David T.; Prindle, Duncan J.; Trainor, Thomas A.
2015-06-01
Previous measurements of a quadrupole component of azimuth correlations denoted by symbol v2 have been interpreted to represent elliptic flow, a hydrodynamic phenomenon conjectured to play a major role in noncentral nucleus-nucleus collisions. v2 measurements provide the main support for conclusions that a "perfect liquid" is formed in heavy-ion collisions at the Relativistic Heavy Ion Collider. However, conventional v2 methods based on one-dimensional (1D) azimuth correlations give inconsistent results and may include a jet contribution. In some cases the data trends appear to be inconsistent with hydrodynamic interpretations. In this study we distinguish several components of 2D angular correlations and isolate a nonjet (NJ) azimuth quadrupole denoted by v2{2D}. We establish systematic variations of the NJ quadrupole on yt, centrality, and collision energy. We adopt transverse rapidity yt as both a velocity measure and a logarithmic alternative to transverse momentum pt. Based on NJ-quadrupole trends, we derive a completely factorized universal parametrization of quantity v2{2D}(yt, b, √sNN) which describes the centrality, yt, and energy dependence. From yt-differential v2(yt) data we isolate a quadrupole spectrum and infer a quadrupole source boost having unexpected properties. NJ-quadrupole v2 trends obtained with 2D model fits are remarkably simple. The centrality trend appears to be uncorrelated with a sharp transition in jet-related structure that may indicate rapid change of Au-Au medium properties. The lack of correspondence suggests that the NJ quadrupole may be insensitive to such a medium. Several quadrupole trends have interesting implications for hydro interpretations.
Nuclear quadrupole resonance studies in semi-metallic structures
NASA Technical Reports Server (NTRS)
Murty, A. N.
1974-01-01
Both experimental and theoretical studies are presented on spectrum analysis of nuclear quadrupole resonance of antimony and arsenic tellurides. Numerical solutions for secular equations of the quadrupole interaction energy are also discussed.
Measuring Skewness: A Forgotten Statistic?
ERIC Educational Resources Information Center
Doane, David P.; Seward, Lori E.
2011-01-01
This paper discusses common approaches to presenting the topic of skewness in the classroom, and explains why students need to know how to measure it. Two skewness statistics are examined: the Fisher-Pearson standardized third moment coefficient, and the Pearson 2 coefficient that compares the mean and median. The former is reported in statistical…
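The two statistics the paper compares can be written down in a few lines (the sample data below are illustrative): the Fisher-Pearson standardized third moment g1 = m3 / m2**1.5, and the Pearson 2 coefficient 3 * (mean - median) / s.

```python
import statistics

def fisher_pearson(xs):
    """Standardized third moment: g1 = m3 / m2**1.5."""
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n   # second central moment
    m3 = sum((x - mean) ** 3 for x in xs) / n   # third central moment
    return m3 / m2 ** 1.5

def pearson2(xs):
    """Pearson 2 coefficient: compares the mean and median."""
    mean = sum(xs) / len(xs)
    return 3.0 * (mean - statistics.median(xs)) / statistics.stdev(xs)

symmetric = [1, 2, 3, 4, 5]
right_skewed = [1, 2, 2, 3, 3, 3, 4, 4, 5, 12]   # one large outlier

print(fisher_pearson(symmetric))                 # 0.0 for a symmetric sample
print(fisher_pearson(right_skewed) > 0, pearson2(right_skewed) > 0)
```

Both coefficients are positive for the right-skewed sample, though they generally take different values because they weight the tail differently.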
Learning a Novel Pattern through Balanced and Skewed Input
ERIC Educational Resources Information Center
McDonough, Kim; Trofimovich, Pavel
2013-01-01
This study compared the effectiveness of balanced and skewed input at facilitating the acquisition of the transitive construction in Esperanto, characterized by the accusative suffix "-n" and variable word order (SVO, OVS). Thai university students (N = 98) listened to 24 sentences under skewed (one noun with high token frequency) or…
NASA Technical Reports Server (NTRS)
Deissler, Robert G.
1990-01-01
The variation of the velocity-derivative skewness of a Navier-Stokes flow as the Reynolds number goes toward zero is calculated numerically. The value of the skewness, which has been somewhat controversial, is shown to become small at low Reynolds numbers.
Investigation of free vibration characteristics for skew multiphase magneto-electro-elastic plate
NASA Astrophysics Data System (ADS)
Kiran, M. C.; Kattimani, S.
2018-04-01
This article presents an investigation of the skew multiphase magneto-electro-elastic (MMEE) plate to assess its free vibration characteristics. A finite element (FE) model is formulated considering the different couplings involved via coupled constitutive equations. The transformation matrices are derived to transform the local degrees of freedom into the global degrees of freedom for the nodes lying on the skew edges. The effect of different volume fractions (Vf) on the free vibration behavior is explicitly studied. In addition, the influence of the width-to-thickness ratio, the aspect ratio, and the stacking arrangement on the natural frequencies of the skew multiphase MEE plate is investigated. Particular attention is paid to the effect of the skew angle on the non-dimensional eigenfrequencies of the multiphase MEE plate with simply supported edges.
Skew information in the XY model with staggered Dzyaloshinskii-Moriya interaction
NASA Astrophysics Data System (ADS)
Qiu, Liang; Quan, Dongxiao; Pan, Fei; Liu, Zhi
2017-06-01
We study the performance of the lower bound of skew information in the vicinity of transition point for the anisotropic spin-1/2 XY chain with staggered Dzyaloshinskii-Moriya interaction by use of quantum renormalization-group method. For a fixed value of the Dzyaloshinskii-Moriya interaction, there are two saturated values for the lower bound of skew information corresponding to the spin-fluid and Néel phases, respectively. The scaling exponent of the lower bound of skew information closely relates to the correlation length of the model and the Dzyaloshinskii-Moriya interaction shifts the factorization point. Our results show that the lower bound of skew information can be a good candidate to detect the critical point of XY spin chain with staggered Dzyaloshinskii-Moriya interaction.
Waldinger, Marcel D; Zwinderman, Aeilko H; Olivier, Berend; Schweitzer, Dave H
2008-02-01
The intravaginal ejaculation latency time (IELT) behaves in a skewed manner and needs the appropriate statistics for correct interpretation of treatment results. To explain the correct use of geometric mean IELT values and the fold increase of the geometric mean IELT, given the positively skewed IELT distribution. Linking theoretical arguments to the outcome of several selective serotonin reuptake inhibitor and modern antidepressant study results. Geometric mean IELT and fold increase of geometric mean IELT. Log-transforming each separate IELT measurement of each individual man is the basis for the calculation of the geometric mean IELT. A drug-induced positively skewed IELT distribution necessitates the calculation of the geometric mean IELTs at baseline and during drug treatment. In a positively skewed IELT distribution, the use of the "arithmetic" mean IELT risks an overestimation of the drug-induced ejaculation delay, as the mean IELT is always higher than the geometric mean IELT. Strong ejaculation-delaying drugs give rise to a strongly positively skewed IELT distribution, whereas weak ejaculation-delaying drugs give rise to (much) less skewed IELT distributions. Ejaculation delay is expressed as the fold increase of the geometric mean IELT. Drug-induced ejaculatory performance discloses a positively skewed IELT distribution, requiring the use of the geometric mean IELT and the fold increase of the geometric mean IELT.
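The calculation the abstract prescribes, sketched with invented IELT values (seconds): log-transform each measurement, average, exponentiate, and express the treatment effect as the fold increase of the geometric mean.

```python
import math
import statistics

def geometric_mean(values):
    """exp of the mean of the log-transformed measurements."""
    return math.exp(sum(math.log(v) for v in values) / len(values))

baseline_ielt = [30, 45, 60, 90, 120, 300]        # right-skewed, illustrative
on_drug_ielt = [120, 180, 240, 400, 600, 3600]    # more strongly skewed

gm_base = geometric_mean(baseline_ielt)
gm_drug = geometric_mean(on_drug_ielt)
fold_increase = gm_drug / gm_base                 # drug-induced delay

# The arithmetic mean overstates the delay relative to the geometric mean:
print(statistics.fmean(on_drug_ielt) > gm_drug)   # True
print(round(fold_increase, 1))
```

The single extreme value (3600 s) pulls the arithmetic mean far above the geometric mean, which is the overestimation risk the abstract warns about.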
Fesharaki, Maryam; Karagiannis, Peter; Tweed, Douglas; Sharpe, James A.; Wong, Agnes M. F.
2016-01-01
Purpose: Skew deviation is a vertical strabismus caused by damage to the otolithic–ocular reflex pathway and is associated with abnormal ocular torsion. This study was conducted to determine whether patients with skew deviation show the normal pattern of three-dimensional eye control called Listing's law, which specifies the eye's torsional angle as a function of its horizontal and vertical position. Methods: Ten patients with skew deviation caused by brain stem or cerebellar lesions and nine normal control subjects were studied. Patients with diplopia and neurologic symptoms less than 1 month in duration were designated as acute (n = 4) and those with longer duration were classified as chronic (n = 10). Serial recordings were made in the four patients with acute skew deviation. With the head immobile, subjects made saccades to a target that moved between straight ahead and eight eccentric positions, while wearing search coils. At each target position, fixation was maintained for 3 seconds before the next saccade. From the eye position data, the plane of best fit, referred to as Listing's plane, was computed. Violations of Listing's law were quantified by computing the "thickness" of this plane, defined as the SD of the distances to the plane from the data points. Results: Both the hypertropic and hypotropic eyes in patients with acute skew deviation violated Listing's and Donders' laws; that is, the eyes did not show one consistent angle of torsion in any given gaze direction, but rather an abnormally wide range of torsional angles. In contrast, each eye in patients with chronic skew deviation obeyed the laws. However, in chronic skew deviation, Listing's planes in both eyes had abnormal orientations.
Conclusions: Patients with acute skew deviation violated Listing's law, whereas those with chronic skew deviation obeyed it, indicating that despite brain lesions, neural adaptation can restore Listing's law so that the neural linkage between horizontal, vertical, and torsional eye position remains intact. Violation of Listing's and Donders' laws during fixation arises primarily from torsional drifts, indicating that patients with acute skew deviation have unstable torsional gaze holding that is independent of their horizontal–vertical eye positions. PMID:18172094
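The plane-thickness metric described in the Methods can be sketched numerically (data invented; fitting torsion as a least-squares linear function of horizontal and vertical position is one plausible reading of "plane of best fit", not necessarily the authors' exact procedure).

```python
import math
import statistics

def fit_plane(points):
    """Solve the 3x3 normal equations for t = a*h + b*v + c by Cramer's rule."""
    shh = sum(h * h for h, v, t in points)
    shv = sum(h * v for h, v, t in points)
    svv = sum(v * v for h, v, t in points)
    sh = sum(h for h, v, t in points)
    sv = sum(v for h, v, t in points)
    sht = sum(h * t for h, v, t in points)
    svt = sum(v * t for h, v, t in points)
    st = sum(t for h, v, t in points)
    n = len(points)
    a_mat = [[shh, shv, sh], [shv, svv, sv], [sh, sv, n]]
    rhs = [sht, svt, st]

    def det3(m):
        return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

    d = det3(a_mat)
    coeffs = []
    for j in range(3):
        mj = [row[:] for row in a_mat]
        for i in range(3):
            mj[i][j] = rhs[i]
        coeffs.append(det3(mj) / d)
    return coeffs  # [a, b, c]

def thickness(points, a, b, c):
    """SD of perpendicular point-to-plane distances (the abstract's metric)."""
    norm = math.sqrt(1.0 + a * a + b * b)
    return statistics.pstdev((t - (a * h + b * v + c)) / norm
                             for h, v, t in points)

# Eye positions lying exactly on the plane t = 0.1*h - 0.2*v + 0.5 (degrees)
# should yield near-zero thickness; torsional drifts would inflate it.
grid = [(h, v, 0.1 * h - 0.2 * v + 0.5)
        for h in (-10, 0, 10) for v in (-10, 0, 10)]
a, b, c = fit_plane(grid)
print(round(a, 3), round(b, 3), round(c, 3))   # 0.1 -0.2 0.5
print(thickness(grid, a, b, c) < 1e-9)         # True
```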
Kreienkamp, Amelia B.; Liu, Lucy Y.; Minkara, Mona S.; Knepley, Matthew G.; Bardhan, Jaydeep P.; Radhakrishnan, Mala L.
2013-01-01
We analyze and suggest improvements to a recently developed approximate continuum-electrostatic model for proteins. The model, called BIBEE/I (boundary-integral based electrostatics estimation with interpolation), was able to estimate electrostatic solvation free energies to within a mean unsigned error of 4% on a test set of more than 600 proteins—a significant improvement over previous BIBEE models. In this work, we tested the BIBEE/I model for its capability to predict residue-by-residue interactions in protein–protein binding, using the widely studied model system of trypsin and bovine pancreatic trypsin inhibitor (BPTI). Finding that the BIBEE/I model performs surprisingly less well in this task than simpler BIBEE models, we seek to explain this behavior in terms of the models’ differing spectral approximations of the exact boundary-integral operator. Calculations of analytically solvable systems (spheres and tri-axial ellipsoids) suggest two possibilities for improvement. The first is a modified BIBEE/I approach that captures the asymptotic eigenvalue limit correctly, and the second involves the dipole and quadrupole modes for ellipsoidal approximations of protein geometries. Our analysis suggests that fast, rigorous approximate models derived from reduced-basis approximation of boundary-integral equations might reach unprecedented accuracy, if the dipole and quadrupole modes can be captured quickly for general shapes. PMID:24466561
Real-time quantitative analysis of H2, He, O2, and Ar by quadrupole ion trap mass spectrometry.
Ottens, Andrew K; Harrison, W W; Griffin, Timothy P; Helms, William R
2002-09-01
The use of a quadrupole ion trap mass spectrometer (QITMS) for quantitative analysis of hydrogen and helium as well as of other permanent gases is demonstrated. Like commercial instruments, the customized QITMS uses mass selective instability; however, this instrument operates at a greater trapping frequency and without a buffer gas. Thus, a useable mass range from 2 to over 50 daltons (Da) is achieved. The performance of the ion trap is evaluated using part-per-million (ppm) concentrations of hydrogen, helium, oxygen, and argon mixed into a nitrogen gas stream, as outlined by the National Aeronautics and Space Administration (NASA), which is interested in monitoring for cryogenic fuel leaks within the Space Shuttle during launch preparations. When quantitating the four analytes, relative accuracy and precision were better than the NASA-required minimum of 10% error and 5% deviation, respectively. Limits of detection were below the NASA requirement of 25-ppm hydrogen and 100-ppm helium; those for oxygen and argon were within the same order of magnitude as the requirements. These results were achieved at a fast data recording rate, and demonstrate the utility of the QITMS as a real-time quantitative monitoring device for permanent gas analysis. (c) 2002 American Society for Mass Spectrometry.
Means and method for the focusing and acceleration of parallel beams of charged particles
Maschke, Alfred W.
1983-07-05
A novel apparatus and method for focusing beams of charged particles comprising planar arrays of electrostatic quadrupoles. The quadrupole arrays may comprise electrodes which are shared by two or more quadrupoles. Such quadrupole arrays are particularly adapted to providing strong focusing forces for high current, high brightness, beams of charged particles, said beams further comprising a plurality of parallel beams, or beamlets, each such beamlet being focused by one quadrupole of the array. Such arrays may be incorporated in various devices wherein beams of charged particles are accelerated or transported, such as linear accelerators, klystron tubes, beam transport lines, etc.
Noise reduction in negative-ion quadrupole mass spectrometry
Chastagner, P.
1993-04-20
A quadrupole mass spectrometer (QMS) system is described having an ion source, quadrupole mass filter, and ion collector/recorder system. A weak, transverse magnetic field and an electron collector are disposed between the quadrupole and ion collector. When operated in negative ion mode, the ion source produces a beam of primarily negatively-charged particles from a sample, including electrons as well as ions. The beam passes through the quadrupole and enters the magnetic field, where the electrons are deflected away from the beam path to the electron collector. The negative ions pass undeflected to the ion collector where they are detected and recorded as a mass spectrum.
Noise reduction in negative-ion quadrupole mass spectrometry
Chastagner, Philippe
1993-01-01
A quadrupole mass spectrometer (QMS) system having an ion source, quadrupole mass filter, and ion collector/recorder system. A weak, transverse magnetic field and an electron collector are disposed between the quadrupole and ion collector. When operated in negative ion mode, the ion source produces a beam of primarily negatively-charged particles from a sample, including electrons as well as ions. The beam passes through the quadrupole and enters the magnetic field, where the electrons are deflected away from the beam path to the electron collector. The negative ions pass undeflected to the ion collector where they are detected and recorded as a mass spectrum.
NASA Astrophysics Data System (ADS)
Ghotbi, Abdoul R.
2014-09-01
The seismic behavior of skewed bridges has not been well studied compared to that of straight bridges. Skewed bridges have shown extensive damage, especially due to deck rotation, shear key failure, abutment unseating and column-bent drift. This research therefore studies the behavior of skewed and straight highway overpass bridges under near-fault ground motions, both with and without the effects of Soil-Structure Interaction (SSI). Because of the several sources of uncertainty associated with the ground motions, soil and structure, a probabilistic approach is needed. Thus, a probabilistic methodology similar to the one developed by the Pacific Earthquake Engineering Research Center (PEER) was utilized to assess the probability of damage at various levels of shaking, using appropriate intensity measures with minimum dispersion. The probabilistic analyses were performed for various bridge configurations and site conditions, including sand ranging from loose to dense and clay ranging from soft to stiff. The results showed that skewed bridges are considerably susceptible to deck rotation and shear key displacement. SSI generally reduced the damage probability for various demands compared to the fixed-base model; however, deck rotation for all soil types, and abutment unseating for very loose sand and soft clay, showed an increased damage probability relative to the fixed-base model. The damage probability for various demands also decreased with increasing soil strength for both sandy and clayey sites. Larger skew angles amplified the seismic response for various demands, and deck rotation was especially sensitive to increases in the skew angle.
Furthermore, abutment unseating showed an increasing trend due to an increase in skew angle for both fixed-base and SSI models.
Wagner, Daniel M.; Krieger, Joshua D.; Veilleux, Andrea G.
2016-08-04
In 2013, the U.S. Geological Survey initiated a study to update regional skew, annual exceedance probability discharges, and regional regression equations used to estimate annual exceedance probability discharges for ungaged locations on streams in the study area with the use of recent geospatial data, new analytical methods, and available annual peak-discharge data through the 2013 water year. An analysis of regional skew using Bayesian weighted least-squares/Bayesian generalized least-squares regression was performed for Arkansas, Louisiana, and parts of Missouri and Oklahoma. The newly developed constant regional skew of -0.17 was used in the computation of annual exceedance probability discharges for 281 streamgages used in the regional regression analysis. Based on analysis of covariance, four flood regions were identified for use in the generation of regional regression models. Thirty-nine basin characteristics were considered as potential explanatory variables, and ordinary least-squares regression techniques were used to determine the optimum combinations of basin characteristics for each of the four regions. Basin characteristics in candidate models were evaluated based on multicollinearity with other basin characteristics (variance inflation factor < 2.5) and statistical significance at the 95-percent confidence level (p ≤ 0.05). Generalized least-squares regression was used to develop the final regression models for each flood region. Average standard errors of prediction of the generalized least-squares models ranged from 32.76 to 59.53 percent, with the largest range in flood region D. Pseudo coefficients of determination of the generalized least-squares models ranged from 90.29 to 97.28 percent, with the largest range also in flood region D. The regional regression equations apply only to locations on streams in Arkansas where annual peak discharges are not substantially affected by regulation, diversion, channelization, backwater, or urbanization. 
The applicability and accuracy of the regional regression equations depend on the basin characteristics measured for an ungaged location on a stream being within range of those used to develop the equations.
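The multicollinearity screen described above (variance inflation factor < 2.5) reduces, for a pair of candidate basin characteristics, to VIF = 1 / (1 - r^2) with r their Pearson correlation. A minimal sketch with hypothetical basin data:

```python
# Sketch of the multicollinearity screen: for two candidate basin
# characteristics the variance inflation factor is VIF = 1 / (1 - r^2),
# where r is their Pearson correlation. Values below the study's 2.5
# cutoff let both variables enter a candidate model. Data are hypothetical.
from statistics import mean

def pearson_r(x, y):
    mx, my = mean(x), mean(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / (sxx * syy) ** 0.5

def vif_two_predictors(x, y):
    r = pearson_r(x, y)
    return 1.0 / (1.0 - r ** 2)

drainage_area = [12.0, 45.0, 88.0, 150.0, 310.0]       # mi^2, hypothetical
mean_basin_elev = [420.0, 460.0, 390.0, 510.0, 480.0]  # ft, hypothetical
print(vif_two_predictors(drainage_area, mean_basin_elev) < 2.5)
```

Strongly collinear pairs (for example, one characteristic nearly proportional to the other) push r toward 1 and VIF well past the cutoff.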
Sample Skewness as a Statistical Measurement of Neuronal Tuning Sharpness
Samonds, Jason M.; Potetz, Brian R.; Lee, Tai Sing
2014-01-01
We propose using the statistical measurement of the sample skewness of the distribution of mean firing rates of a tuning curve to quantify sharpness of tuning. For some features, like binocular disparity, tuning curves are best described by relatively complex and sometimes diverse functions, making it difficult to quantify sharpness with a single function and parameter. Skewness provides a robust nonparametric measure of tuning curve sharpness that is invariant with respect to the mean and variance of the tuning curve and is straightforward to apply to a wide range of tuning, including simple orientation tuning curves and complex object tuning curves that often cannot even be described parametrically. Because skewness does not depend on a specific model or function of tuning, it is especially appealing to cases of sharpening where recurrent interactions among neurons produce sharper tuning curves that deviate in a complex manner from the feedforward function of tuning. Since tuning curves for all neurons are not typically well described by a single parametric function, this model independence additionally allows skewness to be applied to all recorded neurons, maximizing the statistical power of a set of data. We also compare skewness with other nonparametric measures of tuning curve sharpness and selectivity. Compared to these other nonparametric measures tested, skewness is best used for capturing the sharpness of multimodal tuning curves defined by narrow peaks (maxima) and broad valleys (minima). Finally, we provide a more formal definition of sharpness using a shape-based information gain measure, and show that skewness is correlated with this definition. PMID:24555451
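The proposed measure is simply the sample skewness of the firing rates making up a tuning curve. A minimal sketch with illustrative firing rates, using the conventional moment estimator m3 / m2^(3/2):

```python
# A minimal sketch of the proposed sharpness measure: the sample skewness
# (third standardized moment) of the mean firing rates along a tuning
# curve. A narrow peak over a broad baseline gives large positive
# skewness; a broad, symmetric peak gives skewness near zero.
def sample_skewness(rates):
    n = len(rates)
    m = sum(rates) / n
    m2 = sum((r - m) ** 2 for r in rates) / n
    m3 = sum((r - m) ** 3 for r in rates) / n
    return m3 / m2 ** 1.5

sharp_tuning = [2, 2, 3, 2, 40, 3, 2, 2]        # illustrative firing rates
broad_tuning = [10, 14, 18, 20, 18, 14, 10, 8]  # illustrative firing rates
print(sample_skewness(sharp_tuning) > sample_skewness(broad_tuning))  # True
```

Because the estimator is nonparametric, it applies unchanged to multimodal or otherwise hard-to-parameterize tuning curves.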
Jabbar, Ahmed Najah
2018-04-13
This letter suggests two new types of asymmetrical higher-order kernels (HOK) that are generated using the orthogonal polynomials Laguerre (positive or right skew) and Bessel (negative or left skew). These skewed HOK are implemented in the blind source separation/independent component analysis (BSS/ICA) algorithm. The proposed HOK are tested in three scenarios: a real environment simulated using actual sound sources, an environment of mixtures of multimodal fast-changing probability density function (pdf) sources that challenge the symmetrical HOK, and an adverse (near-gaussian) case. The separation is performed by minimizing the mutual information (MI) among the mixed sources. The performance of the skewed kernels is compared to that of the standard kernels such as Epanechnikov, bisquare, trisquare, and gaussian, and to that of the symmetrical HOK generated using the polynomials Chebyshev1, Chebyshev2, Gegenbauer, Jacobi, and Legendre to the tenth order. The gaussian HOK are generated using the Hermite polynomial and the Wand and Schucany procedure. The comparison among the 96 kernels is based on the average intersymbol interference ratio (AISIR) and the time needed to complete the separation. In terms of AISIR, the skewed kernels perform better than the standard kernels and rival most of the symmetrical kernels. The importance of these new skewed HOK is manifested in the environment of multimodal pdf mixtures, where the skewed HOK come in first place compared with the symmetrical HOK. These new families can substitute for symmetrical HOK in such applications.
X Chromosome Inactivation in Women with Alcoholism
Manzardo, Ann M.; Henkhaus, Rebecca; Hidaka, Brandon; Penick, Elizabeth C.; Poje, Albert B.; Butler, Merlin G.
2012-01-01
Background All female mammals with two X chromosomes balance gene expression with males having only one X by inactivating one of their Xs (X chromosome inactivation, XCI). Analysis of XCI in females offers the opportunity to investigate both X-linked genetic factors and early embryonic development that may contribute to alcoholism. An increased prevalence of skewed XCI in women with alcoholism could implicate biological risk factors. Methods The pattern of XCI was examined in DNA isolated from blood from 44 adult females meeting DSM-IV criteria for an Alcohol Use Disorder, and 45 control females with no known history of alcohol abuse or dependence. XCI status was determined by analyzing digested and undigested polymerase chain reaction (PCR) products of the polymorphic androgen receptor (AR) gene located on the X chromosome. Subjects were categorized into 3 groups based upon the degree of XCI skewness: random (50:50–64:36), moderately skewed (65:35–80:20) and highly skewed (>80:20). Results XCI status from informative females with alcoholism was found to be random in 59% (n=26), moderately skewed in 27% (n=12) or highly skewed in 14% (n=6). Control subjects showed 60%, 29% and 11%, respectively. The distribution of skewed XCI observed among women with alcoholism did not differ statistically from that of control subjects (χ² = 0.14, 2 df, p = 0.93). Conclusions Our data did not support an increase in XCI skewness among women with alcoholism or implicate early developmental events associated with embryonic cell loss or unequal (non-random) expression of X-linked gene(s) or defects in alcoholism among females. PMID:22375556
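The three-way categorization used in the study can be stated directly as code. The cutoffs below are exactly those quoted above (random 50:50–64:36, moderately skewed 65:35–80:20, highly skewed >80:20); the function name is ours.

```python
# The study's three-way XCI categorisation, stated as code. Input is the
# percentage of cells expressing the predominant X allele (50-100); the
# cutoffs are those quoted in the abstract, the function name is ours.
def classify_xci(percent_major_allele):
    if percent_major_allele > 80:
        return "highly skewed"      # >80:20
    if percent_major_allele >= 65:
        return "moderately skewed"  # 65:35 to 80:20
    return "random"                 # 50:50 to 64:36

print(classify_xci(55))  # random
print(classify_xci(72))  # moderately skewed
print(classify_xci(91))  # highly skewed
```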
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mandel, Kaisey S.; Kirshner, Robert P.; Foley, Ryan J., E-mail: kmandel@cfa.harvard.edu
2014-12-20
We investigate the statistical dependence of the peak intrinsic colors of Type Ia supernovae (SNe Ia) on their expansion velocities at maximum light, measured from the Si II λ6355 spectral feature. We construct a new hierarchical Bayesian regression model, accounting for the random effects of intrinsic scatter, measurement error, and reddening by host galaxy dust, and implement a Gibbs sampler and deviance information criterion to estimate the correlation. The method is applied to the apparent colors from BVRI light curves and Si II velocity data for 79 nearby SNe Ia. The apparent color distributions of high-velocity (HV) and normal-velocity (NV) supernovae exhibit significant discrepancies for B – V and B – R, but not other colors. Hence, they are likely due to intrinsic color differences originating in the B band, rather than dust reddening. The mean intrinsic B – V and B – R color differences between HV and NV groups are 0.06 ± 0.02 and 0.09 ± 0.02 mag, respectively. A linear model finds significant slopes of –0.021 ± 0.006 and –0.030 ± 0.009 mag (10^3 km s^-1)^-1 for intrinsic B – V and B – R colors versus velocity, respectively. Because the ejecta velocity distribution is skewed toward high velocities, these effects imply non-Gaussian intrinsic color distributions with skewness up to +0.3. Accounting for the intrinsic-color-velocity correlation results in corrections to A_V extinction estimates as large as –0.12 mag for HV SNe Ia and +0.06 mag for NV events. Velocity measurements from SN Ia spectra have the potential to diminish systematic errors from the confounding of intrinsic colors and dust reddening affecting supernova distances.
Hoenner, Xavier; Whiting, Scott D; Hindell, Mark A; McMahon, Clive R
2012-01-01
Accurately quantifying animals' spatial utilisation is critical for conservation, but has long remained an elusive goal due to technological impediments. The Argos telemetry system has been extensively used to remotely track marine animals; however, location estimates are characterised by substantial spatial error. State-space models (SSM) constitute a robust statistical approach to refine Argos tracking data by accounting for observation errors and stochasticity in animal movement. Despite their wide use in ecology, few studies have thoroughly quantified the error associated with SSM-predicted locations and no research has assessed their validity for describing animal movement behaviour. We compared home ranges and migratory pathways of seven hawksbill sea turtles (Eretmochelys imbricata) estimated from (a) highly accurate Fastloc GPS data and (b) locations computed using common Argos data analytical approaches. Argos 68th-percentile error was <1 km for LC 1, 2, and 3 while markedly less accurate (>4 km) for LC ≤ 0. The Argos error structure was highly longitudinally skewed and was, for all LC, adequately modelled by a Student's t distribution. Both habitat use and migration routes were best recreated using SSM locations post-processed by re-adding good Argos positions (LC 1, 2 and 3) and filtering terrestrial points (mean distance to migratory tracks ± SD = 2.2 ± 2.4 km; mean home range overlap and error ratio = 92.2% and 285.6, respectively). This parsimonious and objective statistical procedure, however, still markedly overestimated true home range sizes, especially for animals exhibiting restricted movements. Post-processing SSM locations nonetheless constitutes the best analytical technique for remotely sensed Argos tracking data, and we therefore recommend using this approach to rework historical Argos datasets for better estimation of animal spatial utilisation for research and evidence-based conservation purposes.
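The finding that heavy-tailed, longitudinally skewed Argos errors are adequately modelled by a Student's t distribution can be illustrated by comparing log-likelihoods of a t and a Gaussian density on a toy error set with outliers. All numbers, degrees of freedom, and scales below are illustrative, not fitted values from the study.

```python
# Illustration of the distributional point above: heavy-tailed location
# errors are better described by a Student's t than by a Gaussian. We
# compare total log-densities on a toy error set; df and scales are
# illustrative, not fitted values from the paper.
from math import gamma, log, pi

def t_logpdf(x, df, scale):
    c = gamma((df + 1) / 2) / (gamma(df / 2) * (pi * df) ** 0.5 * scale)
    return log(c) - (df + 1) / 2 * log(1 + (x / scale) ** 2 / df)

def normal_logpdf(x, scale):
    return -0.5 * log(2 * pi * scale ** 2) - x ** 2 / (2 * scale ** 2)

# Mostly small longitude errors plus two large outliers, in km (toy data)
errors = [0.2, -0.4, 0.1, 0.3, -0.2, 6.0, -8.0]
ll_t = sum(t_logpdf(e, df=3, scale=0.5) for e in errors)
ll_norm = sum(normal_logpdf(e, scale=1.5) for e in errors)
print(ll_t > ll_norm)
```

The Gaussian pays a quadratic penalty for the outliers, while the t density's power-law tails absorb them; that is exactly why a t error model suits Argos data.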
Particle beam injector system and method
Guethlein, Gary
2013-06-18
Methods and devices enable coupling of a charged particle beam to a radio frequency quadrupole accelerator. Coupling of the charged particle beam is accomplished, at least in part, by relying on the sensitivity of the input phase space acceptance of the radio frequency quadrupole to the angle of the input charged particle beam. A first electric field across a beam deflector deflects the particle beam at an angle that is beyond the acceptance angle of the radio frequency quadrupole. By momentarily reversing or reducing the established electric field, a narrow portion of the charged particle beam is deflected at an angle within the acceptance angle of the radio frequency quadrupole. In another configuration, the beam is directed at an angle within the acceptance angle of the radio frequency quadrupole by the first electric field and is deflected beyond the acceptance angle of the radio frequency quadrupole by the second electric field.
Defining surfaces for skewed, highly variable data
Helsel, D.R.; Ryker, S.J.
2002-01-01
Skewness of environmental data is often caused by more than simply a handful of outliers in an otherwise normal distribution. Statistical procedures for such datasets must be sufficiently robust to deal with distributions that are strongly non-normal, containing both a large proportion of outliers and a skewed main body of data. In the field of water quality, skewness is commonly associated with large variation over short distances. Spatial analysis of such data generally requires either considerable effort at modeling or the use of robust procedures not strongly affected by skewness and local variability. Using a skewed dataset of 675 nitrate measurements in ground water, commonly used methods for defining a surface (least-squares regression and kriging) are compared to a more robust method (loess). Three choices are critical in defining a surface: (i) is the surface to be a central mean or median surface? (ii) is either a well-fitting transformation or a robust and scale-independent measure of center used? (iii) does local spatial autocorrelation assist in or detract from addressing objectives? Published in 2002 by John Wiley & Sons, Ltd.
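Choice (i) above, a central mean versus a median surface, matters precisely because the data are skewed: outliers pull the mean but not the median, so the two centers can sit far apart. A toy example with hypothetical nitrate concentrations:

```python
# Choice (i) in miniature: for strongly skewed data the mean and median
# diverge sharply, because outliers pull the mean but not the median.
# Hypothetical nitrate concentrations (mg/L):
from statistics import mean, median

nitrate = [0.1, 0.2, 0.2, 0.4, 0.5, 0.8, 1.1, 2.0, 9.5, 38.0]
print(round(mean(nitrate), 2), median(nitrate))  # 5.28 0.65
```

A least-squares surface estimates the first number's behavior in space; a robust (loess-median) surface estimates the second's, and the analyst must decide which answers the question at hand.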
Modeling absolute differences in life expectancy with a censored skew-normal regression approach
Clough-Gorr, Kerri; Zwahlen, Marcel
2015-01-01
Parameter estimates from commonly used multivariable parametric survival regression models do not directly quantify differences in years of life expectancy. Gaussian linear regression models give results in terms of absolute mean differences, but are not appropriate for modeling life expectancy, because in many situations time to death has a negatively skewed distribution. A regression approach using a skew-normal distribution would be an alternative to parametric survival models in the modeling of life expectancy, because parameter estimates can be interpreted in terms of survival time differences while allowing for skewness of the distribution. In this paper we show how to use skew-normal regression so that censored and left-truncated observations are accounted for. With this approach we model differences in life expectancy using data from the Swiss National Cohort Study and from official life expectancy estimates, and compare the results with those derived from commonly used survival regression models. We conclude that a censored skew-normal survival regression approach for left-truncated observations can be used to model differences in life expectancy across covariates of interest. PMID:26339544
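As a sketch of the distribution family involved (not the authors' censored, left-truncated estimator), skew-normal draws can be simulated through the standard stochastic representation X = delta*|Z1| + sqrt(1 - delta^2)*Z2 with delta = alpha / sqrt(1 + alpha^2); negative alpha produces the left-skewed shape typical of time-to-death data.

```python
# Sketch of the skew-normal family invoked above (not the authors'
# censored/left-truncated estimator): simulate draws via the standard
# representation X = delta*|Z1| + sqrt(1 - delta^2)*Z2, with
# delta = alpha / sqrt(1 + alpha^2). Negative alpha gives the left skew
# typical of time-to-death data.
import random
from math import sqrt

def skew_normal_sample(alpha, n, seed=0):
    rng = random.Random(seed)
    delta = alpha / sqrt(1 + alpha ** 2)
    tail = sqrt(1 - delta ** 2)
    return [delta * abs(rng.gauss(0, 1)) + tail * rng.gauss(0, 1)
            for _ in range(n)]

def sample_skewness(xs):
    n = len(xs)
    m = sum(xs) / n
    m2 = sum((x - m) ** 2 for x in xs) / n
    m3 = sum((x - m) ** 3 for x in xs) / n
    return m3 / m2 ** 1.5

print(sample_skewness(skew_normal_sample(alpha=-5.0, n=20000)) < 0)  # True
```

With alpha = 0 the family collapses to the Gaussian, which is why the skew-normal can be read as a Gaussian regression that additionally estimates the asymmetry of residual survival times.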
NASA Astrophysics Data System (ADS)
Timmons, Nicholas; Cooray, Asantha; Feng, Chang; Keating, Brian
2017-11-01
We measure the cosmic microwave background (CMB) skewness power spectrum in Planck data, using frequency maps from the HFI instrument and the Sunyaev-Zel’dovich (SZ) component map. The two-to-one skewness power spectrum measures the cross-correlation between CMB lensing and the thermal SZ effect. We also directly measure the same cross-correlation using the Planck CMB lensing map and the SZ map, and compare it to the cross-correlation derived from the skewness power spectrum. We fit models to the SZ power spectrum and the CMB lensing-SZ cross-power spectrum, via the skewness power spectrum, to constrain the gas pressure profile of dark matter halos. The gas pressure profile is compared to existing measurements in the literature, including a direct estimate based on the stacking of SZ clusters in Planck.
Landfors, Mattias; Philip, Philge; Rydén, Patrik; Stenberg, Per
2011-01-01
Genome-wide analysis of gene expression or protein binding patterns using different array or sequencing based technologies is now routinely performed to compare different populations, such as treatment and reference groups. It is often necessary to normalize the data obtained to remove technical variation introduced in the course of conducting experimental work, but standard normalization techniques are not capable of eliminating technical bias in cases where the distribution of the truly altered variables is skewed, i.e. when a large fraction of the variables are either positively or negatively affected by the treatment. However, several experiments are likely to generate such skewed distributions, including ChIP-chip experiments for the study of chromatin, gene expression experiments for the study of apoptosis, and SNP-studies of copy number variation in normal and tumour tissues. A preliminary study using spike-in array data established that the capacity of an experiment to identify altered variables and generate unbiased estimates of the fold change decreases as the fraction of altered variables and the skewness increases. We propose the following work-flow for analyzing high-dimensional experiments with regions of altered variables: (1) Pre-process raw data using one of the standard normalization techniques. (2) Investigate if the distribution of the altered variables is skewed. (3) If the distribution is not believed to be skewed, no additional normalization is needed. Otherwise, re-normalize the data using a novel HMM-assisted normalization procedure. (4) Perform downstream analysis. Here, ChIP-chip data and simulated data were used to evaluate the performance of the work-flow. It was found that skewed distributions can be detected by using the novel DSE-test (Detection of Skewed Experiments). 
Furthermore, applying the HMM-assisted normalization to experiments where the distribution of the truly altered variables is skewed results in considerably higher sensitivity and lower bias than can be attained using standard and invariant normalization methods. PMID:22132175
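Step (2) of the work-flow, deciding whether the truly altered variables are skewed enough to warrant re-normalization, can be caricatured with a plain skewness threshold. This is only a structural sketch: the paper's DSE-test is a dedicated statistical test, and the 0.5 cutoff here is arbitrary.

```python
# Step (2) of the work-flow, caricatured as a plain skewness threshold:
# if the normalized log-ratios are strongly skewed, flag the experiment
# for HMM-assisted re-normalization. A structural sketch only; the
# paper's DSE-test is a dedicated test and the cutoff here is arbitrary.
def skewness(xs):
    n = len(xs)
    m = sum(xs) / n
    m2 = sum((x - m) ** 2 for x in xs) / n
    m3 = sum((x - m) ** 3 for x in xs) / n
    return m3 / m2 ** 1.5

def needs_renormalization(log_ratios, threshold=0.5):
    return abs(skewness(log_ratios)) > threshold

balanced = [-0.2, 0.1, -0.1, 0.2, 0.0, -0.3, 0.3]  # symmetric changes
one_sided = [0.0, 0.0, 0.1, 0.0, 0.1, 0.0, 3.0]    # positively skewed
print(needs_renormalization(balanced), needs_renormalization(one_sided))
```

The balanced profile passes straight to downstream analysis, while the one-sided profile (many positively affected variables) is routed to the HMM-assisted step.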
Staid, Andrea; Watson, Jean -Paul; Wets, Roger J. -B.; ...
2017-07-11
Forecasts of available wind power are critical in key electric power systems operations planning problems, including economic dispatch and unit commitment. Such forecasts are necessarily uncertain, limiting the reliability and cost effectiveness of operations planning models based on a single deterministic or “point” forecast. A common approach to address this limitation involves the use of a number of probabilistic scenarios, each specifying a possible trajectory of wind power production, with associated probability. We present and analyze a novel method for generating probabilistic wind power scenarios, leveraging available historical information in the form of forecasted and corresponding observed wind power time series. We estimate non-parametric forecast error densities, specifically using epi-spline basis functions, allowing us to capture the skewed and non-parametric nature of error densities observed in real-world data. We then describe a method to generate probabilistic scenarios from these basis functions that allows users to control for the degree to which extreme errors are captured. We compare the performance of our approach to the current state-of-the-art considering publicly available data associated with the Bonneville Power Administration, analyzing aggregate production of a number of wind farms over a large geographic region. Finally, we discuss the advantages of our approach in the context of specific power systems operations planning problems: stochastic unit commitment and economic dispatch. Here, our methodology is embodied in the joint Sandia – University of California Davis Prescient software package for assessing and analyzing stochastic operations strategies.
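Structurally, the scenario-generation idea can be sketched by resampling historical forecast errors around a point forecast. The real method fits smooth non-parametric (epi-spline) error densities and lets the user control the weight of extreme errors; the empirical resampler below, with made-up numbers, shows only the skeleton.

```python
# Skeleton of the scenario idea: add resampled historical forecast errors
# to a point forecast to form probabilistic trajectories. The paper fits
# smooth epi-spline error densities and controls extreme-error weight;
# this empirical resampler with made-up numbers shows only the shape.
import random

def make_scenarios(point_forecast, historical_errors, n_scenarios, seed=0):
    rng = random.Random(seed)
    return [[max(0.0, f + rng.choice(historical_errors))  # power can't be < 0
             for f in point_forecast]
            for _ in range(n_scenarios)]

forecast_mw = [100.0, 120.0, 90.0]                # point forecast (MW)
errors_mw = [-30.0, -10.0, 0.0, 5.0, 15.0, 40.0]  # observed minus forecast
scenarios = make_scenarios(forecast_mw, errors_mw, n_scenarios=5)
print(len(scenarios), len(scenarios[0]))  # 5 3
```

Each scenario (with an attached probability, here implicitly uniform) then enters the stochastic unit commitment or economic dispatch problem in place of the single point forecast.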
Klystron having electrostatic quadrupole focusing arrangement
Maschke, Alfred W.
1983-08-30
A klystron includes a source for emitting at least one electron beam, and an accelerator for accelerating the beam in a given direction through a number of drift tube sections successively aligned relative to one another in the direction of the beam. A number of electrostatic quadrupole arrays are successively aligned relative to one another along at least one of the drift tube sections in the beam direction for focusing the electron beam. Each of the electrostatic quadrupole arrays forms a different quadrupole for each electron beam. Two or more electron beams can be maintained in parallel relationship by the quadrupole arrays, thereby enabling space charge limitations encountered with conventional single beam klystrons to be overcome.
Klystron having electrostatic quadrupole focusing arrangement
Maschke, A.W.
1983-08-30
A klystron includes a source for emitting at least one electron beam, and an accelerator for accelerating the beam in a given direction through a number of drift tube sections successively aligned relative to one another in the direction of the beam. A number of electrostatic quadrupole arrays are successively aligned relative to one another along at least one of the drift tube sections in the beam direction for focusing the electron beam. Each of the electrostatic quadrupole arrays forms a different quadrupole for each electron beam. Two or more electron beams can be maintained in parallel relationship by the quadrupole arrays, thereby enabling space charge limitations encountered with conventional single beam klystrons to be overcome. 4 figs.
A modified quadrupole mass spectrometer with custom RF link rods driver for remote operation
NASA Technical Reports Server (NTRS)
Tashbar, P. W.; Nisen, D. B.; Moore, W. W., Jr.
1973-01-01
A commercial quadrupole residual gas analyzer system has been upgraded for operation at extended cable lengths. Operation inside a vacuum chamber for the standard quadrupole nude head is limited to approximately 2 m from its externally located rf/dc generator because of the detuning of the rf oscillator circuits by the coaxial cable reactance. Long-distance remote operation inside a vacuum chamber, at distances of 45 and 60 m, was made possible without altering the quadrupole's rf/dc generator circuit by employing an rf link to drive the quadrupole rods. The system has been applied to in situ thermal/vacuum space-simulation testing of sophisticated payloads.
Opposite GC skews at the 5' and 3' ends of genes in unicellular fungi
2011-01-01
Background GC-skews have previously been linked to transcription in some eukaryotes. They have been associated with transcription start sites, with the coding strand G-biased in mammals and C-biased in fungi and invertebrates. Results We show a consistent and highly significant pattern of GC-skew within genes of almost all unicellular fungi. The pattern of GC-skew is asymmetrical: the coding strand of genes is typically C-biased at the 5' ends but G-biased at the 3' ends, with intermediate skews at the middle of genes. Thus, the initiation, elongation, and termination phases of transcription are associated with different skews. This pattern influences the encoded proteins by generating differential usage of amino acids at the 5' and 3' ends of genes. These biases also affect fourfold-degenerate positions and extend into promoters and 3' UTRs, indicating that the skews cannot be accounted for by selection for protein function or translation. Conclusions We propose two explanations, the mutational pressure hypothesis and the adaptive hypothesis. The mutational pressure hypothesis is that different co-factors bind to RNA pol II at different phases of transcription, producing different mutational regimes. The adaptive hypothesis is that cytidine triphosphate deficiency may lead to C-avoidance at the 3' ends of transcripts to control the flow of RNA pol II molecules and reduce their frequency of collisions. PMID:22208287
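GC-skew as used above is (G - C) / (G + C) computed on the coding strand, so the reported pattern is a negative skew in the 5' half of a gene and a positive skew in the 3' half. A toy sequence constructed to show the sign flip:

```python
# GC-skew as used above: (G - C) / (G + C) on the coding strand, so a
# C-biased 5' end gives negative skew and a G-biased 3' end positive skew.
# The toy "gene" below is constructed to show the sign flip.
def gc_skew(seq):
    g, c = seq.count("G"), seq.count("C")
    return (g - c) / (g + c)

gene = "ATCCACTTCCGATCC" + "ATGGAGTTGGGATGG"  # 5' half C-rich, 3' half G-rich
half = len(gene) // 2
print(gc_skew(gene[:half]) < 0 < gc_skew(gene[half:]))  # True
```

In practice the skew is computed in sliding windows along real genes, which is what reveals the intermediate values in the middle of genes reported above.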
Effect of skew angle on second harmonic guided wave measurement in composite plates
NASA Astrophysics Data System (ADS)
Cho, Hwanjeong; Choi, Sungho; Lissenden, Cliff J.
2017-02-01
Waves propagating in anisotropic media are subject to skewing effects due to the media having directional wave speed dependence, which is characterized by slowness curves. Likewise, the generation of second harmonics is sensitive to micro-scale damage that is generally not detectable from linear features of ultrasonic waves. Here, the effect of skew angle on second harmonic guided wave measurement in a transversely isotropic lamina and a quasi-isotropic laminate is numerically studied. The strain energy density function for a nonlinear transversely isotropic material is formulated in terms of the Green-Lagrange strain invariants. The guided wave mode pairs for cumulative second harmonic generation in the plate are selected in accordance with the internal resonance criteria, i.e., phase matching and non-zero power flux. Moreover, the skew angle dispersion curves for the mode pairs are obtained from the semi-analytical finite element method using the derivative of the slowness curve. The skew angles of the primary and secondary wave modes are calculated and wave propagation simulations are carried out using COMSOL. Numerical simulations revealed that the effect of skew angle mismatch can be significant for second harmonic generation in anisotropic media. The importance of skew angle matching for cumulative second harmonic generation is emphasized, and the accompanying issue of selecting internally resonant mode pairs for both a unidirectional transversely isotropic lamina and a quasi-isotropic laminate is demonstrated.
Average BER and outage probability of the ground-to-train OWC link in turbulence with rain
NASA Astrophysics Data System (ADS)
Zhang, Yixin; Yang, Yanqiu; Hu, Beibei; Yu, Lin; Hu, Zheng-Da
2017-09-01
The bit-error rate (BER) and outage probability of an optical wireless communication (OWC) link for ground-to-train transmission along a curved track in turbulence with rain are evaluated. Considering the re-modulation effects of rain fluctuation on an optical signal modulated by turbulence, we set up models of average BER and outage probability in the presence of pointing errors, based on the double inverse Gaussian (IG) statistical distribution model. The numerical results indicate that, for the same covered track length, a larger curvature radius increases the outage probability and average BER. The performance of the OWC link in turbulence with rain is limited mainly by the rain rate and pointing errors, which are induced by beam wander and train vibration. The effect of the rain rate on the performance of the link is more severe than that of the atmospheric turbulence, but the fluctuation owing to the atmospheric turbulence affects laser beam propagation more strongly than the skewness of the rain distribution. Besides, the turbulence-induced beam wander has a more significant impact on the system in heavier rain. We can choose the size of the transmitting and receiving apertures and improve the shockproof performance of the tracks to optimize the communication performance of the system.
Distinguishing models of reionization using future radio observations of 21-cm 1-point statistics
NASA Astrophysics Data System (ADS)
Watkinson, C. A.; Pritchard, J. R.
2014-10-01
We explore the impact of reionization topology on 21-cm statistics. Four reionization models are presented which emulate large ionized bubbles around overdense regions (21CMFAST/global-inside-out), small ionized bubbles in overdense regions (local-inside-out), large ionized bubbles around underdense regions (global-outside-in) and small ionized bubbles around underdense regions (local-outside-in). We show that first generation instruments might struggle to distinguish global models using the shape of the power spectrum alone. All instruments considered are capable of breaking this degeneracy with the variance, which is higher in outside-in models. Global models can also be distinguished at small scales from a boost in the power spectrum caused by the positive correlation between the density and neutral-fraction fields in outside-in models. Negative skewness is found to be unique to inside-out models, and we find that pre-Square Kilometre Array (SKA) instruments could detect this feature in maps smoothed to reduce noise errors. The early, mid- and late phases of reionization imprint signatures in the brightness-temperature moments; we examine their model dependence and find pre-SKA instruments capable of exploiting these timing constraints in smoothed maps. The dimensional skewness is introduced and is shown to have stronger signatures of the early- and mid-phase timing if the inside-out scenario is correct.
Building confidence and credibility into CAD with belief decision trees
NASA Astrophysics Data System (ADS)
Affenit, Rachael N.; Barns, Erik R.; Furst, Jacob D.; Rasin, Alexander; Raicu, Daniela S.
2017-03-01
Creating classifiers for computer-aided diagnosis in the absence of ground truth is a challenging problem. Using experts' opinions as reference truth is difficult because the variability in the experts' interpretations introduces uncertainty in the labeled diagnostic data. This uncertainty translates into noise, which can significantly affect the performance of any classifier on test data. To address this problem, we propose a new label set weighting approach to combine the experts' interpretations and their variability, as well as a selective iterative classification (SIC) approach that is based on conformal prediction. Using the NIH/NCI Lung Image Database Consortium (LIDC) dataset, in which four radiologists interpreted the lung nodule characteristics, including the degree of malignancy, we illustrate the benefits of the proposed approach. Our results show that the proposed 2-label-weighted approach significantly outperforms the accuracy of the original 5-label and 2-label-unweighted classification approaches by 39.9% and 7.6%, respectively. We also found that the weighted 2-label models produce higher skewness values, by 1.05 and 0.61 for non-SIC and SIC respectively, on root mean square error (RMSE) distributions. When each approach was combined with selective iterative classification, this further improved the accuracy of classification for the 2-weighted-label approach by 7.5% over the original, and improved the skewness of the 5-label and 2-unweighted-label models by 0.22 and 0.44, respectively.
Exploiting Molecular Weight Distribution Shape to Tune Domain Spacing in Block Copolymer Thin Films.
Gentekos, Dillon T; Jia, Junteng; Tirado, Erika S; Barteau, Katherine P; Smilgies, Detlef-M; DiStasio, Robert A; Fors, Brett P
2018-04-04
We report a method for tuning the domain spacing (Dsp) of self-assembled block copolymer thin films of poly(styrene-block-methyl methacrylate) (PS-b-PMMA) over a large range of lamellar periods. By modifying the molecular weight distribution (MWD) shape (including both the breadth and skew) of the PS block via temporal control of polymer chain initiation in anionic polymerization, we observe increases of up to 41% in Dsp for polymers with the same overall molecular weight (Mn ≈ 125 kg mol^-1) without significantly changing the overall morphology or chemical composition of the final material. In conjunction with our experimental efforts, we have utilized concepts from population statistics and least-squares analysis to develop a model for predicting Dsp based on the first three moments of the MWDs. This statistical model reproduces experimental Dsp values with high fidelity (with mean absolute errors of 1.2 nm or 1.8%) and provides novel physical insight into the individual and collective roles played by the MWD moments in determining this property of interest. This work demonstrates that both MWD breadth and skew have a profound influence over Dsp, thereby providing an experimental and conceptual platform for exploiting MWD shape as a simple and modular handle for fine-tuning Dsp in block copolymer thin films.
X chromosome inactivation in women with alcoholism.
Manzardo, Ann M; Henkhaus, Rebecca; Hidaka, Brandon; Penick, Elizabeth C; Poje, Albert B; Butler, Merlin G
2012-08-01
All female mammals with 2 X chromosomes balance gene expression with males having only 1 X by inactivating one of their X chromosomes (X chromosome inactivation [XCI]). Analysis of XCI in females offers the opportunity to investigate both X-linked genetic factors and early embryonic development that may contribute to alcoholism. Increases in the prevalence of skewing of XCI in women with alcoholism could implicate biological risk factors. The pattern of XCI was examined in DNA isolated from blood from 44 adult women meeting DSM-IV criteria for an alcohol use disorder and 45 control women with no known history of alcohol abuse or dependence. XCI status was determined by analyzing digested and undigested polymerase chain reaction (PCR) products of the polymorphic androgen receptor (AR) gene located on the X chromosome. Subjects were categorized into 3 groups based upon the degree of XCI skewness: random (50:50 to 64:36%), moderately skewed (65:35 to 80:20%), and highly skewed (>80:20%). XCI status from informative women with alcoholism was found to be random in 59% (n = 26), moderately skewed in 27% (n = 12), or highly skewed in 14% (n = 6). Control subjects showed 60, 29, and 11%, respectively. The distribution of skewed XCI observed among women with alcoholism did not differ statistically from that of control subjects (χ² = 0.14, 2 df, p = 0.93). Our data did not support an increase in XCI skewness among women with alcoholism or implicate early developmental events associated with embryonic cell loss, unequal (nonrandom) expression of X-linked gene(s), or other defects in alcoholism among women. Copyright © 2012 by the Research Society on Alcoholism.
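The χ² comparison reported above can be reproduced from the published category counts; a minimal sketch in Python, assuming a standard Pearson test on the 2 × 3 table (the control counts 27/13/5 are reconstructed from the quoted percentages of 45 subjects):

```python
import math

# XCI skewness categories: random / moderately skewed / highly skewed.
alcoholism = [26, 12, 6]   # n = 44 informative women with alcoholism
control = [27, 13, 5]      # 60%, 29%, 11% of 45 controls (reconstructed counts)

def chi_square_2xk(row1, row2):
    """Pearson chi-square statistic for a 2 x k contingency table."""
    n1, n2 = sum(row1), sum(row2)
    total = n1 + n2
    chi2 = 0.0
    for o1, o2 in zip(row1, row2):
        col = o1 + o2
        e1, e2 = n1 * col / total, n2 * col / total  # expected counts
        chi2 += (o1 - e1) ** 2 / e1 + (o2 - e2) ** 2 / e2
    return chi2

chi2 = chi_square_2xk(alcoholism, control)
# For df = 2 the chi-square survival function reduces to exp(-x/2).
p_value = math.exp(-chi2 / 2)
print(f"chi2 = {chi2:.2f}, p = {p_value:.2f}")  # chi2 = 0.14, p = 0.93
```

This matches the reported statistic, confirming that the category distributions of the two groups are statistically indistinguishable.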
NASA Astrophysics Data System (ADS)
Shen, Dehua; Liu, Lanbiao; Zhang, Yongjie
2018-01-01
The constantly increasing utilization of social media as an alternative information channel, e.g., Twitter, provides us a unique opportunity to investigate the dynamics of the financial market. In this paper, we employ the daily happiness sentiment extracted from Twitter as the proxy for online sentiment dynamics and investigate its association with the skewness of the returns of 26 international stock market indices. The empirical results show that: (1) dividing the days by happiness sentiment into quintiles from the least to the most happy, the skewness of the Most-happiness subgroup is significantly larger than that of the Least-happiness subgroup, and there are significant differences between any pair of subgroups; (2) using an event study methodology, we further show that the skewness around the highest happiness days is significantly larger than the skewness around the lowest happiness days.
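As an illustration of the subgroup comparison described above, the sketch below computes moment-based sample skewness for two hypothetical daily-return series; the data-generating process and the 0.5 weighting are invented for illustration and are not taken from the study:

```python
import random

def skewness(xs):
    """Moment-based sample skewness m3 / m2**1.5."""
    n = len(xs)
    mean = sum(xs) / n
    m2 = sum((x - mean) ** 2 for x in xs) / n
    m3 = sum((x - mean) ** 3 for x in xs) / n
    return m3 / m2 ** 1.5

random.seed(0)
# Hypothetical returns on "most happiness" vs. "least happiness" days:
# an exponential component added (subtracted) to tilt the tail right (left).
most = [random.gauss(0, 1) + 0.5 * random.expovariate(1) for _ in range(5000)]
least = [random.gauss(0, 1) - 0.5 * random.expovariate(1) for _ in range(5000)]

print(skewness(most) > skewness(least))  # True
```

The same `skewness` function applied to real index returns grouped by sentiment quintile would reproduce the paper's comparison.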
DOE Office of Scientific and Technical Information (OSTI.GOV)
Timmons, Nicholas; Cooray, Asantha; Feng, Chang
2017-11-01
We measure the cosmic microwave background (CMB) skewness power spectrum in Planck, using frequency maps of the HFI instrument and the Sunyaev–Zel’dovich (SZ) component map. The two-to-one skewness power spectrum measures the cross-correlation between CMB lensing and the thermal SZ effect. We also directly measure the same cross-correlation using the Planck CMB lensing map and the SZ map and compare it to the cross-correlation derived from the skewness power spectrum. We model fit the SZ power spectrum and CMB lensing–SZ cross-power spectrum via the skewness power spectrum to constrain the gas pressure profile of dark matter halos. The gas pressure profile is compared to existing measurements in the literature, including a direct estimate based on the stacking of SZ clusters in Planck.
NASA Astrophysics Data System (ADS)
Cordle, Michael; Rea, Chris; Jury, Jason; Rausch, Tim; Hardie, Cal; Gage, Edward; Victora, R. H.
2018-05-01
This study aims to investigate the impact that factors such as skew, radius, and transition curvature have on areal density capability in heat-assisted magnetic recording hard disk drives. We explore a "ballistic seek" approach for capturing in-situ scan line images of the magnetization footprint on the recording media, and extract parametric results of recording characteristics such as transition curvature. We take full advantage of the significantly improved cycle time to apply a statistical treatment to relatively large samples of experimental curvature data to evaluate measurement capability. Quantitative analysis of factors that impact transition curvature reveals an asymmetry in the curvature profile that is strongly correlated to skew angle. Another less obvious skew-related effect is an overall decrease in curvature as skew angle increases. Using conventional perpendicular magnetic recording as the reference case, we characterize areal density capability as a function of recording position.
Experimental investigation of the noise emission of axial fans under distorted inflow conditions
NASA Astrophysics Data System (ADS)
Zenger, Florian J.; Renz, Andreas; Becher, Marcus; Becker, Stefan
2016-11-01
An experimental investigation on the noise emission of axial fans under distorted inflow conditions was conducted. Three fans with forward-skewed fan blades and three fans with backward-skewed fan blades, sharing a common operating point, were designed with a 2D blade element method. Two approaches were adopted to modify the inflow conditions: first, the inflow turbulence intensity was increased by two different rectangular grids; second, the inflow velocity profile was changed to an asymmetric characteristic by two grids with a distinct bar stacking. An increase in the inflow turbulence intensity affects both tonal and broadband noise, whereas a non-uniform velocity profile at the inlet influences mainly tonal components. The magnitude of this effect is not the same for all fans but depends on the blade skew. The impact is greater for the forward-skewed fans than for the backward-skewed ones and is thus directly linked to the fan blade geometry.
Exploring Hypersonic, Unstructured-Grid Issues through Structured Grids
NASA Technical Reports Server (NTRS)
Mazaheri, Ali R.; Kleb, Bill
2007-01-01
Pure-tetrahedral unstructured grids have been shown to produce asymmetric heat transfer rates for symmetric problems. Meanwhile, two-dimensional structured grids produce symmetric solutions and, as documented here, introducing a spanwise degree of freedom to these structured grids also yields symmetric solutions. The effects of grid skewness and other perturbations of structured grids are investigated to uncover possible mechanisms behind the unstructured-grid solution asymmetries. By using controlled experiments around a known, good solution, the effects of particular grid pathologies are uncovered. These structured-grid experiments reveal solution degradation similar to that of unstructured grids, especially for heat transfer rates. Non-smooth grids within the boundary layer are also shown to produce large local errors in heat flux but do not affect surface pressures.
Experimental quantum verification in the presence of temporally correlated noise
NASA Astrophysics Data System (ADS)
Mavadia, S.; Edmunds, C. L.; Hempel, C.; Ball, H.; Roy, F.; Stace, T. M.; Biercuk, M. J.
2018-02-01
Growth in the capabilities of quantum information hardware mandates access to techniques for performance verification that function under realistic laboratory conditions. Here we experimentally characterise the impact of common temporally correlated noise processes on both randomised benchmarking (RB) and gate-set tomography (GST). Our analysis highlights the role of sequence structure in enhancing or suppressing the sensitivity of quantum verification protocols to either slowly or rapidly varying noise, which we treat in the limiting cases of quasi-DC miscalibration and white noise power spectra. We perform experiments with a single trapped 171Yb+ ion-qubit and inject engineered noise (∝ σ_z) to probe protocol performance. Experiments on RB validate predictions that measured fidelities over sequences are described by a gamma distribution varying between approximately Gaussian and a broad, highly skewed distribution for rapidly and slowly varying noise, respectively. Similarly, we find a strong gate-set dependence of default experimental GST procedures in the presence of correlated errors, leading to significant deviations between estimated and calculated diamond distances in the presence of correlated σ_z errors. Numerical simulations demonstrate that expansion of the gate set to include negative rotations can suppress these discrepancies and increase reported diamond distances by orders of magnitude for the same error processes. Similar effects do not occur for correlated σ_x or σ_y errors or depolarising noise processes, highlighting the critical interplay of the selected gate set and the gauge optimisation process in determining the meaning of the reported diamond norm in correlated noise environments.
Mean, Median, and Skew: Correcting a Textbook Rule
ERIC Educational Resources Information Center
von Hippel, Paul T.
2005-01-01
Many textbooks teach a rule of thumb stating that the mean is right of the median under right skew, and left of the median under left skew. This rule fails with surprising frequency. It can fail in multimodal distributions, or in distributions where one tail is long but the other is heavy. Most commonly, though, the rule fails in discrete…
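A minimal counterexample of the kind the abstract describes (the specific data are ours, not the paper's): a discrete sample with clearly positive skewness whose mean nonetheless lies left of the median, contradicting the textbook rule of thumb:

```python
def mean(xs):
    return sum(xs) / len(xs)

def median(xs):
    s = sorted(xs)
    n = len(s)
    return (s[n // 2 - 1] + s[n // 2]) / 2 if n % 2 == 0 else s[n // 2]

def skewness(xs):
    """Moment-based sample skewness m3 / m2**1.5."""
    m = mean(xs)
    m2 = sum((x - m) ** 2 for x in xs) / len(xs)
    m3 = sum((x - m) ** 3 for x in xs) / len(xs)
    return m3 / m2 ** 1.5

# A discrete distribution that breaks the rule of thumb:
data = [0] * 10 + [1] * 15 + [2] * 4 + [3] * 1
print(mean(data), median(data), skewness(data))
# mean ≈ 0.87 lies LEFT of the median (1.0) despite positive skew (≈ 0.68)
```

The heavy spike at the modal value 1 pins the median, while the long but light right tail drives the third moment positive without pulling the mean past the median.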
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kellö, Vladimir
Highly correlated scalar relativistic calculations of electric field gradients at nuclei in diatomic molecules in combination with accurate nuclear quadrupole coupling constants obtained from microwave spectroscopy are used for determination of nuclear quadrupole moments.
Generation of net sediment transport by velocity skewness in oscillatory sheet flow
NASA Astrophysics Data System (ADS)
Chen, Xin; Li, Yong; Chen, Genfa; Wang, Fujun; Tang, Xuelin
2018-01-01
This study utilizes a qualitative approach and a two-phase numerical model to investigate net sediment transport caused by velocity skewness beneath oscillatory sheet flow and current. The qualitative approach is derived from a pseudo-laminar approximation of the boundary layer velocity and an exponential approximation of the concentration. The two-phase model reproduces well the instantaneous erosion depth, sediment flux, boundary layer thickness, and sediment transport rate. In particular, it illustrates the difference between the positive and negative flow stages caused by velocity skewness, which is considerably important in determining the net boundary layer flow and the direction of net sediment transport. The two-phase model also explains the effects of sediment diameter and phase lag on sediment transport by comparison with instantaneous-type formulas, to better illustrate the velocity skewness effect. In previous studies of sheet flow transport in pure velocity-skewed flows, net sediment transport was attributed solely to the phase-lag effect. In the present study, the qualitative approach and the two-phase model show that the phase-lag effect is important but not sufficient to explain the net sediment transport beneath pure velocity-skewed flow and current; the asymmetric wave boundary layer development between the positive and negative flow stages also contributes to the sediment transport.
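The "velocity skewness" at issue can be illustrated with a second-order Stokes-type free-stream velocity, a common idealisation in sheet-flow studies; the amplitudes below are hypothetical:

```python
import math

# Velocity-skewed (second-order Stokes) free-stream velocity:
#   u(t) = U1*cos(w t) + U2*cos(2 w t)
# Positive peaks are higher and shorter than the negative troughs,
# even though the time-mean velocity over a full period is zero.
U1, U2, T = 1.0, 0.3, 7.0   # primary/second-harmonic amplitudes (m/s), period (s)
N = 10000
u = [U1 * math.cos(2 * math.pi * t / T) + U2 * math.cos(4 * math.pi * t / T)
     for t in (i * T / N for i in range(N))]

mean_u = sum(u) / N
m2 = sum((x - mean_u) ** 2 for x in u) / N
m3 = sum((x - mean_u) ** 3 for x in u) / N
print(round(mean_u, 6), round(m3 / m2 ** 1.5, 3))  # 0.0 0.559
```

The analytic value agrees: the only surviving third-moment term is 3·U1²·U2/4 = 0.225, with variance (U1² + U2²)/2 = 0.545, giving skewness ≈ 0.56. This asymmetry between the positive and negative half-cycles is what drives the net transport discussed above.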
A concept for canceling the leakage field inside the stored beam chamber of a septum magnet
NASA Astrophysics Data System (ADS)
Abliz, M.; Jaski, M.; Xiao, A.; Jain, A.; Wienands, U.; Cease, H.; Borland, M.; Decker, G.; Kerby, J.
2018-04-01
The Advanced Photon Source (APS) is planning to upgrade its storage ring from a double-bend achromat to a multi-bend achromat lattice as part of the APS Upgrade Project (APS-U). A swap-out injection scheme is planned for the APS-U in order to keep the beam current constant and to reduce the dynamic aperture requirements. The injection scheme, combined with the constraints in the booster to storage ring transfer region of the APS-U, results in requiring a septum magnet which deflects the injected 6 GeV electron beam by 89 mrad, while not appreciably disturbing the stored beam. The proposed magnet is straight; however, it is rotated in yaw, roll, and pitch from the stored beam chamber to meet the on-axis swap-out injection requirements for the APS-U lattice. The concept utilizes cancellation of the leakage field inside the 8 mm x 6 mm super-ellipsoidal stored beam chamber. As a result, the horizontal deflection angle of the 6 GeV stored beam is reduced to less than 1 μrad with only a 2-mm-thick septum separating the stored beam and the 1.06 T field seen by the injected beam. This design also helps to minimize the integrated skew quadrupole and normal sextupole fields inside the stored beam chamber.
An inventory of bispectrum estimators for redshift space distortions
NASA Astrophysics Data System (ADS)
Regan, Donough
2017-12-01
In order to best improve constraints on cosmological parameters and on models of modified gravity using current and future galaxy surveys, it is necessary to maximally exploit the available data. As redshift-space distortions mean that statistical translation invariance is broken for galaxy observations, this will require measurement of the monopole, quadrupole and hexadecapole of not just the galaxy power spectrum, but also the galaxy bispectrum. A recent (2015) paper by Scoccimarro demonstrated how the standard bispectrum estimator may be expressed in terms of Fast Fourier Transforms (FFTs) to afford an extremely efficient algorithm, allowing the bispectrum multipoles on all scales and triangle shapes to be measured in a time comparable to that for the power spectrum. In this paper we present a suite of alternative proxies to measure the three-point correlation multipoles. In particular, we describe a modal (or plane wave) decomposition to capture the information in each multipole in a series of basis coefficients, and also describe three compressed estimators formed using the skew-spectrum, the line correlation function and the integrated bispectrum, respectively. As well as each of the estimators offering a different measurement channel, and thereby a robustness check, it is expected that some (especially the modal estimator) will offer a vast data compression, and so a much reduced covariance matrix. This compression may be vital to reduce the computational load involved in extracting the available three-point information.
Gu, Shoou-Lian Hwang; Gau, Susan Shur-Fen; Tzang, Shyh-Weir; Hsu, Wen-Yau
2013-11-01
We investigated the three parameters (mu, sigma, tau) of the ex-Gaussian distribution of RT derived from the Conners' continuous performance test (CCPT) and examined the moderating effects of the energetic factors (the inter-stimulus intervals (ISIs) and Blocks) on these three parameters, especially tau, an index describing the positive skew of the RT distribution. We assessed 195 adolescents with DSM-IV ADHD and 90 typically developing (TD) adolescents, aged 10-16. Participants and their parents received psychiatric interviews to confirm the diagnosis of ADHD and other psychiatric disorders. Participants also received intelligence (WISC-III) and CCPT assessments. We found that participants with ADHD had a smaller mu and a larger tau. As the ISI/Block increased, the magnitude of the group difference in tau increased. Among the three ex-Gaussian parameters, tau was positively associated with omission errors, and mu was negatively associated with commission errors. The moderating effects of ISIs and Blocks on tau suggest that the ex-Gaussian parameters can offer more information about the attention state in a vigilance task, especially in ADHD. Copyright © 2013 Elsevier Ltd. All rights reserved.
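An ex-Gaussian RT is simply a Gaussian component (mu, sigma) plus an independent exponential component with mean tau, so tau directly controls the positive skew of the distribution. A sketch with invented parameter values (not those fitted in the study):

```python
import random

def ex_gaussian(mu, sigma, tau, n, seed=1):
    """Sample n ex-Gaussian RTs: Normal(mu, sigma) plus Exponential(mean tau)."""
    rng = random.Random(seed)
    return [rng.gauss(mu, sigma) + rng.expovariate(1 / tau) for _ in range(n)]

def skewness(xs):
    """Moment-based sample skewness m3 / m2**1.5."""
    m = sum(xs) / len(xs)
    m2 = sum((x - m) ** 2 for x in xs) / len(xs)
    m3 = sum((x - m) ** 3 for x in xs) / len(xs)
    return m3 / m2 ** 1.5

# Hypothetical CPT reaction times (ms): same mu/sigma, larger tau for the
# "ADHD-like" group, mimicking the heavier right tail reported above.
typical = ex_gaussian(mu=380, sigma=40, tau=60, n=20000)
adhd_like = ex_gaussian(mu=380, sigma=40, tau=150, n=20000)

print(skewness(adhd_like) > skewness(typical) > 0)  # True
```

The theoretical skewness is 2·tau³ / (sigma² + tau²)^1.5, so increasing tau while holding sigma fixed always produces a more right-skewed RT distribution.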
Considerations on the mechanisms of alternating skew deviation in patients with cerebellar lesions.
Zee, D S
1996-01-01
Alternating skew deviation, in which the side of the higher eye changes depending upon whether gaze is directed to the left or the right, is a frequent sign in patients with posterior fossa lesions, including those restricted to the cerebellum. Here we propose a mechanism for alternating skews related to the otolith-ocular responses to fore and aft pitch of the head in lateral-eyed animals. In lateral-eyed animals the expected response to a static head pitch is cyclorotation of the eyes. But if the eyes are rotated horizontally in the orbit, away from the primary position, a compensatory skew deviation should also appear. The direction of the skew would depend upon whether the eyes were directed to the right (left eye forward, right eye backward) or to the left (left eye backward, right eye forward). In contrast, for frontal-eyed animals, skew deviations are counterproductive because they create diplopia and interfere with binocular vision. We attribute the emergence of skew deviations in frontal-eyed animals in pathological conditions to 1) an imbalance in otolith-ocular pathways and 2) a loss of the component of ocular motor innervation that normally corrects for the differences in pulling directions and strengths of the various ocular muscles as the eyes change position in the orbit. Such a compensatory mechanism is necessary to ensure optimal binocular visual function during and after head motion. This compensatory mechanism may depend upon the cerebellum.
Southard, Rodney E.; Veilleux, Andrea G.
2014-01-01
Regression analysis techniques were used to develop a set of equations for rural ungaged stream sites for estimating discharges with 50-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent annual exceedance probabilities, which are equivalent to annual flood-frequency recurrence intervals of 2, 5, 10, 25, 50, 100, 200, and 500 years, respectively. Basin and climatic characteristics were computed using geographic information software and digital geospatial data. A total of 35 characteristics were computed for use in preliminary statewide and regional regression analyses. Annual exceedance-probability discharge estimates were computed for 278 streamgages by using the expected moments algorithm to fit a log-Pearson Type III distribution to the logarithms of annual peak discharges for each streamgage using annual peak-discharge data from water year 1844 to 2012. Low-outlier and historic information were incorporated into the annual exceedance-probability analyses, and a generalized multiple Grubbs-Beck test was used to detect potentially influential low floods. Annual peak flows less than a minimum recordable discharge at a streamgage were incorporated into the at-site station analyses. An updated regional skew coefficient was determined for the State of Missouri using Bayesian weighted least-squares/generalized least squares regression analyses. At-site skew estimates for 108 long-term streamgages with 30 or more years of record and the 35 basin characteristics defined for this study were used to estimate the regional variability in skew. However, a constant generalized-skew value of -0.30 and a mean square error of 0.14 were determined in this study. Previous flood studies indicated that the distinct physical features of the three physiographic provinces have a pronounced effect on the magnitude of flood peaks. 
Trends in the magnitudes of the residuals from preliminary statewide regression analyses from previous studies confirmed that regional analyses in this study were similar and related to three primary physiographic provinces. The final regional regression analyses resulted in three sets of equations. For Regions 1 and 2, the basin characteristics of drainage area and basin shape factor were statistically significant. For Region 3, because of the small amount of data from streamgages, only drainage area was statistically significant. Average standard errors of prediction ranged from 28.7 to 38.4 percent for flood region 1, 24.1 to 43.5 percent for flood region 2, and 25.8 to 30.5 percent for region 3. The regional regression equations are only applicable to stream sites in Missouri with flows not significantly affected by regulation, channelization, backwater, diversion, or urbanization. Basins with about 5 percent or less impervious area were considered to be rural. Applicability of the equations is limited to basin characteristic values that range from 0.11 to 8,212.38 square miles (mi2) and basin shape from 2.25 to 26.59 for Region 1, 0.17 to 4,008.92 mi2 and basin shape 2.04 to 26.89 for Region 2, and 2.12 to 2,177.58 mi2 for Region 3. Annual peak data from streamgages were used to qualitatively assess the largest floods recorded at streamgages in Missouri since the 1915 water year. Based on existing streamgage data, the 1983 flood event was the largest flood event on record since 1915. The next five largest flood events, in descending order, took place in 1993, 1973, 2008, 1994, and 1915. Since 1915, five of the six largest floods on record occurred from 1973 to 2012.
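The log-Pearson Type III quantile computation underlying these annual exceedance probabilities can be sketched as follows. This is a hedged illustration using the Wilson-Hilferty frequency-factor approximation rather than the expected moments algorithm actually used in the study, and the at-site statistics are hypothetical; only the constant generalized skew of -0.30 comes from the text:

```python
from statistics import NormalDist

def lp3_quantile(mean_log, sd_log, skew, aep):
    """Discharge with annual exceedance probability `aep` from a log-Pearson
    Type III fit to log10(annual peaks), using the Wilson-Hilferty
    frequency-factor approximation K = (2/g)*[(1 + g*z/6 - g^2/36)^3 - 1]."""
    z = NormalDist().inv_cdf(1 - aep)  # standard normal deviate
    g = skew
    if abs(g) < 1e-9:
        k = z                          # limit of the formula as g -> 0
    else:
        k = (2 / g) * ((1 + g * z / 6 - g * g / 36) ** 3 - 1)
    return 10 ** (mean_log + k * sd_log)  # back-transform from log10 space

# Hypothetical at-site statistics of log10(annual peak discharge, cfs),
# with the study's constant generalized skew of -0.30:
q100 = lp3_quantile(mean_log=3.8, sd_log=0.25, skew=-0.30, aep=0.01)
q2 = lp3_quantile(mean_log=3.8, sd_log=0.25, skew=-0.30, aep=0.50)
print(q100 > q2)  # the 1%-AEP (100-year) flood exceeds the 50%-AEP (2-year) flood
```

With zero skew the frequency factor reduces to the normal deviate z, so the log-Pearson III fit collapses to a log-normal frequency curve, which is a convenient sanity check.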
del Nogal Sánchez, Miguel; Pérez-Pavón, José Luis; Moreno Cordero, Bernardo
2010-07-01
In the present work, a strategy for the qualitative and quantitative analysis of 24 volatile compounds listed as suspected allergens in cosmetics by the European Union is reported. The list includes benzyl alcohol, limonene, linalool, methyl 2-octynoate, beta-citronellol, geraniol, citral (two isomers), 7-hydroxycitronellal, anisyl alcohol, cinnamal, cinnamyl alcohol, eugenol, isoeugenol (two isomers), coumarin, alpha-isomethyl ionone, lilial, alpha-amylcinnamal, lyral, alpha-amylcinnamyl alcohol, farnesol (three isomers), alpha-hexyl cinnamal, benzyl cinnamate, benzyl benzoate, and benzyl salicylate. The applicability of a headspace (HS) autosampler in combination with a gas chromatograph (GC) equipped with a programmable temperature vaporizer (PTV) and a quadrupole mass spectrometry (qMS) detector is explored. By using a headspace sampler, sample preparation is reduced to introducing the sample into the vial. This reduces the analysis time and the experimental errors associated with this step of the analytical process. Two different injection techniques were used: solvent-vent injection and hot-split injection. The first offers a way to improve sensitivity at the same time maintaining the simple headspace instrumentation and it is recommended for compounds at trace levels. The use of a liner packed with Tenax-TA allowed the compounds of interest to be retained during the venting process. The signals obtained when hot-split injection was used allowed quantification of all the compounds according to the thresholds of the European Cosmetics Directive. Monodimensional gas chromatography coupled to a conventional quadrupole mass spectrometry detector was used and the 24 analytes were separated appropriately along a run time of about 12 min. Use of the standard addition procedure as a quantification technique overcame the matrix effect. It should be emphasized that the method showed good precision and accuracy. 
Furthermore, it is rapid, simple, and, in view of the results, highly suitable for the determination of suspected allergens in different cosmetic products.
Differentially pumped dual linear quadrupole ion trap mass spectrometer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Owen, Benjamin C.; Kenttamaa, Hilkka I.
The present disclosure provides a new tandem mass spectrometer and methods of using the same for analyzing charged particles. The differentially pumped dual linear quadrupole ion trap mass spectrometer of the present disclosure includes a combination of two linear quadrupole ion trap (LQIT) mass spectrometers with differentially pumped vacuum chambers.
Dzhioev, R I; Korenev, V L
2007-07-20
The nuclear quadrupole interaction eliminates the restrictions imposed by hyperfine interaction on the spin coherence of an electron and nuclei in a quantum dot. The strain-induced nuclear quadrupole interaction suppresses the nuclear spin flip and makes possible the zero-field dynamic nuclear polarization in self-organized InP/InGaP quantum dots. The direction of the effective nuclear magnetic field is fixed in space, thus quenching the magnetic depolarization of the electron spin in the quantum dot. The quadrupole interaction suppresses the zero-field electron spin decoherence also for the case of nonpolarized nuclei. These results provide a new vision of the role of the nuclear quadrupole interaction in nanostructures: it elongates the spin memory of the electron-nuclear system.
NASA Astrophysics Data System (ADS)
Dzhioev, R. I.; Korenev, V. L.
2007-07-01
The nuclear quadrupole interaction eliminates the restrictions imposed by hyperfine interaction on the spin coherence of an electron and nuclei in a quantum dot. The strain-induced nuclear quadrupole interaction suppresses the nuclear spin flip and makes possible the zero-field dynamic nuclear polarization in self-organized InP/InGaP quantum dots. The direction of the effective nuclear magnetic field is fixed in space, thus quenching the magnetic depolarization of the electron spin in the quantum dot. The quadrupole interaction suppresses the zero-field electron spin decoherence also for the case of nonpolarized nuclei. These results provide a new vision of the role of the nuclear quadrupole interaction in nanostructures: it elongates the spin memory of the electron-nuclear system.
A-cation control of magnetoelectric quadrupole order in A(TiO)Cu4(PO4)4 (A = Ba, Sr, and Pb)
NASA Astrophysics Data System (ADS)
Kimura, K.; Toyoda, M.; Babkevich, P.; Yamauchi, K.; Sera, M.; Nassif, V.; Rønnow, H. M.; Kimura, T.
2018-04-01
Ferroic magnetic quadrupole order exhibiting macroscopic magnetoelectric activity is discovered in the novel compound A(TiO)Cu4(PO4)4 with A = Pb, in contrast with the antiferroic quadrupole order observed in the isostructural compounds with A = Ba and Sr. Unlike the famous lone-pair stereochemical activity, which often triggers ferroelectricity as in PbTiO3, the Pb2+ cation in Pb(TiO)Cu4(PO4)4 is stereochemically inactive but dramatically alters specific magnetic interactions and consequently switches the quadrupole order from antiferroic to ferroic. Our first-principles calculations uncover a positive correlation between the degree of A-O bond covalency and the stability of the ferroic quadrupole order.
NASA Astrophysics Data System (ADS)
Dehghan, Mehdi; Hajarian, Masoud
2012-08-01
A matrix P is called symmetric orthogonal if P = P^T = P^(-1). A matrix X is said to be generalised bisymmetric with respect to P if X = X^T = PXP. It is obvious that any symmetric matrix is also a generalised bisymmetric matrix with respect to I (the identity matrix). By extending the idea of the Jacobi and Gauss-Seidel iterations, this article proposes two new iterative methods for computing, respectively, the generalised bisymmetric (containing the symmetric solution as a special case) and skew-symmetric solutions of the generalised Sylvester matrix equation ? (including the Sylvester and Lyapunov matrix equations as special cases), which is encountered in many systems and control applications. When the generalised Sylvester matrix equation has a unique generalised bisymmetric (skew-symmetric) solution, the first (second) iterative method converges to the generalised bisymmetric (skew-symmetric) solution of this matrix equation for any initial generalised bisymmetric (skew-symmetric) matrix. Finally, some numerical results are given to illustrate the effectiveness of the theoretical results.
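The definitions above are easy to verify numerically. The sketch below is a hypothetical illustration (not the paper's iterative algorithm): it takes the exchange matrix as a symmetric orthogonal P and projects a random matrix onto the set of generalised bisymmetric matrices with respect to P.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
# Exchange matrix: symmetric orthogonal, since P = P^T = P^(-1).
P = np.fliplr(np.eye(n))

Y = rng.standard_normal((n, n))
S = (Y + Y.T) / 2          # symmetric part of Y
X = (S + P @ S @ P) / 2    # average S with its P-conjugate

# X is generalised bisymmetric with respect to P:
assert np.allclose(X, X.T)          # X = X^T
assert np.allclose(X, P @ X @ P)    # X = PXP
```

With P = I the projection reduces to ordinary symmetrization, recovering the special case noted in the abstract.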
Skewness in large-scale structure and non-Gaussian initial conditions
NASA Technical Reports Server (NTRS)
Fry, J. N.; Scherrer, Robert J.
1994-01-01
We compute the skewness of the galaxy distribution arising from the nonlinear evolution of arbitrary non-Gaussian initial conditions to second order in perturbation theory, including the effects of nonlinear biasing. The result contains a term identical to that for a Gaussian initial distribution plus terms which depend on the skewness and kurtosis of the initial conditions. The results are model dependent; we present calculations for several toy models. At late times, the leading contribution from the initial skewness decays away relative to the other terms and becomes increasingly unimportant, but the contribution from initial kurtosis, previously overlooked, has the same time dependence as the Gaussian terms. Observations of a linear dependence of the normalized skewness on the rms density fluctuation therefore do not necessarily rule out initially non-Gaussian models. We also show that with non-Gaussian initial conditions the first correction to linear theory for the mean square density fluctuation is larger than for Gaussian models.
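For Gaussian initial conditions, second-order perturbation theory gives the classic unsmoothed normalized skewness S3 = ⟨δ³⟩/⟨δ²⟩² → 34/7. That limiting value can be checked with a quick Monte Carlo toy (a local quadratic model for the second-order density, an illustration rather than the paper's full calculation):

```python
import numpy as np

rng = np.random.default_rng(1)
sigma = 0.02                                   # linear rms, deep in the linear regime
d1 = rng.standard_normal(4_000_000) * sigma    # Gaussian linear density contrast

# Local second-order model: delta = delta_1 + (17/21)(delta_1^2 - sigma^2),
# whose normalized skewness tends to 6 * 17/21 = 34/7 as sigma -> 0.
d = d1 + (17.0 / 21.0) * (d1**2 - sigma**2)

S3 = np.mean(d**3) / np.mean(d**2) ** 2
print(S3)   # approaches 34/7 ~ 4.857 for small sigma
```

In this toy the normalized skewness is independent of the fluctuation amplitude at leading order, which is the sense in which a linear S3-vs-rms relation mimics Gaussian initial conditions.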
Highly Dynamic Anion-Quadrupole Networks in Proteins.
Kapoor, Karan; Duff, Michael R; Upadhyay, Amit; Bucci, Joel C; Saxton, Arnold M; Hinde, Robert J; Howell, Elizabeth E; Baudry, Jerome
2016-11-01
The dynamics of anion-quadrupole (or anion-π) interactions formed between negatively charged (Asp/Glu) and aromatic (Phe) side chains are computationally characterized for the first time in RmlC (Protein Data Bank entry 1EP0), a homodimeric epimerase. Empirical force-field-based molecular dynamics simulations predict that anion-quadrupole pairs and triplets (anion-anion-π and anion-π-π) are formed by the protein during the simulated trajectory, which suggests that the anion-quadrupole interactions may provide a significant contribution to the overall stability of the protein, with an average of -1.6 kcal/mol per pair. Some anion-π interactions are predicted to form during the trajectory, extending the number of anion-quadrupole interactions beyond those predicted from crystal structure analysis. At the same time, some anion-π pairs observed in the crystal structure exhibit marginal stability. Overall, most anion-π interactions alternate between an "on" state, with significantly stabilizing energies, and an "off" state, with marginal or null stabilizing energies. How proteins may compensate for the transient loss of anion-quadrupole interactions is characterized in the RmlC aspartate 84-phenylalanine 112 anion-quadrupole pair observed in the crystal structure. A double-mutant cycle analysis of the thermal stability suggests a possible loss of anion-π interactions compensated by variations in the hydration of the residues and the formation of compensating electrostatic interactions. These results suggest that near-planar anion-quadrupole pairs can exist, sometimes transiently, and may play a role in maintaining the structural stability and function of the protein within an otherwise very dynamic interplay of a nonbonded interaction network and solvent effects.
Arnold-Chiari malformation and nystagmus of skew
Pieh, C.; Gottlob, I.
2000-01-01
The Arnold-Chiari malformation is typically associated with downbeat nystagmus. Eye movement recordings in two patients with Arnold-Chiari malformation type 1 showed, in addition to downbeat and gaze-evoked nystagmus, intermittent nystagmus of skew. To date this finding has not been reported in association with Arnold-Chiari malformation. Nystagmus of skew should raise the suspicion of Arnold-Chiari malformation and prompt sagittal head MRI examination.
An Adaptive Method for Reducing Clock Skew in an Accumulative Z-Axis Interconnect System
NASA Technical Reports Server (NTRS)
Bolotin, Gary; Boyce, Lee
1997-01-01
This paper presents several methods for adjusting clock skew variations that occur in an accumulative z-axis interconnect system. In such a system, the delay between modules is a function of their distance from one another. Clock distribution in a high-speed system, where clock skew must be kept to a minimum, becomes more challenging when the module order is variable before design.
Handling Data Skew in MapReduce Cluster by Using Partition Tuning
Gao, Yufei; Zhou, Yanjie; Zhou, Bing; Shi, Lei; Zhang, Jiacai
2017-01-01
The healthcare industry has generated large amounts of data, and analyzing these has emerged as an important problem in recent years. The MapReduce programming model has been successfully used for big data analytics. However, data skew invariably occurs in big data analytics and seriously affects efficiency. To overcome the data skew problem in MapReduce, we have in the past proposed a data processing algorithm called Partition Tuning-based Skew Handling (PTSH). In comparison with the one-stage partitioning strategy used in the traditional MapReduce model, PTSH uses a two-stage strategy and the partition tuning method to disperse key-value pairs in virtual partitions and recombines each partition in case of data skew. The robustness and efficiency of the proposed algorithm were tested on a wide variety of simulated datasets and real healthcare datasets. The results showed that the PTSH algorithm can handle data skew in MapReduce efficiently and improve the performance of MapReduce jobs in comparison with native Hadoop, Closer, and locality-aware and fairness-aware key partitioning (LEEN). We also found that the time needed for rule extraction can be reduced significantly by adopting the PTSH algorithm, since it is more suitable for association rule mining (ARM) on healthcare data.
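The payoff of size-aware, two-stage partitioning can be illustrated with a toy simulation. The greedy rebalancing below is a hypothetical stand-in for PTSH's partition-tuning step (the actual algorithm disperses key-value pairs into virtual partitions and recombines them); it only shows why key-size information beats blind hashing under skew.

```python
import numpy as np
from collections import Counter

rng = np.random.default_rng(2)
keys = rng.zipf(1.5, size=100_000)         # heavily skewed key frequencies
counts = Counter(int(k) for k in keys)
R = 8                                      # number of reduce partitions

# One-stage baseline: hash partitioning, as in stock MapReduce.
naive = [0] * R
for k, c in counts.items():
    naive[hash(k) % R] += c

# Size-aware rebalancing (stand-in for partition tuning): assign key groups,
# largest first, to the currently lightest partition.
tuned = [0] * R
for k, c in sorted(counts.items(), key=lambda kv: -kv[1]):
    tuned[tuned.index(min(tuned))] += c

print(max(naive), max(tuned))   # the maximum partition load is the straggler cost
```

The job finishes when its most loaded reducer does, so lowering the maximum load directly shortens the reduce phase, which is the effect PTSH targets.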
Aharonov–Anandan quantum phases and Landau quantization associated with a magnetic quadrupole moment
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fonseca, I.C.; Bakke, K., E-mail: kbakke@fisica.ufpb.br
The arising of geometric quantum phases in the wave function of a moving particle possessing a magnetic quadrupole moment is investigated. It is shown that an Aharonov–Anandan quantum phase (Aharonov and Anandan, 1987) can be obtained in the quantum dynamics of a moving particle with a magnetic quadrupole moment. In particular, it is obtained as an analogue of the scalar Aharonov–Bohm effect for a neutral particle (Anandan, 1989). Besides, by confining the quantum particle to a hard-wall confining potential, the dependence of the energy levels on the geometric quantum phase is discussed and, as a consequence, persistent currents can arise from this dependence. Finally, an analogue of the Landau quantization is discussed. Highlights: •Scalar Aharonov–Bohm effect for a particle possessing a magnetic quadrupole moment. •Aharonov–Anandan quantum phase for a particle with a magnetic quadrupole moment. •Dependence of the energy levels on the Aharonov–Anandan quantum phase. •Landau quantization associated with a particle possessing a magnetic quadrupole moment.
Design of a 6 TeV muon collider
Wang, M-H.; Nosochkov, Y.; Cai, Y.; ...
2016-09-09
Here, a preliminary design of a muon collider ring with a center-of-mass (CM) energy of 6 TeV is presented. The ring circumference is 6.3 km, and the β functions at the collision point are 1 cm in each plane. The ring linear optics, the non-linear chromaticity compensation in the Interaction Region (IR), and the additional non-linear orthogonal correcting knobs are described. Magnet specifications are based on a maximum pole-tip field of 20 T in dipoles and 15 T in quadrupoles. Careful compensation of the non-linear chromatic and amplitude-dependent effects provides a sufficiently large dynamic aperture for a momentum range of up to ±0.5% without considering magnet errors.
Static performance investigation of a skewed-throat multiaxis thrust-vectoring nozzle concept
NASA Technical Reports Server (NTRS)
Wing, David J.
1994-01-01
The static performance of a jet exhaust nozzle which achieves multiaxis thrust vectoring by physically skewing the geometric throat has been characterized in the static test facility of the 16-Foot Transonic Tunnel at NASA Langley Research Center. The nozzle has an asymmetric internal geometry defined by four surfaces: a convergent-divergent upper surface with its ridge perpendicular to the nozzle centerline, a convergent-divergent lower surface with its ridge skewed relative to the nozzle centerline, an outwardly deflected sidewall, and a straight sidewall. The primary goal of the concept is to provide efficient yaw thrust vectoring by forcing the sonic plane (nozzle throat) to form at a yaw angle defined by the skewed ridge of the lower surface contour. A secondary goal is to provide multiaxis thrust vectoring by combining the skewed-throat yaw-vectoring concept with upper and lower pitch flap deflections. The geometric parameters varied in this investigation included lower surface ridge skew angle, nozzle expansion ratio (divergence angle), aspect ratio, pitch flap deflection angle, and sidewall deflection angle. Nozzle pressure ratio was varied from 2 to a high of 11.5 for some configurations. The results of the investigation indicate that efficient, substantial multiaxis thrust vectoring was achieved by the skewed-throat nozzle concept. However, certain control surface deflections destabilized the internal flow field, which resulted in substantial shifts in the position and orientation of the sonic plane and had an adverse effect on thrust-vectoring and weight flow characteristics. By increasing the expansion ratio, the location of the sonic plane was stabilized. The asymmetric design resulted in interdependent pitch and yaw thrust vectoring as well as nonzero thrust-vector angles with undeflected control surfaces. 
By skewing the ridges of both the upper and lower surface contours, the interdependency between pitch and yaw thrust vectoring may be eliminated and the location of the sonic plane may be further stabilized.
Nonuniform radiation damage in permanent magnet quadrupoles.
Danly, C R; Merrill, F E; Barlow, D; Mariam, F G
2014-08-01
We present data that indicate nonuniform magnetization loss due to radiation damage in neodymium-iron-boron Halbach-style permanent magnet quadrupoles. The proton radiography (pRad) facility at Los Alamos uses permanent-magnet quadrupoles for magnifying lenses, and a system recently commissioned at GSI-Darmstadt uses permanent magnets for its primary lenses. Large fluences of spallation neutrons can be produced in close proximity to these magnets when the proton beam is, intentionally or unintentionally, directed into the tungsten beam collimators; imaging experiments at LANL's pRad have shown image degradation with these magnetic lenses at proton beam doses lower than those expected to cause damage through radiation-induced reduction of the quadrupole strength alone. We have observed preferential degradation in portions of the permanent magnet quadrupole where the field intensity is highest, resulting in increased high-order multipole components.
Communication: On the isotope anomaly of nuclear quadrupole coupling in molecules
NASA Astrophysics Data System (ADS)
Filatov, Michael; Zou, Wenli; Cremer, Dieter
2012-10-01
The dependence of the nuclear quadrupole coupling constants (NQCC) on the interaction between electrons and a nucleus of finite size is theoretically analyzed. A deviation of the ratio of the NQCCs obtained from two different isotopomers of a molecule from the ratio of the corresponding bare nuclear electric quadrupole moments, known as quadrupole anomaly, is interpreted in terms of the logarithmic derivatives of the electric field gradient at the nuclear site with respect to the nuclear charge radius. Quantum chemical calculations based on a Dirac-exact relativistic methodology suggest that the effect of the changing size of the Au nucleus in different isotopomers can be observed for Au-containing molecules, for which the predicted quadrupole anomaly reaches values of the order of 0.1%. This is experimentally detectable and provides an insight into the charge distribution of non-spherical nuclei.
An Analytical Comparison of the Acoustic Analogy and Kirchhoff Formulation for Moving Surfaces
NASA Technical Reports Server (NTRS)
Brentner, Kenneth S.; Farassat, F.
1997-01-01
The Lighthill acoustic analogy, as embodied in the Ffowcs Williams-Hawkings (FW-H) equation, is compared with the Kirchhoff formulation for moving surfaces. A comparison of the two governing equations reveals that the main Kirchhoff advantage (namely nonlinear flow effects are included in the surface integration) is also available to the FW-H method if the integration surface used in the FW-H equation is not assumed impenetrable. The FW-H equation is analytically superior for aeroacoustics because it is based upon the conservation laws of fluid mechanics rather than the wave equation. This means that the FW-H equation is valid even if the integration surface is in the nonlinear region. This is demonstrated numerically in the paper. The Kirchhoff approach can lead to substantial errors if the integration surface is not positioned in the linear region. These errors may be hard to identify. Finally, new metrics based on the Sobolev norm are introduced which may be used to compare input data for both quadrupole noise calculations and Kirchhoff noise predictions.
Electrostatic quadrupole focused particle accelerating assembly with laminar flow beam
Maschke, A.W.
1984-04-16
A charged particle accelerating assembly provided with a predetermined ratio of parametric structural characteristics and with related operating voltages applied to each of its linearly spaced focusing and accelerating quadrupoles, thereby to maintain a particle beam traversing the electrostatic fields of the quadrupoles in the assembly in an essentially laminar flow through the assembly.
Electrostatic quadrupole focused particle accelerating assembly with laminar flow beam
Maschke, Alfred W.
1985-01-01
A charged particle accelerating assembly provided with a predetermined ratio of parametric structural characteristics and with related operating voltages applied to each of its linearly spaced focusing and accelerating quadrupoles, thereby to maintain a particle beam traversing the electrostatic fields of the quadrupoles in the assembly in an essentially laminar flow throughout the assembly.
Thermodynamic efficiency of solar concentrators.
Shatz, Narkis; Bortz, John; Winston, Roland
2010-04-26
The optical thermodynamic efficiency is a comprehensive metric that takes into account all loss mechanisms associated with transferring flux from the source to the target phase space, which may include losses due to inadequate design, non-ideal materials, fabrication errors, and less than maximal concentration. We discuss consequences of Fermat's principle of geometrical optics and review étendue dilution and optical loss mechanisms associated with nonimaging concentrators. We develop an expression for the optical thermodynamic efficiency which combines the first and second laws of thermodynamics. As such, this metric is a gold standard for evaluating the performance of nonimaging concentrators. We provide examples illustrating the use of this new metric for concentrating photovoltaic systems for solar power applications, and in particular show how skewness mismatch limits the attainable optical thermodynamic efficiency.
Sampling problems: The small scale structure of precipitation
NASA Technical Reports Server (NTRS)
Crane, R. K.
1981-01-01
The quantitative measurement of precipitation characteristics for any area on the surface of the Earth is not an easy task. Precipitation is rather variable in both space and time, and the distribution of surface rainfall data for a given location typically is substantially skewed. There are a number of precipitation processes at work in the atmosphere, and few of them are well understood. The formal theory on sampling and estimating precipitation appears considerably deficient. Little systematic attention is given to the nonsampling errors that always arise in utilizing any measurement system. Although the precipitation measurement problem is an old one, it continues to be one that is in need of systematic and careful attention. A brief history of the presently competing measurement technologies should aid us in understanding the problems inherent in this measurement task.
The Columbia University Sub-micron Charged Particle Beam
Randers-Pehrson, Gerhard; Johnson, Gary W.; Marino, Stephen A.; Xu, Yanping; Dymnikov, Alexander D.; Brenner, David J.
2009-01-01
A lens system consisting of two electrostatic quadrupole triplets has been designed and constructed at the Radiological Research Accelerator Facility (RARAF) of Columbia University. The lens system has been used to focus 6-MeV 4He ions to a beam spot in air with a diameter of 0.8 µm. The quadrupole electrodes can withstand voltages high enough to focus 4He ions up to 10 MeV and protons up to 5 MeV. The quadrupole triplet design is novel in that alignment is achieved through precise construction and the relative strengths of the quadrupoles are set by the lengths of the elements, so that the magnitudes of the voltages required for focusing are nearly identical. The insulating sections between electrodes have undergone ion implantation to improve the voltage stability of the lens. The lens design employs Russian symmetry for the quadrupole elements.
Theory of Nuclear Quadrupole Interactions in the Chemical Ferromagnet p-Cl-Ph-CH-N=TEMPO
NASA Astrophysics Data System (ADS)
Briere, Tina M.; Jeong, Junho; Sahoo, N.; Das, T. P.; Ohira, S.; Nishiyama, K.; Nagamine, K.
2002-03-01
The study (Junho Jeong et al., Physica B 289-290, 132 (2000)) of the magnetic hyperfine properties of chemical ferromagnets provides valuable information about the electronic spin distributions in the individual molecules. Insights into the electronic charge distributions and their anisotropy can be obtained from electric quadrupole interactions for the different nuclei in these systems. For this purpose we have studied the nuclear quadrupole interactions (T. P. Das and E. L. Hahn, "Nuclear Quadrupole Resonance Spectroscopy", Academic Press Inc., New York, 1958) for the 14N nuclei in the NO group and the bridge nitrogen, the 17O nucleus in the NO group, and the 35Cl nucleus in the p-Cl-Ph-CH-N=TEMPO system, both by itself and in the presence of trapped μ and Mu. Comparison will be made between our results and the available experimental quadrupole coupling constant (e^2qQ) and asymmetry parameter (η) data.
On Time/Space Aggregation of Fine-Scale Error Estimates (Invited)
NASA Astrophysics Data System (ADS)
Huffman, G. J.
2013-12-01
Estimating errors inherent in fine time/space-scale satellite precipitation data sets is still an on-going problem and a key area of active research. Complicating features of these data sets include the intrinsic intermittency of the precipitation in space and time and the resulting highly skewed distribution of precipitation rates. Additional issues arise from the subsampling errors that satellites introduce, the errors due to retrieval algorithms, and the correlated error that retrieval and merger algorithms sometimes introduce. Several interesting approaches have been developed recently that appear to make progress on these long-standing issues. At the same time, the monthly averages over 2.5°x2.5° grid boxes in the Global Precipitation Climatology Project (GPCP) Satellite-Gauge (SG) precipitation data set follow a very simple sampling-based error model (Huffman 1997) with coefficients that are set using coincident surface and GPCP SG data. This presentation outlines the unsolved problem of how to aggregate the fine-scale errors (discussed above) to an arbitrary time/space averaging volume for practical use in applications, reducing in the limit to simple Gaussian expressions at the monthly 2.5°x2.5° scale. Scatter diagrams with different time/space averaging show that the relationship between the satellite and validation data improves due to the reduction in random error. One of the key, and highly non-linear, issues is that fine-scale estimates tend to have large numbers of cases with points near the axes on the scatter diagram (one of the values is exactly or nearly zero, while the other value is higher). Averaging 'pulls' the points away from the axes and towards the 1:1 line, which usually happens for higher precipitation rates before lower rates. Given this qualitative observation of how aggregation affects error, we observe that existing aggregation rules, such as the Steiner et al. (2003) power law, only depend on the aggregated precipitation rate. 
Is this sufficient, or is it necessary to aggregate the precipitation error estimates across the time/space data cube used for averaging? At least for small time/space data cubes it would seem that the detailed variables that affect each precipitation error estimate in the aggregation, such as sensor type, land/ocean surface type, convective/stratiform type, and so on, drive variations that must be accounted for explicitly.
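The baseline effect that any aggregation rule must capture, the reduction of random error by time/space averaging, can be sketched with a toy simulation (assuming purely random, uncorrelated fine-scale errors, which real retrieval errors generally are not):

```python
import numpy as np

rng = np.random.default_rng(3)
truth = 2.0                                  # "true" rate, arbitrary units
fine = truth + rng.normal(0.0, 1.0, 9216)    # fine-scale estimates with unit random error

stds = []
for N in (1, 16, 256):                       # fine cells per aggregation "cube"
    agg = fine[: (9216 // N) * N].reshape(-1, N).mean(axis=1)
    stds.append(agg.std())
    print(N, round(stds[-1], 3))             # spread shrinks roughly as 1/sqrt(N)
```

Correlated retrieval error breaks the 1/sqrt(N) scaling, which is one reason the abstract asks whether aggregation rules can depend on the aggregated rate alone or must track the error estimates across the full time/space data cube.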
Differential models of twin correlations in skew for body-mass index (BMI).
Tsang, Siny; Duncan, Glen E; Dinescu, Diana; Turkheimer, Eric
2018-01-01
Body Mass Index (BMI), like most human phenotypes, is substantially heritable. However, BMI is not normally distributed; the skew appears to be structural, and increases as a function of age. Moreover, twin correlations for BMI commonly violate the assumptions of the most common variety of the classical twin model, with the MZ twin correlation greater than twice the DZ correlation. This study aimed to decompose twin correlations for BMI using more general skew-t distributions. Same sex MZ and DZ twin pairs (N = 7,086) from the community-based Washington State Twin Registry were included. We used latent profile analysis (LPA) to decompose twin correlations for BMI into multiple mixture distributions. LPA was performed using the default normal mixture distribution and the skew-t mixture distribution. Similar analyses were performed for height as a comparison. Our analyses are then replicated in an independent dataset. A two-class solution under the skew-t mixture distribution fits the BMI distribution for both genders. The first class consists of a relatively normally distributed, highly heritable BMI with a mean in the normal range. The second class is a positively skewed BMI in the overweight and obese range, with lower twin correlations. In contrast, height is normally distributed, highly heritable, and is well-fit by a single latent class. Results in the replication dataset were highly similar. Our findings suggest that two distinct processes underlie the skew of the BMI distribution. The contrast between height and weight is in accord with subjective psychological experience: both are under obvious genetic influence, but BMI is also subject to behavioral control, whereas height is not.
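A two-class decomposition of a skewed BMI-like distribution can be sketched as follows. This uses scikit-learn's Gaussian mixture on simulated data as a stand-in, since no off-the-shelf skew-t mixture is assumed here; the study itself fits skew-t mixture distributions, which model the heavy right class better.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(4)
# Simulated BMI: a normal-range class plus a positively skewed overweight/obese class.
lean = rng.normal(22.0, 2.0, size=6000)
heavy = 27.0 + rng.gamma(shape=2.0, scale=2.5, size=3000)   # right-skewed tail
bmi = np.concatenate([lean, heavy]).reshape(-1, 1)

# Two-class latent profile analysis, approximated with a Gaussian mixture.
gm = GaussianMixture(n_components=2, random_state=0).fit(bmi)
means = np.sort(gm.means_.ravel())
print(means)   # one component near the normal range, one in the overweight/obese range
```

The recovered component means mirror the paper's qualitative finding: one near-normal class and one positively skewed class in the overweight/obese range.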
Mangold, Alexandra; Trenkwalder, Katharina; Ringler, Max; Hödl, Walter; Ringler, Eva
2015-09-03
Reproductive skew, the uneven distribution of reproductive success among individuals, is a common feature of many animal populations. Several scenarios have been proposed to favour either high or low levels of reproductive skew. In particular, a male-biased operational sex ratio and the asynchronous arrival of females are expected to cause high variation in reproductive success among males. Recently it has been suggested that the type of benefits provided by males (fixed vs. dilutable) could also strongly impact individual mating patterns, thereby affecting reproductive skew. We tested this hypothesis in Hyalinobatrachium valerioi, a Neotropical glass frog with prolonged breeding and paternal care. We monitored and genetically sampled a natural population in southwestern Costa Rica during the breeding season in 2012 and performed parentage analysis of adult frogs and tadpoles to investigate individual mating frequencies and possible mating preferences, and to estimate reproductive skew in males and females. We identified a polygamous mating system, in which high proportions of males (69 %) and females (94 %) reproduced successfully. The variance in male mating success could largely be attributed to differences in time spent calling at the reproductive site, but not to body size or relatedness. Female H. valerioi were not choosy and mated indiscriminately with available males. Our findings support the hypothesis that dilutable male benefits - such as parental care - can favour female polyandry and maintain low levels of reproductive skew among males within a population, even in the presence of direct male-male competition and a highly male-biased operational sex ratio. We hypothesize that low male reproductive skew might be a general characteristic of prolonged breeders with paternal care.
Shoari, Niloofar; Dubé, Jean-Sébastien; Chenouri, Shoja'eddin
2015-11-01
In environmental studies, concentration measurements frequently fall below the detection limits of measuring instruments, resulting in left-censored data. Some studies employ parametric methods such as the maximum likelihood estimator (MLE), robust regression on order statistics (rROS), and gamma regression on order statistics (GROS), while others suggest a non-parametric approach, the Kaplan-Meier method (KM). Using examples of real data from a soil characterization study in Montreal, we highlight the need for additional investigations that aim at unifying the existing literature. A number of studies have examined this issue; however, those considering data skewness and model misspecification are rare. These aspects are investigated in this paper through simulations. Among other findings, results show that for low-skewed data, the performance of the different statistical methods is comparable, regardless of the censoring percentage and sample size. For highly skewed data, the performance of the MLE method under lognormal and Weibull distributions is questionable, particularly when the sample size is small or the censoring percentage is high. In such conditions, MLE under the gamma distribution, rROS, GROS, and KM are less sensitive to skewness. Regarding model misspecification, MLE based on lognormal and Weibull distributions provides poor estimates when the true distribution of the data is misspecified. However, the methods of rROS, GROS, and MLE under the gamma distribution are generally robust to model misspecification regardless of skewness, sample size, and censoring percentage. Since the characteristics of environmental data (e.g., type of distribution and skewness) are unknown a priori, we suggest using MLE based on the gamma distribution, rROS, and GROS.
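The censored-MLE idea is compact enough to sketch: detected values contribute the log-density, and non-detects contribute the log-probability of falling below the detection limit. A minimal lognormal version on simulated data (an illustration of the general approach, with hypothetical parameter values, not the paper's exact implementation):

```python
import numpy as np
from scipy import optimize, stats

rng = np.random.default_rng(5)
true_mu, true_sigma = 1.0, 0.8
x = rng.lognormal(true_mu, true_sigma, size=500)   # "true" concentrations
dl = 2.0                                           # detection limit (hypothetical)
nd = x < dl                                        # non-detects, reported only as "< dl"

def nll(theta):
    mu, log_sigma = theta
    sigma = np.exp(log_sigma)                      # keep sigma positive
    detected = x[~nd]
    # Detected values: lognormal log-density.
    z = (np.log(detected) - mu) / sigma
    ll = np.sum(stats.norm.logpdf(z) - np.log(sigma * detected))
    # Non-detects: log-CDF at the detection limit, once per censored sample.
    ll += nd.sum() * stats.norm.logcdf((np.log(dl) - mu) / sigma)
    return -ll

res = optimize.minimize(nll, x0=[0.0, 0.0])
mu_hat, sigma_hat = res.x[0], float(np.exp(res.x[1]))
print(mu_hat, sigma_hat)   # close to the true (1.0, 0.8)
```

Swapping in a Weibull or gamma density gives the other parametric MLEs compared in the abstract; the misspecification risk discussed there enters precisely through this choice of density.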
The skew ray ambiguity in the analysis of videokeratoscopic data.
Iskander, D Robert; Davis, Brett A; Collins, Michael J
2007-05-01
Skew ray ambiguity is present in most videokeratoscopic measurements when azimuthal components of the corneal curvature are not taken into account. There have been some reported studies, based on theoretical predictions and measured test surfaces, suggesting that skew ray ambiguity is significant for highly deformed corneas or decentered corneal measurements. However, the effect of skew ray ambiguity in ray tracing through videokeratoscopic data has not been studied in depth. We have evaluated the significance of the skew ray ambiguity and its effect on the analyzed corneal optics. This was achieved by devising a procedure in which we compared the corneal wavefront aberrations estimated from 3D ray tracing with those determined from 2D (meridional based) estimates of the refractive power. The latter was possible due to the recently developed concept of refractive Zernike power polynomials, which links the refractive power domain with that of the wavefront. Simulated corneal surfaces as well as data from a range of corneas (from two different Placido disk-based videokeratoscopes) were used to find the limit at which the difference in estimated corneal wavefronts (or the corresponding refractive powers) would have clinical significance (e.g., equivalent to 0.125 D or more). The inclusion/exclusion of the skew ray in the analyses showed some differences in the results. However, the proposed procedure showed clinically significant differences only for highly deformed corneas and only for large corneal diameters. For the overwhelming majority of surfaces, the skew ray ambiguity is not a clinically significant issue in the analysis of videokeratoscopic data, indicating that meridional processing, such as that encountered in the calculation of refractive power maps, is adequate.
On river-floodplain interaction and hydrograph skewness
NASA Astrophysics Data System (ADS)
Fleischmann, Ayan S.; Paiva, Rodrigo C. D.; Collischonn, Walter; Sorribas, Mino V.; Pontes, Paulo R. M.
2016-10-01
Understanding hydrological processes occurring within a basin by looking at its outlet hydrograph can improve and foster comprehension of ungauged regions. In this context, we present an extensive examination of the roles that floodplains play in driving hydrograph shapes. Observations of many river hydrographs with large floodplain influence indicate that many of them exhibit negatively skewed hydrographs. Through a series of numerical experiments and analytical reasoning, we show how the relationship between flood wave celerity and discharge in such systems is responsible for determining the hydrograph shapes. The more water inundates the floodplains upstream of the observation point, the more negatively skewed is the observed hydrograph. A case study is performed in the Amazon River Basin, where major rivers with large floodplain attenuation (e.g., Purus, Madeira, and Juruá) are identified as having higher negative skewness in their respective hydrographs. A metric of hydrograph skewness, based on the time derivative of discharge, was developed to quantify this effect. Using this feature, different wetland types could be distinguished, e.g., wetlands maintained by endogenous processes from wetlands governed by overbank flow (along river floodplains). Together with the skewness concept, the metric may be used in other studies concerning the relevance of floodplain attenuation in large, ungauged rivers, where remote sensing data (e.g., satellite altimetry) can be very useful.
Gentilini, Davide; Garagnani, Paolo; Pisoni, Serena; Bacalini, Maria Giulia; Calzari, Luciano; Mari, Daniela; Vitale, Giovanni; Franceschi, Claudio; Di Blasio, Anna Maria
2015-08-01
In this study we applied a new analytical strategy to investigate the relationship between stochastic epigenetic mutations (SEMs) and aging. We analysed methylation levels with the Infinium HumanMethylation27 and HumanMethylation450 BeadChips in a population of 178 subjects ranging from 3 to 106 years of age. For each CpG probe, epimutated subjects were identified as extreme outliers whose methylation level lay more than three interquartile ranges below the first quartile (Q1 - (3 x IQR)) or above the third quartile (Q3 + (3 x IQR)). We demonstrated that the number of SEMs was low in childhood and increased exponentially during aging. Using the HUMARA method, skewing of X chromosome inactivation (XCI) was evaluated in heterozygous women. Multivariate analysis indicated a significant correlation between log(SEMs) and degree of XCI skewing after adjustment for age (β = 0.41; confidence interval: 0.14, 0.68; p-value = 0.0053). The PATH analysis tested the complete model containing the variables skewing of XCI, age, log(SEMs), and overall CpG methylation. After adjusting for the number of epimutations, we failed to confirm the widely reported correlation between skewing of XCI and aging. This evidence suggests that the known correlation between XCI skewing and aging may not be a direct association but one mediated by the number of SEMs.
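The outlier rule described above is straightforward to apply per probe. The sketch below uses simulated beta-values (an assumption for illustration; the probe count, subject count, and injected epimutation are not from the study):

```python
import numpy as np

# Simulated methylation beta-values: 5 CpG probes x 100 subjects.
rng = np.random.default_rng(1)
beta = rng.beta(50, 50, size=(5, 100))
beta[2, 7] = 0.999                        # inject one extreme epimutation

# Per-probe SEM rule: flag values below Q1 - 3*IQR or above Q3 + 3*IQR.
q1 = np.percentile(beta, 25, axis=1, keepdims=True)
q3 = np.percentile(beta, 75, axis=1, keepdims=True)
iqr = q3 - q1
sem_mask = (beta < q1 - 3 * iqr) | (beta > q3 + 3 * iqr)
sem_per_subject = sem_mask.sum(axis=0)    # SEM count for each subject
```

Summing the mask over probes gives the per-subject SEM counts whose log was correlated with XCI skewing in the study.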
NASA Astrophysics Data System (ADS)
McEvoy, Erica L.
Stochastic differential equations are becoming a popular tool for modeling the transport and acceleration of cosmic rays in the heliosphere. In diffusive shock acceleration, cosmic rays diffuse across a region of discontinuity where the upstream diffusion coefficient abruptly changes to the downstream value. Because the method of stochastic integration has not yet been developed to handle these types of discontinuities, I utilize methods and ideas from probability theory to develop a conceptual framework for the treatment of such discontinuities. Using this framework, I then produce some simple numerical algorithms that allow one to incorporate and simulate a variety of discontinuities (or boundary conditions) using stochastic integration. These algorithms were then modified to create a new algorithm that incorporates the discontinuous change in diffusion coefficient found in shock acceleration (known as skew Brownian motion). The originality of this algorithm lies in the fact that it is the first of its kind to be statistically exact, so that one obtains accuracy without the use of approximations (other than machine precision error). I then apply this algorithm to the problem of diffusive shock acceleration, modifying it to incorporate the additional effect of the discontinuous flow speed profile found at the shock. A steady-state solution is obtained that accurately simulates this phenomenon. This result represents a significant improvement over previous approximation algorithms and will be useful for the simulation of discontinuous diffusion processes in other fields, such as biology and finance.
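A small piece of the skew Brownian motion machinery can be sampled exactly. The sketch below is an assumption-laden illustration, not the thesis algorithm: it only draws the known time-t marginal of skew BM started at the interface x = 0, which is a reflected Gaussian whose sign is positive with probability alpha.

```python
import numpy as np

rng = np.random.default_rng(5)
alpha = 0.7                                # skewness parameter in (0, 1)

def skew_bm_from_zero(t, size, rng):
    """Exact draw of X_t for skew Brownian motion started at 0:
    density 2*alpha*phi_t(x) for x > 0 and 2*(1 - alpha)*phi_t(x)
    for x < 0, where phi_t is the N(0, t) density."""
    magnitude = np.abs(rng.normal(scale=np.sqrt(t), size=size))
    sign = np.where(rng.random(size) < alpha, 1.0, -1.0)
    return sign * magnitude

x = skew_bm_from_zero(t=1.0, size=100_000, rng=rng)
p_positive = float((x > 0).mean())         # should be close to alpha
```

In the shock-acceleration picture, alpha encodes the asymmetry induced by the jump in diffusion coefficient across the shock; a full path simulation additionally has to handle steps that start away from the interface.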
NASA Astrophysics Data System (ADS)
Csillik, O.; Evans, I. S.; Drăguţ, L.
2015-03-01
Automated procedures are developed to alleviate long tails in frequency distributions of morphometric variables. They minimize the skewness of slope gradient frequency distributions, and modify the kurtosis of profile and plan curvature distributions toward that of the Gaussian (normal) model. Box-Cox (for slope) and arctangent (for curvature) transformations are tested on nine digital elevation models (DEMs) of varying origin and resolution, and different landscapes, and shown to be effective. Resulting histograms are illustrated and show considerable improvements over those for previously recommended slope transformations (sine, square root of sine, and logarithm of tangent). Unlike previous approaches, the proposed method evaluates the frequency distribution of slope gradient values in a given area and applies the most appropriate transform if required. Sensitivity of the arctangent transformation is tested, showing that Gaussian-kurtosis transformations are acceptable also in terms of histogram shape. Cube root transformations of curvatures produced bimodal histograms. The transforms are applicable to morphometric variables and many others with skewed or long-tailed distributions. By avoiding long tails and outliers, they permit parametric statistics such as correlation, regression and principal component analyses to be applied, with greater confidence that requirements for linearity, additivity and even scatter of residuals (constancy of error variance) are likely to be met. It is suggested that such transformations should be routinely applied in all parametric analyses of long-tailed variables. Our Box-Cox and curvature automated transformations are based on a Python script, implemented as an easy-to-use script tool in ArcGIS.
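The two transforms can be sketched with SciPy. The data below are synthetic stand-ins (an assumption): long-tailed positive values for slope gradient and heavy-tailed signed values for curvature; the grid-search tuning of the arctangent scale is a simplification of the automated procedure.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
slope = rng.gamma(shape=1.5, scale=8.0, size=5000)   # long-tailed, positive

# Box-Cox chooses lambda by maximum likelihood, which in practice drives
# the skewness of the transformed slope values toward zero.
slope_t, lam = stats.boxcox(slope)

# For signed curvature, an arctangent transform with a tuned scale c pulls
# in the tails; c is chosen here by a crude grid search so that the excess
# kurtosis of arctan(curv / c) approaches the Gaussian value of 0.
curv = rng.standard_t(df=3, size=5000)               # heavy-tailed, signed
cs = np.linspace(0.1, 10.0, 200)
kurt = [abs(stats.kurtosis(np.arctan(curv / c))) for c in cs]
best_c = cs[int(np.argmin(kurt))]
```

Too small a scale saturates the arctangent (platykurtic result), too large a scale leaves it effectively linear (leptokurtic), so a near-Gaussian kurtosis exists in between.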
Compensation of orbit distortion due to quadrupole motion using feed-forward control at KEK ATF
NASA Astrophysics Data System (ADS)
Bett, D. R.; Charrondière, C.; Patecki, M.; Pfingstner, J.; Schulte, D.; Tomás, R.; Jeremie, A.; Kubo, K.; Kuroda, S.; Naito, T.; Okugi, T.; Tauchi, T.; Terunuma, N.; Burrows, P. N.; Christian, G. B.; Perry, C.
2018-07-01
The high luminosity requirement for a future linear collider sets a demanding limit on the beam quality at the Interaction Point (IP). One potential source of luminosity loss is the motion of the ground itself. The resulting misalignments of the quadrupole magnets cause distortions to the beam orbit and hence an increase in the beam emittance. This paper describes a technique for compensating this orbit distortion by using seismometers to monitor the misalignment of the quadrupole magnets in real time. The first demonstration of the technique was achieved at the Accelerator Test Facility (ATF) at KEK in Japan. The feed-forward system consisted of a seismometer-based quadrupole motion monitoring system, an FPGA-based feed-forward processor, and a stripline kicker plus associated electronics. Through the application of a kick calculated from the position of a single quadrupole, the system was able to remove about 80% of the component of the beam jitter that was correlated with the motion of the quadrupole. As a significant fraction of the orbit jitter in the ATF final focus is due to sources other than quadrupole misalignment, this amounted to an approximately 15% reduction in the absolute beam jitter.
A fuzzy logic-based model for noise control at industrial workplaces.
Aluclu, I; Dalgic, A; Toprak, Z F
2008-05-01
Ergonomics is a broad science encompassing the wide variety of working conditions that can affect worker comfort and health, including factors such as lighting, noise, temperature, vibration, workstation design, tool design, machine design, etc. This paper describes noise-human response and a fuzzy logic model developed through comprehensive field studies of noise measurements (including atmospheric parameters) and control measures. The model has two subsystems built around the noise reduction quantity in dB. The first subsystem of the fuzzy model, based on 549 linguistic rules, comprises the acoustical features of all materials used in any workplace. In total, 984 patterns were used: 503 for model development and the remaining 481 for testing the model. The second subsystem deals with atmospheric parameter interactions with noise and has 52 linguistic rules. Similarly, 94 field patterns were obtained; 68 were used for the training stage of the model and the remaining 26 for testing it. These rules were determined by taking into consideration formal standards, the experience of specialists, and the measurement patterns. The results of the model were compared using various statistics (correlation coefficients, max-min, standard deviation, average, and coefficient of skewness) and error measures (root mean square error and relative error). The correlation coefficients were significantly high, the error measures were quite low, and the other statistics were very close to the data, indicating the validity of the model. Therefore, the model can be used for noise control in any workplace and is helpful to the designer in the planning stage of a workplace.
NASA Astrophysics Data System (ADS)
Mohamed, Abdel-Baset A.
2017-10-01
An analytical solution of the master equation that describes a superconducting cavity containing two coupled superconducting charge qubits is obtained. Quantum-mechanical correlations based on Wigner-Yanase skew information, namely local quantum uncertainty and uncertainty-induced quantum non-locality, are compared to the concurrence under the effects of phase decoherence. Local quantum uncertainty exhibits sudden changes during its time evolution and revival process. Sudden death and sudden birth occur only for entanglement, depending on the initial state of the two coupled charge qubits, while the skew-information correlations do not vanish. The quantum correlations of skew information are found to be sensitive to the dephasing rate, the photon number in the cavity, the interaction strength between the two qubits, and the qubit distribution angle of the initial state. With a proper initial state, the correlation of the skew information retains a non-zero stationary value over a long time interval under phase decoherence, which may be useful in quantum information and computation processes.
On the Origin of Protein Superfamilies and Superfolds
NASA Astrophysics Data System (ADS)
Magner, Abram; Szpankowski, Wojciech; Kihara, Daisuke
2015-02-01
Distributions of protein families and folds in genomes are highly skewed, with a small number of prevalent superfamilies/superfolds and a large number of families/folds of small size. Why are the distributions of protein families and folds skewed? Why are there only a limited number of protein families? Here, we employ an information theoretic approach to investigate the protein sequence-structure relationship that leads to the skewed distributions. We treat protein sequences and folds as constituting an information theoretic channel and compute the most efficient distribution of sequences that codes for all protein folds. The identified distributions of sequences and folds follow a power law, consistent with those observed for proteins in nature. Importantly, the skewed distributions of sequences and folds are suggested to have different origins: the skewed distribution of sequences is due to evolutionary pressure to achieve efficient coding of necessary folds, whereas that of folds is based on the thermodynamic stability of folds. The current study provides a new information theoretic framework for proteins that could be widely applied to understanding protein sequences, structures, functions, and interactions.
A strategy to load balancing for non-connectivity MapReduce job
NASA Astrophysics Data System (ADS)
Zhou, Huaping; Liu, Guangzong; Gui, Haixia
2017-09-01
MapReduce has been widely used on large-scale and complex datasets as a distributed programming model. The original hash partitioning function in MapReduce often results in data skew when the data distribution is uneven. To address this imbalance in data partitioning, we propose a strategy that changes the remaining partitioning index when data are skewed. In the Map phase, we count the amount of data that will be distributed to each reducer; the JobTracker then monitors the global partitioning information and dynamically modifies the original partitioning function according to the data skew model, so that the Partitioner can redirect the partitions that would cause data skew to other reducers with less load in the next partitioning process, eventually balancing the load of each node. Finally, we experimentally compare our method with existing methods on both synthetic and real datasets. The experimental results show that our strategy solves the problem of data skew with better stability and efficiency than the hash and sampling methods for non-connectivity MapReduce tasks.
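The repartitioning idea can be sketched in a single process. This is a simplified assumption-based illustration, not the paper's exact protocol: per-key counts stand in for the Map-side statistics, and an overloaded hash bucket re-routes its keys to the currently least-loaded reducer.

```python
from collections import Counter

def skew_aware_partition(keys, n_reducers, threshold=2.0):
    """Hypothetical sketch: start from hash partitioning, then re-route
    keys whose hash bucket exceeds `threshold` times the mean load to the
    currently least-loaded reducer."""
    counts = Counter(keys)                  # Map-side per-key statistics
    load = [0] * n_reducers
    assignment = {}
    for key, cnt in counts.most_common():   # place heavy keys first
        bucket = hash(key) % n_reducers
        mean_load = sum(load) / n_reducers
        if load[bucket] > threshold * max(mean_load, 1.0):
            bucket = min(range(n_reducers), key=load.__getitem__)
        assignment[key] = bucket
        load[bucket] += cnt
    return assignment, load

# A skewed workload: key 0 dominates the key frequency distribution.
keys = [0] * 100 + list(range(1, 50))
assignment, load = skew_aware_partition(keys, n_reducers=4)
```

Pure hash partitioning would pile all keys congruent to 0 mod 4 onto the reducer already holding the hot key; the re-routing step spreads them over the lightly loaded reducers instead (the hot key itself cannot be split without breaking reducer-side grouping).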
Kilian, Reinhold; Matschinger, Herbert; Löeffler, Walter; Roick, Christiane; Angermeyer, Matthias C
2002-03-01
Transformation of the dependent cost variable is often used to solve the problems of heteroscedasticity and skewness in linear ordinary least squares (OLS) regression of health service cost data. However, transformation may cause difficulties in the interpretation of regression coefficients and the retransformation of predicted values. This study compares the advantages and disadvantages of different methods of estimating regression-based cost functions using data on the annual costs of schizophrenia treatment. Annual costs of psychiatric service use and clinical and socio-demographic characteristics of the patients were assessed for a sample of 254 patients with a diagnosis of schizophrenia (ICD-10 F 20.0) living in Leipzig. The clinical characteristics of the participants were assessed by means of the BPRS 4.0, the GAF, and the CAN for service needs. Quality of life was measured by the WHOQOL-BREF. A linear OLS regression model with non-parametric standard errors, a log-transformed OLS model, and a generalized linear model (GLM) with a log link and a gamma distribution were used to estimate service costs. For the estimation of robust non-parametric standard errors, the variance estimator of White and a bootstrap estimator based on 2000 replications were employed. Models were evaluated by comparison of the R2 and the root mean squared error (RMSE). The RMSE of the log-transformed OLS model was computed with three different methods of bias correction. The 95% confidence intervals for the differences between the RMSEs were computed by means of bootstrapping. A split-sample cross-validation procedure was used to forecast the costs for one half of the sample on the basis of a regression equation computed for the other half of the sample. All three methods showed significant positive influences of psychiatric symptoms and met psychiatric service needs on service costs.
Only the log-transformed OLS model showed a significant negative impact of age, and only the GLM showed significant negative influences of employment status and partnership on costs. All three models provided an R2 of about 0.31. The residuals of the linear OLS model revealed significant deviations from normality and homoscedasticity. The residuals of the log-transformed model were normally distributed but still heteroscedastic. The linear OLS model provided the lowest prediction error and the best forecast of the dependent cost variable. The log-transformed model provided the lowest RMSE when the heteroscedastic bias correction was used. The RMSE of the GLM with a log link and a gamma distribution was higher than those of the linear OLS model and the log-transformed OLS model. The difference between the RMSE of the linear OLS model and that of the log-transformed OLS model without bias correction was significant at the 95% level. In the cross-validation procedure, the linear OLS model provided the lowest RMSE, followed by the log-transformed OLS model with a heteroscedastic bias correction. The GLM again showed the weakest model fit. None of the differences between the RMSEs resulting from the cross-validation procedure were found to be significant. The comparison of the fit indices of the different regression models revealed that the linear OLS model provided a better fit than the log-transformed model and the GLM, but the differences between the models' RMSEs were not significant. Due to the small number of cases in the study, the lack of significance does not sufficiently prove that the differences between the RMSEs for the different models are zero, and the superiority of the linear OLS model cannot be generalized. The lack of significant differences among the alternative estimators may reflect a sample size inadequate to detect important differences among the estimators employed.
Further studies with larger case numbers are necessary to confirm the results. Specification of an adequate regression model requires a careful examination of the characteristics of the data. Estimating standard errors and confidence intervals by nonparametric methods that are robust against deviations from normality and homoscedasticity of the residuals is a suitable alternative to transformation of the skewed dependent cost variable.
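The retransformation difficulty the abstract raises has a standard remedy worth making concrete: after a log-OLS fit, the naive back-transform exp(X @ beta) is biased low, and Duan's nonparametric smearing factor corrects it. The data below are simulated (an assumption), not the Leipzig cost data.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 300
x = rng.normal(size=n)
# Hypothetical cost-like data: lognormal errors around a log-linear mean.
y = np.exp(1.0 + 0.5 * x + rng.normal(scale=0.6, size=n))

X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, np.log(y), rcond=None)
resid = np.log(y) - X @ beta

# Duan's smearing estimator: mean of the exponentiated residuals.
smear = np.exp(resid).mean()
y_hat_naive = np.exp(X @ beta)        # biased low under lognormal errors
y_hat_smeared = smear * y_hat_naive   # bias-corrected retransformation
```

The smearing factor requires no distributional assumption on the residuals, which is why it is a common companion to log-transformed cost regressions; under heteroscedasticity a stratified variant is needed, matching the abstract's point about heteroscedastic bias correction.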
DOE Office of Scientific and Technical Information (OSTI.GOV)
Das, Santanu; Souradeep, Tarun, E-mail: santanud@iucaa.ernet.in, E-mail: tarun@iucaa.ernet.in
2015-05-01
A number of studies of WMAP and Planck have claimed a low multipole (especially quadrupole) power deficiency in the CMB power spectrum. Anomalies in the orientations of the low multipoles have also been claimed. There is a possibility that the power deficiency at low multipoles is not of primordial origin and is only an observational artifact arising from the scan procedure adopted by the WMAP or Planck satellites. Therefore, it is always important to investigate all the observational artifacts that can mimic these signals. The CMB dipole, which is much stronger than the quadrupole, can leak into the higher multipoles due to the non-symmetric beam shapes of WMAP or Planck. We observe that a non-negligible amount of power from the dipole can be transferred to the quadrupole and higher multipoles due to the non-symmetric beam shapes and contaminate the observed measurements. The orientation of the quadrupole generated by this power transfer is surprisingly close to the quadrupole observed in the WMAP and Planck maps. However, our analysis shows that the orientation of the quadrupole cannot be explained using only the dipole power leakage. In this paper we calculate the amount of quadrupole power leakage for different WMAP bands. For Planck, we present the results in terms of upper limits on asymmetric beam parameters that can lead to a significant amount of power leakage.
Hoenner, Xavier; Whiting, Scott D.; Hindell, Mark A.; McMahon, Clive R.
2012-01-01
Accurately quantifying animals' spatial utilisation is critical for conservation, but has long remained an elusive goal due to technological impediments. The Argos telemetry system has been extensively used to remotely track marine animals; however, location estimates are characterised by substantial spatial error. State-space models (SSM) constitute a robust statistical approach to refining Argos tracking data by accounting for observation errors and stochasticity in animal movement. Despite their wide use in ecology, few studies have thoroughly quantified the error associated with SSM-predicted locations, and no research has assessed their validity for describing animal movement behaviour. We compared home ranges and migratory pathways of seven hawksbill sea turtles (Eretmochelys imbricata) estimated from (a) highly accurate Fastloc GPS data and (b) locations computed using common Argos data analytical approaches. Argos 68th percentile error was <1 km for LC 1, 2, and 3 but markedly less accurate (>4 km) for LC ≤0. The Argos error structure was highly longitudinally skewed and was, for all LC, adequately modelled by a Student's t distribution. Both habitat use and migration routes were best recreated using SSM locations post-processed by re-adding good Argos positions (LC 1, 2, and 3) and filtering terrestrial points (mean distance to migratory tracks ± SD = 2.2±2.4 km; mean home range overlap and error ratio = 92.2% and 285.6, respectively). This parsimonious and objective statistical procedure, however, still markedly overestimated true home range sizes, especially for animals exhibiting restricted movements. Post-processing SSM locations nonetheless constitutes the best analytical technique for remotely sensed Argos tracking data, and we therefore recommend using this approach to rework historical Argos datasets for better estimation of animal spatial utilisation for research and evidence-based conservation purposes. PMID:22808241
Prototyping and Characterization of an Adjustable Skew Angle Single Gimbal Control Moment Gyroscope
2015-03-01
performance, and an analysis of the test results is provided. In addition to the standard battery of CMG performance tests that were planned, a...objectives for this new CMG is to provide comparable performance to the Andrews CMGs, the values in Table 1 will be used for output torque comparison...essentially fixed at 53.4°. This specific skew angle value is not the problem, as this is one commonly used CMG skew angle for satellite systems. The real
A note on `Analysis of gamma-ray burst duration distribution using mixtures of skewed distributions'
NASA Astrophysics Data System (ADS)
Kwong, Hok Shing; Nadarajah, Saralees
2018-01-01
Tarnopolski [Monthly Notices of the Royal Astronomical Society, 458 (2016) 2024-2031] analysed data sets on gamma-ray burst durations using skew distributions. He showed that the best fits are provided by two skew normal and three Gaussian distributions. Here, we suggest other distributions, including some that are heavy tailed. At least one of these distributions is shown to provide better fits than those considered in Tarnopolski. Five criteria are used to assess best fits.
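Comparing a Gaussian against a skew normal fit with an information criterion (AIC here, one commonly used criterion) can be sketched with SciPy. The data below are a simulated stand-in (an assumption), not the gamma-ray burst duration catalogues analysed in the note.

```python
import numpy as np
from scipy import stats

# Hypothetical stand-in for log-durations of one burst class:
# drawn from a left-skewed skew normal distribution.
rng = np.random.default_rng(4)
logT = stats.skewnorm.rvs(a=-3.0, loc=1.8, scale=0.9, size=500,
                          random_state=rng)

def aic(loglik, k):
    """Akaike information criterion for a fit with k parameters."""
    return 2 * k - 2 * loglik

# Candidate 1: Gaussian (2 parameters).
mu, sd = stats.norm.fit(logT)
aic_norm = aic(stats.norm.logpdf(logT, mu, sd).sum(), k=2)

# Candidate 2: skew normal (3 parameters).
a, loc, scale = stats.skewnorm.fit(logT)
aic_skew = aic(stats.skewnorm.logpdf(logT, a, loc, scale).sum(), k=3)
```

Because the Gaussian is nested in the skew normal (a = 0), the skew normal can only lose on AIC through its extra parameter; on genuinely skewed data it should win, which is the kind of model-selection question the note examines across several criteria.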
Permanent magnet edge-field quadrupole
Tatchyn, R.O.
1997-01-21
Planar permanent magnet edge-field quadrupoles for use in particle accelerating machines and in insertion devices designed to generate spontaneous or coherent radiation from moving charged particles are disclosed. The invention comprises four magnetized rectangular pieces of permanent magnet material with substantially similar dimensions arranged into two planar arrays situated to generate a field with a substantially dominant quadrupole component in regions close to the device axis. 10 figs.
Permanent magnet edge-field quadrupole
Tatchyn, Roman O.
1997-01-01
Planar permanent magnet edge-field quadrupoles for use in particle accelerating machines and in insertion devices designed to generate spontaneous or coherent radiation from moving charged particles are disclosed. The invention comprises four magnetized rectangular pieces of permanent magnet material with substantially similar dimensions arranged into two planar arrays situated to generate a field with a substantially dominant quadrupole component in regions close to the device axis.
Investigating a Quadrant Surface Coil Array for NQR Remote Sensing
2014-10-23
Abstract—This paper is on the design and fabrication of a surface coil array in a quadrant layout for NQR (Nuclear Quadrupole...coupling and SNR (Signal-to-Noise Ratio) at standoff distances perpendicular from each coil. Index Terms—Nuclear Quadrupole Resonance, NQR...Coil Array, probe, Nuclear Magnetic Resonance, tuning, decoupling, RLC, mutual coupling. I. INTRODUCTION Nuclear quadrupole resonance (NQR
Chemical (knight) shift distortions of quadrupole-split deuteron powder spectra in solids
NASA Astrophysics Data System (ADS)
Torgeson, D. R.; Schoenberger, R. J.; Barnes, R. G.
In strong magnetic fields (e.g., 8 Tesla) anisotropy of the shift tensor (chemical or Knight shift) can alter the spacings of the features of quadrupole-split deuteron spectra of polycrystalline samples. Analysis of powder spectra yields both correct quadrupole coupling and symmetry parameters and all the components of the shift tensor. Synthetic and experimental examples are given to illustrate such behavior.
Energetic ion mass analysis using a radio-frequency quadrupole filter.
Medley, S S
1978-06-01
In conventional applications of the radio-frequency quadrupole mass analyzer, the ion injection energy is usually limited to less than the order of 100 eV due to constraints on the dimensions and power supply of the device. However, requirements often arise, for example in fusion plasma ion diagnostics, for mass analysis of much more energetic ions. A technique easily adaptable to any conventional quadrupole analyzer which circumvents the limitation on injection energy is documented in this paper. Briefly, a retarding potential applied to the pole assembly is shown to facilitate mass analysis of multikiloelectron volt ions without altering the salient characteristics of either the quadrupole filter or the ion beam.
Searching the Force Field Electrostatic Multipole Parameter Space.
Jakobsen, Sofie; Jensen, Frank
2016-04-12
We show by tensor decomposition analyses that the molecular electrostatic potential for amino acid peptide models has an effective rank less than twice the number of atoms. This rank indicates the number of parameters that can be derived from the electrostatic potential in a statistically significant way. Using this as a guideline, we investigate different strategies for deriving a reduced set of atomic charges, dipoles, and quadrupoles capable of reproducing the reference electrostatic potential with a low error. A full combinatorial search of selected parameter subspaces for N-methylacetamide and a cysteine peptide model indicates that there are many different parameter sets capable of providing errors close to that of the global minimum. Among the different reduced multipole parameter sets that have low errors, there is consensus that atoms involved in π-bonding require higher order multipole moments. The possible correlation between multipole parameters is investigated by exhaustive searches of combinations of up to four parameters distributed in all possible ways on all possible atomic sites. These analyses show that there is no advantage in considering combinations of multipoles compared to a simple approach where the importance of each multipole moment is evaluated sequentially. When combined with possible weighting factors related to the computational efficiency of each type of multipole moment, this may provide a systematic strategy for determining a computational efficient representation of the electrostatic component in force field calculations.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bodin, A.; Laloo, R.; Abeilhou, P.
2013-09-15
We have developed an energy-filtering device coupled to a quadrupole mass spectrometer to deposit ionized molecules on surfaces with controlled energy in an ultra-high-vacuum environment. Extensive numerical simulations as well as direct measurements show that the ion beam exiting a quadrupole exhibits a high-energy tail decreasing slowly up to several hundred eV. This energy distribution renders impossible any direct soft-landing deposition of molecular ions. To remove this high-energy tail by energy filtering, a 127° electrostatic sector and a specific lens triplet were designed and added after the last quadrupole of a triple quadrupole mass spectrometer. The results obtained with this energy-filtering device clearly show the elimination of the high-energy tail. The ion beam that impinges on the sample surface now satisfies the soft-landing criterion for molecular ions, opening new research opportunities in the numerous scientific domains involving charges adsorbed on insulating surfaces.
NASA Technical Reports Server (NTRS)
Arkin, C. Richard; Ottens, Andrew K.; Diaz, Jorge A.; Griffin, Timothy P.; Follestein, Duke; Adams, Fredrick; Steinrock, T. (Technical Monitor)
2001-01-01
For Space Shuttle launch safety, there is a need to monitor the concentration of H2, He, O2, and Ar around the launch vehicle. Currently a large mass spectrometry system performs this task, using long transport lines to draw in samples. There is great interest in replacing this stationary system with several miniature, portable, rugged mass spectrometers that act as point sensors placed at the sampling point. Five commercial and two non-commercial analyzers are evaluated. The five commercial systems are the Leybold Inficon XPR-2 linear quadrupole, the Stanford Research Systems (SRS-100) linear quadrupole, the Ferran linear quadrupole array, the ThermoQuest Polaris-Q quadrupole ion trap, and the IonWerks time-of-flight (TOF) instrument. The non-commercial systems are a compact double focusing sector mass spectrometer (CDFMS) developed at the University of Minnesota and a quadrupole ion trap (UF-IT) developed at the University of Florida.
NASA Astrophysics Data System (ADS)
Geib, Timon; Sleno, Lekha; Hall, Rabea A.; Stokes, Caroline S.; Volmer, Dietrich A.
2016-08-01
We describe a systematic comparison of high and low resolution LC-MS/MS assays for quantification of 25-hydroxyvitamin D3 in human serum. Identical sample preparation, chromatography separations, electrospray ionization sources, precursor ion selection, and ion activation were used; the two assays differed only in the implemented final mass analyzer stage; viz. high resolution quadrupole-quadrupole-time-of-flight (QqTOF) versus low resolution triple quadrupole instruments. The results were assessed against measured concentration levels from a routine clinical chemiluminescence immunoassay. Isobaric interferences prevented the simple use of TOF-MS spectra for extraction of accurate masses and necessitated the application of collision-induced dissociation on the QqTOF platform. The two mass spectrometry assays provided very similar analytical figures of merit, reflecting the lack of relevant isobaric interferences in the MS/MS domain, and were successfully applied to determine the levels of 25-hydroxyvitamin D for patients with chronic liver disease.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Chlachidze, G.; et al.
2016-08-30
The US LHC Accelerator Research Program (LARP) and CERN combined their efforts in developing Nb3Sn magnets for the High-Luminosity LHC upgrade. The ultimate goal of this collaboration is to fabricate large-aperture Nb3Sn quadrupoles for the LHC interaction regions (IR). These magnets will replace the present 70 mm aperture NbTi quadrupole triplets for an expected increase of the LHC peak luminosity by a factor of 5. Over the past decade LARP successfully fabricated and tested short and long models of 90 mm and 120 mm aperture Nb3Sn quadrupoles. Recently the first short model of the 150 mm diameter quadrupole MQXFS was built with coils fabricated by both LARP and CERN. The magnet performance was tested at Fermilab's vertical magnet test facility. This paper reports the test results, including the quench training at 1.9 K, and ramp rate and temperature dependence studies.
DOT National Transportation Integrated Search
2013-07-01
Many highway bridges are skewed and their behavior and corresponding design analysis need to be furthered to fully accomplish design objectives. This project used physical-test and detailed finite element analysis to better understand the behavior of...
LOOKING WEST, BETWEEN READING DEPOT BRIDGE AND SKEW ARCH BRIDGE (HAER No. PA-116). - Philadelphia & Reading Railroad, Reading Depot Bridge, North Sixth Street at Woodward Street, Reading, Berks County, PA
Dichotomisation using a distributional approach when the outcome is skewed.
Sauzet, Odile; Ofuya, Mercy; Peacock, Janet L
2015-04-24
Dichotomisation of continuous outcomes has been rightly criticised by statisticians because of the loss of information incurred. However, to communicate a comparison of risks, dichotomised outcomes may be necessary. Peacock et al. developed a distributional approach to the dichotomisation of normally distributed outcomes, allowing the presentation of a comparison of proportions with a measure of precision that reflects the comparison of means. Many common health outcomes are skewed, so the distributional method for the dichotomisation of continuous outcomes may not apply. We present a methodology to obtain dichotomised outcomes for skewed variables, illustrated with data from several observational studies. We also report the results of a simulation study which tests the robustness of the method to deviations from normality and assesses the validity of the newly developed method. The review showed that the pattern of dichotomisation varied between outcomes. Birthweight, blood pressure and BMI can either be transformed to normal, so that normal distributional estimates for a comparison of proportions can be obtained, or, better, the skew-normal method can be used. For gestational age, no satisfactory transformation is available and only the skew-normal method is reliable. The normal distributional method is also reliable when there are small deviations from normality. The distributional method, with its applicability to common skewed data, allows researchers to provide both continuous and dichotomised estimates without losing information or precision. This will have the effect of providing a practical understanding of the difference in means in terms of proportions.
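The distributional approach described above can be sketched in a few lines: fit a skewed distribution to each group's continuous outcome, then read the "proportion below a clinical cutoff" off the fitted distribution rather than counting dichotomised observations. This is a minimal illustration only; the cutoff, the synthetic data, and the use of `scipy.stats.skewnorm` are our assumptions, not the authors' implementation.

```python
import numpy as np
from scipy.stats import skewnorm

def distributional_proportion(sample, cutoff):
    # Fit a skew-normal distribution to the continuous outcome and
    # return the model-based proportion of outcomes below the cutoff.
    a, loc, scale = skewnorm.fit(sample)
    return skewnorm.cdf(cutoff, a, loc=loc, scale=scale)

rng = np.random.default_rng(0)
# Hypothetical skewed outcome, loosely resembling gestational age (weeks)
group_a = rng.gamma(shape=60, scale=0.65, size=500)
group_b = group_a - 0.5  # shifted comparison group

p_a = distributional_proportion(group_a, 37.0)
p_b = distributional_proportion(group_b, 37.0)
risk_difference = p_b - p_a  # comparison of proportions from fitted means
```

Because the proportions come from the fitted distributions rather than raw counts, their precision reflects the underlying comparison of means, which is the key point of the method.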
Zolal, Amir; Juratli, Tareq A; Linn, Jennifer; Podlesek, Dino; Sitoci Ficici, Kerim Hakan; Kitzler, Hagen H; Schackert, Gabriele; Sobottka, Stephan B; Rieger, Bernhard; Krex, Dietmar
2016-05-01
Objective: To determine the value of apparent diffusion coefficient (ADC) histogram parameters for the prediction of individual survival in patients undergoing surgery for recurrent glioblastoma (GBM) in a retrospective cohort study. Methods: Thirty-one patients who underwent surgery for first recurrence of a known GBM between 2008 and 2012 were included. The following parameters were collected: age, sex, enhancing tumor size, mean ADC, median ADC, ADC skewness, ADC kurtosis and fifth percentile of the ADC histogram, initial progression-free survival (PFS), extent of second resection and further adjuvant treatment. The association of these parameters with survival and PFS after second surgery was analyzed using the log-rank test and Cox regression. Results: Using the log-rank test, ADC histogram skewness of the enhancing tumor was significantly associated with both survival (p = 0.001) and PFS after second surgery (p = 0.005). Further parameters associated with prolonged survival after second surgery were: gross total resection at second surgery (p = 0.026), tumor size (p = 0.040) and third surgery (p = 0.003). In the multivariate Cox analysis, ADC histogram skewness was shown to be an independent prognostic factor for survival after second surgery. Conclusion: ADC histogram skewness of the enhancing lesion, enhancing lesion size, third surgery, as well as gross total resection have been shown to be associated with survival following second surgery. ADC histogram skewness was an independent prognostic factor for survival in the multivariate analysis.
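The histogram parameters this study collects (mean, median, skewness, kurtosis, fifth percentile of the ADC values over the enhancing region) are simple moment statistics of a voxel distribution. A minimal sketch of how such features are computed from an ADC region of interest follows; the ROI data are synthetic and the function name is ours, not the study's pipeline.

```python
import numpy as np

def adc_histogram_features(adc_values):
    """Histogram features of an ADC map over an enhancing-tumor ROI.

    Mirrors the parameters listed in the study (mean, median, skewness,
    kurtosis, 5th percentile) using moment-based definitions.
    """
    x = np.asarray(adc_values, dtype=float)
    m = x.mean()
    m2 = ((x - m) ** 2).mean()  # second central moment
    return {
        "mean": float(m),
        "median": float(np.median(x)),
        "skewness": float(((x - m) ** 3).mean() / m2 ** 1.5),
        "kurtosis": float(((x - m) ** 4).mean() / m2 ** 2 - 3.0),  # excess
        "p5": float(np.percentile(x, 5)),
    }

rng = np.random.default_rng(1)
roi = rng.lognormal(mean=0.0, sigma=0.4, size=2000)  # right-skewed toy ROI
features = adc_histogram_features(roi)
```

A right-skewed voxel distribution yields positive skewness, which is the quantity the study found prognostic.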
Testing and performance analysis of a 650 Mbps QPPM modem for free-space laser communications
NASA Astrophysics Data System (ADS)
Mortensen, Dale J.
1994-08-01
The testing and performance of a prototype modem developed at NASA Lewis Research Center for high-speed free-space direct-detection optical communications is described. The testing was performed under laboratory conditions using computer control with specially developed test equipment that simulates free-space link conditions. The modem employs quaternary pulse position modulation (QPPM) at 325 megabits per second (Mbps) on two optical channels, which are multiplexed to transmit a single 650 Mbps data stream. The measured results indicate that the receiver's automatic gain control (AGC), phase-locked-loop slot clock recovery, digital symbol clock recovery, matched filtering, and maximum likelihood data recovery circuits have only 1.5 dB of combined implementation loss in bit-error-rate (BER) performance measurements. Pseudorandom bit sequences and real-time high-quality video sources were used to supply 650 Mbps and 325 Mbps data streams to the modem. Additional testing revealed that Doppler frequency shifting can be easily tracked by the receiver, that simulated pointing errors are readily compensated for by the AGC circuits, and that channel timing skew affects the BER performance in the expected manner. Overall, the technologies needed for a high-speed laser communications modem were demonstrated.
Gaussian polarizable-ion tight binding.
Boleininger, Max; Guilbert, Anne Ay; Horsfield, Andrew P
2016-10-14
To interpret ultrafast dynamics experiments on large molecules, computer simulation is required due to the complex response to the laser field. We present a method capable of efficiently computing the static electronic response of large systems to external electric fields. This is achieved by extending the density-functional tight binding method to include larger basis sets and by multipole expansion of the charge density into electrostatically interacting Gaussian distributions. Polarizabilities for a range of hydrocarbon molecules are computed for a multipole expansion up to quadrupole order, giving excellent agreement with experimental values, with average errors similar to those from density functional theory, but at a small fraction of the cost. We apply the model in conjunction with the polarizable-point-dipoles model to estimate the internal fields in amorphous poly(3-hexylthiophene-2,5-diyl).
NASA Astrophysics Data System (ADS)
Godfrey, B.; Majdalani, J.
2014-11-01
This study relies on computational fluid dynamics (CFD) tools to analyse a possible method for creating a stable quadrupole vortex within a simulated, circular-port, cylindrical rocket chamber. A model of the vortex generator is created in the SolidWorks CAD program, and the grid is then generated using the Pointwise mesh generation software. The non-reactive flowfield is simulated using an open-source computational program, Stanford University Unstructured (SU2). Subsequent analysis and visualization are performed using ParaView. The vortex generation approach that we employ consists of four tangentially injected monopole vortex generators arranged symmetrically with respect to the center of the chamber in such a way as to produce a quadrupole vortex with a common downwash. The present investigation focuses on characterizing the flow dynamics so that future investigations can be undertaken with increasing levels of complexity. Our CFD simulations help to elucidate the onset of vortex filaments within the monopole tubes and the evolution of quadrupole vortices downstream of the injection faceplate. Our results indicate that the quadrupole vortices produced using the present injection pattern can quickly become unstable, to the extent of dissipating soon after being introduced into the simulated rocket chamber. We conclude that a change in the geometrical configuration will be necessary to produce more stable quadrupoles.
NASA Astrophysics Data System (ADS)
Florek-Wojciechowska, M.; Wojciechowski, M.; Jakubas, R.; Brym, Sz.; Kruk, D.
2016-02-01
1H nuclear magnetic resonance relaxometry has been applied to reveal information on the dynamics and structure of Gu3Bi2I9 (Gu = C(NH2)3 denotes the guanidinium cation). The data have been analyzed in terms of a theory of quadrupole relaxation enhancement, extended here to include effects associated with quadrupole (14N) spin relaxation caused by a fast-fluctuating component of the electric field gradient tensor. Two motional processes have been identified: a slow one occurring on a timescale of about 8 × 10⁻⁶ s, which turns out to be (almost) temperature independent, and a fast process in the range of 10⁻⁹ s. From the 1H-14N relaxation contribution (which shows "quadrupole peaks"), the quadrupole parameters, which are a fingerprint of the arrangement of the anionic network, have been determined. It has been demonstrated that the magnitude of the quadrupole coupling changes considerably with temperature and that the changes are not caused by phase transitions. At the same time, it has been shown that there is no evidence of abrupt changes in the cationic dynamics or the anionic substructure upon the phase transitions.
Mass resolution of linear quadrupole ion traps with round rods.
Douglas, D J; Konenkov, N V
2014-11-15
Auxiliary dipole excitation is widely used to eject ions from linear radio-frequency quadrupole ion traps for mass analysis. Linear quadrupoles are often constructed with round rod electrodes. The higher multipoles introduced to the electric potential by round rods might be expected to change the ion ejection process. We have therefore investigated the optimum ratio of rod radius, r, to field radius, r0, for excitation and ejection of ions. Trajectory calculations are used to determine the excitation contour, S(q), the fraction of ions ejected when trapped at q values close to the ejection (or excitation) q. Initial conditions are randomly selected from Gaussian distributions of the x and y coordinates and a thermal distribution of velocities. The N = 6 (12 pole) and N = 10 (20 pole) multipoles are added to the quadrupole potential. Peak shapes and resolution were calculated for ratios r/r0 from 1.09 to 1.20 with an excitation time of 1000 cycles of the trapping radio-frequency. Ratios r/r0 in the range 1.140 to 1.160 give the highest resolution and peaks with little tailing. Ratios outside this range give lower resolution and peaks with tails on either the low-mass side or the high-mass side of the peaks. This contrasts with the optimum ratio of 1.126-1.130 for a quadrupole mass filter operated conventionally at the tip of the first stability region. With the optimum geometry the resolution is 2.7 times greater than with an ideal quadrupole field. Adding only a 2.0% hexapole field to a quadrupole field increases the resolution by a factor of 1.6 compared with an ideal quadrupole field. Addition of a 2.0% octopole lowers resolution and degrades peak shape. With the optimum value of r/r0, the resolution increases with the ejection time (measured in cycles of the trapping rf, n) approximately as R0.5 = 6.64n, in contrast to a pure quadrupole field, where R0.5 = 1.94n.
Adding weak nonlinear fields to a quadrupole field can improve the resolution with mass-selective ejection of ions by up to a factor of 2.7. The optimum ratio r/r0 is 1.14 to 1.16, which differs from the optimum ratio for a mass filter of 1.128-1.130. Copyright © 2014 John Wiley & Sons, Ltd.
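The resolution scalings quoted in this abstract can be illustrated with a minimal sketch; the function name is ours, while the slopes 6.64 and 1.94 are the values reported above (half-height resolution R0.5 versus ejection time n in rf cycles).

```python
def resolution_at_half_height(n_cycles, optimised_geometry=True):
    # Empirical scalings quoted in the abstract: R0.5 = 6.64 n with the
    # optimum r/r0 geometry (weak added hexapole/higher multipoles),
    # versus R0.5 = 1.94 n for a pure quadrupole field, where n is the
    # ejection time in cycles of the trapping rf.
    slope = 6.64 if optimised_geometry else 1.94
    return slope * n_cycles

# At the 1000-cycle excitation time used in the trajectory calculations:
r_optimised = resolution_at_half_height(1000)
r_pure_quad = resolution_at_half_height(1000, optimised_geometry=False)
```

Both resolutions grow linearly with ejection time; the optimised round-rod geometry simply has the steeper slope.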
NASA Astrophysics Data System (ADS)
Pipień, M.
2008-09-01
We present the results of an application of Bayesian inference to testing the relation between risk and return on financial instruments. On the basis of the Intertemporal Capital Asset Pricing Model proposed by Merton, we build a general sampling distribution suitable for analysing this relationship. The most important feature of our assumptions is that the skewness of the conditional distribution of returns is used as an alternative source of the relation between risk and return. This general specification relates to the Skewed Generalized Autoregressive Conditionally Heteroscedastic-in-Mean model. In order to make the conditional distribution of financial returns skewed, we considered a unified approach based on the inverse probability integral transformation. In particular, we applied the hidden truncation mechanism, inverse scale factors, the order statistics concept, Beta and Bernstein distribution transformations, and also a constructive method. Based on the daily excess returns of the Warsaw Stock Exchange Index, we checked the empirical importance of the conditional skewness assumption for the relation between risk and return on the Warsaw Stock Market. We present posterior probabilities of all competing specifications as well as the posterior analysis of the positive sign of the tested relationship.
Ibbotson, Paul
2013-01-01
We use the Google Ngram database, a corpus of 5,195,769 digitized books containing ~4% of all books ever published, to test three ideas that are hypothesized to account for linguistic generalizations: verbal semantics, pre-emption and skew. Using 828,813 tokens of un-forms as a test case for these mechanisms, we found that verbal semantics was a good predictor of the frequency of un-forms in the English language over the past 200 years, both in terms of how the frequency changed over time and their frequency rank. We did not find strong evidence for the direct competition of un-forms and their top pre-emptors; however, the skew of the un-construction competitors was inversely correlated with the acceptability of the un-form. We suggest a cognitive explanation for this, namely that the more the set of relevant pre-emptors is skewed, the more easily it is retrieved from memory. This suggests that it is not just the frequency of pre-emptive forms that must be taken into account when trying to explain usage patterns but their skew as well. PMID:24399991
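The "skew" of a set of competing pre-emptors is just the skewness of the frequency distribution over those forms: a dominated set (one very frequent competitor) is highly skewed, a balanced set is not. A minimal moment-based sketch follows; the token counts are invented for illustration and are not from the Ngram data.

```python
import numpy as np

def competitor_skew(frequencies):
    """Moment-based skewness of the frequency distribution over a set
    of competing pre-emptive forms; a strongly skewed set is dominated
    by a single form (hypothesized to be more easily retrieved)."""
    x = np.asarray(frequencies, dtype=float)
    m = x.mean()
    m2 = ((x - m) ** 2).mean()  # second central moment
    return float(((x - m) ** 3).mean() / m2 ** 1.5)

balanced = [100, 95, 105, 98, 102]   # no dominant competitor
dominated = [1000, 20, 15, 10, 5]    # one pre-emptor dominates the set
```

Under the paper's account, higher `competitor_skew` of the pre-emptor set should go with lower acceptability of the corresponding un-form.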
Cross-frame connection details for skewed steel bridges.
DOT National Transportation Integrated Search
2010-10-30
This report documents a research investigation on connection details and bracing layouts for stability : bracing of steel bridges with skewed supports. Cross-frames and diaphragms play an important role in stabilizing : steel girders, particularly du...
Moreta, Cristina; Tena, María Teresa
2014-08-15
An analytical method is proposed to determine ten perfluorinated alkyl acids (PFAAs) [nine perfluorocarboxylic acids (PFCAs) and perfluorooctane sulfonate (PFOS)] in corn, popcorn and microwave popcorn packaging by focused ultrasound solid-liquid extraction (FUSLE) and ultra-high performance liquid chromatography (UHPLC) coupled to quadrupole time-of-flight mass spectrometry (QTOF-MS/MS). The selected PFAAs were extracted efficiently in only one 10-s cycle by FUSLE, a simple, safe and inexpensive technique. The developed method was validated for the microwave popcorn bag matrix as well as the corn and popcorn matrices in terms of linearity, matrix effect error, detection and quantification limits, repeatability and recovery. The method showed good accuracy, with recoveries around 100% except for the shortest-chain PFAAs; satisfactory reproducibility, with RSDs under 16%; and sensitivity, with limits of detection on the order of hundreds of picograms per gram of sample (between 0.2 and 0.7 ng/g). The method was also applied to the analysis of six microwave popcorn bags and the popcorn inside them before and after cooking. PFCA contents between 3.50 ng/g and 750 ng/g were found in the bags, with PFHxA (perfluorohexanoic acid) being the most abundant. However, no PFAAs were detected in either corn or popcorn, so no migration was assumed. Copyright © 2014 Elsevier B.V. All rights reserved.
Parkison, Adam J.; Nelson, Andrew Thomas
2016-01-11
An analytical technique is presented with the goal of measuring reaction kinetics during steam oxidation reactions for three cases in which obtaining kinetics information often requires a prohibitive amount of time and cost. The technique presented relies on coupling thermogravimetric analysis (TGA) with a quantitative hydrogen measurement technique using quadrupole mass spectrometry (QMS). The first case considered is in differentiating between the kinetics of steam oxidation reactions and those for simultaneously reacting gaseous impurities such as nitrogen or oxygen. The second case allows one to independently measure the kinetics of oxide and hydride formation for systems in which both of these reactions are known to take place during steam oxidation. The third case deals with measuring the kinetics of formation for competing volatile and non-volatile oxides during certain steam oxidation reactions. In order to meet the requirements of the coupled technique, a methodology is presented which attempts to provide quantitative measurement of hydrogen generation using QMS in the presence of an interfering fragmentation species, namely water vapor. This is achieved such that all calibrations and corrections are performed during the TGA baseline and steam oxidation programs, making system operation virtually identical to standard TGA. Benchmarking results showed a relative error in hydrogen measurement of 5.7-8.4% following the application of a correction factor. Lastly, suggestions are made for possible improvements to the presented technique so that it may be better applied to the three cases presented.
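The central interference problem described above, water vapor fragmenting in the ion source and contributing to the hydrogen signal at mass 2, is commonly handled by subtracting a fragmentation-ratio-scaled fraction of the water signal. The sketch below illustrates that style of correction; the function, symbol names, and numbers are illustrative assumptions, not the authors' calibration procedure.

```python
def corrected_h2_signal(i_mass2, i_mass18, frag_ratio):
    """Subtract water's H2+ fragment contribution from the mass-2 current.

    frag_ratio is the (mass 2)/(mass 18) fragmentation ratio that would
    be measured for pure steam on the particular QMS; the corrected
    value estimates the hydrogen actually generated by the reaction.
    """
    return i_mass2 - frag_ratio * i_mass18

# Hypothetical ion currents (arbitrary units): raw mass-2 signal of 5.0
# alongside a water (mass-18) signal of 40.0 and a 2% fragmentation ratio.
h2 = corrected_h2_signal(5.0, 40.0, 0.02)
```

In the coupled TGA-QMS scheme the fragmentation ratio would be determined during the baseline program, so the correction rides along with standard TGA operation.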
Fang, Lian-xiang; Xiong, Ai-zhen; Wang, Rui; Ji, Shen; Yang, Li; Wang, Zheng-tao
2013-09-01
The objective of this study was to develop an effective strategy for screening and identifying mycotoxins in herbal medicine (HM). Here, Imperatae Rhizoma, a commonly used Chinese herb, was selected as a model HM. A crude drug contaminated with fungi was analyzed by comparison with uncontaminated samples. Ultra-performance LC coupled to tandem quadrupole TOF-MS (UPLC-Q-TOF-MS) with a collision energy function was applied to analyze different samples of Imperatae Rhizoma. MarkerLynx(TM) software was then employed to screen for excess components in the analytes relative to the control samples; these selected markers were likely to be fungal metabolites. Furthermore, each of the accurate masses of the markers obtained from MarkerLynx(TM) was searched in a mycotoxins/fungal-metabolites database established in advance. Molecular formulas with a relative mass error between the measured and theoretical mass within 5 ppm were chosen and then subjected to MassFragment(TM) analysis for further confirmation of their structures. With this approach, five mycotoxins that had never been reported in HM were identified in contaminated Imperatae Rhizoma. The results demonstrate the potential of UPLC-Q-TOF-MS coupled with the MarkerLynx(TM) software and MassFragment(TM) tool as an efficient and convenient method to screen and identify mycotoxins in herbal materials and aid in the quality control of HM. © 2013 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Field stabilization studies for a radio frequency quadrupole accelerator
NASA Astrophysics Data System (ADS)
Gaur, R.; Kumar, V.
2014-07-01
The Radio Frequency Quadrupole (RFQ) linear accelerator efficiently focuses, bunches and accelerates a high-intensity DC beam from an ion source for various applications. Unlike other conventional RF linear accelerators, the electromagnetic mode used for its operation is not the lowest-frequency mode supported by the structure. In a four-vane RFQ, there are several undesired electromagnetic modes with frequencies close to that of the operating mode. While designing an RFQ accelerator, care must be taken to ensure that the frequencies of these nearby modes are sufficiently separated from the operating mode. If undesired nearby modes have frequencies close to the operating mode, the electromagnetic field pattern in the presence of geometrical errors will not be stabilized to the desired field profile and will be perturbed by the nearby modes. This will affect the beam dynamics and reduce the beam transmission. In this paper, we present a detailed study of the supported electromagnetic modes, followed by calculations for implementing suitable techniques to stabilize the desired operating mode against mixing with unwanted modes, for an RFQ being designed for the proposed Indian Spallation Neutron Source (ISNS) project at Raja Ramanna Centre for Advanced Technology, Indore. A resonant coupling scheme, along with dipole stabilization rods, has been proposed to increase the mode separation. The paper discusses the details of a generalized optimization procedure used for the design of the mode stabilization scheme.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yu, Yaowei; Hu, Jiansheng, E-mail: hujs@ipp.ac.cn; Wan, Zhao
2016-03-15
Deuterium pressure in a deuterium-helium gas mixture is successfully measured by a common quadrupole mass spectrometer (model: RGA200) with a resolution of ∼0.5 atomic mass units (AMU), by using varied ionization energy together with newly developed software and a dedicated calibration for the RGA200. The new software is developed in MATLAB with new functions: electron energy (EE) scanning, deuterium partial pressure measurement, and automatic data saving. The RGA200 with the new software is calibrated in pure deuterium and pure helium at 1.0 × 10⁻⁶ to 5.0 × 10⁻² Pa, and the relation between pressure and the ion current at AMU 4 under EE = 25 eV and EE = 70 eV is obtained. From the calibration results and RGA200 scans with varied ionization energy in the deuterium-helium mixture, both the deuterium partial pressure (P_D2) and the helium partial pressure (P_He) can be obtained. The results show that the deuterium partial pressure can be measured if P_D2 > 10⁻⁶ Pa (limited by the ultimate pressure of the calibration vessel), and the helium pressure can be measured only if P_He/P_D2 > 0.45; the measurement error is evaluated as 15%. This method was successfully employed in the EAST 2015 summer campaign to monitor deuterium outgassing/desorption during helium discharge cleaning.
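Since D2 and He both appear at AMU 4, a single ion current cannot separate them; measuring at two electron energies gives two equations in the two partial pressures. A plausible reading of the scheme is that at EE = 25 eV only D2 is ionized (helium's ionization potential is about 24.6 eV), while at EE = 70 eV both contribute. The sketch below solves that 2×2 linear system; the sensitivity values are hypothetical placeholders, whereas in practice they come from the pure-gas calibrations described above.

```python
import numpy as np

# Hypothetical calibration sensitivities (A/Pa) for the AMU-4 ion current
# at the two electron energies; real values come from the RGA200
# calibration in pure D2 and pure He.
S = np.array([
    [2.0e-4, 0.0],     # EE = 25 eV: only D2 is ionized (He IP ~ 24.6 eV)
    [5.0e-4, 3.0e-4],  # EE = 70 eV: both D2 and He contribute
])

def partial_pressures(i_25ev, i_70ev):
    # Solve S @ [P_D2, P_He] = [I_25, I_70] for the two partial pressures.
    return np.linalg.solve(S, [i_25ev, i_70ev])

p_d2, p_he = partial_pressures(2.0e-8, 8.0e-8)  # ion currents in A
```

The near-singular case, where the helium contribution at 70 eV is small compared with the deuterium one, corresponds to the abstract's P_He/P_D2 > 0.45 measurability limit.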
NASA Astrophysics Data System (ADS)
Wang, Evelyn H.; Combe, Peter C.; Schug, Kevin A.
2016-05-01
Methods that can efficiently and effectively quantify proteins are needed to support increasing demand in many bioanalytical fields. Triple quadrupole mass spectrometry (QQQ-MS) is sensitive and specific, and it is routinely used to quantify small molecules. However, low-resolution fragmentation-dependent MS detection can pose inherent difficulties for intact proteins. In this research, we investigated variables that affect protein and fragment ion signals to enable protein quantitation using QQQ-MS. Collision-induced dissociation gas pressure and collision energy were found to be the most crucial variables for optimization. Multiple reaction monitoring (MRM) transitions for seven standard proteins, including lysozyme, ubiquitin, cytochrome c from both equine and bovine sources, lactalbumin, myoglobin, and prostate-specific antigen (PSA), were determined. Assuming the eventual goal of applying such methodology is to analyze proteins in biological fluids, a liquid chromatography method was developed. Calibration curves of six standard proteins (excluding PSA) were obtained to show the feasibility of intact protein quantification using QQQ-MS. Linearity (2-3 orders), limits of detection (0.5-50 μg/mL), accuracy (<5% error), and precision (1%-12% CV) were determined for each model protein. Sensitivities for different proteins varied considerably. Biological fluids, including human urine, equine plasma, and bovine plasma, were used to demonstrate the specificity of the approach. The purpose of this model study was to identify, study, and demonstrate the advantages and challenges for QQQ-MS-based intact protein quantitation, a largely underutilized approach to date.
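The figures of merit reported above (linearity, limit of detection, accuracy) all derive from a calibration curve of MRM response versus concentration. The sketch below fits such a curve and estimates a detection limit via the common 3σ/slope convention; the data and the choice of LOD convention are our assumptions for illustration, not the study's exact procedure (which reports LODs of 0.5-50 μg/mL depending on the protein).

```python
import numpy as np

def calibration_fit(conc, response):
    """Least-squares line through MRM peak areas vs concentration.

    Returns slope, intercept, and a simple 3*sigma/slope detection
    limit estimated from the residual scatter (illustrative only).
    """
    slope, intercept = np.polyfit(conc, response, 1)
    residuals = response - (slope * conc + intercept)
    lod = 3.0 * np.std(residuals) / slope
    return slope, intercept, lod

conc = np.array([1.0, 5.0, 10.0, 50.0, 100.0])          # ug/mL, hypothetical
resp = np.array([1.1e3, 5.2e3, 9.8e3, 51.0e3, 99.5e3])  # peak areas
slope, intercept, lod = calibration_fit(conc, resp)
```

The slope is the protein-specific sensitivity; the abstract notes that these varied considerably between proteins, which is why each protein needs its own curve.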
Wang, Zhenzhong; Geng, Jianliang; Dai, Yi; Xiao, Wei; Yao, Xinsheng
2015-01-01
The broad applications and mechanism explorations of traditional Chinese medicine prescriptions (TCMPs) require a clear understanding of TCMP chemical constituents. In the present study, we describe an efficient and universally applicable analytical approach based on ultra-performance liquid chromatography coupled to electrospray ionization tandem quadrupole time-of-flight mass spectrometry (UPLC-ESI-Q/TOF-MS) with the MSE (E denotes collision energy) data acquisition mode, which allowed the rapid separation and reliable determination of TCMP chemical constituents. By monitoring diagnostic ions in the high energy function of MSE, target peaks of analogous compounds in TCMPs could be rapidly screened and identified. “Re-Du-Ning” injection (RDN), a eutherapeutic traditional Chinese medicine injection (TCMI) that has been widely used to reduce fever caused by viral infections in clinical practice, was studied as an example. In total, 90 compounds, including five new iridoids and one new sesquiterpene, were identified or tentatively characterized by accurate mass measurements within 5 ppm error. This analysis was accompanied by MS fragmentation and reference standard comparison analyses. Furthermore, the herbal sources of these compounds were unambiguously confirmed by comparing the extracted ion chromatograms (EICs) of RDN and ingredient herbal extracts. Our work provides a certain foundation for further studies of RDN. Moreover, the analytical approach developed herein has proven to be generally applicable for profiling the chemical constituents in TCMPs and other complicated mixtures. PMID:25875968
Skew chicane based betatron eigenmode exchange module
Douglas, David
2010-12-28
A skewed-chicane eigenmode exchange module (SCEEM) combines, in a single beamline segment, the separate functionalities of a skew-quad eigenmode exchange module and a magnetic chicane. The module allows the exchange of independent betatron eigenmodes, alters the electron beam orbit geometry, and provides longitudinal parameter control with dispersion management in a single beamline segment with stable betatron behavior. It thus reduces the spatial requirements for multiple beam-dynamics functions, reduces required component counts and hence costs, and allows the use of more compact accelerator configurations than prior-art design methods.
Design and fabrication of a basic mass analyzer and vacuum system
NASA Technical Reports Server (NTRS)
Judson, C. M.; Josias, C.; Lawrence, J. L., Jr.
1977-01-01
A two-inch hyperbolic-rod quadrupole mass analyzer with a mass range of 400 to 200 amu and a sensitivity exceeding 100 parts per billion has been developed and tested. This analyzer is the basic hardware portion of a microprocessor-controlled quadrupole mass spectrometer for a Gas Analysis and Detection System (GADS). The development and testing of the hyperbolic-rod quadrupole mass spectrometer and associated hardware are described in detail.
Ellipsoidal universe can solve the cosmic microwave background quadrupole problem.
Campanelli, L; Cea, P; Tedesco, L
2006-09-29
The recent three-year Wilkinson Microwave Anisotropy Probe data have confirmed the anomaly concerning the low quadrupole amplitude compared to the best-fit Lambda cold dark matter prediction. We show that by allowing the large-scale spatial geometry of our Universe to be plane-symmetric, with an eccentricity at decoupling of order 10^-2, the quadrupole amplitude can be drastically reduced without affecting the higher multipoles of the angular power spectrum of the temperature anisotropy.
Stability of an aqueous quadrupole micro-trap
Park, Jae Hyun; Krstić, Predrag S.
2012-03-30
The recently demonstrated functionality of an aqueous quadrupole micro- or nano-trap opens a new avenue for applications of Paul traps, such as confinement of a charged biomolecule that requires a water environment for its chemical stability. Besides strong viscous forces, the motion of a charged particle in an aqueous trap is subject to dielectrophoretic and electrophoretic forces. In this study, we describe the general conditions for stability of a charged particle in an aqueous quadrupole trap. We find that for typical micro-trap parameters, both dielectrophoresis and electrophoresis significantly influence the trap stability. In particular, the aqueous quadrupole trap could play the role of a synthetic virtual nanopore for third-generation DNA sequencing technology.
First Test Results of the 150 mm Aperture IR Quadrupole Models for the High Luminosity LHC
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ambrosio, G.; Chlachidze, G.; Wanderer, P.
2016-10-06
The High Luminosity upgrade of the LHC at CERN will use large-aperture (150 mm) quadrupole magnets to focus the beams at the interaction points. The high field in the coils requires Nb3Sn superconductor technology, which has been brought to maturity by the LHC Accelerator Research Program (LARP) over the last 10 years. The key design targets for the new IR quadrupoles were established in 2012, and fabrication of model magnets started in 2014. This paper discusses the results from the first single short coil test and from the first short quadrupole model test. Remaining challenges and plans to address them are also presented and discussed.
Chen, Xiaojian; Oshima, Kiyoko; Schott, Diane; Wu, Hui; Hall, William; Song, Yingqiu; Tao, Yalan; Li, Dingjie; Zheng, Cheng; Knechtges, Paul; Erickson, Beth; Li, X Allen
2017-01-01
In an effort toward early assessment of treatment response, we investigate radiation-induced changes in quantitative CT features of the tumor during the delivery of chemoradiation therapy (CRT) for pancreatic cancer. Diagnostic-quality CT data acquired daily during routine CT-guided CRT using a CT-on-rails for 20 pancreatic head cancer patients were analyzed. On each daily CT, the pancreatic head, the spinal cord and the aorta were delineated, and the histograms of CT number (CTN) in these contours were extracted. Eight histogram-based radiomic metrics, including the mean CTN (MCTN), peak position, volume, standard deviation (SD), skewness, kurtosis, energy and entropy, were calculated for each fraction. A paired t-test was used to check the significance of the change of a specific metric at a specific time. A generalized estimating equation (GEE) model was used to test the association between changes of metrics over time for different pathology responses. In general, the CTN histogram in the pancreatic head (but not in the spinal cord) changed during CRT delivery. Changes from the 1st to the 26th fraction in MCTN ranged from -15.8 to 3.9 HU with an average of -4.7 HU (p<0.001). Meanwhile, the volume decreased, the skewness increased (less skewed), and the kurtosis decreased (less peaked). The changes in MCTN, volume, skewness, and kurtosis became significant after two weeks of treatment. Patient pathological response was associated with the changes in MCTN, SD, and skewness. In cases of good response, patients tended to have large reductions in MCTN and skewness, and large increases in SD and kurtosis. Significant changes in CT radiomic features, such as the MCTN, skewness, and kurtosis of the tumor, were observed during the course of CRT for pancreatic cancer based on quantitative analysis of daily CTs. These changes may potentially be used for early assessment of treatment response and stratification for therapeutic intensification.
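The histogram-based metrics named in the abstract (mean CTN, SD, skewness, kurtosis, energy, entropy) can be sketched with standard moment and histogram formulas. This is an illustrative reconstruction, not the study's code; the voxel values, bin count, and helper name `histogram_metrics` are assumptions.

```python
import numpy as np

def histogram_metrics(ctn, bins=64):
    """Compute histogram-based radiomic metrics from voxel CT numbers."""
    ctn = np.asarray(ctn, dtype=float)
    mean, sd = ctn.mean(), ctn.std()
    z = (ctn - mean) / sd
    skewness = np.mean(z**3)                 # third standardized moment
    kurtosis = np.mean(z**4) - 3.0           # excess kurtosis
    hist, _ = np.histogram(ctn, bins=bins)
    p = hist / hist.sum()                    # bin probabilities
    p = p[p > 0]
    energy = float(np.sum(p**2))
    entropy = float(-np.sum(p * np.log2(p)))
    return dict(mean=mean, sd=sd, skewness=skewness,
                kurtosis=kurtosis, energy=energy, entropy=entropy)

# Synthetic stand-in for pancreatic-head CT numbers (HU), not patient data.
rng = np.random.default_rng(0)
voxels = rng.normal(40.0, 10.0, 5000)
m = histogram_metrics(voxels)
print(round(m["mean"], 1))  # close to 40 HU for this synthetic sample
```

Tracking these metrics fraction by fraction, as the study does, is then a matter of re-running the function on each daily contour.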
Determining collective barrier operation skew in a parallel computer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Faraj, Daniel A.
2015-11-24
Determining collective barrier operation skew in a parallel computer that includes a number of compute nodes organized into an operational group includes: for each of the nodes, until each node has been selected as a delayed node: selecting one of the nodes as a delayed node; entering, by each node other than the delayed node, a collective barrier operation; entering, after a delay, by the delayed node, the collective barrier operation; receiving an exit signal from a root of the collective barrier operation; and measuring, for the delayed node, a barrier completion time. The barrier operation skew is calculated by identifying, from the compute nodes' barrier completion times, a maximum barrier completion time and a minimum barrier completion time, and calculating the barrier operation skew as the difference of the maximum and the minimum barrier completion times.
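The final skew computation described above reduces to a max-minus-min over the per-node barrier completion times. A minimal sketch, with hypothetical timing values:

```python
def barrier_operation_skew(completion_times):
    """Skew = maximum barrier completion time minus minimum barrier completion time."""
    return max(completion_times) - min(completion_times)

# Hypothetical per-node barrier completion times (microseconds),
# one measurement per delayed-node round.
times = [102.4, 98.7, 101.1, 99.3]
print(barrier_operation_skew(times))  # approximately 3.7
```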
Seyed Moosavi, Seyed Mohsen; Moaveni, Bijan; Moshiri, Behzad; Arvan, Mohammad Reza
2018-02-27
The present study designed skewed redundant accelerometers for a Measurement While Drilling (MWD) tool and executed auto-calibration, fault diagnosis and isolation of the accelerometers in this tool. An optimal structure comprising four accelerometers was selected and designed precisely in accordance with the physical shape of the existing MWD tool. The new four-accelerometer structure was designed, implemented and installed on the current system, replacing the conventional orthogonal structure. Auto-calibration of the skewed redundant accelerometers, and of all combinations of three accelerometers, has been performed. Consequently, the biases, scale factors, and misalignment factors of the accelerometers have been successfully estimated. By introducing faults into the sensors of the new optimal skewed redundant structure, the fault was detected using the proposed FDI method and the faulty sensor was diagnosed and isolated. The results indicate that the system can continue to operate with at least three correctly functioning sensors.
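Fault detection with a fourth redundant skewed axis can be sketched with a generic parity-space residual check: the redundant measurement is reconciled by least squares, and the sensor with the largest residual is isolated. The sensing-axis geometry, fault size, and isolation rule below are illustrative assumptions, not the MWD tool's actual design or the paper's FDI method.

```python
import numpy as np

# Rows of H are unit sensing axes: three orthogonal accelerometers plus
# one skewed axis along (1,1,1)/sqrt(3). Measurement model: m = H @ a + fault.
s = 1.0 / np.sqrt(3.0)
H = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [s,   s,   s  ]])

a_true = np.array([0.1, -0.2, 9.8])   # illustrative specific force (m/s^2)
m = H @ a_true
m[3] += 0.5                           # inject a fault on the skewed sensor

# Least-squares reconciliation and parity residual
a_hat, *_ = np.linalg.lstsq(H, m, rcond=None)
r = m - H @ a_hat                     # residual vector over the four sensors

faulty = int(np.argmax(np.abs(r)))    # isolate the sensor with largest residual
print(faulty)                         # 3: the fourth (skewed) accelerometer
```

Dropping the flagged sensor leaves the three healthy axes, which still span 3D space, so the system keeps operating, consistent with the abstract's conclusion.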
Wikberg, Eva C; Jack, Katharine M; Fedigan, Linda M; Campos, Fernando A; Yashima, Akiko S; Bergstrom, Mackenzie L; Hiwatashi, Tomohide; Kawamura, Shoji
2017-01-01
Reproductive skew in multimale groups may be determined by the need for alpha males to offer reproductive opportunities as staying incentives to subordinate males (concessions), by the relative fighting ability of the alpha male (tug-of-war) or by how easily females can be monopolized (priority-of-access). These models have rarely been investigated in species with exceptionally long male tenures, such as white-faced capuchins, where female mate choice for novel unrelated males may be important in shaping reproductive skew. We investigated reproductive skew in white-faced capuchins at Sector Santa Rosa, Costa Rica, using 20 years of demographic, behavioural and genetic data. Infant survival and alpha male reproductive success were highest in small multimale groups, which suggests that the presence of subordinate males can be beneficial to the alpha male, in line with the concession model's assumptions. None of the skew models predicted the observed degree of reproductive sharing, and the probability of an alpha male producing offspring was not affected by his relatedness to subordinate males, whether he resided with older subordinate males, whether he was prime aged, the number of males or females in the group or the number of infants conceived within the same month. Instead, the alpha male's probability of producing offspring decreased when he was the sire of the mother, was weak and lacked a well-established position and had a longer tenure. Because our data best supported the inbreeding avoidance hypothesis and female choice for strong novel mates, these hypotheses should be taken into account in future skew models. © 2016 John Wiley & Sons Ltd.
Clustering fossil from primordial gravitational waves in anisotropic inflation
DOE Office of Scientific and Technical Information (OSTI.GOV)
Emami, Razieh; Firouzjahi, Hassan, E-mail: emami@ipm.ir, E-mail: firouz@ipm.ir
2015-10-01
Inflationary models can correlate small-scale density perturbations with long-wavelength gravitational waves (GW) in the form of the tensor-scalar-scalar (TSS) bispectrum. This correlation affects the mass distribution in the Universe and leads to off-diagonal correlations of the density field modes in the form of a quadrupole anisotropy. Interestingly, this effect survives even after the tensor mode decays when it re-enters the horizon, which is known as the fossil effect. As a result, the off-diagonal correlation function between different Fourier modes of the density fluctuations can be thought of as a way to probe the large-scale GW and the inflationary mechanism behind the fossil effect. Models of single-field slow-roll inflation generically predict a very small quadrupole anisotropy in TSS, while in models of multiple-field inflation this effect can be observable. Therefore, this large-scale quadrupole anisotropy can serve as a spectroscopy for different inflationary models. In addition, in models of anisotropic inflation there exists a quadrupole anisotropy in the curvature perturbation power spectrum. Here we consider TSS in models of anisotropic inflation and show that the shape of the quadrupole anisotropy differs from that in single-field models. In fact, in these models the quadrupole anisotropy is projected onto the preferred direction and its amplitude is proportional to g_* N_e, where N_e is the number of e-folds and g_* is the amplitude of the quadrupole anisotropy in the curvature perturbation power spectrum. We use this correlation function to estimate the large-scale GW as well as the preferred direction, and discuss the detectability of the signal in galaxy surveys such as Euclid and 21 cm surveys.
Oberacher, Herbert; Pavlic, Marion; Libiseller, Kathrin; Schubert, Birthe; Sulyok, Michael; Schuhmacher, Rainer; Csaszar, Edina; Köfeler, Harald C
2009-04-01
A sophisticated matching algorithm developed for highly efficient identity search within tandem mass spectral libraries is presented. For the optimization of the search procedure, a collection of 410 tandem mass spectra corresponding to 22 compounds was used. The spectra were acquired in three different laboratories on four different instruments. The following types of tandem mass spectrometric instruments were used: quadrupole-quadrupole-time-of-flight (QqTOF), quadrupole-quadrupole-linear ion trap (QqLIT), quadrupole-quadrupole-quadrupole (QqQ), and linear ion trap-Fourier transform ion cyclotron resonance mass spectrometer (LIT-FTICR). The obtained spectra were matched to an established MS/MS spectral library that contained 3759 MS/MS spectra corresponding to 402 different reference compounds. All 22 test compounds were part of the library. A dynamic intensity cutoff, the search for neutral losses, and optimization of the formula used to calculate the match probability were shown to significantly enhance the performance of the presented library search approach. With the aid of these features, the average rate of correct assignments was increased to 98%. For statistical evaluation of the match reliability, the set of fragment ion spectra was extended with 300 spectra corresponding to 100 compounds not included in the reference library. Performance was checked with the aid of receiver operating characteristic (ROC) curves. Using the magnitude of the match probability as well as the precursor ion mass as benchmarks to rate the obtained top hit, overall correct classification of a compound as being included or not included in the mass spectrometric library was obtained in more than 95% of cases, clearly indicating a high predictive accuracy of the established matching procedure. Copyright (c) 2009 John Wiley & Sons, Ltd.
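The core of such a library search, a dynamic intensity cutoff followed by a spectrum-similarity score, can be sketched generically. The normalized dot product below is a common stand-in, not the paper's actual match-probability formula; the spectra and the `cutoff_frac` parameter are illustrative assumptions.

```python
import numpy as np

def match_score(query, reference, cutoff_frac=0.01):
    """Score two spectra (dicts mapping integer m/z -> intensity) in [0, 1]."""
    # Dynamic intensity cutoff: drop peaks below a fraction of the base peak.
    def filt(spec):
        base = max(spec.values())
        return {mz: i for mz, i in spec.items() if i >= cutoff_frac * base}
    q, r = filt(query), filt(reference)
    mzs = sorted(set(q) | set(r))
    qv = np.array([q.get(mz, 0.0) for mz in mzs])
    rv = np.array([r.get(mz, 0.0) for mz in mzs])
    # Normalized dot product (cosine similarity) over the merged peak list.
    return float(qv @ rv / (np.linalg.norm(qv) * np.linalg.norm(rv)))

ref = {105: 100.0, 77: 45.0, 51: 12.0}                 # library spectrum
qry = {105: 90.0, 77: 50.0, 51: 10.0, 63: 0.3}         # query with a weak noise peak
print(match_score(qry, ref) > 0.99)                    # True: spectra match closely
```

A real identity search would rank all library spectra by this score and apply a probability model and precursor-mass filter on top, as the abstract describes.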
Vento, V Thatar; Bergueiro, J; Cartelli, D; Valda, A A; Kreiner, A J
2011-12-01
Within the framework of an ongoing project to develop a folded Tandem-Electrostatic-Quadrupole (TESQ) accelerator facility for Accelerator-Based Boron Neutron Capture Therapy (AB-BNCT), we discuss here the electrostatic design of the machine, including the accelerator tubes with electrostatic quadrupoles and the simulations for the transport and acceleration of a high-intensity beam. Copyright © 2011 Elsevier Ltd. All rights reserved.
Higher order parametric excitation modes for spaceborne quadrupole mass spectrometers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gershman, D. J.; Block, B. P.; Rubin, M.
This paper describes a technique to significantly improve upon the mass peak shape and mass resolution of spaceborne quadrupole mass spectrometers (QMSs) through higher-order auxiliary excitation of the quadrupole field. Using a novel multiresonant tank circuit, additional frequency components can be used to drive modulating voltages on the quadrupole rods in a practical manner, suitable for both improved commercial applications and spaceflight instruments. Auxiliary excitation at frequencies near twice that of the fundamental quadrupole RF frequency provides the advantages of previously studied parametric excitation techniques, but with the added benefit of increased sensed excitation amplitude dynamic range and the ability to operate voltage scan lines through the center of upper stability islands. Using a field-programmable gate array, the amplitudes and frequencies of all QMS signals are digitally generated and managed, providing a robust and stable voltage control system. These techniques are experimentally verified through an interface with a commercial Pfeiffer QMG422 quadrupole rod system. When operating through the center of a stability island formed from higher-order auxiliary excitation, approximately 50% and 400% improvements in 1% mass resolution and peak stability were measured, respectively, when compared with traditional QMS operation. Although tested with a circular rod system, the presented techniques have the potential to improve the performance of both circular and hyperbolic rod geometry QMS sensors.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Florek-Wojciechowska, M.; Wojciechowski, M.; Brym, Sz.
1H nuclear magnetic resonance relaxometry has been applied to reveal information on the dynamics and structure of Gu3Bi2I9 (Gu = C(NH2)3 denotes the guanidinium cation). The data have been analyzed in terms of a theory of quadrupole relaxation enhancement, which has been extended here by including effects associated with quadrupole (14N) spin relaxation caused by a fast fluctuating component of the electric field gradient tensor. Two motional processes have been identified: a slow one occurring on a timescale of about 8 × 10^-6 s, which has turned out to be (almost) temperature independent, and a fast process in the range of 10^-9 s. From the 1H-14N relaxation contribution (which shows "quadrupole peaks"), the quadrupole parameters, which are a fingerprint of the arrangement of the anionic network, have been determined. It has been demonstrated that the magnitude of the quadrupole coupling changes considerably with temperature and that the changes are not caused by phase transitions. At the same time, it has been shown that there is no evidence of abrupt changes in the cationic dynamics and the anionic substructure upon the phase transitions.
NASA Astrophysics Data System (ADS)
Hardiyanto, M.; Ermawaty, I. R.
2018-01-01
We present an experimental investigation of muon-hadron tunneling chains using new methods applied to a ThxDUO2 nanostructure, based on Josephson tunneling and the Abrikosov-Balseiro-Russel (ABR) formulation, with a quantum quadrupole interacting with a strongly localized, high gyro-magnetic optical field as encountered in high-resolution near-field optical microscopy at a wavelength of 1.2 nanometers. The strong gradients of these localized gyro-magnetic fields suggest that higher-order multipolar interactions will affect the standard magnetic quadrupole transition rates at 1.8 × 10^3 curie/mm fuel energy in a nuclear moderator pool, as well as the selection rules for a quantum dot. For muon-hadron absorption in Josephson-tunneling quantum quadrupoles in the strong confinement limit, we calculated the interband gyro-magnetic quadrupole absorption rate and the associated selection rules. We found that the magnetic quadrupole absorption rate is comparable to the absorption rate calculated in the gyro-magnetic dipole approximation of the ThxDUO2 nanomaterial structure. This implies that near-field optical techniques can extend the range of spectroscopic measurements from 545 MHz at the quantum gyro-magnetic field up to 561 MHz at B around 455-485 tesla, beyond the standard dipole approximation. However, we also show that spatial resolution could be improved by the selective excitation of ABR-formulation quantum quadrupole transitions.
Quadrupole-Quadrupole Interactions to Control Plasmon-Induced Transparency
NASA Astrophysics Data System (ADS)
Rana, Goutam; Deshmukh, Prathmesh; Palkhivala, Shalom; Gupta, Abhishek; Duttagupta, S. P.; Prabhu, S. S.; Achanta, VenuGopal; Agarwal, G. S.
2018-06-01
A radiative dipolar resonance with a Lorentzian line shape induces the otherwise dark quadrupolar resonances, resulting in electromagnetically induced transparency (EIT). The two interfering excitation pathways of the dipole were earlier shown to result in a Fano line shape with a high figure of merit suitable for sensing. In metamaterials made of metal nanorods or antennas, the plasmonic EIT (PIT) efficiency depends on the overlap of the dark and bright mode spectra, as well as on the asymmetry resulting from the separation between the monomer (dipole) and dimer (quadrupole), which governs the coupling strength. Increasing asymmetry in these structures leads to a reduction of the figure of merit due to a broadening of the Fano resonance. We demonstrate a PIT system in which the simultaneous excitation of two dipoles results in double PIT. The corresponding two quadrupoles interact and control the quality factor (Q) of the PIT resonance. We show an antiresonance-like symmetric line shape with nonzero asymmetry factors. The PIT resonance vanishes due to quadrupole-quadrupole coupling. A Q factor of more than 100 at 0.977 THz is observed, which is limited by the experimental resolution of 6 GHz. From polarization-dependent studies, we show that the broadening of the Lorentzian resonance is due to scattering-induced excitation of orthogonally oriented dipoles in the monomer and dimer bars in the terahertz regime. The high Q factors demonstrated here in the terahertz frequency region are interesting for sensing applications.
Induced CMB quadrupole from pointing offsets
DOE Office of Scientific and Technical Information (OSTI.GOV)
Moss, Adam; Scott, Douglas; Sigurdson, Kris, E-mail: adammoss@phas.ubc.ca, E-mail: dscott@phas.ubc.ca, E-mail: krs@phas.ubc.ca
2011-01-01
Recent claims in the literature have suggested that the WMAP quadrupole is not primordial in origin, and arises from an aliasing of the much larger dipole field because of incorrect satellite pointing. We attempt to reproduce this result and delineate the key physics leading to the effect. We find that, even if real, the induced quadrupole would be smaller than the WMAP value. We discuss reasons why the WMAP data are unlikely to suffer from this particular systematic effect, including the implications for observations of point sources. Given this evidence against the reality of the effect, the similarity between the pointing-offset-induced signal and the actual quadrupole then appears to be quite puzzling. However, we find that the effect arises from a convolution between the gradient of the dipole field and anisotropic coverage of the scan direction at each pixel. There is something of a directional conspiracy here: the dipole signal lies close to the Ecliptic Plane, and its direction, together with the WMAP scan strategy, results in a strong coupling to the Y_{2,-1} component in Ecliptic coordinates. The dominant strength of this component in the measured quadrupole suggests that one should exercise increased caution in interpreting its estimated amplitude. The Planck satellite has a different scan strategy which does not so directly couple the dipole and quadrupole in this way, and will soon provide an independent measurement.
Thermal response of a highly skewed integral bridge.
DOT National Transportation Integrated Search
2012-06-01
The purpose of this study was to conduct a field evaluation of a highly skewed semi-integral bridge in order to provide feedback regarding some of the assumptions behind the design guidelines developed by the Virginia Department of Transportation...
Theoretical and field experimental evaluation of skewed modular slab bridges.
DOT National Transportation Integrated Search
2012-12-01
As a result of longitudinal cracking discovered in the concrete overlays of some recently built skewed bridges, the Maryland State Highway Administration (SHA) requested that this research project be conducted for two purposes: (1) to determine t...
Analytical model and error analysis of arbitrary phasing technique for bunch length measurement
NASA Astrophysics Data System (ADS)
Chen, Qushan; Qin, Bin; Chen, Wei; Fan, Kuanjun; Pei, Yuanji
2018-05-01
An analytical model of an RF phasing method using arbitrary phase scanning for bunch length measurement is reported. We set up a statistical model instead of a linear-chirp approximation to analyze the energy modulation process. It is found that, assuming a short bunch (σ_φ/2π → 0) and small relative energy spread (σ_γ/γ_r → 0), the energy spread (Y = σ_γ²) at the exit of the traveling-wave linac has a parabolic relationship with the cosine of the injection phase (X = cos φ_r|_{z=0}), i.e., Y = AX² + BX + C. Analogous to quadrupole-strength scanning for emittance measurement, this phase scanning method can be used to obtain the bunch length by measuring the energy spread at different injection phases. The injection phases can be randomly chosen, which is significantly different from the commonly used zero-phasing method. Further, the systematic error of the reported method, such as the influence of the space charge effect, is analyzed. This technique will be especially useful at low energies, when the beam quality is dramatically degraded and is hard to measure using the zero-phasing method.
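The scan described above amounts to a least-squares parabola fit of the measured energy spread Y against X = cos φ_r over arbitrary injection phases; the fitted coefficients then yield the bunch length. A minimal numerical sketch with synthetic coefficients and noise (the values A, B, C and the phase list are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
A, B, C = 4.0, -1.2, 0.9                           # "true" coefficients (arbitrary)
phases = np.deg2rad([-60, -25, 0, 15, 40, 75])     # arbitrary scan phases, as allowed
X = np.cos(phases)
Y = A * X**2 + B * X + C + rng.normal(0, 1e-3, X.size)  # measured energy spreads

# Parabola fit, analogous to a quadrupole-strength (quad) scan for emittance.
A_fit, B_fit, C_fit = np.polyfit(X, Y, 2)
print(round(A_fit, 2), round(B_fit, 2))            # recovers A and B
```

The freedom to pick the phases at random is what distinguishes this from zero-phasing, which requires operating at specific crossing phases.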
On the performance of large Gaussian basis sets for the computation of total atomization energies
NASA Technical Reports Server (NTRS)
Martin, J. M. L.
1992-01-01
The total atomization energies of a number of molecules have been computed using an augmented coupled-cluster method and (5s4p3d2f1g) and (4s3p2d1f) atomic natural orbital (ANO) basis sets, as well as the correlation-consistent valence triple zeta plus polarization (cc-pVTZ) and correlation-consistent valence quadruple zeta plus polarization (cc-pVQZ) basis sets. The performance of the ANO and correlation-consistent basis sets is comparable throughout, although the latter can result in significant CPU time savings. Whereas the inclusion of g functions has significant effects on the computed ΣD(e) values, chemical accuracy is still not reached for molecules involving multiple bonds. A Gaussian-1 (G1) type correction lowers the error, but not much beyond the accuracy of the G1 model itself. Using separate corrections for sigma bonds, pi bonds, and valence pairs brings the mean absolute error down to less than 1 kcal/mol for the spdf basis sets, and to about 0.5 kcal/mol for the spdfg basis sets. Some conclusions on the success of the Gaussian-1 and Gaussian-2 models are drawn.
Wang, Liang; Yuan, Jin; Jiang, Hong; Yan, Wentao; Cintrón-Colón, Hector R; Perez, Victor L; DeBuc, Delia C; Feuer, William J; Wang, Jianhua
2016-03-01
This study determined (1) how many vessels (i.e., the vessel sampling size) are needed to reliably characterize the bulbar conjunctival microvasculature and (2) whether characteristic information can be obtained from the distribution histograms of the blood flow velocity and vessel diameter. A functional slit-lamp biomicroscope was used to image hundreds of venules per subject. The bulbar conjunctiva in five healthy human subjects was imaged at six different locations in the temporal bulbar conjunctiva. The histograms of the diameter and velocity were plotted to examine whether the distributions were normal. Standard errors were calculated from the standard deviation and the vessel sample size. The ratio of the standard error of the mean to the population mean was used to determine the sample size cutoff. The velocity was plotted as a function of the vessel diameter to display the joint distribution of diameter and velocity. The results showed that the required sampling size was approximately 15 vessels, which generated a standard error equivalent to 15% of the mean of the total vessel population. The distributions of the diameter and velocity were unimodal but somewhat positively skewed and not normal. The blood flow velocity was related to the vessel diameter (r=0.23, P<0.05). This was the first study to determine the sampling size of the vessels and the distribution histograms of the blood flow velocity and vessel diameter, which may lead to a better understanding of the human microvascular system of the bulbar conjunctiva.
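The sample-size criterion above, growing the sample until the standard error of the mean falls below 15% of the population mean, can be sketched as follows. The velocity population is synthetic (log-normal, i.e. positively skewed as the study reports), so the exact cutoff it produces is illustrative, not the study's figure of ~15 vessels.

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic, positively skewed vessel-velocity population (mm/s).
population = rng.lognormal(mean=0.0, sigma=0.6, size=500)
pop_mean = population.mean()

# Grow the sample until SEM <= 15% of the population mean.
for n in range(2, population.size + 1):
    sample = population[:n]
    sem = sample.std(ddof=1) / np.sqrt(n)   # standard error of the mean
    if sem <= 0.15 * pop_mean:
        break
print(n)  # sample size at which the SEM criterion is first met
```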
Improved cosmological constraints on the curvature and equation of state of dark energy
NASA Astrophysics Data System (ADS)
Pan, Nana; Gong, Yungui; Chen, Yun; Zhu, Zong-Hong
2010-08-01
We apply the Constitution compilation of 397 Type Ia supernovae, the baryon acoustic oscillation measurements including the A parameter, the distance ratio and the radial data, the five-year Wilkinson Microwave Anisotropy Probe data and the Hubble parameter data to study the geometry of the Universe and the properties of dark energy using the popular Chevallier-Polarski-Linder (CPL) and Jassal-Bagla-Padmanabhan (JBP) parameterizations. We compare the simple χ² method of joint contour estimation with the Markov chain Monte Carlo method, and find that a marginalized analysis is necessary for the error estimation. The probabilities of Ω_k and w_a in the CPL model are skew distributions, and the marginalized 1σ results are Ω_m = 0.279 (+0.015, -0.008), Ω_k = 0.005 (+0.006, -0.011), w_0 = -1.05 (+0.23, -0.06) and w_a = 0.5 (+0.3, -1.5). For the JBP model, the marginalized 1σ results are Ω_m = 0.281 (+0.015, -0.010), Ω_k = 0.000 (+0.007, -0.006), w_0 = -0.96 (+0.25, -0.18) and w_a = -0.6 (+1.9, -1.6). The equation-of-state parameter w(z) of dark energy is negative in the redshift range 0 ≤ z ≤ 2 at more than the 3σ level. The flat ΛCDM model is consistent with the current observational data at the 1σ level.
Rochon, Justine; Kieser, Meinhard
2011-11-01
Student's one-sample t-test is a commonly used method when inference about the population mean is made. As advocated in textbooks and articles, the assumption of normality is often checked by a preliminary goodness-of-fit (GOF) test. In a paper recently published by Schucany and Ng it was shown that, for the uniform distribution, screening of samples by a pretest for normality leads to a more conservative conditional Type I error rate than application of the one-sample t-test without preliminary GOF test. In contrast, for the exponential distribution, the conditional level is even more elevated than the Type I error rate of the t-test without pretest. We examine the reasons behind these characteristics. In a simulation study, samples drawn from the exponential, lognormal, uniform, Student's t-distribution with 2 degrees of freedom (t(2) ) and the standard normal distribution that had passed normality screening, as well as the ingredients of the test statistics calculated from these samples, are investigated. For non-normal distributions, we found that preliminary testing for normality may change the distribution of means and standard deviations of the selected samples as well as the correlation between them (if the underlying distribution is non-symmetric), thus leading to altered distributions of the resulting test statistics. It is shown that for skewed distributions the excess in Type I error rate may be even more pronounced when testing one-sided hypotheses. ©2010 The British Psychological Society.
An efficient and robust method for predicting helicopter rotor high-speed impulsive noise
NASA Technical Reports Server (NTRS)
Brentner, Kenneth S.
1996-01-01
A new formulation for the Ffowcs Williams-Hawkings quadrupole source, which is valid for a far-field in-plane observer, is presented. The far-field approximation is new and unique in that no further approximation of the quadrupole source strength is made and integrands with r^-2 and r^-3 dependence are retained. This paper focuses on the development of a retarded-time formulation in which time derivatives are analytically taken inside the integrals to avoid unnecessary computational work when the observer moves with the rotor. The new quadrupole formulation is similar to Farassat's thickness and loading formulation 1A. Quadrupole noise prediction is carried out in two parts: a preprocessing stage in which the previously computed flow field is integrated in the direction normal to the rotor disk, and a noise computation stage in which quadrupole surface integrals are evaluated for a particular observer position. Preliminary predictions for hover and forward flight agree well with experimental data. The method is robust and requires computer resources comparable to thickness and loading noise prediction.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stoynev, S.; et al.
The development of $$Nb_3Sn$$ quadrupole magnets for the High-Luminosity LHC upgrade is a joint venture between the US LHC Accelerator Research Program (LARP) and CERN with the goal of fabricating large aperture quadrupoles for the LHC interaction regions (IR). The inner triplet (low-β) NbTi quadrupoles in the IR will be replaced by the stronger Nb3Sn magnets, boosting the LHC program toward a 10-fold increase in integrated luminosity after the foreseen upgrades. Previously LARP conducted successful tests of short and long models with up to 120 mm aperture. The first short 150 mm aperture quadrupole model MQXFS1 was assembled with coils fabricated by both CERN and LARP. The magnet demonstrated strong performance at Fermilab's vertical magnet test facility, reaching the LHC operating limits. This paper reports the latest results from MQXFS1 tests with changed pre-stress levels. The overall magnet performance, including quench training and memory, ramp rate and temperature dependence, is also summarized.
Measurements of the microwave spectrum, Re-H bond length, and Re quadrupole coupling for HRe(CO)5
NASA Astrophysics Data System (ADS)
Kukolich, Stephen G.; Sickafoose, Shane M.
1993-11-01
Rotational transition frequencies for rhenium pentacarbonyl hydride were measured in the 4-10 GHz range using a Flygare-Balle type microwave spectrometer. The rotational constants and Re nuclear quadrupole coupling constants for the four isotopomers, (1) H187Re(CO)5, (2) H185Re(CO)5, (3) D187Re(CO)5, and (4) D185Re(CO)5, were obtained from the spectra. For the most common isotopomer, B(1)=818.5464(2) MHz and eq Q(187Re)=-900.13(3) MHz. The Re-H bond length (r0) determined by fitting the rotational constants is 1.80(1) Å. Although the Re atom is located at a site of near-octahedral symmetry, the quadrupole coupling is large due to the large Re nuclear moments. A 2.7% increase in Re quadrupole coupling was observed for D-substituted isotopomers, giving a rather large isotope effect on the quadrupole coupling. The Cax-Re-Ceq angle is 96(1)°, when all Re-C-O angles are constrained to 180°.
The Rhic Azimuth Quadrupole:. "perfect Liquid" or Gluonic Radiation?
NASA Astrophysics Data System (ADS)
Trainor, Thomas A.
Large elliptic flow at RHIC seems to indicate that ideal hydrodynamics provides a good description of Au-Au collisions, at least at the maximum RHIC energy. The medium formed has been interpreted as a nearly perfect (low-viscosity) liquid, and connections have been made to gravitation through string theory. Recently, claimed observations of large flow fluctuations comparable to participant eccentricity fluctuations seem to confirm the ideal hydro scenario. However, determination of the azimuth quadrupole with 2D angular autocorrelations, which accurately distinguish "flow" (quadrupole) from "nonflow" (minijets), contradicts conventional interpretations. Centrality trends may depend only on the initial parton geometry, and methods used to isolate flow fluctuations are sensitive instead mainly to minijet correlations. The results presented in this paper suggest that the azimuth quadrupole may be a manifestation of gluonic multipole radiation.
Test results of the LARP Nb$$_3$$Sn quadrupole HQ03a
DiMarco, J.; Ambrosio, G.; Chlachidze, G.; ...
2016-03-09
The US LHC Accelerator Research Program (LARP) has been developing $$Nb_3Sn$$ quadrupoles of progressively increasing performance for the high luminosity upgrade of the Large Hadron Collider. The 120 mm aperture High-field Quadrupole (HQ) models are the last step in the R&D phase supporting the development of the new IR Quadrupoles (MQXF). Three series of HQ coils were fabricated and assembled in a shell-based support structure, progressively optimizing the design and fabrication process. The final set of coils consistently applied the optimized design solutions and was assembled in the HQ03a model. This paper reports a summary of the HQ03a test results, including training, mechanical performance, field quality and quench studies.
The Neutron Star Mass Distribution
NASA Astrophysics Data System (ADS)
Kiziltan, Bülent; Kottas, Athanasios; De Yoreo, Maria; Thorsett, Stephen E.
2013-11-01
In recent years, the number of pulsars with secure mass measurements has increased to a level that allows us to probe the underlying neutron star (NS) mass distribution in detail. We critically review the radio pulsar mass measurements. For the first time, we are able to analyze a sizable population of NSs with a flexible modeling approach that can effectively accommodate a skewed underlying distribution and asymmetric measurement errors. We find that NSs that have evolved through different evolutionary paths reflect distinctive signatures through dissimilar distribution peak and mass cutoff values. NSs in double NS and NS-white dwarf (WD) systems show consistent respective peaks at 1.33 M⊙ and 1.55 M⊙, suggesting significant mass accretion (Δm ≈ 0.22 M⊙) has occurred during the spin-up phase. The width of the mass distribution implied by double NS systems is indicative of a tight initial mass function, while the inferred mass range is significantly wider for NSs that have gone through recycling. We find a mass cutoff at ~2.1 M⊙ for NSs with WD companions, which establishes a firm lower bound for the maximum NS mass. This rules out the majority of strange quark and soft equation of state models as viable configurations for NS matter. The lack of truncation close to the maximum mass cutoff, along with the skewed nature of the inferred mass distribution, supports the suggestion that the 2.1 M⊙ limit is set by evolutionary constraints rather than nuclear physics or general relativity, and that the existence of rare supermassive NSs is possible.
Gärtner, Fania R; de Miranda, Esteriek; Rijnders, Marlies E; Freeman, Liv M; Middeldorp, Johanna M; Bloemenkamp, Kitty W M; Stiggelbout, Anne M; van den Akker-van Marle, M Elske
2015-10-01
To validate the Labor and Delivery Index (LADY-X), a new delivery-specific utility measure. In a test-retest design, women were surveyed online, 6 to 8 weeks postpartum and again 1 to 2 weeks later. For reliability testing, we assessed the standard error of measurement (S.E.M.) and the intraclass correlation coefficient (ICC). For construct validity, we tested hypotheses on the association with comparison instruments (Mackey Childbirth Satisfaction Rating Scale and Wijma Delivery Experience Questionnaire), both on domain and total score levels. We assessed known-group differences using eight obstetrical indicators: method and place of birth, induction, transfer, control over pain medication, complications concerning mother and child, and experienced control. The questionnaire was completed by 308 women, 257 (83%) completed the retest. The distribution of LADY-X scores was skewed. The reliability was good, as the ICC exceeded 0.80 and the S.E.M. was 0.76. Requirements for good construct validity were fulfilled: all hypotheses for convergent and divergent validity were confirmed, and six of eight hypotheses for known-group differences were confirmed as all differences were statistically significant (P-values: <0.001-0.023), but for two tests, difference scores did not exceed the S.E.M. The LADY-X demonstrates good reliability and construct validity. Despite its skewed distribution, the LADY-X can discriminate between groups. With the preference weights available, the LADY-X might fulfill the need for a utility measure for cost-effectiveness studies for perinatal care interventions. Copyright © 2015 Elsevier Inc. All rights reserved.
Kwon, Deukwoo; Reis, Isildinha M
2015-08-12
When conducting a meta-analysis of a continuous outcome, estimated means and standard deviations from the selected studies are required in order to obtain an overall estimate of the mean effect and its confidence interval. If these quantities are not directly reported in the publications, they must be estimated from other reported summary statistics, such as the median, the minimum, the maximum, and quartiles. We propose a simulation-based estimation approach using the Approximate Bayesian Computation (ABC) technique for estimating mean and standard deviation based on various sets of summary statistics found in published studies. We conduct a simulation study to compare the proposed ABC method with the existing methods of Hozo et al. (2005), Bland (2015), and Wan et al. (2014). In the estimation of the standard deviation, our ABC method performs better than the other methods when data are generated from skewed or heavy-tailed distributions. The corresponding average relative error (ARE) approaches zero as sample size increases. In data generated from the normal distribution, our ABC performs well. However, the Wan et al. method is best for estimating standard deviation under normal distribution. In the estimation of the mean, our ABC method is best regardless of assumed distribution. ABC is a flexible method for estimating the study-specific mean and standard deviation for meta-analysis, especially with underlying skewed or heavy-tailed distributions. The ABC method can be applied using other reported summary statistics such as the posterior mean and 95 % credible interval when Bayesian analysis has been employed.
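The simulation-based idea can be sketched as a minimal rejection-ABC toy. Everything here is an illustrative assumption rather than the paper's method: a normal data model, uniform priors, hypothetical reported values, and a simple Euclidean distance on the (min, median, max) summary.

```python
# Minimal rejection-ABC sketch: recover (mean, sd) from reported summary
# statistics. Priors, tolerance, and the "reported" values are made up.
import numpy as np

rng = np.random.default_rng(1)
n = 50                                    # assumed study sample size
reported = np.array([2.1, 5.0, 8.2])      # hypothetical (min, median, max)

def summaries(x):
    return np.array([x.min(), np.median(x), x.max()])

accepted = []
for _ in range(20000):
    mu = rng.uniform(0, 10)               # prior on the mean
    sigma = rng.uniform(0.1, 5)           # prior on the standard deviation
    sim = rng.normal(mu, sigma, n)        # simulate a candidate study dataset
    if np.linalg.norm(summaries(sim) - reported) < 2.0:   # tolerance epsilon
        accepted.append((mu, sigma))

accepted = np.array(accepted)
mu_hat, sigma_hat = accepted.mean(axis=0)
print(f"ABC estimates from {len(accepted)} accepted draws: "
      f"mean ≈ {mu_hat:.2f}, sd ≈ {sigma_hat:.2f}")
```

The accepted (mu, sigma) pairs approximate the posterior given the summaries; their means serve as the study-specific estimates fed into the meta-analysis.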
A Computational Approach to Estimating Nondisjunction Frequency in Saccharomyces cerevisiae
Chu, Daniel B.; Burgess, Sean M.
2016-01-01
Errors segregating homologous chromosomes during meiosis result in aneuploid gametes and are the largest contributing factor to birth defects and spontaneous abortions in humans. Saccharomyces cerevisiae has long served as a model organism for studying the gene network supporting normal chromosome segregation. Measuring homolog nondisjunction frequencies is laborious, and involves dissecting thousands of tetrads to detect missegregation of individually marked chromosomes. Here we describe a computational method (TetFit) to estimate the relative contributions of meiosis I nondisjunction and random-spore death to spore inviability in wild type and mutant strains. These values are based on finding the best-fit distribution of 4, 3, 2, 1, and 0 viable-spore tetrads to an observed distribution. Using TetFit, we found that meiosis I nondisjunction is an intrinsic component of spore inviability in wild-type strains. We show proof-of-principle that the calculated average meiosis I nondisjunction frequency determined by TetFit closely matches empirically determined values in mutant strains. Using these published data sets, TetFit uncovered two classes of mutants: Class A mutants skew toward increased nondisjunction death, and include those with known defects in establishing pairing, recombination, and/or synapsis of homologous chromosomes. Class B mutants skew toward random spore death, and include those with defects in sister-chromatid cohesion and centromere function. Epistasis analysis using TetFit is facilitated by the low numbers of tetrads (as few as 200) required to compare the contributions to spore death in different mutant backgrounds. TetFit analysis does not require any special strain construction, and can be applied to previously observed tetrad distributions. PMID:26747203
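The best-fit idea above can be illustrated with a toy mixture model. This is NOT the published TetFit code: the assumption that a meiosis I nondisjunction tetrad loses exactly two spores, and the example tetrad frequencies, are invented for illustration.

```python
# Toy two-component fit of a viable-spore-per-tetrad distribution:
# a random-spore-death component (independent death with probability d)
# mixed with an NDJ component (assumed here to kill two spores outright).
import numpy as np
from scipy.optimize import minimize
from scipy.stats import binom

def tetrad_dist(f_ndj, d):
    """P(i viable spores), i = 0..4, for NDJ fraction f_ndj, death rate d."""
    i = np.arange(5)
    normal = binom.pmf(i, 4, 1 - d)               # all four spores face death d
    ndj = np.zeros(5)
    ndj[:3] = binom.pmf(np.arange(3), 2, 1 - d)   # two spores already inviable
    return (1 - f_ndj) * normal + f_ndj * ndj

# Hypothetical observed fractions of tetrads with 0,1,2,3,4 viable spores:
observed = np.array([0.02, 0.05, 0.13, 0.25, 0.55])

def loss(p):
    f, d = p
    return np.sum((tetrad_dist(f, d) - observed) ** 2)

res = minimize(loss, x0=[0.1, 0.1], bounds=[(0, 1), (0, 1)])
f_hat, d_hat = res.x
print(f"estimated NDJ fraction ≈ {f_hat:.2f}, random death rate ≈ {d_hat:.2f}")
```

The actual TetFit models the genetics of missegregation in more detail; this sketch only shows why a 5-bin tetrad distribution can separate the two death modes.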
Field measurements on skewed semi-integral bridge with elastic inclusion : instrumentation report.
DOT National Transportation Integrated Search
2006-01-01
This project was designed to enhance the Virginia Department of Transportation's expertise in the design of integral bridges, particularly as it applies to highly skewed structures. Specifically, the project involves extensive monitoring of a semi-in...
INVESTIGATION OF SEISMIC PERFORMANCE AND DESIGN OF TYPICAL CURVED AND SKEWED BRIDGES IN COLORADO
DOT National Transportation Integrated Search
2018-01-15
This report summarizes the analytical studies on the seismic performance of typical Colorado concrete bridges, particularly those with curved and skewed configurations. A set of bridge models with different geometric configurations derived from a pro...
Effect of implementing lean-on bracing in skewed steel I-girder bridges.
DOT National Transportation Integrated Search
2016-09-01
Skew of the supports in steel I-girder bridges causes undesirable torsional effects, increases cross-frame forces, and generally increases the difficulty of designing and constructing a bridge. The girders experience differential deflections due to th...
Systems of Differential Equations with Skew-Symmetric, Orthogonal Matrices
ERIC Educational Resources Information Center
Glaister, P.
2008-01-01
The solution of a system of linear, inhomogeneous differential equations is discussed. The particular class considered is where the coefficient matrix is skew-symmetric and orthogonal, and where the forcing terms are sinusoidal. More general matrices are also considered.
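The special structure mentioned above has a consequence worth making explicit: a matrix that is both skew-symmetric (A^T = -A) and orthogonal (A^T A = I) satisfies A² = -I, so exp(tA) = cos(t) I + sin(t) A and the homogeneous solution is purely oscillatory. A quick numerical check (the 2×2 rotation generator is just one convenient example):

```python
# Verify A^2 = -I and the closed form exp(tA) = cos(t) I + sin(t) A
# for a matrix that is both skew-symmetric and orthogonal.
import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0], [-1.0, 0.0]])
assert np.allclose(A.T, -A)               # skew-symmetry
assert np.allclose(A.T @ A, np.eye(2))    # orthogonality
assert np.allclose(A @ A, -np.eye(2))     # together these force A^2 = -I

t = 0.7
closed_form = np.cos(t) * np.eye(2) + np.sin(t) * A
assert np.allclose(expm(t * A), closed_form)

x0 = np.array([1.0, 0.0])
x_t = closed_form @ x0                    # homogeneous solution x(t) = exp(tA) x0
print(x_t)                                # [cos(0.7), -sin(0.7)]
```

With sinusoidal forcing, the particular solution can then be found by undetermined coefficients, since applying A twice just reproduces -I.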
Evaluation of selected warning signs at skewed railroad-highway crossings.
DOT National Transportation Integrated Search
1986-01-01
A 1984 study by the Research Council recommended that advance warning signs be placed in advance of skewed railroad-highway grade crossings. Several signs were suggested for use, and the study reported here was undertaken to determine the effectivene...
Knouft, Jason H
2004-05-01
Many taxonomic and ecological assemblages of species exhibit a right-skewed body size-frequency distribution when characterized at a regional scale. Although this distribution has been frequently described, factors influencing geographic variation in the distribution are not well understood, nor are mechanisms responsible for distribution shape. In this study, variation in the species body size-frequency distributions of 344 regional communities of North American freshwater fishes is examined in relation to latitude, species richness, and taxonomic composition. Although the distribution of all species of North American fishes is right-skewed, a negative correlation exists between latitude and regional community size distribution skewness, with size distributions becoming left-skewed at high latitudes. This relationship is not an artifact of the confounding relationship between latitude and species richness in North American fishes. The negative correlation between latitude and regional community size distribution skewness is partially due to the geographic distribution of families of fishes and apparently enhanced by a nonrandom geographic distribution of species within families. These results are discussed in the context of previous explanations of factors responsible for the generation of species size-frequency distributions related to the fractal nature of the environment, energetics, and evolutionary patterns of body size in North American fishes.
NASA Technical Reports Server (NTRS)
Herrington, J. R.; Estle, T. L.; Boatner, L. A.
1972-01-01
The observation and interpretation of weak EPR transitions, identified as 'forbidden' transitions, establish the existence of a new type of quadrupole interaction for cubic-symmetry imperfections. This interaction is simply a consequence of the ground-vibronic-state degeneracy. The signs as well as the magnitudes of the quadrupole-coupling coefficients are determined experimentally. These data agree well with the predictions of crystal field theory modified to account for a weak-to-moderate vibronic interaction (i.e., a dynamic Jahn-Teller effect).
Dynamical quadrupole structure factor of frustrated ferromagnetic chain
NASA Astrophysics Data System (ADS)
Onishi, Hiroaki
2018-05-01
We investigate the dynamical quadrupole structure factor of a spin-1/2 J1-J2 Heisenberg chain with competing ferromagnetic J1 and antiferromagnetic J2 in a magnetic field by exploiting density-matrix renormalization group techniques. In a field-induced spin nematic regime, we observe gapless excitations at q = π, corresponding to quasi-long-range antiferro-quadrupole correlations. The gapless excitation mode has a quadratic form at saturation, while it changes into a linear dispersion as the magnetization decreases.
1997-08-01
NUCLEAR QUADRUPOLE RESONANCE (NQR) METHOD AND PROBE FOR GENERATING RF MAGNETIC FIELDS IN DIFFERENT DIRECTIONS TO DISTINGUISH NQR FROM ACOUSTIC RINGING INDUCED IN A SAMPLE. The present invention relates to a nuclear quadrupole resonance (NQR) method and probe for generating RF magnetic fields in different directions towards a sample.
Forces in wingwalls from thermal expansion of skewed semi-integral bridges.
DOT National Transportation Integrated Search
2010-11-01
Jointless bridges, such as semi-integral and integral bridges, have become more popular in recent years because of their simplicity in the construction and the elimination of high costs related to joint maintenance. Prior research has shown that skew...
Design of an upgradeable 45-100 mA RFQ accelerator for FAIR
NASA Astrophysics Data System (ADS)
Zhang, Chuan; Schempp, Alwin
2009-10-01
A 325 MHz, 35 mA, 3 MeV Radio-Frequency Quadrupole (RFQ) accelerator will be operated as the first accelerating structure of the proton linac injector for the newly planned international science center Facility for Antiproton and Ion Research (FAIR) at GSI, Germany. In previous design studies, two high beam intensities, 70 and 100 mA, were used. Most recently, the design intensity has been changed to 45 mA, which is closer to the operational value. Taking advantage of the so-called New Four-Section Procedure, a new design, which is upgradable from 45 to 100 mA, has been developed for the FAIR proton RFQ. Besides the upgradability analyses, robustness studies of the new design to spatial displacements of the input beam and field errors are presented as well.
Chalkley, Robert J; Baker, Peter R; Hansen, Kirk C; Medzihradszky, Katalin F; Allen, Nadia P; Rexach, Michael; Burlingame, Alma L
2005-08-01
An in-depth analysis of a multidimensional chromatography-mass spectrometry dataset acquired on a quadrupole selecting, quadrupole collision cell, time-of-flight (QqTOF) geometry instrument was carried out. A total of 3269 CID spectra were acquired. Through manual verification of database search results and de novo interpretation of spectra, 2368 spectra could be confidently determined as predicted tryptic peptides. A detailed analysis of the non-matching spectra was also carried out, highlighting what the non-matching spectra in a database search are typically composed of. The results of this comprehensive dataset study demonstrate that QqTOF instruments produce information-rich data, a high percentage of which is readily interpretable.
Nuclear deformation in the laboratory frame
NASA Astrophysics Data System (ADS)
Gilbreth, C. N.; Alhassid, Y.; Bertsch, G. F.
2018-01-01
We develop a formalism for calculating the distribution of the axial quadrupole operator in the laboratory frame within the rotationally invariant framework of the configuration-interaction shell model. The calculation is carried out using a finite-temperature auxiliary-field quantum Monte Carlo method. We apply this formalism to isotope chains of even-mass samarium and neodymium nuclei and show that the quadrupole distribution provides a model-independent signature of nuclear deformation. Two technical advances are described that greatly facilitate the calculations. The first is to exploit the rotational invariance of the underlying Hamiltonian to reduce the statistical fluctuations in the Monte Carlo calculations. The second is to determine quadrupole invariants from the distribution of the axial quadrupole operator in the laboratory frame. This allows us to extract effective values of the intrinsic quadrupole shape parameters without invoking an intrinsic frame or a mean-field approximation.
An investigation of safety problems at skewed rail-highway grade crossings.
DOT National Transportation Integrated Search
1984-01-01
Skewed rail-highway grade crossings can be a safety problem because of the restrictions which the angle of crossing may place upon a motorist's ability to detect an oncoming train and because of the potential roadway hazard which the use of flangeway...
DOT National Transportation Integrated Search
2016-12-01
Damage to skewed and curved bridges during strong earthquakes is documented. This project investigates whether such damage could be mitigated by using buckling restrained braces. Nonlinear models show that using buckling restrained braces to mitigate...
DOT National Transportation Integrated Search
2009-10-01
The research presented herein describes the field verification for the effectiveness of continuity diaphragms for skewed continuous precast, prestressed, concrete girder bridges. The objectives of this research are (1) to perform field load testi...
Design study for multi-channel tape recorder system, volume 2
NASA Technical Reports Server (NTRS)
1972-01-01
Skew test data are presented on a tape recorder transport with a double capstan drive for a 100 KHz tone recorded on five tracks simultaneously. Phase detectors were used to measure the skew when the center channel was the 100 KHz reference.
DOT National Transportation Integrated Search
2016-12-01
The objective of this project is to find effective configurations for using buckling restrained braces (BRBs) in both skewed and curved bridges for reducing the effects of strong earthquakes. Verification is performed by numerical simulation using an...
The Equilibrium Allele Frequency Distribution for a Population with Reproductive Skew
Der, Ricky; Plotkin, Joshua B.
2014-01-01
We study the population genetics of two neutral alleles under reversible mutation in a model that features a skewed offspring distribution, called the Λ-Fleming–Viot process. We describe the shape of the equilibrium allele frequency distribution as a function of the model parameters. We show that the mutation rates can be uniquely identified from this equilibrium distribution, but the form of the offspring distribution cannot itself always be so identified. We introduce an estimator for the mutation rate that is consistent, independent of the form of reproductive skew. We also introduce a two-allele infinite-sites version of the Λ-Fleming–Viot process, and we use it to study how reproductive skew influences standing genetic diversity in a population. We derive asymptotic formulas for the expected number of segregating sites as a function of sample size and offspring distribution. We find that the Wright–Fisher model minimizes the equilibrium genetic diversity, for a given mutation rate and variance effective population size, compared to all other Λ-processes. PMID:24473932
NASA Astrophysics Data System (ADS)
Castle, James R.; CMS Collaboration
2017-11-01
Flow harmonic fluctuations are studied for PbPb collisions at √(s_NN) = 5.02 TeV using the CMS detector at the LHC. Flow harmonic probability distributions p(v2) are obtained by unfolding smearing effects from observed azimuthal anisotropy distributions using particles of 0.3
DOE Office of Scientific and Technical Information (OSTI.GOV)
Watermann, J.; McNamara, A.G.; Sofko, G.J.
Some 7,700 radio aurora spectra obtained from a six link 50-MHz CW radar network set up on the Canadian prairies were analyzed with respect to the distributions of mean Doppler shift, spectral width and skewness. A comparison with recently published SABRE results obtained at 153 MHz shows substantial differences in the distributions which are probably due to different experimental and geophysical conditions. The spectra are mostly broad with mean Doppler shifts close to zero (type II spectra). The typical groupings of type I and type III spectra are clearly identified. All types appear to be in general much more symmetric than those recorded with SABRE, and the skewness is only weakly dependent on the sign of the mean Doppler shift. Its distribution peaks near zero and shows a weak positive correlation with the type II Doppler shifts while the mostly positive type I Doppler shifts are slightly negatively correlated with the skewness.
Few Skewed Results from IOTA Interferometer YSO Disk Survey
NASA Astrophysics Data System (ADS)
Monnier, J. D.; Millan-Gabet, R.; Berger, J.-P.; Pedretti, E.; Traub, W.; Schloerb, F. P.
2005-12-01
The 3-telescope IOTA interferometer is capable of measuring closure phases for dozens of Herbig Ae/Be stars in the near-infrared. The closure phase unambiguously identifies deviations from centro-symmetry (i.e., skew) in the brightness distribution, at the scale of 4 milliarcseconds (sub-AU physical scales) for our work. Indeed, hot dust emission from the inner circumstellar accretion disk is expected to be skewed for (generic) flared disks viewed at intermediate inclination angles, as has been observed for LkHa 101. Surprisingly, we find very little evidence for skewed disk emission in our IOTA3 sample, setting strong constraints on the geometry of the inner disk. In particular, we rule out the currently-popular model of a VERTICAL hot inner wall of dust at the sublimation radius. Instead, our data is more consistent with a curved inner wall that bends away from the midplane as might be expected from the pressure-dependence of dust sublimation or limited absorption of stellar luminosity in the disk midplane by gas.
Testing the Binary Black Hole Nature of a Compact Binary Coalescence
NASA Astrophysics Data System (ADS)
Krishnendu, N. V.; Arun, K. G.; Mishra, Chandra Kant
2017-09-01
We propose a novel method to test the binary black hole nature of compact binaries detectable by gravitational wave (GW) interferometers and, hence, constrain the parameter space of other exotic compact objects. The spirit of the test lies in the "no-hair" conjecture for black holes where all properties of a Kerr black hole are characterized by its mass and spin. The method relies on observationally measuring the quadrupole moments of the compact binary constituents induced due to their spins. If the compact object is a Kerr black hole (BH), its quadrupole moment is expressible solely in terms of its mass and spin. Otherwise, the quadrupole moment can depend on additional parameters (such as the equation of state of the object). The higher order spin effects in phase and amplitude of a gravitational waveform, which explicitly contains the spin-induced quadrupole moments of compact objects, hence, uniquely encode the nature of the compact binary. Thus, we argue that an independent measurement of the spin-induced quadrupole moment of the compact binaries from GW observations can provide a unique way to distinguish binary BH systems from binaries consisting of exotic compact objects.
NASA Astrophysics Data System (ADS)
Yazyev, Oleg V.; Helm, Lothar
2006-08-01
Rotational correlation times of metal ion aqua complexes can be determined from O17 NMR relaxation rates if the quadrupole coupling constant of the bound water oxygen-17 nucleus is known. The rotational correlation time is an important parameter for the efficiency of Gd3+ complexes as magnetic resonance imaging contrast agents. Using a combination of density functional theory with classical and Car-Parrinello molecular dynamics simulations, we performed a computational study of the O17 quadrupole coupling constants in model aqua ions and the [Gd(DOTA)(H2O)]- complex used in clinical diagnostics. For the inner sphere water molecule in the [Gd(DOTA)(H2O)]- complex the determined quadrupole coupling parameter χ√(1+η²/3) of 8.7 MHz is very similar to that of liquid water (9.0 MHz). Very close values were also predicted for the homoleptic aqua ions of Gd3+ and Ca2+. We conclude that the O17 quadrupole coupling parameters of water molecules coordinated to closed shell and lanthanide metal ions are similar to those of water molecules in the liquid state.
The importance of quadrupole sources in prediction of transonic tip speed propeller noise
NASA Technical Reports Server (NTRS)
Hanson, D. B.; Fink, M. R.
1978-01-01
A theoretical analysis is presented for the harmonic noise of high speed, open rotors. Far field acoustic radiation equations based on the Ffowcs Williams-Hawkings theory are derived for a static rotor with thin blades and zero lift. Near the plane of rotation, the dominant sources are the volume displacement and the ρu² quadrupole, where u is the disturbance velocity component in the direction of blade motion. These sources are compared in both the time domain and the frequency domain using two-dimensional airfoil theories valid in the subsonic, transonic, and supersonic speed ranges. For nonlifting parabolic arc blades, the two sources are equally important at speeds between the section critical Mach number and a Mach number of one. However, for moderately subsonic or fully supersonic flow over thin blade sections, the quadrupole term is negligible. It is concluded for thin blades that significant quadrupole noise radiation is strictly a transonic phenomenon and that it can be suppressed with blade sweep. Noise calculations are presented for two rotors, one simulating a helicopter main rotor and the other a model propeller. For the latter, agreement with test data was substantially improved by including the quadrupole source term.
Testing the Binary Black Hole Nature of a Compact Binary Coalescence.
Krishnendu, N V; Arun, K G; Mishra, Chandra Kant
2017-09-01
We propose a novel method to test the binary black hole nature of compact binaries detectable by gravitational wave (GW) interferometers and, hence, constrain the parameter space of other exotic compact objects. The spirit of the test lies in the "no-hair" conjecture for black holes where all properties of a Kerr black hole are characterized by its mass and spin. The method relies on observationally measuring the quadrupole moments of the compact binary constituents induced due to their spins. If the compact object is a Kerr black hole (BH), its quadrupole moment is expressible solely in terms of its mass and spin. Otherwise, the quadrupole moment can depend on additional parameters (such as the equation of state of the object). The higher order spin effects in phase and amplitude of a gravitational waveform, which explicitly contains the spin-induced quadrupole moments of compact objects, hence, uniquely encode the nature of the compact binary. Thus, we argue that an independent measurement of the spin-induced quadrupole moment of the compact binaries from GW observations can provide a unique way to distinguish binary BH systems from binaries consisting of exotic compact objects.
NASA Astrophysics Data System (ADS)
Zhong, Rong-Xuan; Huang, Nan; Li, Huang-Wu; He, He-Xiang; Lü, Jian-Tao; Huang, Chun-Qing; Chen, Zhao-Pin
2018-04-01
We numerically and analytically investigate the formation and features of two-dimensional discrete Bose-Einstein condensate solitons, constructed from particles with quadrupole-quadrupole interactions trapped in tunable anisotropic discrete optical lattices. The square optical lattices in the model can be formed by two pairs of interfering plane waves with different intensities. The two hopping rates of the particles in the orthogonal directions differ, which gives rise to a linearly anisotropic system. We find that if every dipole/anti-dipole pair is perpendicular to the lattice plane and the line connecting the dipole and anti-dipole composing each quadrupole is parallel to the horizontal direction, both the linear anisotropy and the nonlocal nonlinearity can strongly influence the formation of the solitons. There exist three patterns of stable solitons, namely horizontally elongated quasi-one-dimensional discrete solitons, disk-shaped isotropic solitons and vertically elongated quasi-continuous solitons. We systematically demonstrate the relationships of the chemical potential, size and shape of a soliton with its total norm and vertical hopping rate, and analytically reveal the linear dispersion relation for quasi-one-dimensional discrete solitons.
Near-field shock formation in noise propagation from a high-power jet aircraft.
Gee, Kent L; Neilsen, Tracianne B; Downing, J Micah; James, Michael M; McKinley, Richard L; McKinley, Robert C; Wall, Alan T
2013-02-01
Noise measurements near the F-35A Joint Strike Fighter at military power are analyzed via spatial maps of overall and band pressure levels and skewness. Relative constancy of the pressure waveform skewness reveals that waveform asymmetry, characteristic of supersonic jets, is a source phenomenon originating farther upstream than the maximum overall level. Conversely, growth of the skewness of the time derivative with distance indicates that acoustic shocks largely form through the course of near-field propagation and are not generated explicitly by a source mechanism. These results potentially counter previous arguments that jet "crackle" is a source phenomenon.
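The two statistics driving this analysis, the skewness of the pressure waveform and of its time derivative, can be sketched on a synthetic shock-like signal (an illustrative sawtooth, not F-35A data; all values hypothetical):

```python
import math

def skewness(x):
    """Third standardized central moment of a sample."""
    n = len(x)
    mean = sum(x) / n
    m2 = sum((v - mean) ** 2 for v in x) / n
    m3 = sum((v - mean) ** 3 for v in x) / n
    return m3 / m2 ** 1.5

# Shock-like signal: gradual falls with abrupt rises. The amplitude
# distribution is roughly symmetric, but the derivative is dominated by
# rare, large positive jumps, so its skewness is strongly positive.
dt = 1e-4
wave = [1 - 2 * ((i * dt * 50) % 1) for i in range(5000)]
deriv = [(wave[i + 1] - wave[i]) / dt for i in range(len(wave) - 1)]

print(abs(skewness(wave)) < 0.1)   # waveform itself: nearly symmetric
print(skewness(deriv) > 5)         # derivative: heavily positively skewed
```

This mirrors the abstract's observation: acoustic shocks show up far more clearly in the derivative skewness than in the waveform skewness itself.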
A fast-locking all-digital delay-locked loop for phase/delay generation in an FPGA
NASA Astrophysics Data System (ADS)
Zhujia, Chen; Haigang, Yang; Fei, Liu; Yu, Wang
2011-10-01
A fast-locking all-digital delay-locked loop (ADDLL) is proposed for the DDR SDRAM controller interface in a field programmable gate array (FPGA). The ADDLL performs a 90° phase shift so that the data strobe (DQS) can enlarge the data-valid window and minimize skew. To further reduce the locking time and to prevent harmonic locking, a time-to-digital converter (TDC) is proposed. A duty cycle corrector (DCC) is also designed in the ADDLL to adjust the output duty cycle to 50%. The ADDLL, implemented in a commercial 0.13 μm CMOS process, occupies a total of 0.017 mm² of active area. Measurement results show that the ADDLL has an operating frequency range of 75 to 350 MHz and a total delay resolution of 15 ps. The time interval error (TIE) of the proposed circuit is 60.7 ps.
Regression away from the mean: Theory and examples.
Schwarz, Wolf; Reike, Dennis
2018-02-01
Using a standard repeated measures model with arbitrary true score distribution and normal error variables, we present some fundamental closed-form results which explicitly indicate the conditions under which regression towards the mean (RTM) and regression away from the mean are expected. Specifically, we show that for skewed and bimodal distributions many or even most cases will show a regression effect that is in expectation away from the mean, or that is not just towards but actually beyond the mean. We illustrate our results in quantitative detail with typical examples from experimental and biometric applications, which exhibit a clear regression away from the mean ('egression from the mean') signature. We aim not to repeal cautionary advice against potential RTM effects, but to present a balanced view of regression effects, based on a clear identification of the conditions governing the form that regression effects take in repeated measures designs. © 2017 The British Psychological Society.
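The bimodal case can be illustrated with a minimal simulation (my own sketch, not the authors' code): cases whose first measurement lands near the grand mean drift toward the nearer mode on retest, i.e. away from the mean in expectation:

```python
import random
random.seed(1)

# Bimodal true scores (two clusters at -3 and +3), normal measurement error.
trues = [random.gauss(-3, 0.5) if random.random() < 0.5 else random.gauss(3, 0.5)
         for _ in range(20000)]
pairs = [(t + random.gauss(0, 1), t + random.gauss(0, 1)) for t in trues]

grand_mean = sum(x for x, _ in pairs) / len(pairs)

# Select cases whose first measurement landed near the grand mean,
# in the sparse region between the two modes.
selected = [(x, y) for x, y in pairs if abs(x - grand_mean) < 1.0]

d_first = sum(abs(x - grand_mean) for x, _ in selected) / len(selected)
d_retest = sum(abs(y - grand_mean) for _, y in selected) / len(selected)

# On retest the selected scores sit farther from the grand mean on
# average: regression *away* from the mean ('egression').
print(d_retest > d_first)
```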
Robust Mediation Analysis Based on Median Regression
Yuan, Ying; MacKinnon, David P.
2014-01-01
Mediation analysis has many applications in psychology and the social sciences. The most prevalent methods typically assume that the error distribution is normal and homoscedastic. However, this assumption may rarely be met in practice, which can affect the validity of the mediation analysis. To address this problem, we propose robust mediation analysis based on median regression. Our approach is robust to various departures from the assumption of homoscedasticity and normality, including heavy-tailed, skewed, contaminated, and heteroscedastic distributions. Simulation studies show that under these circumstances, the proposed method is more efficient and powerful than standard mediation analysis. We further extend the proposed robust method to multilevel mediation analysis, and demonstrate through simulation studies that the new approach outperforms the standard multilevel mediation analysis. We illustrate the proposed method using data from a program designed to increase reemployment and enhance mental health of job seekers. PMID:24079925
Enhanced risk management by an emerging multi-agent architecture
NASA Astrophysics Data System (ADS)
Lin, Sin-Jin; Hsu, Ming-Fu
2014-07-01
Classification in imbalanced datasets has attracted much attention from researchers in the field of machine learning. Most existing techniques tend not to perform well on minority class instances when the dataset is highly skewed because they focus on minimising the forecasting error without considering the relative distribution of each class. This investigation proposes an emerging multi-agent architecture, grounded in cooperative learning, to solve the class-imbalanced classification problem. Additionally, this study addresses the obscure nature of the multi-agent architecture and expresses comprehensive rules for auditors. The results indicate that the presented model performs satisfactorily in risk management and is able to tackle a highly class-imbalanced dataset comparatively well. Furthermore, the knowledge-visualisation process, supported by real examples, can assist both internal and external auditors who must allocate limited detecting resources; they can take the rules as roadmaps to modify the auditing programme.
Overcoming thermal noise in non-volatile spin wave logic.
Dutta, Sourav; Nikonov, Dmitri E; Manipatruni, Sasikanth; Young, Ian A; Naeemi, Azad
2017-05-15
Spin waves are propagating disturbances in magnetically ordered materials, analogous to lattice waves in solid systems, and are often described from a quasiparticle point of view as magnons. The attractive advantages of Joule-heat-free transmission of information, utilization of the phase of the wave as an additional degree of freedom, and lower footprint area compared to conventional charge-based devices have made spin waves, or magnon spintronics, a promising candidate for beyond-CMOS wave-based computation. However, any practical realization of an all-magnon computing system must undergo the essential steps of a careful selection of materials and demonstrate robustness with respect to thermal noise or variability. Here, we aim at identifying suitable materials and theoretically demonstrate the possibility of achieving an error-free clocked non-volatile spin wave logic device, even in the presence of thermal noise and clock jitter or clock skew.
Assessing medication effects in the MTA study using neuropsychological outcomes.
Epstein, Jeffery N; Conners, C Keith; Hervey, Aaron S; Tonev, Simon T; Arnold, L Eugene; Abikoff, Howard B; Elliott, Glen; Greenhill, Laurence L; Hechtman, Lily; Hoagwood, Kimberly; Hinshaw, Stephen P; Hoza, Betsy; Jensen, Peter S; March, John S; Newcorn, Jeffrey H; Pelham, William E; Severe, Joanne B; Swanson, James M; Wells, Karen; Vitiello, Benedetto; Wigal, Timothy
2006-05-01
While studies have increasingly investigated deficits in reaction time (RT) and RT variability in children with attention deficit/hyperactivity disorder (ADHD), few studies have examined the effects of stimulant medication on these important neuropsychological outcome measures. 316 children who participated in the Multimodal Treatment Study of Children with ADHD (MTA) completed the Conners' Continuous Performance Test (CPT) at the 24-month assessment point. Outcome measures included standard CPT outcomes (e.g., errors of commission, mean hit reaction time (RT)) and RT indicators derived from an Ex-Gaussian distributional model (i.e., mu, sigma, and tau). Analyses revealed significant effects of medication across all neuropsychological outcome measures. Results on the Ex-Gaussian outcome measures revealed that stimulant medication slows RT and reduces RT variability. This demonstrates the importance of including analytic strategies that can accurately model the actual distributional pattern, including the positive skew. Further, the results of the study relate to several theoretical models of ADHD.
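The Ex-Gaussian decomposition used above (mu and sigma for the Gaussian component, tau for the exponential slow-response tail) can be sketched with a method-of-moments fit; the parameter values here are hypothetical, and the MTA analyses used their own estimation procedure:

```python
import math
import random
random.seed(0)

mu, sigma, tau = 400.0, 40.0, 150.0   # ms; illustrative values only
rts = [random.gauss(mu, sigma) + random.expovariate(1.0 / tau)
       for _ in range(200000)]

n = len(rts)
mean = sum(rts) / n
var = sum((r - mean) ** 2 for r in rts) / n
m3 = sum((r - mean) ** 3 for r in rts) / n

# For the ex-Gaussian: mean = mu + tau, variance = sigma^2 + tau^2,
# third central moment = 2 * tau^3, so three moments identify all three
# parameters. The exponential part carries all of the positive skew.
tau_hat = (m3 / 2.0) ** (1.0 / 3.0)
sigma_hat = math.sqrt(max(var - tau_hat ** 2, 0.0))
mu_hat = mean - tau_hat

print(abs(mu_hat - mu) < 10 and abs(tau_hat - tau) < 10)
```

The tau component captures the long-RT tail, the feature a purely Gaussian summary (mean and SD) would blur into inflated variance.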
A comparison of CMG steering laws for High Energy Astronomy Observatories (HEAOs)
NASA Technical Reports Server (NTRS)
Davis, B. G.
1972-01-01
A comparison of six selected control moment gyro (CMG) steering laws for use on the HEAO spacecraft is reported. Basic equations are developed to project the momentum and torque of four skewed, single-gimbal CMGs into vehicle coordinates. In response to the spacecraft attitude error signal, six algorithms are derived for controlling the CMG gimbal movements. HEAO performance data are obtained using each steering law and compared on the basis of such factors as accuracy, complexity, singularities, gyro hang-up, and failure adaptation. Moreover, each law is simulated with and without a magnetic momentum management system. The performance of every steering law is enhanced by the magnetic system. Without magnetics, the gimbal angles become large and there are significant differences in steering law performance due to cross coupling and nonlinearities. The pseudo-inverse law is recommended for HEAO.
Multiple shadows from distorted static black holes
NASA Astrophysics Data System (ADS)
Grover, Jai; Kunz, Jutta; Nedkova, Petya; Wittig, Alexander; Yazadjiev, Stoytcho
2018-04-01
We study the local shadow of the Schwarzschild black hole with a quadrupole distortion and the influence of the external gravitational field on the photon dynamics. The external matter sources modify the light ring structure and lead to the appearance of multiple shadow images. In the case of negative quadrupole moments we identify the most prominent mechanism causing multiple shadow formation. Furthermore, we obtain a condition under which this mechanism can be realized. This condition depends on the quadrupole moment, but also on the position of the observer and the celestial sphere.
Kinship and Incest Avoidance Drive Patterns of Reproductive Skew in Cooperatively Breeding Birds.
Riehl, Christina
2017-12-01
Social animals vary in how reproduction is divided among group members, ranging from monopolization by a dominant pair (high skew) to equal sharing by cobreeders (low skew). Despite many theoretical models, the ecological and life-history factors that generate this variation are still debated. Here I analyze data from 83 species of cooperatively breeding birds, finding that kinship within the breeding group is a powerful predictor of reproductive sharing across species. Societies composed of nuclear families have significantly higher skew than those that contain unrelated members, a pattern that holds for both multimale and multifemale groups. Within-species studies confirm this, showing that unrelated subordinates of both sexes are more likely to breed than related subordinates are. Crucially, subordinates in cooperative groups are more likely to breed if they are unrelated to the opposite-sex dominant, whereas relatedness to the same-sex dominant has no effect. This suggests that incest avoidance, rather than suppression by dominant breeders, may be an important proximate mechanism limiting reproduction by subordinates. Overall, these results support the ultimate evolutionary logic behind concessions models of skew: namely, that related subordinates gain indirect fitness benefits from helping at the nests of kin, so a lower direct reproductive share is required for selection to favor helping over dispersal. They do not, however, support the proximate mechanism of dominant control assumed by these models.
Miller, K A; Nelson, N J; Smith, H G; Moore, J A
2009-09-01
Reduced genetic diversity can result in short-term decreases in fitness and reduced adaptive potential, which may lead to an increased extinction risk. Therefore, maintaining genetic variation is important for the short- and long-term success of reintroduced populations. Here, we evaluate how founder group size and variance in male reproductive success influence the long-term maintenance of genetic diversity after reintroduction. We used microsatellite data to quantify the loss of heterozygosity and allelic diversity in the founder groups from three reintroductions of tuatara (Sphenodon), the sole living representatives of the reptilian order Rhynchocephalia. We then estimated the maintenance of genetic diversity over 400 years (approximately 10 generations) using population viability analyses. Reproduction of tuatara is highly skewed, with as few as 30% of males mating across years. Predicted losses of heterozygosity over 10 generations were low (1-14%), and populations founded with more animals retained a greater proportion of the heterozygosity and allelic diversity of their source populations and founder groups. Greater male reproductive skew led to greater predicted losses of genetic diversity over 10 generations, but only accelerated the loss of genetic diversity at small population size (<250 animals). A reduction in reproductive skew at low density may facilitate the maintenance of genetic diversity in small reintroduced populations. If reproductive skew is high and density-independent, larger founder groups could be released to achieve genetic goals for management.
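The drift expectation behind these heterozygosity projections is the standard decay H_t = H_0 (1 - 1/(2Ne))^t per generation; a sketch with hypothetical effective population sizes, not the tuatara estimates:

```python
def het_retained(ne, generations):
    """Fraction of initial expected heterozygosity retained under drift."""
    return (1.0 - 1.0 / (2.0 * ne)) ** generations

small = het_retained(35, 10)     # a small founder group (hypothetical Ne)
large = het_retained(250, 10)    # a larger population

print(small < large)             # larger Ne retains more diversity
print(1 - large < 0.03)          # big populations lose only a few percent
```

Over ~10 generations the small case loses roughly 13% of heterozygosity and the large case about 2%, consistent with the 1-14% range reported in the abstract.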
Tan, Kun; An, Lei; Miao, Kai; Ren, Likun; Hou, Zhuocheng; Tao, Li; Zhang, Zhenni; Wang, Xiaodong; Xia, Wei; Liu, Jinghao; Wang, Zhuqing; Xi, Guangyin; Gao, Shuai; Sui, Linlin; Zhu, De-Sheng; Wang, Shumin; Wu, Zhonghong; Bach, Ingolf; Chen, Dong-bao; Tian, Jianhui
2016-01-01
Dynamic epigenetic reprogramming occurs during normal embryonic development at the preimplantation stage. Erroneous epigenetic modifications due to environmental perturbations such as manipulation and culture of embryos during in vitro fertilization (IVF) are linked to various short- or long-term consequences. Among these, the skewed sex ratio, an indicator of reproductive hazards, was reported in bovine and porcine embryos and even human IVF newborns. However, since the first case of sex skewing reported in 1991, the underlying mechanisms remain unclear. We reported herein that sex ratio is skewed in mouse IVF offspring, and this was a result of female-biased peri-implantation developmental defects that were originated from impaired imprinted X chromosome inactivation (iXCI) through reduced ring finger protein 12 (Rnf12)/X-inactive specific transcript (Xist) expression. Compensation of impaired iXCI by overexpression of Rnf12 to up-regulate Xist significantly rescued female-biased developmental defects and corrected sex ratio in IVF offspring. Moreover, supplementation of an epigenetic modulator retinoic acid in embryo culture medium up-regulated Rnf12/Xist expression, improved iXCI, and successfully redeemed the skewed sex ratio to nearly 50% in mouse IVF offspring. Thus, our data show that iXCI is one of the major epigenetic barriers for the developmental competence of female embryos during preimplantation stage, and targeting erroneous epigenetic modifications may provide a potential approach for preventing IVF-associated complications. PMID:26951653
McFadden, J P; Thyssen, J P; Basketter, D A; Puangpet, P; Kimber, I
2015-03-01
During the last 50 years there has been a significant increase in Western societies of atopic disease and associated allergy. The balance between functional subpopulations of T helper cells (Th) determines the quality of the immune response provoked by antigen. One such subpopulation, Th2 cells, is associated with the production of IgE antibody and atopic allergy, whereas Th1 cells antagonize IgE responses and the development of allergic disease. In seeking to provide a mechanistic basis for this increased prevalence of allergic disease, one proposal has been the 'hygiene hypothesis', which argues that in Westernized societies reduced exposure during early childhood to pathogenic microorganisms favours the development of atopic allergy. Pregnancy is normally associated with Th2 skewing, which persists for some months in the neonate before Th1/Th2 realignment occurs. In this review, we consider the immunophysiology of Th2 immune skewing during pregnancy. In particular, we explore the possibility that altered and increased patterns of exposure to certain chemicals have served to accentuate this normal Th2 skewing and therefore further promote the persistence of a Th2 bias in neonates. Furthermore, we propose that the more marked Th2 skewing observed in first pregnancy may, at least in part, explain the higher prevalence of atopic disease and allergy in the first-born. © 2014 British Association of Dermatologists.
Stable estimate of primary OC/EC ratios in the EC tracer method
NASA Astrophysics Data System (ADS)
Chu, Shao-Hang
In fine particulate matter studies, the primary OC/EC ratio plays an important role in estimating the secondary organic aerosol contribution to PM2.5 concentrations using the EC tracer method. In this study, numerical experiments are carried out to test and compare various statistical techniques for estimating primary OC/EC ratios. The influence of random measurement errors in both primary OC and EC measurements on the estimation of the expected primary OC/EC ratios is examined. It is found that random measurement errors in EC generally create an underestimation of the slope and an overestimation of the intercept of the ordinary least-squares regression line. The Deming regression analysis performs much better than the ordinary regression, but it tends to overcorrect the problem by slightly overestimating the slope and underestimating the intercept. Averaging the ratios directly is usually undesirable because the average is strongly influenced by unrealistically high OC/EC ratios resulting from random measurement errors at low EC concentrations. The errors generally result in a skewed distribution of the OC/EC ratios even if the parent distributions of OC and EC are close to normal. When measured OC contains a significant amount of non-combustion OC, Deming regression is a much better tool and should be used to estimate both the primary OC/EC ratio and the non-combustion OC. However, if the non-combustion OC is negligibly small, the best and most robust estimator of the OC/EC ratio turns out to be the simple ratio of the OC and EC averages. It not only reduces random errors by averaging individual variables separately but also acts as a weighted average of ratios, minimizing the influence of unrealistically high OC/EC ratios created by measurement errors at low EC concentrations. The median of the OC/EC ratios ranks a close second, and the geometric mean of the ratios ranks third, because their estimates are insensitive to questionable extreme values. A real-world example is given using ambient data collected at an Atlanta STN site during the winter of 2001-2002.
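The estimator comparison can be sketched with a toy simulation under my own assumed noise levels and negligible non-combustion OC (all values hypothetical):

```python
import random
import statistics
random.seed(7)

true_ratio = 2.0
samples = []
for _ in range(2000):
    ec_true = random.uniform(0.2, 3.0)                    # hypothetical EC levels
    ec_obs = max(ec_true + random.gauss(0, 0.15), 0.01)   # noisy EC, kept positive
    oc_obs = true_ratio * ec_true + random.gauss(0, 0.1)  # purely primary OC
    samples.append((oc_obs, ec_obs))

ratios = [oc / ec for oc, ec in samples]
mean_of_ratios = statistics.fmean(ratios)
median_of_ratios = statistics.median(ratios)
ratio_of_means = (statistics.fmean(o for o, _ in samples)
                  / statistics.fmean(e for _, e in samples))

# Noisy low-EC points inflate individual ratios, biasing the direct average
# upward; the ratio of averages down-weights them automatically, and the
# median simply ignores the extreme values.
print(abs(ratio_of_means - true_ratio) < abs(mean_of_ratios - true_ratio))
print(abs(median_of_ratios - true_ratio) < abs(mean_of_ratios - true_ratio))
```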
Study of a micro chamber quadrupole mass spectrometer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang Jinchan; Zhang Xiaobing; Mao Fuming
The design of a micro chamber quadrupole mass spectrometer (MCQMS) having a small total volume of only 20 cm³, including Faraday cup ion detector and ion source, is described. This MCQMS can withstand a vacuum baking temperature of 400-500 °C. The quadrupole elements with a hyperbolic surface are made of a ceramic material and coated with a thin metal layer. The quadrupole mass filter has a field radius of 3 mm and a length of 100 mm. Prototypes of this new MCQMS can detect a minimum partial pressure of 10⁻⁸ Pa, have a peak width of ΔM = 1 at 10% peak height from mass number 1 to 60, and show excellent long-term stability. The new MCQMS is intended to be used in residual gas analyses of electron devices during a mutual pumping and baking process.
Luo, Chan; Jiang, Dan; Ding, Chuan-Fan; Konenkov, Nikolai V
2009-09-01
Numerical experiments were performed to study the first and second stability regions and to find the optimal configurations of a quadrupole mass filter constructed of circular quadrupole rods with a rectangular-wave power supply. The ion transmission contours were calculated using ion trajectory simulations. For the first stability region, the optimal rod-set ratio r/r0 is 1.110-1.115; for the second stability region, it is 1.128-1.130. Low-frequency direct current (DC) modulation with the parameters m = 0.04-0.16 and nu = omega/Omega = 1/8-1/14 improves the mass peak shape of the circular-rod quadrupole mass filter at the optimal r/r0 ratio of 1.130. Amplitude modulation does not improve the mass peak shape. Copyright (c) 2009 John Wiley & Sons, Ltd.
Development of a GC/Quadrupole-Orbitrap Mass Spectrometer, Part I: Design and Characterization
2015-01-01
Identification of unknown compounds is of critical importance in GC/MS applications (metabolomics, environmental toxin identification, sports doping, petroleomics, and biofuel analysis, among many others) and remains a technological challenge. Derivation of elemental composition is the first step to determining the identity of an unknown compound by MS, for which high accuracy mass and isotopomer distribution measurements are critical. Here, we report on the development of a dedicated, applications-grade GC/MS employing an Orbitrap mass analyzer, the GC/Quadrupole-Orbitrap. Built from the basis of the benchtop Orbitrap LC/MS, the GC/Quadrupole-Orbitrap maintains the performance characteristics of the Orbitrap, enables quadrupole-based isolation for sensitive analyte detection, and includes numerous analysis modalities to facilitate structural elucidation. We detail the design and construction of the instrument, discuss its key figures-of-merit, and demonstrate its performance for the characterization of unknown compounds and environmental toxins. PMID:25208235
Microfluidic quadrupole and floating concentration gradient.
Qasaimeh, Mohammad A; Gervais, Thomas; Juncker, David
2011-09-06
The concept of fluidic multipoles, in analogy to electrostatics, has long been known as a particular class of solutions of the Navier-Stokes equation in potential flows; however, experimental observations of fluidic multipoles and of their characteristics have not been reported yet. Here we present a two-dimensional microfluidic quadrupole and a theoretical analysis consistent with the experimental observations. The microfluidic quadrupole was formed by simultaneously injecting and aspirating fluids from two pairs of opposing apertures in a narrow gap formed between a microfluidic probe and a substrate. A stagnation point was formed at the centre of the microfluidic quadrupole, and its position could be rapidly adjusted hydrodynamically. Following the injection of a solute through one of the poles, a stationary, tunable, and movable (that is, 'floating') concentration gradient was formed at the stagnation point. Our results lay the foundation for future combined experimental and theoretical exploration of microfluidic planar multipoles including convective-diffusive phenomena.
NASA Astrophysics Data System (ADS)
Liu, Guo-Chin; Ichiki, Kiyotomo; Tashiro, Hiroyuki; Sugiyama, Naoshi
2016-07-01
Scattering of cosmic microwave background (CMB) radiation in galaxy clusters induces polarization signals determined by the quadrupole anisotropy in the photon distribution at the location of clusters. This `remote quadrupole' derived from the measurements of the induced polarization in galaxy clusters provides an opportunity to reconstruct local CMB temperature anisotropies. In this Letter, we develop an algorithm of the reconstruction through the estimation of the underlying primordial gravitational potential, which is the origin of the CMB temperature and polarization fluctuations and CMB induced polarization in galaxy clusters. We found a nice reconstruction for the quadrupole and octopole components of the CMB temperature anisotropies with the assistance of the CMB induced polarization signals. The reconstruction can be an important consistency test on the puzzles of CMB anomalies, especially for the low-quadrupole and axis-of-evil problems reported in Wilkinson Microwave Anisotropy Probe and Planck data.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Uzdensky, Dmitri A.; Kulsrud, Russell M.
2006-06-15
A quadrupole pattern of the out-of-plane component of the magnetic field inside a reconnection region is seen as an important signature of the Hall-magnetohydrodynamic regime of reconnection. It was first observed in numerical simulations and recently confirmed in the Magnetic Reconnection Experiment [Y. Ren, M. Yamada, S. Gerhardt, H. Ji, R. Kulsrud, and A. Kuritsin, Phys. Rev. Lett. 95, 055003 (2005)], and it has also been seen in spacecraft observations of Earth's magnetosphere. In this study, the physical origin of the quadrupole field is analyzed and traced to a current of electrons that flows along the field lines in and out of the inner reconnection region to maintain charge neutrality. The role of the quadrupole magnetic field in the overall dynamics of the reconnection process is discussed. In addition, the bipolar poloidal electric field is estimated and its effect on ion motions is emphasized.
NASA Astrophysics Data System (ADS)
Mortensen, Dale J.
1995-04-01
The testing and performance of a prototype modem developed at NASA Lewis Research Center for high-speed free-space direct detection optical communications is described. The testing was performed under laboratory conditions using computer control with specially developed test equipment that simulates free-space link conditions. The modem employs quaternary pulse position modulation at 325 Megabits per second (Mbps) on two optical channels, which are multiplexed to transmit a single 650 Mbps data stream. The measured results indicate that the receiver's automatic gain control (AGC), phase-locked-loop slot clock recovery, digital symbol clock recovery, matched filtering, and maximum likelihood data recovery circuits were found to have only 1.5 dB combined implementation loss during bit-error-rate (BER) performance measurements. Pseudo random bit sequences and real-time high quality video sources were used to supply 650 Mbps and 325 Mbps data streams to the modem. Additional testing revealed that Doppler frequency shifting can be easily tracked by the receiver, that simulated pointing errors are readily compensated for by the AGC circuits, and that channel timing skew affects the BER performance in an expected manner. Overall, the needed technologies for a high-speed laser communications modem were demonstrated.
Naya, Hugo; Urioste, Jorge I; Chang, Yu-Mei; Rodrigues-Motta, Mariana; Kremer, Roberto; Gianola, Daniel
2008-01-01
Dark spots in the fleece area are often associated with dark fibres in wool, which limits its competitiveness with other textile fibres. Field data from a sheep experiment in Uruguay revealed an excess number of zeros for dark spots. We compared the performance of four Poisson and zero-inflated Poisson (ZIP) models under four simulation scenarios. All models performed reasonably well under the same scenario for which the data were simulated. The deviance information criterion favoured a Poisson model with residual, while the ZIP model with a residual gave estimates closer to their true values under all simulation scenarios. Both Poisson and ZIP models with an error term at the regression level performed better than their counterparts without such an error. Field data from Corriedale sheep were analysed with Poisson and ZIP models with residuals. Parameter estimates were similar for both models. Although the posterior distribution of the sire variance was skewed due to a small number of rams in the dataset, the median of this variance suggested a scope for genetic selection. The main environmental factor was the age of the sheep at shearing. In summary, age related processes seem to drive the number of dark spots in this breed of sheep. PMID:18558072
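The ZIP mixture underlying these models can be sketched directly, showing the excess zeros relative to a plain Poisson with the same event rate; the parameters here are hypothetical, not the estimates for the sheep data:

```python
import math
import random
random.seed(3)

pi_zero, lam = 0.4, 1.5   # zero-inflation probability and Poisson mean

def rpois(lam):
    """Knuth's Poisson sampler; adequate for small lam."""
    threshold, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= threshold:
            return k
        k += 1

# A ZIP draw is a structural zero with probability pi_zero, else Poisson.
draws = [0 if random.random() < pi_zero else rpois(lam) for _ in range(50000)]

obs_zero_frac = sum(d == 0 for d in draws) / len(draws)
zip_zero_prob = pi_zero + (1 - pi_zero) * math.exp(-lam)  # ZIP P(count = 0)
pois_zero_prob = math.exp(-lam)                           # plain Poisson P(0)

print(abs(obs_zero_frac - zip_zero_prob) < 0.01)  # matches ZIP prediction
print(obs_zero_frac > pois_zero_prob + 0.2)       # clear excess of zeros
```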
NASA Technical Reports Server (NTRS)
Morrell, Frederick R.; Bailey, Melvin L.
1987-01-01
A vector-based failure detection and isolation technique for a skewed array of two-degree-of-freedom inertial sensors is developed. Failure detection is based on comparison of parity equations with a threshold, and isolation is based on comparison of logic variables which are keyed to pass/fail results of the parity test. A multi-level approach to failure detection is used to ensure adequate coverage for the flight control, display, and navigation avionics functions. Sensor error models are introduced to expose the susceptibility of the parity equations to sensor errors and physical separation effects. The algorithm is evaluated in a simulation of a commercial transport operating in a range of light to severe turbulence environments. A gyro bias-jump failure level of 0.2 deg/hr was detected and isolated properly in the light and moderate turbulence environments, but not detected in the extreme turbulence environment. An accelerometer bias-jump failure level of 1.5 milli-g was detected in all turbulence environments. For both types of inertial sensor, hard-over and null-type failures were detected in all environments without incident. The algorithm functioned without false alarms or false isolations over all turbulence environments for the runs tested.
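The parity-equation check can be illustrated with a toy four-sensor array (three orthogonal axes plus one equally skewed axis); the geometry, noise level, and threshold are my own assumptions, not the HEAO avionics configuration:

```python
import math
import random
random.seed(2)

s3 = 1.0 / math.sqrt(3.0)
# Sensor input axes: three orthogonal gyros plus one skewed along (1,1,1).
axes = [(1, 0, 0), (0, 1, 0), (0, 0, 1), (s3, s3, s3)]
# v is orthogonal to every column of the geometry matrix, so the parity
# scalar p = v . m vanishes for any true rate and responds only to errors.
v = (1.0, 1.0, 1.0, -math.sqrt(3.0))

def measure(w, bias=(0, 0, 0, 0)):
    """Project the body rate w onto each axis, adding noise and bias."""
    return [sum(a, ) if False else
            sum(a * x for a, x in zip(axis, w)) + random.gauss(0, 0.001) + b
            for axis, b in zip(axes, bias)]

def parity(meas):
    return sum(vi * mi for vi, mi in zip(v, meas))

w = (0.3, -0.1, 0.05)            # arbitrary true body rate
threshold = 0.01

p_ok = parity(measure(w))
p_fail = parity(measure(w, bias=(0.2, 0, 0, 0)))  # bias jump on sensor 1

print(abs(p_ok) < threshold)     # healthy array passes the parity test
print(abs(p_fail) > threshold)   # biased sensor trips the threshold
```

Isolation logic (which sensor failed) would compare several such parity residuals; this sketch shows only the detection step.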
Gao, Zhiyuan; Yang, Congjie; Xu, Jiangtao; Nie, Kaiming
2015-11-06
This paper presents a dynamic range (DR) enhanced readout technique with a two-step time-to-digital converter (TDC) for high-speed linear CMOS image sensors. A multi-capacitor, self-regulated capacitive trans-impedance amplifier (CTIA) structure is employed to extend the dynamic range. The gain of the CTIA is adjusted automatically by switching different capacitors onto the integration node asynchronously according to the output voltage. A column-parallel ADC based on a two-step TDC is utilized to improve the conversion rate; the conversion is divided into a coarse phase and a fine phase. An error calibration scheme is also proposed to correct quantization errors caused by propagation delay skew within −T(clk) to +T(clk). A linear CMOS image sensor pixel array is designed in a 0.13 μm CMOS process to verify this DR-enhanced high-speed readout technique. Post-simulation results indicate that the dynamic range of the readout circuit is 99.02 dB and that the ADC achieves 60.22 dB SNDR and 9.71-bit ENOB at a conversion rate of 2 MS/s after calibration, improvements of 14.04 dB and 2.4 bits over the uncalibrated converter.
Face recognition using total margin-based adaptive fuzzy support vector machines.
Liu, Yi-Hung; Chen, Yen-Ting
2007-01-01
This paper presents a new classifier, total margin-based adaptive fuzzy support vector machines (TAF-SVM), that deals with several problems that may occur when support vector machines (SVMs) are applied to face recognition. The proposed TAF-SVM not only solves the overfitting problem caused by outliers through fuzzification of the penalty, but also corrects the skew of the optimal separating hyperplane caused by highly imbalanced data sets by using a different-cost algorithm. In addition, by introducing a total margin algorithm to replace the conventional soft margin algorithm, a lower generalization error bound can be obtained. These three functions are embodied in the traditional SVM, and the TAF-SVM is formulated for both linear and nonlinear cases. Using two databases, the Chung Yuan Christian University (CYCU) multiview and the facial recognition technology (FERET) face databases, and using the kernel Fisher discriminant analysis (KFDA) algorithm to extract discriminating face features, experimental results show that the proposed TAF-SVM is superior to SVM in terms of face-recognition accuracy. The results also indicate that the proposed TAF-SVM achieves smaller error variances than SVM over a number of tests, so that better recognition stability can be obtained.
Shen, Meiyu; Russek-Cohen, Estelle; Slud, Eric V
2016-08-12
Bioequivalence (BE) studies are an essential part of the evaluation of generic drugs. The most common in vivo BE study design is the two-period two-treatment crossover design. AUC (area under the concentration-time curve) and Cmax (maximum concentration) are obtained from the observed concentration-time profiles for each subject from each treatment under each sequence. In the BE evaluation of pharmacokinetic crossover studies, the normality of the univariate response variable, e.g. log(AUC) or log(Cmax), is often assumed in the literature without much evidence. Therefore, we investigate the distributional assumption of the normality of response variables, log(AUC) and log(Cmax), by simulating concentration-time profiles from two-stage pharmacokinetic models (commonly used in pharmacokinetic research) for a wide range of pharmacokinetic parameters and measurement error structures. Our simulations show that, under reasonable distributional assumptions on the pharmacokinetic parameters, log(AUC) has heavy tails and log(Cmax) is skewed. Sensitivity analyses are conducted to investigate how the distribution of the standardized log(AUC) (or the standardized log(Cmax)) for a large number of simulated subjects deviates from normality if distributions of errors in the pharmacokinetic model for plasma concentrations deviate from normality and if the plasma concentration can be described by different compartmental models.
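A minimal Monte Carlo sketch of this kind of investigation, using a one-compartment oral-absorption model as a stand-in for the paper's two-stage models (all parameter distributions below are illustrative assumptions, not the paper's values):

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_log_metrics(n=20000, dose=100.0):
    """One-compartment oral-absorption model with lognormal
    between-subject variability (assumed values)."""
    ka = rng.lognormal(np.log(1.0), 0.4, n)    # absorption rate, 1/h
    ke = rng.lognormal(np.log(0.2), 0.4, n)    # elimination rate, 1/h
    V = rng.lognormal(np.log(30.0), 0.3, n)    # volume of distribution, L
    auc = dose / (V * ke)                      # closed-form AUC(0, inf)
    tmax = np.log(ka / ke) / (ka - ke)         # time of peak concentration
    cmax = dose * ka / (V * (ka - ke)) * (np.exp(-ke * tmax) - np.exp(-ka * tmax))
    return np.log(auc), np.log(cmax)

def skewness(x):
    """Sample skewness (third standardized moment)."""
    x = x - x.mean()
    return (x ** 3).mean() / (x ** 2).mean() ** 1.5
```

Note that in this closed-form sketch log(AUC) is a sum of normals and so stays symmetric; it is the sampling-time grid and measurement error, which the paper simulates explicitly, that produce the heavy tails.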
Sources of PCR-induced distortions in high-throughput sequencing data sets
Kebschull, Justus M.; Zador, Anthony M.
2015-01-01
PCR permits the exponential and sequence-specific amplification of DNA, even from minute starting quantities. PCR is a fundamental step in preparing DNA samples for high-throughput sequencing. However, there are errors associated with PCR-mediated amplification. Here we examine the effects of four important sources of error—bias, stochasticity, template switches and polymerase errors—on sequence representation in low-input next-generation sequencing libraries. We designed a pool of diverse PCR amplicons with a defined structure, and then used Illumina sequencing to search for signatures of each process. We further developed quantitative models for each process, and compared predictions of these models to our experimental data. We find that PCR stochasticity is the major force skewing sequence representation after amplification of a pool of unique DNA amplicons. Polymerase errors become very common in later cycles of PCR but have little impact on the overall sequence distribution as they are confined to small copy numbers. PCR template switches are rare and confined to low copy numbers. Our results provide a theoretical basis for removing distortions from high-throughput sequencing data. In addition, our findings on PCR stochasticity will have particular relevance to quantification of results from single cell sequencing, in which sequences are represented by only one or a few molecules. PMID:26187991
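PCR stochasticity of the kind described is commonly modelled as a Galton-Watson branching process, in which each molecule is copied with some per-cycle efficiency. A minimal sketch (the efficiency value is an assumption):

```python
import numpy as np

rng = np.random.default_rng(1)

def pcr(counts, cycles, efficiency=0.8):
    """Galton-Watson model of PCR stochasticity: in every cycle each
    molecule is duplicated with probability `efficiency` (an assumed
    value), so random early-cycle gains are amplified exponentially."""
    counts = np.asarray(counts, dtype=np.int64)
    for _ in range(cycles):
        counts = counts + rng.binomial(counts, efficiency)
    return counts
```

Starting every amplicon from a single molecule, the copy numbers after 20 cycles spread widely around the expected 1.8^20, mimicking the representation skew this process leaves in a sequencing library.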
Solar System Chaos and Orbital Solutions for Paleoclimate Studies: Limits and New Results
NASA Astrophysics Data System (ADS)
Zeebe, R. E.
2017-12-01
I report results from accurate numerical integrations of Solar System orbits over the past 100 Myr. The simulations used different integrator algorithms, step sizes, and initial conditions (NASA, INPOP), and included effects from general relativity, different models of the Moon, the Sun's quadrupole moment, and up to ten asteroids. In one simulation, I probed the potential effect of a hypothetical Planet 9 on the dynamics of the system. The most expensive integration required 4 months wall-clock time (Bulirsch-Stoer algorithm) and showed a maximum relative energy error < 2.5 × 10^-13 over the past 100 Myr. The difference in Earth's eccentricity (ΔeE) was used to track the difference between two solutions, which were considered to diverge at time tau when ΔeE irreversibly crossed 10% of Earth's mean eccentricity (0.028 × 0.1). My results indicate that finding a unique orbital solution is limited by initial conditions from current ephemerides to 54 Myr. Bizarrely, the 4-month Bulirsch-Stoer integration and a different integration scheme that required only 5 hours wall-clock time (symplectic, 12-day time step, Moon as a simple quadrupole perturbation) agree to 63 Myr. Solutions including 3 and 10 asteroids diverge at tau ≈ 48 Myr. The effect of a hypothetical Planet 9 on ΔeE becomes discernible at 66 Myr. Using tau as a criterion, the current state-of-the-art solutions all differ from previously published results beyond 50 Myr. The current study provides new orbital solutions for application in geological studies. I will also comment on the prospect of constraining astronomical solutions with geologic data.
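The divergence criterion described, i.e. the time tau at which ΔeE irreversibly crosses 10% of Earth's mean eccentricity, can be sketched as follows (the function and the test series are illustrative, not the study's data):

```python
import numpy as np

MEAN_ECC = 0.028                 # Earth's mean orbital eccentricity
THRESHOLD = 0.1 * MEAN_ECC       # 10% divergence criterion from the text

def divergence_time(t, e1, e2, threshold=THRESHOLD):
    """Return tau, the time at which ΔeE = |e1 - e2| irreversibly
    crosses the threshold (i.e. never drops below it again), or None
    if the two solutions never irreversibly diverge."""
    de = np.abs(np.asarray(e1) - np.asarray(e2))
    below = np.where(de < threshold)[0]
    if len(below) == 0:
        return t[0]                  # diverged from the very start
    if below[-1] == len(de) - 1:
        return None                  # still below the threshold at the end
    return t[below[-1] + 1]          # first point of the final crossing
```

The "irreversible" part is handled by locating the last sample below the threshold rather than the first sample above it, so temporary excursions do not count as divergence.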
NASA Astrophysics Data System (ADS)
Galavís, M. E.; Mendoza, C.; Zeippen, C. J.
1998-12-01
Since Burgess et al. (1997) have recently questioned the accuracy of the effective collision strength calculated in the IRON Project for the electron-impact excitation of the 3s²3p⁴ ¹D-¹S quadrupole transition in Ar III, an extended R-matrix calculation has been performed for this transition. The original 24-state target model was maintained, but the energy range was increased to 100 Ryd. It is shown that, in order to ensure convergence of the partial-wave expansion at such energies, it is necessary to take into account partial collision strengths up to L=30 and to "top up" with a geometric-series procedure. By comparing effective collision strengths, it is found that the differences from the original calculation are not greater than 25% around the upper end of the common temperature range and that they are much smaller than 20% over most of it. This is consistent with the accuracy rating (20%) previously assigned to transitions in this low-ionisation system. Also, the present high-temperature limit agrees fairly well (15%) with the Coulomb-Born limit estimated by Burgess et al., thus confirming our previous accuracy rating. It appears that Burgess et al., in their data assessment, overextended the low-energy behaviour of our reduced effective collision strength to obtain an extrapolated high-temperature limit that appeared to be in error by a factor of 2.
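The geometric-series "top-up" of a partial-wave expansion can be sketched as follows: sum the computed partial collision strengths and estimate the uncomputed tail from the ratio of the last two terms, assuming the expansion has become geometric by the highest computed L (a generic illustration, not the authors' exact procedure):

```python
def topped_up_sum(partial):
    """Sum a partial-wave expansion and add a geometric-series 'top-up'
    for the uncomputed tail.  The common ratio is estimated from the
    last two computed terms; the expansion is assumed to be geometric
    beyond the highest computed partial wave."""
    ratio = partial[-1] / partial[-2]
    if not 0.0 < ratio < 1.0:
        raise ValueError("tail is not geometrically convergent")
    tail = partial[-1] * ratio / (1.0 - ratio)   # sum of a*r^k for k >= 1
    return sum(partial) + tail
```

For a series that really is geometric, the topped-up sum reproduces the infinite sum exactly, which is why the procedure is a good approximation once the partial waves settle into near-geometric decay.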